Manually installing on Amazon Web Services (AWS) with ODF storage

This document walks you through the installation of Guardium® Data Security Center on AWS with ODF storage.

Creating the Red Hat OpenShift cluster on AWS

Procedure

  1. Generate an SSH private key and add it to the agent:
    1. Create or use an SSH key that is configured for authentication without a password.
      For example, on a computer that uses a Linux® operating system, run this command to create this type of SSH key:
      # ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
    2. Start the ssh-agent process as a background task:
      # eval "$(ssh-agent -s)"

      The following example shows a successful output:

      Agent pid 31874
    3. Add your SSH private key to the ssh-agent:
      # ssh-add <path>/<file_name>
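      To confirm that the key was added, you can list the keys that the agent holds:
      # ssh-add -l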
  2. Obtain the installation program:
    1. Access the Red Hat® OpenShift Cluster Manager.
    2. Select your installation type, and then obtain a pull secret.
    3. Download the OpenShift installation program for your operating system.
    4. Extract the OpenShift installer tar file by running this command:
      # tar xvf openshift-install-mac-4.14.27.tar.gz
  3. Create the install-config.yaml file.
    # ./openshift-install create install-config --dir=<Directory>

    When you create the file, use these parameters:

    • SSH Public Key: /Users/user/.ssh/id_rsa.pub
    • Platform: AWS
    • For credentials:
      INFO Credentials loaded from the "default" profile in file "/Users/user/.aws/credentials"
    • Region: us-west-1
    • Base Domain: guardiumdatasecuritycenter.com
    • Cluster Name: sys-ins-con
    • Paste the pull secret that you obtained in step 2.

    View the file by issuing the cat install-config.yaml command, and edit it according to the System requirements and prerequisites.

    Note: This yaml file example is a test configuration only. For production environments, consult the Hardware cluster requirements to determine how many worker nodes you need.
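    For reference, the following is a minimal sketch of an install-config.yaml file for AWS that uses the values from this procedure. The replica counts are illustrative placeholders, and your generated file contains additional defaults:

    apiVersion: v1
    baseDomain: guardiumdatasecuritycenter.com
    metadata:
      name: sys-ins-con
    controlPlane:
      name: master
      replicas: 3
    compute:
    - name: worker
      replicas: 3
    platform:
      aws:
        region: us-west-1
    pullSecret: '<pull_secret>'
    sshKey: |
      <ssh_public_key>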
  4. Deploy the cluster:
    $ ./openshift-install create cluster --dir=<installation_directory> --log-level=info

    The following example shows a successful output:

    #./openshift-install create cluster --dir=/Users/myusername/ocp-4.8       
    INFO Credentials loaded from the "default" profile in file "/Users/myusername/.aws/credentials" 
    INFO Consuming Install Config from target directory 
    INFO Creating infrastructure resources...         
    INFO Waiting up to 20m0s for the Kubernetes API at https://api.gi-aws48.guardium-insights.com:6443... 
    INFO API v1.21.1+051ac4f up                       
    INFO Waiting up to 30m0s for bootstrapping to complete... 
    INFO Destroying the bootstrap resources...        
    INFO Waiting up to 40m0s for the cluster at https://api.gi-aws48.guardium-insights.com:6443 to initialize... 
    INFO Waiting up to 10m0s for the openshift-console route to be created... 
    INFO Install complete!                            
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/myusername/ocp-4.8/auth/kubeconfig' 
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.gi-aws48.guardium-insights.com 
    INFO Login to the console with user: "kubeadmin", and password: "CqD7a-Q3Ztk-DvJa-VRkcZ" 
    INFO Time elapsed: 37m45s

What to do next

For more information, see Installing a cluster on AWS and Configuring an AWS account.

Installing OpenShift Data Foundation (previously OpenShift Container Storage) storage class

Procedure

  1. In the web console, select Operators > OperatorHub.
  2. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation operator.
  3. Click Install.
  4. In the Install Operator page, these options are selected by default:
    1. Update Channel: stable-4.14
    2. Installation Mode: A specific namespace on the cluster
    3. Installed Namespace: openshift-storage (the operator namespace)
      Note: If the openshift-storage namespace does not exist, it is created during the operator installation.
    4. Select an Approval Strategy: Automatic or Manual.
    5. Click Install.

    To verify the installation, confirm that a green checkmark is in the Status column for the OpenShift Data Foundation operator.

  5. To create the storage cluster, follow the instructions in Deploy OpenShift Data Foundation using dynamic storage devices.

    Select only those worker nodes on which you don’t want to run Db2®. Select at least 3 worker nodes.

    To verify that the storage class is installed, issue the oc get sc command. The following example shows a successful output:

    NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp2 (default)                 kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   3d18h
    gp2-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3d18h
    ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   3d
    ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   3d
    openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  3d
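
    To confirm that the storage cluster itself is ready, you can also query the StorageCluster resource; the Phase column should report Ready:

    oc get storagecluster -n openshift-storage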

Logging in to the cluster

Procedure

Log in to your cluster according to the OpenShift instructions. The installer output shows the web console URL and credentials, for example:
INFO Access the OpenShift web-console here: https://myOpenShift.guardium-data-security-center.com 
INFO Login to the console with user: "kubeadmin", and password: "3Xo7n-qCc78-pQfIh-Lf7vE"
To log in from the command line, issue a command similar to the following:
oc login --token=sha256~7rUpGhaFp-lEY3UDH4VBZjIsbIYxkXFemHiI-0MJS50 --server=https://myOpenShift.guardium-data-security-center.com:6443
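After you log in, you can confirm the active user and cluster:
oc whoami
oc cluster-info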

Installing IBM Cloud Pak foundational services on Guardium Data Security Center

IBM Guardium Data Security Center is deployed on IBM Cloud Pak foundational services with OpenShift Container Platform.

Before you begin

If the SKIP_INSTALL_ICS parameter in the configuration file is set to the default value of false, you can proceed directly to Online and offline/air gap installation of Guardium Data Security Center by using automated (all-in-one) installation script.

If you are installing Guardium Data Security Center manually or if SKIP_INSTALL_ICS is set to true, install IBM Cloud Pak foundational services beforehand by following the procedure.

About this task

If you are installing Guardium Data Security Center version 3.6.x, install Cloud Pak foundational services version 4.6.6.

If you currently have IBM® Common Services version 4.5.x, you can upgrade to IBM Common Services version 4.6.x by using the CASE bundle.

Important: If you already downloaded Cloud Pak foundational services for use with another product, you might not need to download it again. If you have the correct version for the Guardium Data Security Center version that you want to install, you can skip this task.

Procedure

  1. Log in to your Red Hat OpenShift cluster instance.
    oc login -u <KUBE_USER> -p <KUBE_PASS> [--insecure-skip-tls-verify=true]
    For example,
    oc login api.example.ibm.com:6443 -u kubeadmin -p xxxxx-xxxxx-xxxxx-xxxxx
  2. Create a namespace for Cloud Pak foundational services. Use the same namespace where you install Guardium Data Security Center.
    export NAMESPACE=<GI_NAMESPACE>
    oc create namespace ${NAMESPACE}
  3. Choose the CASE version that you want to use.
    export CASE_ARCHIVE=ibm-guardium-data-security-center-<Case version>.tgz
    For example, to use version 2.6.0, specify the 2.6.0 bundle file as shown in the following command.
    export CASE_ARCHIVE=ibm-guardium-data-security-center-2.6.0.tgz
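    The oc ibm-pak commands in the following steps reference the $CASE_NAME, $CASE_VERSION, and $LOCAL_CASE_DIR variables. A minimal sketch of that setup, assuming that you download the CASE with the oc ibm-pak plug-in to its default location:
    export CASE_NAME=ibm-guardium-data-security-center
    export CASE_VERSION=2.6.0
    export LOCAL_CASE_DIR=$HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION
    oc ibm-pak get $CASE_NAME --version $CASE_VERSION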
  4. Set up and install the IBM Certificate Manager catalog.
    1. Create a namespace ibm-cert-manager for the IBM Certificate Manager.
      oc create namespace ibm-cert-manager
    2. Set the environment variable for the --inventory parameter.
      export CERT_MANAGER_INVENTORY_SETUP=ibmCertManagerOperatorSetup
    3. Install the IBM Certificate Manager catalog.
      oc ibm-pak launch $CASE_NAME \
      --version $CASE_VERSION \
      --action install-catalog \
      --inventory $CERT_MANAGER_INVENTORY_SETUP \
      --namespace openshift-marketplace \
      --args "--inputDir ${LOCAL_CASE_DIR}"
  5. Check the pod and catalog source status.
    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace

    The following example shows the output of the commands.

    NAME                                      READY    STATUS     RESTARTS   AGE
    ibm-cert-manager-catalog-bxjjb            1/1      Running    0          49s
    
    NAME                            DISPLAY               TYPE   PUBLISHER   AGE
    ibm-cert-manager-catalog    ibm-cert-manager-4.2.1    grpc   IBM         52s
  6. Install the IBM Certificate Manager operators.
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --inventory $CERT_MANAGER_INVENTORY_SETUP \
       --action install-operator \
       --namespace ibm-cert-manager \
       --args "--inputDir ${LOCAL_CASE_DIR}"
    Verify that the IBM Certificate Manager CSV is in the Succeeded phase.
    oc get csv -n ibm-cert-manager
    oc get pod -n ibm-cert-manager
    The following example shows the output of the commands.
    NAME                                               DISPLAY                       VERSION               REPLACES   PHASE 
    aws-efs-csi-driver-operator.v4.14.0-202403060538   AWS EFS CSI Driver Operator   4.14.0-202403060538              Succeeded 
    ibm-cert-manager-operator.v4.2.1                   IBM Cert Manager              4.2.1                            Succeeded 
    oc get pods -n ibm-cert-manager  
      
    NAME                              READY   STATUS    RESTARTS   AGE 
    cert-manager-cainjector-c9dd8     1/1     Running   0          97s 
    cert-manager-controller-54fb      1/1     Running   0          97s 
    cert-manager-webhook-5dc          1/1     Running   0          96s 
    ibm-cert-manager-operator-75c8    1/1     Running   0          106s
  7. Install the IBM Cloud Pak foundational services catalog.
    export ICS_INVENTORY_SETUP=ibmCommonServiceOperatorSetup
    
       oc ibm-pak launch $CASE_NAME \
         --version $CASE_VERSION \
         --action install-catalog \
         --inventory $ICS_INVENTORY_SETUP \
         --namespace $NAMESPACE \
         --args "--registry icr.io --recursive \
         --inputDir ${LOCAL_CASE_DIR}"
  8. Check the pod and catalog source status of the opencloud-operators by using the following commands.
    oc get pods -n openshift-marketplace;
    oc get catalogsource -n openshift-marketplace
    The following example shows the output of the commands.
    NAME                        READY   STATUS    RESTARTS   AGE
    opencloud-operators-zmtmv   1/1     Running   0          25s
    
    NAME                  DISPLAY           TYPE   PUBLISHER   AGE
    opencloud-operators   IBMCS Operators   grpc   IBM         46s
  9. Export the following environment variables.
    export CP_REPO_USER=<cp user>
    export CP_REPO_PASS=<cp password>
  10. Install the Cloud Pak foundational services operators.
    export ICS_SIZE=small;
    
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --action install-operator \
       --inventory $ICS_INVENTORY_SETUP \
       --namespace $NAMESPACE \
       --args "--size ${ICS_SIZE} --registry cp.icr.io --user ${CP_REPO_USER} --pass ${CP_REPO_PASS} --secret ibm-entitlement-key --recursive --inputDir ${LOCAL_CASE_DIR}"
  11. Verify that the CSV is in Succeeded state:
    oc get csv -n $NAMESPACE
    Each CSV should be in the Succeeded phase.
  12. Verify that the operandrequest is available:
    oc get opreq -n $NAMESPACE
    The following example shows the output of the command.
    NAME                         AGE    PHASE     CREATED AT
    common-service               4h3m   Running   2024-08-27T09:27:50Z
    ibm-iam-request              4h2m   Running   2024-08-27T09:28:36Z
    postgresql-operator-request  4h2m   Running   2024-08-27T09:29:00Z
  13. Verify that all the Cloud Pak foundational services pods are in the Running or Completed state by using the following command.
    oc get pods -n ${NAMESPACE}
    The following example shows the output of the command.
    oc get pods -n ${NAMESPACE}
    
    NAME                                                              READY   STATUS    RESTARTS   AGE
    common-service-db-1                                               1/1     Running      0       4h2m
    common-web-ui-75fb7fcbff-rpx9w                                    1/1     Running      0       4h3m
    create-postgres-license-config-vvzj8                              0/1     Completed    0       4h3m
    ibm-common-service-operator-7b9f6c49bc-ffl9f                      1/1     Running      0       4h7m
    ibm-commonui-operator-86c45f5df9-grm27                            1/1     Running      0       4h3m
    ibm-iam-operator-76969bf99b-lbd85                                 1/1     Running      0       4h4m
    ibm-zen-operator-69c4bf46f8-9vt5x                                 1/1     Running      0       4h4m
    oidc-client-registration-hcmxf                                    0/1     Completed    0       4h3m
    operand-deployment-lifecycle-manager-5d4fff9f89-75vkf             1/1     Running      0       4h5m
    platform-auth-service-6d7c654fc6-sj5gg                            1/1     Running      0       4h1m
    platform-identity-management-8dccc6b84-rt47d                      1/1     Running      0       4h1m
    platform-identity-provider-5d74f7d65d-h7l7l                       1/1     Running      0       4h1m
    postgresql-operator-controller-manager-1-18-12-6b9b4fb545-d6stw   1/1     Running      0       4h3m
    After you complete the verification, install the Guardium Data Security Center operators. This process takes approximately 20 minutes.
  14. The default username to access the console is cpadmin. To retrieve the username and password, use these commands:
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_username}' -n $NAMESPACE | base64 -d | awk '{print $1}'
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' -n $NAMESPACE | base64 -d | awk '{print $1}'

    The output that you receive, for example EwK9dj_example_password_lZSzVsA, is the password that is used for accessing the console. To change the default username (cpadmin) or password, see Changing the cluster administrator access credentials.

  15. To retrieve the cp-console route, use the following command.
    oc get route cp-console -n $NAMESPACE
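
    To print just the host name of the route, you can add a jsonpath query, for example:

    oc get route cp-console -n $NAMESPACE -o jsonpath='{.spec.host}'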

What to do next

After you install the Cloud Pak foundational services, you can continue with the installation of Guardium Data Security Center.

Installing Guardium Data Security Center

Procedure

  1. The default username to access the console is cpadmin. To retrieve the username and password, use these commands:
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_username}' -n $NAMESPACE | base64 -d | awk '{print $1}'
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' -n $NAMESPACE | base64 -d | awk '{print $1}'

    The output that you receive, for example EwK9dj_example_password_lZSzVsA, is the password that is used for accessing the console. To change the default username (cpadmin) or password, see Changing the cluster administrator access credentials.

  2. Set these environment variables:
    
      export NAMESPACE=staging
      export ICS_USER=cpadmin
      export ICS_PASS=<ics_password>
      export CP_REPO_USER=cp
      export CP_REPO_PASS=<entitlement_key>

    Where <ics_password> is the password that you retrieved to access the console and <entitlement_key> is the entitlement key, as described in Obtain your entitlement key.

  3. Create a namespace for the Guardium Data Security Center instance. This namespace must be 10 or fewer characters in length.
    oc create namespace ${NAMESPACE}
    oc project ${NAMESPACE}
  4. Run the pre-install script. This script sets up secrets and parameters for the Guardium Data Security Center instance.
    export GI_INVENTORY_SETUP=install
    oc ibm-pak launch $CASE_NAME \
    --version $CASE_VERSION \
    --inventory $GI_INVENTORY_SETUP \
    --action pre-install \
    --namespace $NAMESPACE \
    --args "-n ${NAMESPACE} -h <DB_worker_host> -l true -t false"
    where <DB_worker_host> is the name of the worker node on which you want to host Db2.
  5. Install the catalog.
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --inventory $GI_INVENTORY_SETUP \
       --action install-catalog \
       --namespace openshift-marketplace \
       --args "--inputDir ${LOCAL_CASE_DIR}" \
       --tolerance 1
    To verify that the catalogs are installed, issue this command:
    oc get pod -n openshift-marketplace
    The output is similar to:
    NAME                                               READY   STATUS    RESTARTS   AGE
    ibm-cloud-databases-redis-operator-catalog-x2rr4   1/1     Running   0          41s
    ibm-db2uoperator-catalog-mzvd7                     1/1     Running   0          73s
    ibm-guardium-insights-operator-catalog-n8qkr       1/1     Running   0          16s
  6. Install the operator.
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --inventory $GI_INVENTORY_SETUP \
       --action install-operator \
       --namespace $NAMESPACE \
       --args "--registry cp.icr.io --user ${CP_REPO_USER} --pass ${CP_REPO_PASS} --secret ibm-entitlement-key --inputDir ${LOCAL_CASE_DIR} --tolerance 1"
    To verify that the operators are installed, issue this command:
    oc get pods -n staging
    The output is similar to:
    NAME                                                  READY   STATUS    RESTARTS   AGE
    db2u-day2-ops-controller-manager-5488d5c844-8z568     1/1     Running   0          2m59s
    db2u-operator-manager-5fc886d4bc-mvg98                1/1     Running   0          2m59s
    ibm-cloud-databases-redis-operator-6d668d7b88-p69hm   1/1     Running   0          74s
    mongodb-kubernetes-operator-856bc86746-8vsrg          1/1     Running   0          49s
  7. List the available storage classes by issuing this command:
    oc get storageclass

    The output is similar to:

    NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp2 (default)                 kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   2d13h
    ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   2d11h
    ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   2d11h
    openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  2d11h
  8. Create a .yaml file, for example file.yaml. For the storageClassName, use a ReadWriteMany (RWX) file system storage class.
    apiVersion: gi.ds.isc.ibm.com/v1
    kind: GuardiumDataSecurityCenter
    metadata:
      #name: This must be 10 or fewer characters
      name: staging
      #Provide the name of the namespace in which you want to install the CR.
      namespace: staging
    spec:
      version: 3.6.0
      license:
        accept: true
        licenseType: "L-QABB-9QRLFB"
      guardiumGlobal:
        backupsupport:
          enabled: true
          name: <GI_Backup_PVC>
          storageClassName: ocs-storagecluster-cephfs
          size: 500Gi
        dev: "false"
        licenseAccept: true
        # Guardium Insights template size can be defined as below using the size parameter
        size: values-small
        image:
          insightsPullSecret: ibm-entitlement-key
          repository: cp.icr.io/cp/ibm-guardium-data-security-center
        instance:
          ingress:
            # hostName: Change this, ex: staging.apps.gi-devops-ocp46-41.cp.fyre.ibm.com
            hostName: <host_name>
            # domainName:  Change this
            domainName: <domain_name>
          ics:
            namespace: ibm-common-services
            registry: common-service
        #storageClassName: Change this to a ReadWriteMany StorageClass
        storageClassName: "ocs-storagecluster-cephfs"
        #storageClassNameRWO: Must be a ReadWriteOnce StorageClass
        storageClassNameRWO: "ocs-storagecluster-ceph-rbd"
      capabilities:
      - name: quantum-safe
        enabled: true
        configurations: {}
      - name: platform
        enabled: true
        configurations:
          dependency-db2:
            image:
              insightsPullSecret: "ibm-entitlement-key"
            db2:
              size: 2
              resources:
                requests:
                  cpu: "6"
                  memory: "48Gi"
                limits:
                  cpu: "6"
                  memory: "48Gi"
              storage:
              - name: meta
                spec:
                  storageClassName: "ocs-storagecluster-cephfs"
                  accessModes:
                  - ReadWriteMany
                  resources:
                    requests:
                      storage: "1000Gi"
                type: create
              - name: data
                spec:
                  storageClassName: "ocs-storagecluster-ceph-rbd"
                  accessModes:
                  - ReadWriteOnce
                  resources:
                    requests:
                      storage: "4000Gi"
                type: template
              mln:
                distribution: "0:0"
                total: 2
          dependency-kafka:
            kafka:
              storage:
                type: persistent-claim
                size: 250Gi
                class: "ocs-storagecluster-ceph-rbd"
            zookeeper:
              storage:
                type: persistent-claim
                size: 20Gi
                class: "ocs-storagecluster-ceph-rbd"
          mini-snif:
            persistentVolumesClaims:
              mini-snif-shared:
                storageClassName: "ocs-storagecluster-cephfs"
          universal-connector-manager:
            persistentVolumesClaims:
              universal-connector-manager-shared:
                storageClassName: "ocs-storagecluster-cephfs"
          settings-datasources:
            persistentVolumesClaims:
              settings-datasources:
                storageClassName: "ocs-storagecluster-cephfs"
          ticketing:
            persistentVolumesClaims:
              ticketing-keystore:
                storageClassName: "ocs-storagecluster-cephfs"
          connections:
            insightsEnv:
              FEATURE_STAP_STREAMING: "false"
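
    Optionally, before you apply the file, you can validate it against the cluster's schema with a server-side dry run (this assumes that the operator's custom resource definitions are already installed):

    oc apply --dry-run=server -f file.yaml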
    
  9. Apply the .yaml file:
    oc apply -f file.yaml

    The custom resource begins to reconcile. To check the initial status, issue the oc get guardiumdatasecuritycenter command. The output is similar to:

    NAME      TYPE      STATUS   REASON        MESSAGE                 DESIRED_VERSION   INSTALLED_VERSION
    staging   Running   True     Reconciling   Starting to Reconcile   3.6.0
    Tip: The displayed versions in the output vary based on the Guardium Data Security Center version that you want to install and the current version on your system.
  10. Wait for approximately one hour and then validate the Guardium Data Security Center installation:
    oc get guardiumdatasecuritycenter

    After completion, the output is similar to:
    NAME      TYPE    STATUS   REASON      MESSAGE                    DESIRED_VERSION   INSTALLED_VERSION
    staging   Ready   True     Completed   Completed Reconciliation   3.6.0            3.6.0
    Tip: The displayed versions in the output vary based on the Guardium Data Security Center version that you want to install and the current version on your system.
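
    While you wait, you can watch the status change by adding the watch flag:

    oc get guardiumdatasecuritycenter -n ${NAMESPACE} -w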

    Issue this command to verify that the persistent volume claims have a Bound status:

    oc get pvc

    The output is similar to:

    NAME                                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    c-gi-sample-db2-meta                                      Bound    pvc-2f13e641-5aae-40a7-9b0d-95f4fc6a8143   1000Gi     RWX            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-db2-db2u-0                               Bound    pvc-443c804f-7f70-4693-b7ed-ba1013930b4a   4000Gi     RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-db2-db2u-1                               Bound    pvc-27e3d353-489a-43b6-8ae9-d500d9d91cf4   4000Gi     RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-m-0                                Bound    pvc-2a803a06-5eae-412f-81b1-2767d8e36e85   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-m-1                                Bound    pvc-ff0ca61c-1209-4d20-96fc-78887abad27d   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-s-0                                Bound    pvc-b1ddb497-d13d-48e2-bba2-f5141def9484   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-s-1                                Bound    pvc-688772f5-ebb9-4619-bf70-b5ee561ab158   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-gi-sample-kafka-0                                    Bound    pvc-32339849-adb6-4677-b1bf-998643b5c4d3   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-gi-sample-kafka-1                                    Bound    pvc-0c479b89-4c62-4754-a5af-12c885afe553   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-gi-sample-zookeeper-0                                Bound    pvc-e436f03b-fe40-4b79-81a3-fb6c76a7a953   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-gi-sample-zookeeper-1                                Bound    pvc-8bb4ac61-332d-4500-948a-b1858f6cd555   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-kafka-0                                      Bound    pvc-56664beb-48c7-49fd-81b9-5557c4c1fbb7   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-kafka-1                                      Bound    pvc-dd36280e-364b-4c74-b225-0b72fc1e3af7   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-zookeeper-0                                  Bound    pvc-3b4edb7e-fad9-4ff9-849d-02ed38219329   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-zookeeper-1                                  Bound    pvc-9ccc3feb-1ae9-4a9d-a970-5d374c8ee2da   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-volume-gi-sample-mongodb-0                           Bound    pvc-275b737f-a380-41e1-a232-3389599c2448   100Gi      RWO            ocs-storagecluster-cephfs     31d
    data-volume-gi-sample-mongodb-1                           Bound    pvc-dd8b7f46-8b9b-4099-b0bd-b1f55652b2c9   100Gi      RWO            ocs-storagecluster-cephfs     31d
    gi-sampledjm6enctbcion3yyrvfum9-mini-snif-shared          Bound    pvc-f9becd16-7158-41f9-b1f1-93308337d2b5   50Gi       RWX            ocs-storagecluster-cephfs     31d
    logs-volume-gi-sample-mongodb-0                           Bound    pvc-43d29f78-4559-44e2-8927-ab4895807aee   100Gi      RWO            ocs-storagecluster-cephfs     31d
    logs-volume-gi-sample-mongodb-1                           Bound    pvc-256617cc-ec37-4581-8145-f6bbe0b162c6   100Gi      RWO            ocs-storagecluster-cephfs     31d
    mini-snif-i-gi-sampledjm6enctbcion3yyrvfum9-mini-snif-0   Bound    pvc-37e9748f-2306-4abe-b273-2de4c988a326   50Gi       RWO            ocs-storagecluster-cephfs     31d
    settings-datasources                                      Bound    pvc-ec2b30a6-0e10-4931-84d2-59dba5799d10   50Mi       RWX            ocs-storagecluster-cephfs     31d
    ticketing-keystore                                        Bound    pvc-4ca2e152-3767-4956-90af-c1a15adde109   2Mi        RWX            ocs-storagecluster-cephfs     31d
    universal-connector-manager-shared                        Bound    pvc-a2016cf2-df8a-4a6a-9de9-bcce9cdb1704   50Gi       RWX            ocs-storagecluster-cephfs     31d

    Finally, verify that you can log in to the Guardium Data Security Center user interface.
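
    To find the user interface URL, you can list the routes in the instance namespace (route names vary by release):

    oc get route -n ${NAMESPACE}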