Manually installing on IBM Cloud with ibmc-file-gold-gid storage

You can install Guardium® Data Security Center on IBM Cloud with ibmc-file-gold-gid.

Before you begin

Before you proceed with the installation, complete these steps:
  1. Verify that your environment meets the System requirements and prerequisites and Hardware cluster requirements.
  2. Prepare for installation.
  3. Log in to the OpenShift® command-line interface.
  4. Download the Guardium Data Security Center CASE file and set up your environment for dependencies.

About this task

Complete the following steps to manually install Guardium Data Security Center on IBM Cloud with ibmc-file-gold-gid:
  1. Creating an OpenShift cluster.
  2. Configuring IBM Cloud file storage (ibmc-file-gold-gid storage class).
  3. Logging in to the cluster.
  4. Installing IBM Cloud Pak® foundational services.
  5. Installing Guardium Data Security Center.
To uninstall Guardium Data Security Center, see Uninstalling Guardium Data Security Center.

Creating an OpenShift cluster

Create a Red Hat OpenShift cluster on IBM Cloud with classic infrastructure.

Procedure

  1. In the IBM Cloud dashboard, click OpenShift.
  2. Click OpenShift Clusters > Create Cluster.
    1. For the Orchestration service, select the OpenShift version, such as OpenShift 4.7.30.
    2. For OCP entitlement, select Apply my Cloud Pak OCP entitlement to this worker pool.
    3. For Infrastructure, select Classic.
    4. In the Location section, select your Worker zone, such as Dallas 12:
      OpenShift cluster location
    5. In the Worker pool section, select Change flavor:
      • Enter a unique Worker pool name. If you do not enter a name, the name default is used.
      • Set the Master service endpoint to both private and public endpoints.
      • Under Resource Details > Cluster name, enter a cluster name (for example, Mycluster-dal12).
      OpenShift worker pool
  3. Click Create to provision the cluster.
  4. After the cluster is created, review its details and state. Note the Ingress Subdomain, and copy the Master URL for later use.
  5. Click OpenShift web console to open the OpenShift Container Platform (OCP) console:
    OpenShift web console button
  6. In the OpenShift web console menu bar, click your IAM profile (IAM#user.name@email.com) and select Copy Login Command to generate the oc login command for authenticating.
    The following command is an example of the command that you generate:
    oc login --token=sha256~ynpvzKe8jaVipXXXXriqqt4g3T7VHB2sVlsZqOiHBY --server=https://my_containers.cloud.ibm.com:31944

Configuring IBM Cloud file storage (ibmc-file-gold-gid storage class)

About this task

Configure IBM Cloud file storage for your cluster. On IBM Cloud classic clusters, the ibmc-file-gold-gid storage class provisions gold-tier file storage with a preset group ID (GID) so that non-root containers can write to the volumes. To learn how to configure IBM Cloud file storage, follow the steps in the IBM Cloud file storage documentation.
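As a quick sanity check that the storage class can provision volumes, you can create a small test claim. The following fragment is illustrative only; the claim name test-file-gold-gid and the 20Gi size are arbitrary assumptions, not values required by Guardium Data Security Center.

```yaml
# Illustrative test claim (hypothetical name and size) for the
# ibmc-file-gold-gid storage class on an IBM Cloud classic cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-file-gold-gid
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-file-gold-gid
```

Apply it with oc apply -f, confirm that oc get pvc test-file-gold-gid shows a Bound status, and then delete the claim.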

Logging in to the cluster

Procedure

Log in to your cluster according to the OpenShift instructions. The installer output shows the web console URL and credentials, for example:
INFO Access the OpenShift web-console here: https://myOpenShift.guardium-data-security-center.com
INFO Login to the console with user: "kubeadmin", and password: "3Xo7n-qCc78-pQfIh-Lf7vE"
Then log in from the command line, for example:
oc login --token=sha256~7rUpGhaFp-lEY3UDH4VBZjIsbIYxkXFemHiI-0MJS50 --server=https://myOpenShift.guardium-data-security-center.com:6443

Installing IBM Cloud Pak foundational services on Guardium Data Security Center

IBM Guardium Data Security Center is deployed on IBM Cloud Pak foundational services with OpenShift Container Platform.

Before you begin

If the SKIP_INSTALL_ICS parameter in the configuration file is set to the default value of false, you can proceed directly to the online or offline (air gap) installation of Guardium Data Security Center by using the automated (all-in-one) installation script.

If you are installing Guardium Data Security Center manually or if SKIP_INSTALL_ICS is set to true, install IBM Cloud Pak foundational services beforehand by following the procedure.

About this task

If you are installing Guardium Data Security Center version 3.6.x, install Cloud Pak foundational services version 4.6.6.

If you currently have IBM® Common Services version 4.5.x, you can upgrade to IBM Common Services version 4.6.x by using the CASE bundle.

Important: If you already downloaded Cloud Pak foundational services for use with another product, you might not need to download it again. If you have the correct version for the Guardium Data Security Center version that you want to install, you can skip this task.

Procedure

  1. Log in to your Red Hat OpenShift cluster instance.
    oc login -u <KUBE_USER> -p <KUBE_PASS> [--insecure-skip-tls-verify=true]
    For example,
    oc login api.example.ibm.com:6443 -u kubeadmin -p xxxxx-xxxxx-xxxxx-xxxxx
  2. Create a namespace for Cloud Pak foundational services. Use the same namespace where you install Guardium Data Security Center.
    export NAMESPACE=<GI NAMESPACE>
    oc create namespace ${NAMESPACE}
  3. Choose the CASE version that you want to use.
    export CASE_ARCHIVE=ibm-guardium-data-security-center-<Case version>.tgz
    For example, to use version 2.6.0, specify the 2.6.0 bundle file as shown in the following command.
    export CASE_ARCHIVE=ibm-guardium-data-security-center-2.6.0.tgz
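The later oc ibm-pak launch commands reference $CASE_NAME, $CASE_VERSION, and ${LOCAL_CASE_DIR}. A minimal sketch of how these variables fit together, assuming version 2.6.0 and the default download location used by the ibm-pak plug-in:

```shell
# Sketch (assumed values and default ibm-pak paths): set the variables that
# the later `oc ibm-pak launch` commands reference.
export CASE_NAME=ibm-guardium-data-security-center
export CASE_VERSION=2.6.0
# The archive name is the CASE name plus the version.
export CASE_ARCHIVE=${CASE_NAME}-${CASE_VERSION}.tgz
# Default location where the ibm-pak plug-in stores downloaded CASEs.
export LOCAL_CASE_DIR=$HOME/.ibm-pak/data/cases/${CASE_NAME}/${CASE_VERSION}
echo "${CASE_ARCHIVE}"
```

Adjust the version and directory to match the CASE that you downloaded in the preparation steps.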
  4. Install IBM Certificate Manager.
    1. Create a namespace ibm-cert-manager for the IBM Certificate Manager.
      oc create namespace ibm-cert-manager
    2. Set the environment variable for the --inventory parameter.
      export CERT_MANAGER_INVENTORY_SETUP=ibmCertManagerOperatorSetup
    3. Install the IBM Certificate Manager catalog.
      oc ibm-pak launch $CASE_NAME \
      --version $CASE_VERSION \
      --action install-catalog \
      --inventory $CERT_MANAGER_INVENTORY_SETUP \
      --namespace openshift-marketplace \
      --args "--inputDir ${LOCAL_CASE_DIR}"
  5. Check the pod and catalog source status.
    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace

    The following output is an example of the output that results from running these commands.

    NAME                                      READY    STATUS     RESTARTS   AGE
    ibm-cert-manager-catalog-bxjjb            1/1      Running    0          49s
    
    NAME                            DISPLAY               TYPE   PUBLISHER   AGE
    ibm-cert-manager-catalog    ibm-cert-manager-4.2.1    grpc   IBM         52s
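Catalog pods can take a short time to become ready, so a status check may not pass on the first try. The following is a small, hypothetical retry helper (shown wrapping a plain command for illustration); in practice you might wrap a grep over oc get pods -n openshift-marketplace:

```shell
# Hypothetical retry helper: run a check command until it succeeds
# or the attempt limit is reached.
retry() {
  local attempts=$1; shift
  local i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    i=$((i + 1))
    sleep 1
  done
}

# Illustration with a trivially succeeding command.
retry 3 true && echo "check passed"
```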
  6. Install the IBM Certificate Manager operators.
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --inventory $CERT_MANAGER_INVENTORY_SETUP \
       --action install-operator \
       --namespace ibm-cert-manager \
       --args "--inputDir ${LOCAL_CASE_DIR}"
    Verify that the IBM Certificate Manager CSV is in the Succeeded phase.
    oc get csv -n ibm-cert-manager
    oc get pod -n ibm-cert-manager
    The following example shows the output of the commands.
    NAME                                               DISPLAY                       VERSION               REPLACES   PHASE 
    aws-efs-csi-driver-operator.v4.14.0-202403060538   AWS EFS CSI Driver Operator   4.14.0-202403060538              Succeeded 
    ibm-cert-manager-operator.v4.2.1                   IBM Cert Manager              4.2.1                            Succeeded 
    oc get pods -n ibm-cert-manager  
      
    NAME                              READY   STATUS    RESTARTS   AGE 
    cert-manager-cainjector-c9dd8     1/1     Running   0          97s 
    cert-manager-controller-54fb      1/1     Running   0          97s 
    cert-manager-webhook-5dc          1/1     Running   0          96s 
    ibm-cert-manager-operator-75c8    1/1     Running   0          106s
  7. Install the IBM Cloud Pak foundational services catalog.
    export ICS_INVENTORY_SETUP=ibmCommonServiceOperatorSetup
    
       oc ibm-pak launch $CASE_NAME \
         --version $CASE_VERSION \
         --action install-catalog \
         --inventory $ICS_INVENTORY_SETUP \
         --namespace $NAMESPACE \
         --args "--registry icr.io --recursive \
         --inputDir ${LOCAL_CASE_DIR}"
  8. Check the pod and catalog source status of the opencloud-operators by using the following commands.
    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace
    The following example shows the output of the commands.
    opencloud-operators-zmtmv                                         1/1     Running     0          25s 
    opencloud-operators        IBMCS Operators          grpc   IBM         46s 
  9. Export the following environment variables.
    export CP_REPO_USER=<cp user>
    export CP_REPO_PASS=<cp password>
  10. Install the Cloud Pak foundational services operators.
    export ICS_SIZE=small
    
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --action install-operator \
       --inventory $ICS_INVENTORY_SETUP \
       --namespace $NAMESPACE \
       --args "--size ${ICS_SIZE} --registry cp.icr.io --user ${CP_REPO_USER} --pass ${CP_REPO_PASS} --secret ibm-entitlement-key --recursive --inputDir ${LOCAL_CASE_DIR}"
  11. Verify that the CSVs are in the Succeeded phase:
    oc get csv -n $NAMESPACE
    Then check the pods:
    oc get pods -n ${NAMESPACE}
    The following example shows the output of the oc get pods command.
    
    NAME                                                            READY    STATUS    RESTARTS   AGE
    common-service-db-1                                               1/1    Running       0      4h2m
    common-web-ui-75fb7fcbff-rpx9w                                    1/1    Running       0      4h3m
    create-postgres-license-config-vvzj8                              0/1   Completed      0      4h3m
    ibm-common-service-operator-7b9f6c49bc-ffl9f                      1/1    Running       0      4h7m
    ibm-commonui-operator-86c45f5df9-grm27                            1/1    Running       0      4h3m
    ibm-iam-operator-76969bf99b-lbd85                                 1/1    Running       0      4h4m
    ibm-zen-operator-69c4bf46f8-9vt5x                                 1/1    Running       0      4h4m
    oidc-client-registration-hcmxf                                    0/1   Completed      0      4h3m
    operand-deployment-lifecycle-manager-5d4fff9f89-75vkf             1/1    Running       0      4h5m
    platform-auth-service-6d7c654fc6-sj5gg                            1/1    Running       0      4h1m
    platform-identity-management-8dccc6b84-rt47d                      1/1    Running       0      4h1m
    platform-identity-provider-5d74f7d65d-h7l7l                       1/1    Running       0      4h1m
    postgresql-operator-controller-manager-1-18-12-6b9b4fb545-d6stw   1/1    Running       0      4h3m
  12. Verify that the operandrequest is available:
    oc get opreq -n $NAMESPACE
    The following example shows the output of the command.
    NAME                         AGE    PHASE     CREATED AT
    common-service               4h3m   Running   2024-08-27T09:27:50Z
    ibm-iam-request              4h2m   Running   2024-08-27T09:28:36Z
    postgresql-operator-request  4h2m   Running   2024-08-27T09:29:00Z
  13. Verify that all the Cloud Pak foundational services pods are in the Running or Completed state by using the following command.
    oc get pods -n ${NAMESPACE}
    The following example shows the output of the command.
    
    NAME                                                              READY   STATUS    RESTARTS   AGE
    common-service-db-1                                               1/1     Running      0       4h2m
    common-web-ui-75fb7fcbff-rpx9w                                    1/1     Running      0       4h3m
    create-postgres-license-config-vvzj8                              0/1     Completed    0       4h3m
    ibm-common-service-operator-7b9f6c49bc-ffl9f                      1/1     Running      0       4h7m
    ibm-commonui-operator-86c45f5df9-grm27                            1/1     Running      0       4h3m
    ibm-iam-operator-76969bf99b-lbd85                                 1/1     Running      0       4h4m
    ibm-zen-operator-69c4bf46f8-9vt5x                                 1/1     Running      0       4h4m
    oidc-client-registration-hcmxf                                    0/1     Completed    0       4h3m
    operand-deployment-lifecycle-manager-5d4fff9f89-75vkf             1/1     Running      0       4h5m
    platform-auth-service-6d7c654fc6-sj5gg                            1/1     Running      0       4h1m
    platform-identity-management-8dccc6b84-rt47d                      1/1     Running      0       4h1m
    platform-identity-provider-5d74f7d65d-h7l7l                       1/1     Running      0       4h1m
    postgresql-operator-controller-manager-1-18-12-6b9b4fb545-d6stw   1/1     Running      0       4h3m
    After you complete the verification, install the Guardium Data Security Center operators. This process takes approximately 20 minutes.
  14. The default username to access the console is cpadmin. To retrieve the password, use these commands:
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_username}' -n $NAMESPACE | base64 -d | awk '{print $1}'
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' -n $NAMESPACE | base64 -d | awk '{print $1}'

    The output that you receive, for example EwK9dj_example_password_lZSzVsA, is the password that is used for accessing the console. To change the default username (cpadmin) or password, see Changing the cluster administrator access credentials.
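The secret stores the credentials base64-encoded, which is why the commands above pipe the jsonpath output through base64 -d. A minimal illustration of that decoding step, using a made-up value rather than a real secret:

```shell
# Made-up value for illustration only; real values come from the
# platform-auth-idp-credentials secret.
ENCODED=$(printf '%s' 'EwK9dj_example_password_lZSzVsA' | base64)
# This mirrors the `| base64 -d` step in the retrieval commands.
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```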

  15. To retrieve the cp-console route, use the following command.
    oc get route cp-console -n $NAMESPACE

What to do next

After you install the Cloud Pak foundational services, you can continue with the installation of Guardium Data Security Center.

Installing Guardium Data Security Center

Procedure

  1. The default username to access the console is cpadmin. To retrieve the password, use these commands:
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_username}' -n $NAMESPACE | base64 -d | awk '{print $1}'
    oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' -n $NAMESPACE | base64 -d | awk '{print $1}'

    The output that you receive, for example EwK9dj_example_password_lZSzVsA, is the password that is used for accessing the console. To change the default username (cpadmin) or password, see Changing the cluster administrator access credentials.

  2. Set these environment variables:
    
      export NAMESPACE=staging
      export ICS_USER=admin
      export ICS_PASS=<ics_password>
      export CP_REPO_USER=cp
      export CP_REPO_PASS=<entitlement_key>

    Where <ics_password> is the password that you retrieved to access the console and <entitlement_key> is the entitlement key, as described in Obtain your entitlement key.

  3. Create a namespace for the Guardium Data Security Center instance. This namespace must be 10 or fewer characters in length.
    oc create namespace ${NAMESPACE}
    oc project ${NAMESPACE}
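Because the namespace name must be 10 or fewer characters, you may want to validate it before creating the namespace. A small sketch, using the staging name from the earlier export:

```shell
# Sketch: validate the 10-character namespace limit before `oc create namespace`.
NAMESPACE=staging
if [ "${#NAMESPACE}" -le 10 ]; then
  echo "ok: '${NAMESPACE}' is ${#NAMESPACE} characters"
else
  echo "error: '${NAMESPACE}' exceeds 10 characters" >&2
  exit 1
fi
```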
  4. Run the pre-install script. This script sets up secrets and parameters for the Guardium Data Security Center instance.
    export GI_INVENTORY_SETUP=install
    oc ibm-pak launch $CASE_NAME \
    --version $CASE_VERSION \
    --inventory $GI_INVENTORY_SETUP \
    --action pre-install \
    --namespace $NAMESPACE \
    --args "-n ${NAMESPACE} -h <DB worker host> -l true -t false"
    <DB_worker_host> is the worker node name on which you want to host Db2®.
  5. Install the catalog.
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --inventory $GI_INVENTORY_SETUP \
       --action install-catalog \
       --namespace openshift-marketplace \
       --args "--inputDir ${LOCAL_CASE_DIR}" \
       --tolerance 1
    To verify that the catalogs are installed, issue this command:
    oc get pod -n openshift-marketplace
    The output is similar to:
    NAME                                               READY   STATUS    RESTARTS   AGE
    ibm-cloud-databases-redis-operator-catalog-x2rr4   1/1     Running   0          41s
    ibm-db2uoperator-catalog-mzvd7                     1/1     Running   0          73s
    ibm-guardium-insights-operator-catalog-n8qkr       1/1     Running   0          16s
  6. Install the operator.
    oc ibm-pak launch $CASE_NAME \
       --version $CASE_VERSION \
       --inventory $GI_INVENTORY_SETUP \
       --action install-operator \
       --namespace $NAMESPACE \
       --args "--registry cp.icr.io --user ${CP_REPO_USER} --pass ${CP_REPO_PASS} --secret ibm-entitlement-key --inputDir ${LOCAL_CASE_DIR} --tolerance 1"
    To verify that the operators are installed, issue this command:
    oc get pods -n staging
    The output is similar to:
    NAME                                                  READY   STATUS    RESTARTS   AGE
    db2u-day2-ops-controller-manager-5488d5c844-8z568     1/1     Running   0          2m59s
    db2u-operator-manager-5fc886d4bc-mvg98                1/1     Running   0          2m59s
    ibm-cloud-databases-redis-operator-6d668d7b88-p69hm   1/1     Running   0          74s
    mongodb-kubernetes-operator-856bc86746-8vsrg          1/1     Running   0          49s
  7. List the storage classes and verify that the ibmc-file-gold-gid storage class is available:
    oc get storageclass

    The output is similar to:

    NAME                        PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    default                     ibm.io/ibmc-file    Delete          Immediate           false                  49d
    ibmc-block-bronze           ibm.io/ibmc-block   Delete          Immediate           true                   49d
    ibmc-block-custom           ibm.io/ibmc-block   Delete          Immediate           true                   49d
    ibmc-block-retain-bronze    ibm.io/ibmc-block   Retain          Immediate           true                   49d
    ibmc-block-retain-custom    ibm.io/ibmc-block   Retain          Immediate           true                   49d
    ibmc-block-retain-gold      ibm.io/ibmc-block   Retain          Immediate           true                   49d
    ibmc-block-retain-silver    ibm.io/ibmc-block   Retain          Immediate           true                   49d
    ibmc-block-silver           ibm.io/ibmc-block   Delete          Immediate           true                   49d
    ibmc-file-bronze            ibm.io/ibmc-file    Delete          Immediate           false                  49d
    ibmc-file-bronze-gid        ibm.io/ibmc-file    Delete          Immediate           false                  49d
    ibmc-file-custom            ibm.io/ibmc-file    Delete          Immediate           false                  49d
    ibmc-file-gold              ibm.io/ibmc-file    Delete          Immediate           false                  49d
    ibmc-file-gold-gid          ibm.io/ibmc-file    Delete          Immediate           false                  49d
    ibmc-file-retain-bronze     ibm.io/ibmc-file    Retain          Immediate           false                  49d
    ibmc-file-retain-custom     ibm.io/ibmc-file    Retain          Immediate           false                  49d
    ibmc-file-retain-gold       ibm.io/ibmc-file    Retain          Immediate           false                  49d
    ibmc-file-retain-silver     ibm.io/ibmc-file    Retain          Immediate           false                  49d
    ibmc-file-silver            ibm.io/ibmc-file    Delete          Immediate           false                  49d
    ibmc-file-silver-gid        ibm.io/ibmc-file    Delete          Immediate           false                  49d
  8. Create a file.yaml file similar to this example:
    
     apiVersion: gi.ds.isc.ibm.com/v1
     kind: GuardiumDataSecurityCenter
     metadata:
       # name: This must be 10 or less characters
       name: staging
       # Provide the name of the namespace in which you want to install the CR.
       namespace: staging
     spec:
       version: 3.6.0
       license:
         accept: true
         # GDSC Suite license
         licenseType: "L-QABB-9QRLFB"
       guardiumGlobal:
         backupsupport:
           enabled: true
           name: <GI_Backup_PVC>
           storageClassName: managed-nfs-storage
           size: 500Gi
         dev: "false"
         licenseAccept: true
         # The Guardium Data Security Center template size is defined by the size parameter
         size: values-small
         instance:
           ingress:
             # hostName: Change this, ex: staging.apps.gi-devops-ocp46-41.cp.fyre.ibm.com
             hostName: <host_name>
             # domainName: Change this
             domainName: <domain_name>
           ics:
             namespace: ibm-common-services
             registry: common-service
         # storageClassName: Change this to a ReadWriteMany StorageClass
         storageClassName: "ocs-storagecluster-cephfs"
         # storageClassNameRWO: Must be a ReadWriteOnce StorageClass
         storageClassNameRWO: "ocs-storagecluster-ceph-rbd"
       capabilities:
         - name: quantum-safe
           enabled: true
           configurations: {}
         - name: platform
           enabled: true
           configurations:
             connections:
               insightsEnv:
                 FEATURE_STAP_STREAMING: "false"
             dependency-db2:
               image:
                 insightsPullSecret: "ibm-entitlement-key"
               db2:
                 size: 2
                 resources:
                   requests:
                     cpu: "6"
                     memory: "48Gi"
                   limits:
                     cpu: "6"
                     memory: "48Gi"
                 storage:
                 - name: meta
                   spec:
                     storageClassName: "ocs-storagecluster-cephfs"
                     accessModes:
                     - ReadWriteMany
                     resources:
                       requests:
                         storage: "1000Gi"
                   type: create
                 - name: data
                   spec:
                     storageClassName: "ocs-storagecluster-ceph-rbd"
                     accessModes:
                     - ReadWriteOnce
                     resources:
                       requests:
                         storage: "4000Gi"
                   type: template
                 mln:
                   distribution: 0:0
                   total: 2
             dependency-kafka:
               kafka:
                 storage:
                   type: persistent-claim
                   size: 250Gi
                   class: "ocs-storagecluster-ceph-rbd"
               zookeeper:
                 storage:
                   type: persistent-claim
                   size: 20Gi
                   class: "ocs-storagecluster-ceph-rbd"
             mini-snif:
               persistentVolumesClaims:
                 mini-snif-shared:
                   storageClassName: "ocs-storagecluster-cephfs"
             universal-connector-manager:
               persistentVolumesClaims:
                 universal-connector-manager-shared:
                   storageClassName: "ocs-storagecluster-cephfs"
             settings-datasources:
               persistentVolumesClaims:
                 settings-datasources:
                   storageClassName: "ocs-storagecluster-cephfs"
             ticketing:
               persistentVolumesClaims:
                 ticketing-keystore:
                   storageClassName: "ocs-storagecluster-cephfs"
             dependency-s3:
               storageClassName: ocs-storagecluster-ceph-rbd
             dependency-security:
               networkPolicy:
                 egresses:
                   egress-required-allow:
                     egress:
                     - to:
                       - ipBlock:
                           cidr: 0.0.0.0/0
                     - ports:
                       - port: 5353
                         protocol: UDP
                       - port: 5353
                         protocol: TCP
                       - port: 53
                         protocol: UDP
                       - port: 53
                         protocol: TCP
                       - port: 443
                         protocol: UDP
                       - port: 443
                         protocol: TCP
    In this file, replace <host_name> and <domain_name> with your environment's host and domain names. For an IBM Cloud cluster with classic infrastructure, you can set the ReadWriteMany storage class values to ibmc-file-gold-gid and the ReadWriteOnce values to an ibmc-block storage class from your cluster's list, such as ibmc-block-silver.
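The hostName value is typically the instance name prefixed to the cluster's Ingress Subdomain that you copied when you created the cluster. A sketch with a made-up subdomain (the value shown is hypothetical; use your own cluster's subdomain):

```shell
# Made-up subdomain for illustration; use the Ingress Subdomain from your cluster.
INGRESS_SUBDOMAIN=mycluster-dal12-a1b2c3d4.us-south.containers.appdomain.cloud
# Prefix the instance name, following the pattern in the YAML comment above.
HOST_NAME=staging.${INGRESS_SUBDOMAIN}
echo "${HOST_NAME}"
```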

  9. Apply the .yaml file:
    oc apply -f file.yaml

    To check the initial status of the deployment, issue:
    oc get guardiumdatasecuritycenter

    The output is similar to:

    NAME      TYPE      STATUS   REASON        MESSAGE                 DESIRED_VERSION   INSTALLED_VERSION
    staging   Running   True     Reconciling   Starting to Reconcile   3.6.0
    Tip: The displayed versions in the output vary based on the Guardium Data Security Center version that you want to install and the current version on your system.
  10. Wait for approximately one hour and then validate the Guardium Data Security Center installation:
    oc get guardiumdatasecuritycenter

    After completion, the output is similar to:
    NAME      TYPE    STATUS   REASON      MESSAGE                    DESIRED_VERSION   INSTALLED_VERSION
    staging   Ready   True     Completed   Completed Reconciliation   3.6.0            3.6.0
    Tip: The displayed versions in the output vary based on the Guardium Data Security Center version that you want to install and the current version on your system.

    Then issue this command to verify that the persistent volume claims (PVCs) have a Bound status:

    oc get pvc

    The output is similar to:

    NAME                                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    c-gi-sample-db2-meta                                      Bound    pvc-2f13e641-5aae-40a7-9b0d-95f4fc6a8143   1000Gi     RWX            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-db2-db2u-0                               Bound    pvc-443c804f-7f70-4693-b7ed-ba1013930b4a   4000Gi     RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-db2-db2u-1                               Bound    pvc-27e3d353-489a-43b6-8ae9-d500d9d91cf4   4000Gi     RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-m-0                                Bound    pvc-2a803a06-5eae-412f-81b1-2767d8e36e85   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-m-1                                Bound    pvc-ff0ca61c-1209-4d20-96fc-78887abad27d   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-s-0                                Bound    pvc-b1ddb497-d13d-48e2-bba2-f5141def9484   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-c-gi-sample-redis-s-1                                Bound    pvc-688772f5-ebb9-4619-bf70-b5ee561ab158   20Gi       RWO            ocs-storagecluster-cephfs     31d
    data-gi-sample-kafka-0                                    Bound    pvc-32339849-adb6-4677-b1bf-998643b5c4d3   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-gi-sample-kafka-1                                    Bound    pvc-0c479b89-4c62-4754-a5af-12c885afe553   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-gi-sample-zookeeper-0                                Bound    pvc-e436f03b-fe40-4b79-81a3-fb6c76a7a953   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-gi-sample-zookeeper-1                                Bound    pvc-8bb4ac61-332d-4500-948a-b1858f6cd555   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-kafka-0                                      Bound    pvc-56664beb-48c7-49fd-81b9-5557c4c1fbb7   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-kafka-1                                      Bound    pvc-dd36280e-364b-4c74-b225-0b72fc1e3af7   250Gi      RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-zookeeper-0                                  Bound    pvc-3b4edb7e-fad9-4ff9-849d-02ed38219329   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-staging-zookeeper-1                                  Bound    pvc-9ccc3feb-1ae9-4a9d-a970-5d374c8ee2da   20Gi       RWO            ocs-storagecluster-ceph-rbd   31d
    data-volume-gi-sample-mongodb-0                           Bound    pvc-275b737f-a380-41e1-a232-3389599c2448   100Gi      RWO            ocs-storagecluster-cephfs     31d
    data-volume-gi-sample-mongodb-1                           Bound    pvc-dd8b7f46-8b9b-4099-b0bd-b1f55652b2c9   100Gi      RWO            ocs-storagecluster-cephfs     31d
    gi-sampledjm6enctbcion3yyrvfum9-mini-snif-shared          Bound    pvc-f9becd16-7158-41f9-b1f1-93308337d2b5   50Gi       RWX            ocs-storagecluster-cephfs     31d
    logs-volume-gi-sample-mongodb-0                           Bound    pvc-43d29f78-4559-44e2-8927-ab4895807aee   100Gi      RWO            ocs-storagecluster-cephfs     31d
    logs-volume-gi-sample-mongodb-1                           Bound    pvc-256617cc-ec37-4581-8145-f6bbe0b162c6   100Gi      RWO            ocs-storagecluster-cephfs     31d
    mini-snif-i-gi-sampledjm6enctbcion3yyrvfum9-mini-snif-0   Bound    pvc-37e9748f-2306-4abe-b273-2de4c988a326   50Gi       RWO            ocs-storagecluster-cephfs     31d
    settings-datasources                                      Bound    pvc-ec2b30a6-0e10-4931-84d2-59dba5799d10   50Mi       RWX            ocs-storagecluster-cephfs     31d
    ticketing-keystore                                        Bound    pvc-4ca2e152-3767-4956-90af-c1a15adde109   2Mi        RWX            ocs-storagecluster-cephfs     31d
    universal-connector-manager-shared                        Bound    pvc-a2016cf2-df8a-4a6a-9de9-bcce9cdb1704   50Gi       RWX            ocs-storagecluster-cephfs     31d

    Finally, verify that you can log in to the Guardium Data Security Center user interface.