Running the all-in-one script

Automate the installation of Guardium® Data Security Center by running the all-in-one installation script.

Procedure

  1. Define the location of the custom resource (CR) file:
    export LOCAL_INSTALL_DIR=<CR file location>
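
    For example, using a hypothetical /tmp/gdsc-install directory as the CR file location:

    export LOCAL_INSTALL_DIR=/tmp/gdsc-install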

    Then run the following command to start the installation of Guardium Data Security Center and its dependencies. The command takes approximately 15 to 20 minutes to complete.

    cd $LOCAL_CASE_DIR/$CASE_NAME/inventory/automateInstall/files
    oc ibm-pak launch $CASE_NAME \
    --version $CASE_VERSION \
    --namespace ${NAMESPACE} \
    --inventory automateInstall \
    --action autoInstall \
    --tolerance 1 | tee -a ${LOCAL_INSTALL_DIR}/installation.log
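
    To monitor progress from a second terminal, you can follow the same log file that the command appends to (this assumes the LOCAL_INSTALL_DIR value that you exported in step 1):

    tail -f ${LOCAL_INSTALL_DIR}/installation.log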
  2. A sample Guardium Data Security Center CR file is available in the following location: $LOCAL_CASE_DIR/ibm-guardium-data-security-center/inventory/guardiumdatasecuritycenterOperator/files/samples/gi-custom-AWS.yaml.
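    To inspect the sample CR before you respond to the installer prompt, you can print it, for example:

    cat $LOCAL_CASE_DIR/ibm-guardium-data-security-center/inventory/guardiumdatasecuritycenterOperator/files/samples/gi-custom-AWS.yaml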
    When you are prompted with "If you want to continue with the provided yaml file for Guardium Data Security Center CR creation (yes/no)?", you have two options.
    1. If you enter yes, the system output is similar to the following message.
      APPLYING
      guardiumdatasecuritycenter.gi.ds.isc.ibm.com/staging created
      -----IBM Security Guardium Data Security Center Auto-Installation Successfully Completed----------
      
    2. If you enter no, you can install Guardium Data Security Center manually by creating a .yaml file.
      The following example shows the contents of a sample .yaml file.
      apiVersion: gi.ds.isc.ibm.com/v1
      kind: GuardiumInsights
      metadata:
        # name: This must be 10 or fewer characters.
        name: staging
        # Provide the name of the namespace in which you want to install the CR.
        namespace: staging
      spec:
        version: 3.4.0
        license:
          accept: true
          licenseType: "L-QABB-9QRLFB"
        connections:
          insightsEnv:
            FEATURE_STAP_STREAMING: "false"
        guardiumInsightsGlobal:
          backupsupport:
            enabled: true
            name: <GI_Backup_PVC>
            storageClassName: managed-nfs-storage
            size: 500Gi
          guardiumInsightsVersion: 3.4.0
          dev: "false"
          licenseAccept: true
          size: values-small
          image:
            insightsPullSecret: ibm-entitlement-key
            repository: cp.icr.io/cp/ibm-guardium-data-security-center
          insights:
            ingress:
              hostName: staging.apps.gi-ocp47.guardium-insights.com
              domainName: api.gi-ocp47.guardium-insights.com
            ics:
              namespace: ibm-common-services
              registry: common-service
          storageClassName: ocs-storagecluster-cephfs
        dependency-db2:
          image:
            insightsPullSecret: ibm-entitlement-key
          db2:
            size: 2
            resources:
              requests:
                cpu: "6"
                memory: "48Gi"
              limits:
                cpu: "6"
                memory: "48Gi"
            storage:
            - name: meta
              spec:
                storageClassName: "ocs-storagecluster-cephfs"
                accessModes:
                - ReadWriteMany
                resources:
                  requests:
                    storage: "1000Gi"
              type: create
            - name: data
              spec:
                storageClassName: "ocs-storagecluster-cephfs"
                accessModes:
                - ReadWriteOnce
                resources:
                  requests:
                    storage: "4000Gi"
              type: template
            mln:
              distribution: 0:0
              total: 2
        dependency-kafka:
          kafka:
            storage:
              type: persistent-claim
              size: 250Gi
              class: "ocs-storagecluster-ceph-rbd"
          zookeeper:
            storage:
              type: persistent-claim
              size: 20Gi
              class: "ocs-storagecluster-ceph-rbd"
        mini-snif:
          persistentVolumesClaims:
            mini-snif-shared:
              storageClassName: "ocs-storagecluster-cephfs"
        universal-connector-manager:
          persistentVolumesClaims:
            universal-connector-manager-shared:
              storageClassName: "ocs-storagecluster-cephfs"
        settings-datasources:
          persistentVolumesClaims:
            settings-datasources:
              storageClassName: "ocs-storagecluster-cephfs"
        ticketing:
          persistentVolumesClaims:
            ticketing-keystore:
              storageClassName: "ocs-storagecluster-cephfs"
      
      After you create the .yaml file, apply it:
      
      oc apply -f <filename.yaml>
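
      To confirm that the CR was created, you can query it in the target namespace (a quick check that assumes the staging namespace from the sample file):

      oc get guardiumdatasecuritycenter.gi.ds.isc.ibm.com -n staging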
      

What to do next

Verify that the installation is successful.
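
One way to check, assuming the staging namespace that is used in the examples in this procedure, is to confirm that the pods in the namespace are running and to scan the installation log for errors:

    oc get pods -n staging
    grep -i error ${LOCAL_INSTALL_DIR}/installation.log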