Automated installation on Amazon Web Services (AWS)

Creating the Red Hat OpenShift cluster on AWS

Procedure

  1. Generate an SSH private key and add it to the agent:
    1. Create or use an SSH key that is configured for authentication without a password.
      For example, on a computer that uses a Linux® operating system, run this command to create this type of SSH key:
      # ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
    2. Start the ssh-agent process as a background task:
      # eval "$(ssh-agent -s)"

      The following example shows a successful output:

      Agent pid 31874
    3. Add your SSH private key to the ssh-agent:
      # ssh-add <path>/<file_name>
  2. Obtain the installation program:
    1. Access the Red Hat® OpenShift Cluster Manager.
    2. Select your installation type and then obtain a pull secret.
    3. Download the OpenShift installation program for your operating system. For more information, see https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/.
    4. Extract the installation program.
      For example, on a computer that uses a Linux operating system, run this command:
      # tar xvf openshift-install-linux.tar.gz
  3. Create the install-config.yaml file.
    # ./openshift-install create install-config --dir=<Directory>

    When you create the file, use these parameters:

    • SSH Public Key: the public key that you created in step 1 (for example, /Users/user/.ssh/id_ed25519.pub)
    • Platform: AWS
    • For credentials:
      INFO Credentials loaded from the "default" profile in file "/Users/user/.aws/credentials"
    • Region: us-west-1
    • Base Domain: guardiuminsights.com
    • Cluster Name: sys-ins-con
    • Paste the pull secret that you obtained in this step.

    View the file (issue the cat install-config.yaml command) and edit it according to the System requirements and prerequisites:

    Note: This yaml file example is a test configuration only. For production environments, consult the Hardware cluster requirements to determine how many worker nodes you need.
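    With the example values above, the resulting install-config.yaml looks roughly like the following sketch. The replica counts are placeholder assumptions for a test setup (size them from the Hardware cluster requirements), and you substitute your real pull secret and public key:

    ```yaml
    apiVersion: v1
    baseDomain: guardiuminsights.com
    metadata:
      name: sys-ins-con
    compute:
    - name: worker
      platform: {}
      replicas: 3          # placeholder; size from Hardware cluster requirements
    controlPlane:
      name: master
      platform: {}
      replicas: 3          # placeholder
    platform:
      aws:
        region: us-west-1
    pullSecret: '...'       # the pull secret that you obtained in this step
    sshKey: |
      ssh-ed25519 AAAA...   # the public key from step 1
    ```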
  4. Deploy the cluster:
    $ ./openshift-install create cluster --dir=<installation_directory> --log-level=info

    The following example shows a successful output:

    # ./openshift-install create cluster --dir=/Users/myusername/ocp-4.8
    INFO Credentials loaded from the "default" profile in file "/Users/myusername/.aws/credentials" 
    INFO Consuming Install Config from target directory 
    INFO Creating infrastructure resources...         
    INFO Waiting up to 20m0s for the Kubernetes API at https://api.gi-aws48.guardium-insights.com:6443... 
    INFO API v1.21.1+051ac4f up                       
    INFO Waiting up to 30m0s for bootstrapping to complete... 
    INFO Destroying the bootstrap resources...        
    INFO Waiting up to 40m0s for the cluster at https://api.gi-aws48.guardium-insights.com:6443 to initialize... 
    INFO Waiting up to 10m0s for the openshift-console route to be created... 
    INFO Install complete!                            
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/myusername/ocp-4.8/auth/kubeconfig' 
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.gi-aws48.guardium-insights.com 
    INFO Login to the console with user: "kubeadmin", and password: "CqD7a-Q3Ztk-DvJa-VRkcZ" 
    INFO Time elapsed: 37m45s

What to do next

For more information, see Installing a cluster on AWS and Configuring an AWS account.

Installing OpenShift Data Foundation (previously OpenShift Container Storage) storage class

Procedure

  1. In the web console, select Operators > OperatorHub.
  2. Search for the OpenShift Data Foundation operator and click Install.
  3. On the Install Operator page, these options are selected by default:
    1. Update Channel: stable-4.14
    2. Installation Mode: A specific namespace on the cluster
    3. Installed Namespace: Operator recommended namespace openshift-storage
      Note: If the openshift-storage namespace does not exist, it is created during the operator installation.
    4. Select an Approval Strategy: Automatic or Manual.
    5. Click Install.

    To verify the installation, confirm that a green checkmark is in the Status column for the OpenShift Container Storage operator:

  4. To create the OpenShift Container Storage cluster, follow the instructions in Deploy OpenShift Data Foundation using dynamic storage devices.

    Select only worker nodes on which you do not want to run Db2®, and select at least three worker nodes.

    To verify that the storage class is installed, issue the oc get sc command. The following example shows a successful output:

    NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp2 (default)                 kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   3d18h
    gp2-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3d18h
    ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   3d
    ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   3d
    openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  3d

Downloading the Guardium Insights CASE file and setting up your environment for dependencies

Procedure

  1. Export the environment variables and create local directories.
    export CASE_NAME=ibm-guardium-insights 
    export CASE_VERSION=<CASE VERSION> 
    export LOCAL_CASE_DIR=$HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION 
    export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
    Set CASE_VERSION to the CASE bundle version that corresponds to the version of Guardium Insights that you are deploying.
    For example, Guardium Insights 3.5.0 requires CASE_VERSION 2.5.0. For more information, see Container Application Software for Enterprises (CASE) version support.
  2. Save the CASE bundle locally.
    oc ibm-pak get $CASE_NAME \
    --version $CASE_VERSION \
    --skip-verify
    
    Important: If you encounter the following error, you may experience a temporary communication problem with the remote repository. Wait a few minutes and try again.
    No Case registries found for case ibm-cert-manager->=1.3.0 <1.3.1.tgz with the given repository URL information
    FAILED
  3. If you are using the all-in-one script to install Guardium Insights, extract the CASE bundle to your local directory.
    tar -xvf $LOCAL_CASE_DIR/$CASE_ARCHIVE -C $LOCAL_CASE_DIR

Editing the values.conf file to install IBM® Common Services and Guardium Insights

About this task

Locate $LOCAL_CASE_DIR/ibm-guardium-insights/inventory/automateInstall/files/values.conf and edit the values.conf file according to Configuration file parameters for all-in-one installation.

Running the all-in-one script

About this task

Define the location of the custom resource (CR) file by running the following command:
export LOCAL_INSTALL_DIR=<CR file location>
Then, run the following commands to start the installation process of Guardium Insights and its dependencies:
cd $LOCAL_CASE_DIR/$CASE_NAME/inventory/automateInstall/files
oc ibm-pak launch $CASE_NAME \
--version $CASE_VERSION \
--namespace ${NAMESPACE} \
--inventory automateInstall \
--action autoInstall \
--tolerance 1 | tee -a ${LOCAL_INSTALL_DIR}/installation.log

This process takes approximately 15 to 20 minutes to complete.

See this sample Guardium Insights CR file:

$LOCAL_CASE_DIR/ibm-guardium-insights/inventory/guardiumInsightsOperator/files/samples/gi-custom-AWS.yaml

When prompted with "If you want to continue with the provided yaml file for Guardium Insights CR creation (yes/no)?", you have two options:

  • Enter yes to create the CR file. The following example shows a successful output:
    APPLYING
    guardiuminsights.gi.ds.isc.ibm.com/staging created
    -----IBM Security Guardium Insights Auto-Installation Successfully Completed----------
  • If you enter no, you can install Guardium Insights manually by creating a .yaml file. For the storageClassName, use the RWX/FileSystem storageClassName.
    apiVersion: gi.ds.isc.ibm.com/v1
    kind: GuardiumInsights
    metadata:
      #name: This must be 10 or fewer lowercase characters
      name: staging
      #Provide the name of the namespace in which you want to install the CR.
      namespace: staging
    spec:
      version: 3.4.0
      license:
        accept: true
        licenseType: "L-YRPR-ZV3BA6"
      connections:
         insightsEnv:
           FEATURE_STAP_STREAMING: "false"
      guardiumInsightsGlobal:
        backupsupport:
           enabled: true
           name: <GI_Backup_PVC>
           storageClassName: ocs-storagecluster-cephfs
           size: 500Gi
        dev: "false"
        licenseAccept: true
        size: values-small
        image:
          insightsPullSecret: ibm-entitlement-key 
          repository: cp.icr.io/cp/ibm-guardium-insights
        insights:
          ingress:
            hostName: staging.apps.gi-ocp47.guardium-insights.com
            domainName: api.gi-ocp47.guardium-insights.com
          ics:
            namespace: ibm-common-services
            registry: common-service
        storageClassName: ocs-storagecluster-cephfs
        #storageClassNameRWO: Must be a ReadWriteOnce StorageClass
        storageClassNameRWO: "ocs-storagecluster-ceph-rbd"
      dependency-db2:
        image:
          insightsPullSecret: ibm-entitlement-key
        db2:
         size: 2
         resources:
           requests:
             cpu: "6"
             memory: "24Gi"
           limits:
             cpu: "6"
             memory: "24Gi"
         storage:
         - name: meta
           spec:
             storageClassName: "ocs-storagecluster-cephfs"
             accessModes:
             - ReadWriteMany
             resources:
               requests:
                 storage: "1000Gi"
           type: create
         - name: data
           spec:
             storageClassName: "ocs-storagecluster-cephfs"
             accessModes:
             - ReadWriteOnce
             resources:
               requests:
                 storage: "4000Gi"
           type: template
         mln:
           distribution: 0:0
           total: 2
      dependency-kafka:
        kafka:
          storage:
            type: persistent-claim
            size: 250Gi
            class: "ocs-storagecluster-ceph-rbd"
        zookeeper:
          storage:
            type: persistent-claim
            size: 20Gi
            class: "ocs-storagecluster-ceph-rbd"
      mini-snif:
        persistentVolumesClaims:
          mini-snif-shared:
            storageClassName: "ocs-storagecluster-cephfs"
      universal-connector-manager:
        persistentVolumesClaims:
          universal-connector-manager-shared:
            storageClassName: "ocs-storagecluster-cephfs"
      settings-datasources:
        persistentVolumesClaims:
          settings-datasources:
            storageClassName: "ocs-storagecluster-cephfs"
      ticketing:
        persistentVolumesClaims:
          ticketing-keystore:
            storageClassName: "ocs-storagecluster-cephfs"

    After you create the yaml file, apply it:

    oc apply -f <filename.yaml>

Verifying the installation

About this task

After you install Guardium Insights, run this command:

oc get guardiuminsights -w

The output is similar to the following example; watch until the TYPE column shows Ready and the REASON column shows Complete:

NAME      TYPE      STATUS   REASON        MESSAGE                 DESIRED_VERSION   INSTALLED_VERSION
staging   Running   True     Reconciling   Starting to Reconcile   3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Secret creation completed   3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Instantiated Redis Sentinel CR   3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Instantiated MongoDB CR          3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Instantiated Kafka CR            3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Instantiated DB2 CR              3.2.0             
staging   Failure   True     Failed                           Failed to gather information about Certificate(s) even after waiting for 120 seconds   3.2.0             
staging   Running   True     Running                          Running reconciliation                                                                 3.2.0             
staging   Running   True     Reconciling                      Starting to Reconcile                                                                  3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Checking for Kafka CR Success                                                          3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Validating Kafka Connection Success                                                    3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Checking for Redis CR Success                                                          3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Validating Redis Connection Success                                                    3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Checking for MongoDB CR Success                                                        3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Validating MongoDB Connection Success                                                  3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Checking for DB2 CR Success                                                            3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Validating DB2 Connection Success                                                      3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Checking for initial Tenant creation                                                   3.2.0             
staging   Running   True     GuardiumInsightsInstallRunning   Checking for initial Tenant creation                                                   3.2.0             
staging   Running   True     Reconciling                      Checking GI Pods/Deployments/Statefulsets are running                                  3.2.0             
staging   Running   True     Reconciling                      Checking GI Pods/Deployments/Statefulsets are running                                  3.2.0             
staging   Running   True     Reconciling                      Checking GI Pods/Deployments/Statefulsets are running                                  3.2.0             3.2.0
staging   Ready     True     Complete                         Completed Reconciliation                                                               3.2.0             3.2.0
staging   Ready     True     Complete                         Completed Reconciliation                                                               3.2.0             3.2.0
staging   Ready     True     Complete                         Completed Reconciliation                                                               3.2.0             3.2.0

Next, run this command:

oc get guardiuminsights

The following example shows a successful output:

NAME      TYPE    STATUS   REASON     MESSAGE                    DESIRED_VERSION   INSTALLED_VERSION
staging   Ready   True     Complete   Completed Reconciliation   3.3.0             3.3.0