Installing IBM Guardium Cryptography Manager 1.0.0.0

You can install IBM Guardium® Cryptography Manager 1.0.0.0 using Helm.

Before you begin

Before you start installing Guardium Cryptography Manager, ensure that the following prerequisites are in place:
  • System requirements and prerequisites
  • Registration for a Red Hat subscription
  • Helm version 3.x
  • Rook for Ceph storage
  • Cluster storage in a healthy state. Run the following command to verify:
    kubectl get cephcluster -n rook-ceph

    The HEALTH column should show a status of HEALTH_OK, as in the following example output.

    NAME         DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
    my-cluster   /var/lib/rook     1          2m39s   Ready   Cluster created successfully   HEALTH_OK              67835dbb-1f29-48c9-b880-239760d9469a
  • For successful connectivity and operation of the deployed application, ensure that the following ports are explicitly allowed in your firewall or network access rules:
    • The TCP port of the OIDC server.
    • TCP port 31443 for the IAG host.
    TCP port 31443 and the TCP port of the OIDC server must be open for inbound and outbound traffic from the relevant client or integration systems, depending on your deployment topology. Ensure that the rules allow access from the required IP address ranges or systems, as required by your internal security policies. (A connectivity check example follows this list.)
Note: By default (externalOidc: false in the global-values.yaml file), Keycloak is installed as the internal OIDC server. If you use the internal Keycloak as the OIDC server, the default OIDC port is 30443. For production use, it is recommended to configure an external OIDC server; use the internal Keycloak only for development and test environments.
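
For example, you can check from a client system that the required ports are reachable before you start the installation. This is a minimal sketch, assuming the nc utility is available on the client; replace the placeholder hosts and the OIDC port with the values from your deployment:

# Application port (node IP for Kubernetes, or the IAG route host for OCP)
nc -vz <app-host> 31443
# OIDC server port (30443 for the internal Keycloak, or your external OIDC server port)
nc -vz <oidc-host> 30443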

About this task

You can install the Guardium Cryptography Manager application through Helm on a Kubernetes cluster or OpenShift Container Platform (OCP) cluster.

Procedure

  1. Download the gcm-1.0.0.0.tgz file that contains the Helm charts from IBM® Passport Advantage®.
  2. Run the following command to extract the .tgz file that contains Helm files:
    tar -xzvf gcm-1.0.0.0.tgz
    The gcm-1.0.0.0.tgz file is extracted into the helm-charts folder.
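    You can also list the contents of the archive to confirm that it downloaded intact. This is a minimal sketch; the exact layout of the charts inside the archive may differ:

    # Preview the first entries of the archive without extracting it
    tar -tzf gcm-1.0.0.0.tgz | head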
  3. If you are installing Guardium Cryptography Manager 1.0.0.0, edit the fields in the global-values.yaml file:
    global:
      # Specify the container orchestration platform in use: either "ocp" (OpenShift Container Platform) or "k8s" (Kubernetes).
      platform: ocp 
      node1:
        # Kubernetes: Hostname or fully qualified domain name (FQDN) of the master node in the cluster for Kubernetes.
        # Openshift: Provide route DNS for the openshift cluster platform.
        # (Note: Use double quotes when passing the value. For example: name: "*.apps.final.cp.example.com")
        name: <name>
        # Kubernetes: IP address of the master node; used for network communication.
        # Openshift: IP address of the Infra node; used for network communication.
        ip: <ip>
      app:
        # Address for accessing the application
        # Kubernetes: Provide the same IP used above in <ip>, e.g., 9.30.199.196:31443. (Note: The port is always 31443.)
        # Openshift: Hostname such as "<app_name>.<route_dns>". The hostname should always contain the route DNS of the cluster used in the "name" variable above, e.g., host: gcmapp.apps.final.cp.example.com
        host: <ip>:31443
      oidcServer:
        # Set to "true" if you are using an external OpenID Connect (OIDC) server for authentication; "false" otherwise.
        externalOidc: false
        # Address of the OIDC server. 
        # If using an external OpenID Connect (OIDC) server, specify <IP>:<Port>, e.g., 9.30.199.196:8443, or provide a hostname.
        # If using the internal OpenID Connect (OIDC) server -
        #   Kubernetes: Provide the same IP used above in <ip>, e.g., 9.30.199.196:30443. (Note: The port is always 30443.)
        #   Openshift: Hostname such as "<oidc_name>.<route_dns>". The hostname should always contain the route DNS of the cluster used in the "name" variable above, e.g., host: oidc.apps.final.cp.example.com
        host: <oidc-host>
        # The following values must be configured if you enable an external OIDC server.
        # Enable ("true") or disable ("false") verification of users against the OIDC server during authentication.
        usersOidcVerification: false
        # URL for OIDC discovery endpoint (metadata URL to retrieve OIDC configuration).
        discoveryEndpoint:
        # URL to redirect users for OIDC authorization requests.
        authorizationEndpoint:
        # URL used to introspect and validate OIDC tokens.
        introspectEndpoint:
        # The security realm or domain associated with your OIDC provider.
        realm:
        # The "gem_tenant_creation" scope is required by the GCM app. Please ensure this scope is added to your OIDC server.
        # Do not modify this scope.
        gcmScope: gem_tenant_creation
      oidc:
        # Default username used for OIDC authentication to the application.
        # When using an external OIDC server, update the values below.
        userName: gcmadmin
        # The first name of the admin user created during OIDC setup.
        firstName: gcm
        # The last name of the admin user created during OIDC setup.
        lastName: admin
        # The email address associated with the admin user.
        emailId: gcmadmin@gcm.local
        # Default password for the gcm admin user.
        password: gcmAdmin@123
      gcmApp:
        # User ID to be used by the GCM application for administrative or API access.
        # When using an external OIDC server, use the same email address that is associated with the admin user.
        userId: gcmadmin@gcm.local
        # Name of the organization that owns the application deployment.
        tenantName: IBM
      image:
        # Container image repository path where the application images are hosted.
        # (Note: Do not change)
        repository: icr.io/guardium-cryptomgr
        # Branch name of the source or image tag stream.
        # (Note: Do not change)
        branch: release
        # Specific image tag to deploy or the version of application.
        tag: 1.0.0.0
        # Helper image associated with the deployment process.
        # (Note: Do not change)
        helper: helper
      # Kubernetes namespace where the application components will be deployed.
      # (Note: Do not change)
      namespace:
        app: gcmapp
        dbs: gcmapp
        iag: gcmapp
      # Middleware components endpoint and port details.
      postgres: 
        endpoint: postgres
        port: 5432
        dbName: gem_postgres
      kafka:
        endpoint: kafka
        port: 9092          
      mongodb:
        endpoint: mongodb
        port: 27017
        dbName: tenant_manager
      redis:
        port: 6379
        endpoint: redis
      persistence:
        app:
          storageClassEnabled: true
          # StorageClass name to use for persistent volumes, rook-cephfs for Ceph file storage.
          storageClass: rook-cephfs
          # Size of persistent storage volume for application data. Increase at the time of installation as per need.
          size: 25Gi
        dbs:
          storageClassEnabled: true
          # StorageClass used for database persistent volumes, rook-ceph-block for block storage.
          storageClass: rook-ceph-block
      # The images are hosted in a public namespace of the Container Registry, and no authentication is required to pull them. Do not modify this value.
      imagePullSecrets:
        name: icr-secret
      # Number of pod replicas for the GCM application services.
      gcmReplicas:
        cpm: 1
        auditmgmt: 1
        integrationmgr: 1
        notification: 1
        swagger: 1
        scheduler: 1
        policy: 1
        policyrisk: 1
        asset: 1
        discovery: 1
        usermgmt: 1
        syscerts: 1
        imcui: 1
        iag: 1

    Refer to the following table to update the global values:

    Table 1. Global values for installation
    Key Description Example value Mandatory or Optional
    global.platform Defines the container orchestration platform
    • Kubernetes value: k8s
    • OCP value: ocp
    • ocp
    or
    • k8s
    Mandatory
    global.node1.name Cluster-level identifier
    • Kubernetes value: FQDN or hostname of the master node
    • OCP value: Route DNS for the OpenShift cluster within quotes (" ")
    • master-node01
    or
    • "*.apps.final.cp.example.com"
    Mandatory
    global.node1.ip IP address to access the network
    • Kubernetes value: IP address of the master node
    • OCP value: IP address of the infrastructure node
    • 9.30.199.196
    or
    • 9.30.199.220
    Mandatory
    global.app.host The address or hostname used to access the Guardium Cryptography Manager application
    • Kubernetes value: Use the same IP as node.ip with port 31443
    • OCP value: Provide hostname in the format <app_name>.<route_dns>
    • 9.30.199.196:31443
    or
    • gcmapp.apps.final.cp.example.com
    Mandatory
    global.oidcServer.externalOidc Indicates whether an external OIDC server is used for authentication
    • External OIDC server value: true
    • Internal OIDC server (Keycloak) value: false
    true or false Mandatory
    global.oidcServer.host OIDC server hostname or IP address and port
    • External OIDC value: <IP address>:<Port> or hostname
    • Internal OIDC (Kubernetes) value: same IP as node1.ip with port 30443
    • Internal OIDC (OCP) value: <oidc_name>.<route_dns>
    • 9.30.199.196:30443
    or
    • oidc.apps.final.cp.example.com
    Mandatory
    global.oidcServer.usersOidcVerification Enable (true) or disable (false) user verification with the OIDC server. true or false Optional (Provide if global.oidcServer.externalOidc is set true)
    global.oidcServer.discoveryEndpoint URL for OIDC discovery metadata https://oidc.example.com/.well-known/openid-configuration Optional (Provide if global.oidcServer.externalOidc is set to true)
    global.oidcServer.authorizationEndpoint URL for OIDC authorization redirect https://oidc.example.com/auth Optional (Provide if global.oidcServer.externalOidc is set true)
    global.oidcServer.introspectEndpoint URL for OIDC token introspection https://oidc.example.com/introspect Optional (Provide if global.oidcServer.externalOidc is set true)
    global.oidcServer.realm Security realm or domain (if applicable) master or realm01 Optional (Provide if global.oidcServer.externalOidc is set true)
    global.oidcServer.gcmScope OIDC scope specific to Guardium Cryptography Manager. Do not modify. gem_tenant_creation Mandatory
    global.oidc.userName Default username for OIDC authentication gcmadmin Mandatory
    global.oidc.firstName Administrator user’s first name gcm Mandatory
    global.oidc.lastName Administrator user’s last name admin Mandatory
    global.oidc.emailId Administrator user’s email gcmadmin@gcm.local Mandatory
    global.oidc.password Default administrator password gcmAdmin@123 Mandatory
    global.gcmApp.userId User ID or email for API or administrator access gcmadmin@gcm.local Mandatory
    global.gcmApp.tenantName Tenant or organization name IBM Mandatory
    global.image.repository Image repository location. Do not modify. icr.io/guardium-cryptomgr Mandatory
    global.image.branch Branch or image tag stream. Do not modify. release Mandatory
    global.image.tag Image tag or version of the application to deploy 1.0.0.0 Mandatory
    global.image.helper Helper image used during deployment. Do not modify. helper Mandatory
    global.namespace.app Namespace for application components gcmapp Mandatory
    global.namespace.dbs and global.namespace.iag Namespaces for the database and IAG components gcmapp Mandatory
    global.postgres.endpoint, global.postgres.port, and global.postgres.dbName PostgreSQL connection details postgres, 5432, and gem_postgres Mandatory
    global.kafka.endpoint and global.kafka.port Kafka broker and port kafka and 9092 Mandatory
    global.mongodb.endpoint, global.mongodb.port, and global.mongodb.dbName MongoDB connection details mongodb, 27017, and tenant_manager Mandatory
    global.redis.endpoint and global.redis.port Redis cache endpoint redis and 6379 Mandatory
    global.persistence.app.storageClassEnabled Enable custom StorageClass for application data. true or false Mandatory
    global.persistence.app.storageClass StorageClass name (Ceph FS) rook-cephfs Mandatory
    global.persistence.app.size Persistent Volume Claim (PVC) size for app data 25Gi Mandatory
    global.persistence.dbs.storageClassEnabled Enable custom StorageClass for databases true or false Mandatory
    global.persistence.dbs.storageClass StorageClass for databases (block storage) rook-ceph-block Mandatory
    global.imagePullSecrets.name Name of secret to pull container images. Do not modify. icr-secret Mandatory

    Specify the number of pod replicas for each microservice under global.gcmReplicas. By default, each service has one replica. You can increase the replica counts for higher availability.
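
    If you prefer to set these values from the command line instead of editing the file in a text editor, you can use yq (one of the tools that the installer also provides). This is a minimal sketch with example values for a Kubernetes cluster; the path to global-values.yaml inside the extracted helm-charts folder is an assumption, so adjust it to match your layout:

    # Update selected global values in place (example values only)
    yq -i '.global.platform = "k8s"' helm-charts/global-values.yaml
    yq -i '.global.node1.name = "master-node01"' helm-charts/global-values.yaml
    yq -i '.global.node1.ip = "9.30.199.196"' helm-charts/global-values.yaml
    yq -i '.global.app.host = "9.30.199.196:31443"' helm-charts/global-values.yaml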

  4. If you are deploying Helm for the first time or into a fresh or cleaned-up cluster, run the following commands as a root user, and then enter yes to accept the license:
    cd helm-charts
    ./installer.sh install
    Figure 1. Accepting the license after the prompt (output of the installation script prompting for license acceptance)
    The installation script first checks for all required prerequisites. If any prerequisites are missing, it installs them system-wide in /usr/local/bin (including Helm, kubectl, oc, and yq). The script also creates the necessary secrets and then installs all required services. After the services are installed, you can verify that the containers are running successfully, as shown in the example that follows.
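
    For example, you can watch the pods in the application namespace until they are all running. This is a minimal sketch, assuming the default gcmapp namespace from global-values.yaml:

    # Watch the GCM pods until they report Running and Ready; press Ctrl+C to stop
    kubectl get pods --namespace gcmapp -w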

    If your environment does not allow root user access, you must install these tools manually before you use the Helm chart; a sketch of one manual installation approach follows Table 2.

    Table 2. Prerequisite tools for Helm chart
    Tool name Required Version Download Link Verification Command
    Helm 3.13.x https://helm.sh/docs/intro/install/ helm version
    kubectl 1.33 or 1.34 https://kubernetes.io/docs/tasks/tools/ kubectl version --client
    OpenShift Command Line Interface (CLI) 4.18.x https://mirror.openshift.com/pub/openshift-v4/clients/ocp/ oc version
    YQ CLI Most recent version https://github.com/mikefarah/yq/releases yq --version
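
    The following sketch shows one way to install most of these tools manually into a directory on your PATH as a non-root user. The $HOME/bin target directory, the Linux amd64 platform, and the specific versions are assumptions; check Table 2 for the supported versions, and download the oc CLI from the mirror link in the table:

    # Install kubectl, Helm, and yq into $HOME/bin (assumed to be on PATH)
    mkdir -p "$HOME/bin"

    # kubectl (use a version listed in Table 2)
    curl -LO https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl
    install -m 0755 kubectl "$HOME/bin/kubectl"

    # Helm 3.13.x from the official release site
    curl -fsSL https://get.helm.sh/helm-v3.13.3-linux-amd64.tar.gz | tar -xz
    install -m 0755 linux-amd64/helm "$HOME/bin/helm"

    # yq (latest release)
    curl -fsSL -o yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
    install -m 0755 yq "$HOME/bin/yq"

    # Verify the installations with the commands from Table 2
    helm version && kubectl version --client && yq --version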

Results

You successfully installed Guardium Cryptography Manager, which uses pre-hardened images from trusted sources. The images are deployed on container platforms such as Kubernetes and OCP.

You can uninstall Helm in any of the following cases:

  • You want to uninstall the deployment
  • You want to clear the cluster before a fresh installation

If you need to uninstall Helm, run the following commands:

cd helm-charts
./installer.sh uninstall
Warning: The uninstall and cleanup processes delete all Guardium Cryptography Manager resources.

You can clean up Helm when you need to clean up resources in the clusters where Guardium Cryptography Manager is deployed.

If you need to clean up Helm, run the following commands:

cd helm-charts
./helm-cleanup.sh
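
After an uninstall or cleanup, you can confirm that the Guardium Cryptography Manager releases and resources are removed before you start a fresh installation. This is a minimal sketch, assuming the default gcmapp namespace:

# No GCM releases should remain after the cleanup
helm ls --namespace gcmapp
# No GCM workloads should remain in the namespace
kubectl get all --namespace gcmapp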

What to do next

After installing Guardium Cryptography Manager, complete the following tasks:

  1. Validate the deployment by doing these steps:
    1. Run the following command to check the Helm releases:
      helm ls --namespace gcmapp
    2. Run the following command to verify pods:
      kubectl get pods --namespace gcmapp
    3. Run the following command to check the logs of pods:
      kubectl logs -f <pod-name> --namespace gcmapp
  2. Open the Guardium Cryptography Manager application by using one of the following URLs:
    • If the Guardium Cryptography Manager application is installed in a Kubernetes cluster, use https://<ip_address>:31443
    • If the Guardium Cryptography Manager application is installed in an OCP cluster, use https://<route_url>
  3. Log in to the Guardium Cryptography Manager user interface by using the following credentials:
    • Username: gcmadmin
    • Password: gcmAdmin@123
  4. In the Guardium Cryptography Manager user interface, click Help > About > Version, and verify that the version is as expected.
  5. To validate OIDC server access, log in to the OIDC server by using either of the following URLs, and then verify that the gcmadmin user exists (a command-line reachability check follows these steps):
    • Use https://<ip-address>:<OIDC server port> for a Kubernetes cluster
    • Use https://<route-url> for an OCP cluster

    Log in by using the following credentials:

    • Username: gcmadmin
    • Password: gcmsecret
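
As a quick reachability check before you open the URLs in a browser, you can query the application and OIDC endpoints from the command line. This is a minimal sketch; the -k option skips TLS certificate verification and is intended only for checking reachability against a self-signed certificate:

# Application endpoint (Kubernetes example; use https://<route_url> on OCP)
curl -k -s -o /dev/null -w '%{http_code}\n' https://<ip_address>:31443
# OIDC server endpoint (port 30443 for the internal Keycloak)
curl -k -s -o /dev/null -w '%{http_code}\n' https://<ip_address>:30443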