Installation on GCP

Detailed procedure to install the IBM Verify Identity Governance - Container on Google Cloud Platform (GCP).

Overview

IBM Verify Identity Governance - Container can be deployed on Google Kubernetes Engine (GKE) using any of the supported databases and directory servers.

GKE is a fully managed service for running and managing containerized applications on Google Cloud Platform.

Optionally, you can also use Cloud SQL for PostgreSQL on Google Cloud as an external database for the IBM Verify Identity Governance - Container. For more information, refer to the section on Setting up Cloud SQL for PostgreSQL.

Before you begin

Ensure that you complete the following prerequisites:
  1. Get started with your Google Cloud account. Create a project and enable billing for it.
  2. Verify that all the software requirements are fulfilled.
  3. Additionally, you can choose to install the gcloud CLI.

Optional: Setting up Cloud SQL for PostgreSQL

Note: This section is optional. If you choose NOT to use Cloud SQL, proceed directly to the Deployment procedure.

You can use Cloud SQL for PostgreSQL as an external database for IBM Verify Identity Governance - Container within your Google Cloud cluster. Cloud SQL is a fully managed relational database service that supports MySQL, PostgreSQL, and SQL Server, handling database administration tasks so you can focus on managing your data. For detailed information, see the Cloud SQL for PostgreSQL documentation.

Steps:
  1. Create an instance: Follow this guide to create an instance.
  2. View instance information: See details on your instance by visiting View instance information.
  3. Create a database: Create a database, such as `isvgim`, by following the instructions in Create and manage databases.
  4. Create a user: Add a user (e.g., `pguser`) as outlined in Create and manage users.
  5. Establish a connection: Use the connection properties and the created user to connect to your PostgreSQL Cloud instance.
  6. Configure SSL: Set up SSL by following the instructions in Configure SSL for your instance.
  7. Retrieve SSL certificate: Download the SSL Certificate by clicking Download SSL Certificate.
  8. Specify certificate during installation: When running the configure.sh script, specify the downloaded certificate for the truststore.
  9. Provide details for configuration: Enter the required details while executing the starter/bin/configure.sh script.
    Here is a sample of the `db` section in the config.yaml file.
    
    db:
       user: {database_username}        # application database user, for example pguser
       password: {user_password}
       dbtype: postgres
       ip: {database_link}              # for Cloud SQL, the instance's public or private IP address
       port: {database_port}            # the PostgreSQL default is 5432
       name: {database_name}            # for example isvgim
       admin: {database_username}
       adminPwd: {admin_password}
       security.protocol: ssl           # requires the downloaded SSL certificate in the truststore
       tablespace.location.data:
       tablespace.location.indexes:
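
If you use the gcloud CLI, the instance, database, user, and certificate steps above can also be performed from the command line. This is a minimal sketch, assuming PostgreSQL 15; the instance name, region, tier, and passwords are illustrative placeholders:

    # Create the Cloud SQL for PostgreSQL instance
    gcloud sql instances create isvgim-sql \
        --database-version=POSTGRES_15 \
        --region=us-central1 \
        --tier=db-custom-2-8192 \
        --root-password=ADMIN_PASSWORD

    # Create the database and the application user
    gcloud sql databases create isvgim --instance=isvgim-sql
    gcloud sql users create pguser --instance=isvgim-sql --password=USER_PASSWORD

    # Download the server CA certificate for the truststore (steps 7-8)
    gcloud sql instances describe isvgim-sql \
        --format="value(serverCaCert.cert)" > server-ca.pem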
    

Deployment on GCP

Perform the following steps to deploy IBM Verify Identity Governance - Container on Google Cloud.
Cluster Setup
  1. Clusters can be created in two modes, as described in About cluster configuration choices.
  2. Depending on your choice, create an Autopilot Cluster or a Standard Cluster (either zonal or regional).
Ensure the following; a sample cluster creation command follows this list.
  • Enable the Dataplane V2 option when creating a Standard Cluster.
  • If you plan to use NFS for persistent storage, enable the Filestore CSI Driver option.
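
As an example, a Standard regional cluster with both options enabled can be created along the following lines; the cluster name and region are placeholders, and Autopilot clusters enable Dataplane V2 by default:

    gcloud container clusters create isvgim-cluster \
        --region=us-central1 \
        --enable-dataplane-v2 \
        --addons=GcpFilestoreCsiDriver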
Cluster connect
  1. To connect via `kubectl`, install the gke-gcloud-auth-plugin as a prerequisite.
  2. Connect to your cluster using the following bash command:
    gcloud container clusters get-credentials CLUSTER_NAME --region=COMPUTE_REGION
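
If the plugin is not already present, it can typically be installed through the gcloud components manager (or your OS package manager) and then verified with kubectl:

    gcloud components install gke-gcloud-auth-plugin
    gcloud container clusters get-credentials CLUSTER_NAME --region=COMPUTE_REGION
    kubectl get nodes    # confirms that authentication against the cluster works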
Configuring persistent storage
For persistent storage with IBM Verify Identity Governance - Container pods, you have two options:
1. Block Storage (Persistent Disk)
  • Single Zone: Default storage classes are limited to a single zone.
  • Regional Persistent Disk: Synchronously replicates data to a second zone of your choice. Set up a storage class for regional persistent disks by following these instructions; a sample class is shown below.
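
As a sketch, a StorageClass for regional persistent disks generally looks like the following; the class name is illustrative, and the two zones must belong to your cluster's region:

    kubectl apply -f - <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: regionalpd-storageclass
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-balanced
      replication-type: regional-pd
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
    - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
        - us-central1-a
        - us-central1-b
    EOF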
2. File Storage (Filestore)

Multishares: Use Filestore multishares, which allow up to 80 shares on a single enterprise-tier instance.

Storage Classes: You can use the default `enterprise-multishare-rwx` storage class or create a custom one by following this guide.
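
If you create a custom class, a minimal sketch looks like the following, assuming the Filestore CSI driver is enabled on the cluster; the class name, network, and maximum share size are illustrative:

    kubectl apply -f - <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: custom-multishare-rwx
    provisioner: filestore.csi.storage.gke.io
    parameters:
      tier: enterprise
      multishare: "true"
      max-volume-size: "128Gi"
      network: default
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    EOF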

To integrate Filestore multishare, make the following adjustments to the starter kit:
  1. Update `accessModes` in the PVC files under `starter-kit/helm/templates` (a scripted version of this change appears after these steps). In each of the following YAML files, change

     accessModes: [ "ReadWriteOnce" ]

     to

     accessModes: [ "ReadWriteMany" ]
    • 070-pvc-isvd
    • 071-pvc-isvd2
    • 072-pvc-isvd-proxy
    • 075-pvc-mqshare
    • 080-pvc-postgres
    • 300-statefulset-isvgim
  2. Edit the `storage.volumes` section in the starter-kit/helm/values.yaml file, ensuring each entry is at least 10 GiB to accommodate multishare requirements. Here is an example:
    
    storage:
      className: enterprise-multishare-rwx
      volumes:
        isvd: 10Gi
        isvdproxy: 10Gi
        isvdi: 10Gi
        isvgimcustom: 10Gi
        mqlocal: 10Gi
        mqshare: 10Gi
        postgres: 10Gi
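
The `accessModes` edit in step 1 can also be scripted. This is a minimal sketch, assuming the listed files carry a .yaml extension in your version of the starter kit:

    cd starter-kit/helm/templates
    for f in 070-pvc-isvd 071-pvc-isvd2 072-pvc-isvd-proxy \
             075-pvc-mqshare 080-pvc-postgres 300-statefulset-isvgim; do
        # Switch each volume claim from single-node to shared (multishare) access
        sed -i 's/ReadWriteOnce/ReadWriteMany/' "${f}.yaml"
    done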
    
Starter kit
  1. Once logged into the cluster, run the following command to list storage classes: `kubectl get storageclass`.
  2. Select a suitable storage class from the list, or create a custom one if needed as described in the Configuring Persistent Storage section in this topic.
  3. If you plan to use Cloud SQL for PostgreSQL as your database, refer to the Setting up Cloud SQL for PostgreSQL section in this topic.
  4. Execute the `starter/bin/configure.sh` script. This script will prompt you for all required installation parameters:
    • Set the `InstallType` to `cloud`.
    • When prompted, enter the name of your chosen `StorageClass`.
  5. Navigate to the `starter/bin` directory and run the `install.sh` script. For detailed installation instructions, see the Installation topic.
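
Putting the steps above together, a typical session looks like the following; the values shown in the comments are examples:

    kubectl get storageclass        # pick a suitable storage class from this list
    cd starter/bin
    ./configure.sh                  # set InstallType to cloud; enter your StorageClass when prompted
    ./install.sh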

Post-installation steps

Remember: After the installation, you must retain the starter kit. The starter kit contains configuration files that are required for executing various scripts, and it is also required when deploying Fix Packs and Interim Fixes.
Accessing the console
  1. Edit the `starter-kit/yaml/100-service-isvgim.yaml` file to update the `spec.type` parameter to `LoadBalancer`, and comment out any lines containing `sessionAffinity`.
  2. Apply the changes with: `kubectl apply -f 100-service-isvgim.yaml`.
  3. In your Linux terminal, run: `kubectl get svc --namespace <namespace>`

     Here, replace <namespace> with the namespace that you used during `configure.sh` (for example, `isvgim`).
  4. Copy the IP address from the EXTERNAL-IP column (a jsonpath query that retrieves this address directly is shown after this list).
  5. Access the IBM Verify Identity Governance - Container console in your web browser using: https://<external_IP>:9443/itim/console
  6. To make the internal ISVD service externally accessible:
    • Modify the `starter/yaml/115-service-ldapExt.yaml` file. In the service named `isvd-external`, change the `type` from `NodePort` to `LoadBalancer`.
    • Apply the changes with: `kubectl apply -f 115-service-ldapExt.yaml`.
  7. To expose the internal PostgreSQL service externally:
    • Edit the `starter/yaml/116-service-dbExt.yaml` file. In the service named `postgres-external`, change the `type` from `NodePort` to `LoadBalancer`.
    • Apply the changes with: `kubectl apply -f 116-service-dbExt.yaml`.
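
Once the load balancer is provisioned, the external IP from step 4 can also be read directly with a jsonpath query. This is a sketch; the service name and namespace (`isvgim`) are examples that must match your deployment:

    # Print only the external IP of the IBM Verify Identity Governance service
    kubectl get svc isvgim --namespace isvgim \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}'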

Next steps

If you are deploying a fresh installation of IBM Verify Identity Governance - Container, then proceed to configuration activities.

If you are migrating from a legacy Identity Manager-Virtual Appliance or Identity Manager-Software Stack setup, then proceed to database migration.