Manual Installation

Detailed procedure for manual installation of the IBM Verify Identity Governance - Container.

Overview

In some circumstances, it might not be desirable to rely on the automated installation scripts to deploy IVIG. This document covers how to do that while retaining full control over the configuration deployed into Kubernetes. The installation was designed to require no dependencies on the host other than kubectl (connected to a Kubernetes cluster) and helm (used to fill out the yaml templates). Everything else needed is provided by the container itself, and most of the scripts are just wrappers for running commands inside the container.

In a Kubernetes cluster, there are typically many ways to handle any specific issue. Throughout this document, there will first be a discussion of what needs to be accomplished, and then how the IVIG installer would handle it. You are free to use any viable approach that you like, as long as it ends up with the same result. The steps provided mirror how the automated installer performs each task; they cannot cover every possible permutation that might be more advantageous in a specific environment.

1. Set up the IVIG configuration

In this section, we need to unpack the starter kit, which contains the yaml templates, and provide configuration information to be used later. At the end of this, you should have a config/config.yaml and helm/values.yaml with all of the settings needed for your environment, as well as all of the configmaps ready to go. In an automated installation, you would use the bin/configure.sh script to provide these details, but in this process, we need to edit the configuration files directly.
  1. Unzip the IVIG starter kit into a suitable location. It does not need root access.

    mkdir -p ~/projects/ivig

    Unzip the ivig-starter.zip (filename will be different for each release.)
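    For example, assuming the starter kit was downloaded to your home directory (substitute the actual archive name for your release):

    cd ~/projects/ivig
    unzip ~/ivig-starter.zip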

  2. Edit the helm/values.yaml file to make sure all settings have the correct values. Most will not need to be changed; the internal port numbers and userid/groupid values are based on the container and should not be modified. But external ports, PVC sizes, healthcheck settings, and the like may be modified to reflect your environment. Refer to the values.yaml reference for details about each setting.

    For an air-gapped installation, all of the image names must be changed to point to your local repository. The tags (e.g. 11.0.0.0), however, MUST remain the same.

    vi helm/values.yaml

    If you need a local repository and do not know how to create one, refer to the Docker documentation on setting one up. Be sure to load the CA certificate that signed your repository's certificate into the OS truststore on all machines in your Kubernetes cluster.
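    As an illustration only (the isvgim attribute shown here is taken from the fixpack example later in this document, and the local repository name is a placeholder; check the attribute names against your own values.yaml), only the repository portion of the image changes while the tag stays the same:

    # shipped value
    isvgim: icr.io/isvg/identity-manager:11.0.0.0
    # air-gapped value, assuming a local repository at myrepo:5000
    isvgim: myrepo:5000/isvg/identity-manager:11.0.0.0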

  3. Edit the config/config.yaml file to set the necessary values.

    vi config/config.yaml

    Refer to the config.yaml reference for details about each setting.

    Here are some example values. Note that your values will be different.
    
    version: "24.12"
    general:
      install:
        createTablespace: true
      license:
        activationKey: ABC_123
        accepted: true
    db:
      user: isimuser
      password: password
      dbtype: postgres
      ip: postgres
      port: 5432
      name: ivig
      admin: postgres
      adminPwd: password
      security.protocol: ssl
      tablespace.location.data: /var/lib/postgresql/data/isvgimdata
      tablespace.location.indexes: /var/lib/postgresql/data/isvgimindexes
    ldap:
      security.principal: cn=root
      security.credentials: password
      defaulttenant.id: ivig
      organization.name: IVIG Example
      ldapserver.ip: isvd-replica-1
      ldapserver.port: 9636
      ldapserver.root: dc=ivig
      security.protocol: ssl
    server:
      keypass: someRandomValue
      flags:
        disableTLSv12: false
        enableFIPS: false
      hostname:
      - test
      - test.example.com
      - 192.168.0.1
      truststore:
      - '@ldapCA.crt'
      - '@dbCA.crt'
      keystore:
    oidc:
      adminConsole:
      isc:
      rest:
      - attribute=value
      - attribute=value
    

    If you will be using OIDC to authenticate connections to the REST API, the Admin Console, or the Service Center interface, you must add an additional section to the root level of config.yaml (matching up with general and server).

    Only include the sections that you want to use OIDC for: it can be configured for one, two, or all three. Inside the three configurations (adminConsole, isc, rest) you must provide a list of the attribute value pairs required by your OIDC provider. Do not include the clientID or client secret values, as those will be stored in a secret. If they are provided here, they will be ignored.

  4. Edit the rest of the config directories.
    RMI dispatcher container

    If you will be using the RMI dispatcher container, edit config/adapters/isvdi_config.yaml

    Under general.license, you must add two new values:

    general:
      license:
        accept: true
        key: Long-Base64-encoded-key-from-Passport-Advantage (this must all be a single line)

    Under keyfile: a certificate and trusted certificate are listed. Adjust these if using your own certificates. The entry:

    - key: "@/tmp/isvgimsdi/isvdi.pem"

    breaks down as follows:

    "@" - the prefix indicating this is a file, not a Base64 string.

    "/tmp/isvgimsdi/" - the path where the isvdi ConfigMap will be mounted in the pod

    "isvdi.pem" - An ASCII formatted file containing both the certificate and the private key

    Risk Analytics component

    If you will be using the Risk Analytics component, edit the files in config/analytics/store/config

    In applicationConfig.properties, check the "streaming.datasource" setting at the end of the file. It must be "postgres" or "db2" depending on your database type.

    In infrastructureConfig.properties, set up your DB settings (these values already exist in config.yaml, but they need to be here too):

    db.servername=hostname.domain   # A host or IP used to reach the DB server. If using the postgres pod, use "postgres"
    db.username=isimuser            # The userid used to access the database
    db.port=5432                    # The port the DB server is listening on
    db.database.name=ivig           # The name of the IVIG database
    db.schema=isimuser              # The schema owner, which is typically the db.username
    db.sslEnabled=true              # Whether the DB is using SSL or not

    To facilitate correct license metrics, you will also need to update the annotations in helm/templates/300-statefulset-isvgim.yaml. Under spec.template.metadata, you will find this section:

    annotations:
      productName: "IBM Verify Identity Governance Lifecycle"
      productId: "1fcf57ba04ee4a779a860a8d1fac675e"
      productMetric: "{{ .Values.licenseType }}_VALUE_UNIT"
      productChargedContainers: "All"

    If you are using a Compliance key, the first two lines need to be changed to:

      productName: "IBM Verify Identity Governance Compliance"
      productId: "5a6909cef7c04434814c44642e031a44"

    If you are using an Enterprise key, the first two lines need to be changed to:

      productName: "IBM Verify Identity Governance Enterprise"
      productId: "b4e21aac6fca426ebbb6927d47cbf635"

    Internal LDAP
    If you will be using the internal LDAP component, edit the files in config/ldap

    Edit ldap_config.yaml

    Add the license information:

    general:
      license:
        accept: limited
        key: Long-Base64-encoded-key-from-Passport-Advantage (this must all be a single line)

    Under keyfile: a certificate and trusted certificate are listed. Adjust these if using your own certificates. The entry:

    - key: "@/var/isvd/config/isvd.pem"

    breaks down as follows:

    "@" - the prefix indicating this is a file, not a Base64 string.

    "/var/isvd/" - the path where the isvd ConfigMap will be mounted in the pod

    "isvd.pem" - An ASCII formatted file containing both the certificate and the private key

    If setting up LDAP in HA mode, also edit ldap2_config.yaml and proxy_config.yaml

    - ldap2_config.yaml is the same as ldap_config.yaml, EXCEPT for the key file. It uses a different certificate because it has a different hostname. The isvd.pem file needs "isvd-replica-1" in its Subject Alternative Names field, while isvd2.pem needs "isvd-replica-2" in its SAN field.

    - proxy_config.yaml is very similar to the other two, in that you need to add the same license information, and proxy.pem needs "isvd-cluster" in its SAN field.

    FIPS

    If you will be enabling FIPS, edit the files in config/mq

    In both ISVGContainerQMgr.mqsc and ISVGContainerQMgr-shared.mqsc, find the line:
    ALTER QMGR SSLFIPS(NO)
    and change it to:
     ALTER QMGR SSLFIPS(YES)
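    If you prefer to script the change, a sketch using sed, run from the starter directory (assuming the files are in config/mq as described above):

    sed -i 's/SSLFIPS(NO)/SSLFIPS(YES)/' config/mq/ISVGContainerQMgr.mqsc
    sed -i 's/SSLFIPS(NO)/SSLFIPS(YES)/' config/mq/ISVGContainerQMgr-shared.mqsc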

2. Work with isvgimconfig pod

At this point, we need access to the IVIG container to continue collecting everything needed to install. This stage involves generating the certificates, creating the keystore, optionally deploying LDAP, DB, and ISVDI (RMI Dispatcher), and loading in the schema. It is the bulk of the installation process. To fill out the templates, we rely on the updateYaml.sh script, which runs a template through helm, and then automatically loads it into Kubernetes. However, in this scenario, the automatic loading is not desired. While the helm command can simply be run manually, the script includes some additional checks that add value, and thus the script should be preferred if possible.

To avoid the automatic loading, there are 3 options:
  • Set the environment variable OFFLINE to 1 (e.g. export OFFLINE=1)
  • Edit the bin/updateYaml.sh script to specify OFFLINE=1 at the top of it, so you don't forget to set it in the environment.
  • Edit bin/updateYaml.sh to remove the section at the end that calls "kubectl apply", and then it can never load automatically.
a. Process the templates

First, make sure the <starterDir>/yaml directory exists. If not, create it. (e.g. mkdir yaml)

Several yaml files already exist in helm/templates, but they need to be filled out before Kubernetes will accept them. To process them, run:

bin/createConfigs.sh templates

The yaml directory will now contain an incomplete set of filled out yaml files. The namespace and configmap files will be missing. The namespace file was excluded because it is handled elsewhere in scripts we will not be running, so we need to handle it manually. The configmaps will be created after we have the certificates. Create the namespace yaml by running:

bin/updateYaml.sh 000-namespace.yaml

We also need to mount config.yaml inside the isvgimconfig pod, so run:

bin/createConfigs.sh setup

If you are using a custom repository, such as in an air-gapped environment, and you need to provide credentials for that repository, then you will need to create a regcred secret. The yaml templates are already configured to use a pullSecret named regcred. However, you cannot create it until your namespace exists. After loading the 000-namespace.yaml file in the step below, create the regcred secret:

Name: regcred

Purpose: To provide credentials for custom image repository

As a docker-registry type secret, the requirements are very specific. You must provide the repository URL, a username, and a password.

Example: kubectl create secret docker-registry regcred --docker-server="https://myrepo" --docker-username="username" --docker-password="password" --docker-email="email@address.com" --namespace="ivig"

Now load the following three yaml files:
  • yaml/000-namespace.yaml
  • yaml/020-config-isvgimconfig.yaml
  • yaml/200-deployment-isvgimconfig.yaml
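For example, from the starter directory:

kubectl apply -f yaml/000-namespace.yaml
kubectl apply -f yaml/020-config-isvgimconfig.yaml
kubectl apply -f yaml/200-deployment-isvgimconfig.yaml

If you need the regcred secret described above, create it after applying 000-namespace.yaml and before applying 200-deployment-isvgimconfig.yaml.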

The 200-deployment-isvgimconfig.yaml differs from the main isvgim pod in two ways. First, it only specifies one container. Second, it does not start Liberty. It is just a container with many utilities we need to set up IVIG.

b. Create the certificates

Find the name of the pod: kubectl -n namespace get pods

It will look something like this:

NAME                            READY   STATUS    RESTARTS   AGE
isvgimconfig-85df579f59-zhs2d   1/1     Running   0          30s
Open a shell inside the container. In this case, that would be:

kubectl -n namespace exec -it isvgimconfig-85df579f59-zhs2d -- bash
cd /work

The entrypoint for the container is initIMContainer.sh, which unpacks any configmaps that had to be mounted as .tar.gz files to save space, calls parseYaml.sh to inspect the config.yaml, and makes sure any settings passed in are put in the correct place. One of the side effects of parseYaml.sh is that if an SSL certificate and key were not supplied, it will generate a CA certificate and an SSL certificate for Liberty to use. The CA certificate files are (all in the /work directory): isvgimRootCA.crt, isvgimRootCA.key, and isvgimRootCA.srl. The IVIG SSL certificate files are: isvgim.csr, isvgim.key, and isvgim.cert. The Certificate Signing Request (.csr) file is kept so the certificate can be easily renewed when it's close to expiration.

Create the certificates for the other components:

/work/certificateUtil.sh ldap

/work/certificateUtil.sh mq

/work/certificateUtil.sh db

/work/certificateUtil.sh isvdi

These commands will create a set of .csr, .crt, and .key files for isvd, isvd2, proxy, mq, pgsql, and isvdi. certificateUtil.sh includes, in each certificate's Subject Alternative Names field, the names of the services it is attached to in the default setup. It also adds any hostnames defined in config.yaml in case you need to expose any components externally (e.g. LDAP or DB access). If you will not be using a particular component, there is no need to create certificates for it.

c. Create the keystore

Next, create the IVIG keystore to hold the encryption key. If you did not specify the keypass value in config.yaml, you will need to create the /work/keypass file with the password you wish to use for the keystore.

echo "this_is_my_password" > /work/keypass

then create the keystore:

/work/createKeystore.sh

d. Download the artifacts

Throughout this section, IM_HOME refers to /opt/ibm/wlp/usr/servers/defaultServer/config/data; the shorthand is used instead of constantly repeating the full path. The createKeystore.sh script updates IM_HOME/encryptionKey.properties with an obfuscated version of the new keystore password, creates IM_HOME/keystore/itimKeystore.jceks, and creates the IM_HOME/keystore/kek* directory that holds the masterKey. You must download all of these files to the host so they can be turned into ConfigMaps. For the keystore directory, since all of the metadata must be preserved, tar it up before downloading.

cd /opt/ibm/wlp/usr/servers/defaultServer/config/data

tar czf /work/keystore.tgz encryptionKey.properties keystore/

You can also tar the certificates, or download them individually. Either way, to copy the files to the host, you must leave the container and use the kubectl cp command:

exit

kubectl -n namespace cp isvgimconfig-85df579f59-zhs2d:/work/keystore.tgz ./keystore.tgz

The format is: kubectl -n <namespace> cp <pod_name>:/path/to/file /local/path/to/file
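For example, to download one of the certificates generated earlier into config/certs (the pod name and namespace will differ in your environment):

kubectl -n namespace cp isvgimconfig-85df579f59-zhs2d:/work/isvd.crt config/certs/isvd.crt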

The files need to end up in the following locations. If the data directory (under the starter directory) does not already exist, create it. The certificate files all need to end up in config/certs. For the keystore files:

cd data

tar zxf /path/to/keystore.tgz

rm /path/to/keystore.tgz

In the config/certs directory, a couple changes need to be made:

mv isvgim.cert isvgim.crt
cat isvd.crt isvd.key > isvd.pem
cat isvd2.crt isvd2.key > isvd2.pem
cat proxy.crt proxy.key > proxy.pem
cat isvdi.crt isvdi.key > isvdi.pem

e. Create all of the necessary configmaps

To persist data across all of the pods, configmaps are used. Configmaps are convenient because they can mount on any node without any additional configuration, but they are limited to 1MB maximum in size. These can be created manually, but there's really no reason not to use the createConfigs.sh script that already knows how to build them. It will collect the needed details from the config directory, create a template for it, then call updateYaml.sh to fill out the template and load it into Kubernetes. However, in this scenario we do NOT want files loaded automatically. Options to prevent that were covered at the start of this section.

Before running these, you can save time later by editing config/config.yaml to make sure the truststore section includes isvgimRootCA.crt. e.g.

server:
  truststore:
  - '@isvgimRootCA.crt'

bin/createConfigs.sh setup
bin/createConfigs.sh keystore
bin/createConfigs.sh ldap
bin/createConfigs.sh db
bin/createConfigs.sh mq
bin/createConfigs.sh isvdi
bin/createConfigs.sh risk
bin/createConfigs.sh

"setup" needs to be run again because it packages config.yaml AND the certificates. The last one, with no parameter, packages the data directory, but not the keystore files.

f. Create all of the necessary secrets

Several secrets are needed to pass credentials to the pods. The names provided can all be changed, but these are what the yaml files are currently expecting.

Name: pgcreds

Purpose: To provide credentials to the internal Postgres pod. Not required if using an external DB.

Values:

- pguser - userid used to connect to the DB. If it is not all lowercase, be sure to enclose it in double quotes.

- pgpass - password for pguser

- admuser - Admin userid for the server (default is postgres)

- admpass - password for Admin userid

Example: kubectl create secret generic pgcreds --from-literal=pguser=username --from-literal=pgpass=password --from-literal=admpass=password --from-literal=admuser=postgres --namespace="ivig"

Name: isvdcred

Purpose: To provide credentials to the internal LDAP pods. Not required if using an external LDAP.

Values:

- admindn - userid to bind to LDAP

- adminpwd - password of admindn

Example: See above. The rest will all follow the same form, just with different values stored.
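For instance, a minimal sketch for isvdcred, using the bind DN from the sample config.yaml and the example namespace (substitute your own values):

kubectl create secret generic isvdcred --from-literal=admindn=cn=root --from-literal=adminpwd=password --namespace="ivig"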

Name: mqcreds

Purpose: To provide credentials to the MQ pods. Whatever is set here is what both IVIG and MQ will use to communicate.

Values:

- mqlocal - password for the app user in the mqlocal container of the isvgim pod.

- mqshare - password for the app user in the mqshare pod.

- mqadmin - password for the MQ admin user in both mqlocal and mqshare. This is only useful if using an external MQ deployment.

Name: oidccreds

Purpose: If using OIDC, you must provide the clientID and clientSecret values for the endpoints you want to authenticate to OIDC.

Values:

- restuser - id for OIDC connection when using REST APIs

- restsecret - secret for the rest id

- adminConsoleuser - id for OIDC connection when using Admin Console

- adminConsolesecret - secret for the adminConsole id

- iscuser - id for OIDC connection when using the Service Center

- iscsecret - secret for the iscuser id
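A sketch for oidccreds, assuming all three endpoints are configured (omit the pairs you are not using, and substitute the client IDs and secrets issued by your OIDC provider):

kubectl create secret generic oidccreds \
  --from-literal=restuser=clientid1 --from-literal=restsecret=secret1 \
  --from-literal=adminConsoleuser=clientid2 --from-literal=adminConsolesecret=secret2 \
  --from-literal=iscuser=clientid3 --from-literal=iscsecret=secret3 \
  --namespace="ivig"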

Name: riskcreds

Purpose: To provide DB/MQ credentials to the Spark pods of the Risk Analytics module

Values:

- dbuser - userid used to connect to the database

- dbpass - password of the dbuser

- mqpass - password for the app user in the mqshare pod

3. Deploy the middleware

a. Deploy LDAP

To deploy the first replica, load the following yaml files with kubectl apply -f:

yaml/005-serviceaccount-isvd.yaml
yaml/025-config-isvgimldap.yaml
yaml/070-pvc-isvd.yaml
yaml/105-service-isvd.yaml
yaml/205-deployment-isvd.yaml

Once the isvd-replica-1 pod starts, you can move to the next section. If you are configuring LDAP for HA, this is just the beginning.

First, create these LDIF files:

user.ldif:

dn: cn=manager,cn=ibmpolicies
changetype: add
objectclass: inetOrgPerson
sn: manager
cn: manager
userpassword: $ADMINPWD

group.ldif:

dn: globalGroupName=GlobalAdminGroup,cn=ibmpolicies
changetype: modify
add: member
member: cn=manager,cn=ibmpolicies

schema.ldif:

dn: ibm-replicaServerId=isvd-replica-1,ibm-replicaGroup=default,cn=ibmpolicies
changetype: add
objectclass: top
objectclass: ibm-replicaSubentry
ibm-replicaServerId: isvd-replica-1
ibm-replicationServerIsMaster: true
cn: isvd-replica-1
description: server 1 (peer master) ibm-replicaSubentry

dn: ibm-replicaServerId=isvd-replica-2,ibm-replicaGroup=default,cn=ibmpolicies
changetype: add
objectclass: top
objectclass: ibm-replicaSubentry
ibm-replicaServerId: isvd-replica-2
ibm-replicationServerIsMaster: true
cn: isvd-replica-2
description: server 2 (peer master) ibm-replicaSubentry

dn: cn=isvd-replica-2,ibm-replicaServerId=isvd-replica-1,ibm-replicaGroup=default,cn=ibmpolicies
changetype: add
objectclass: top
objectclass: ibm-replicationAgreement
cn: isvd-replica-2
ibm-replicaConsumerId: isvd-replica-2
ibm-replicaUrl: ldaps://isvd-replica-2:$PORT
ibm-replicaCredentialsDN: cn=replcred,cn=replication,cn=ibmpolicies
description: server1(master) to server2(master) agreement

dn: cn=isvd-replica-1,ibm-replicaServerId=isvd-replica-2,ibm-replicaGroup=default,cn=ibmpolicies
changetype: add
objectclass: top
objectclass: ibm-replicationAgreement
cn: isvd-replica-1
ibm-replicaConsumerId: isvd-replica-1
ibm-replicaUrl: ldaps://isvd-replica-1:$PORT
ibm-replicaCredentialsDN: cn=replcred,cn=replication,cn=ibmpolicies
description: server2(master) to server1(master) agreement

Be sure to replace the $ADMINPWD in user.ldif and $PORT (both times) in schema.ldif with the correct values.
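If you prefer to substitute the placeholders with sed rather than editing by hand, a sketch (the password shown is a placeholder; the default ldaps port is 9636):

sed -i 's/\$ADMINPWD/MyAdminPassw0rd/' user.ldif
sed -i 's/\$PORT/9636/g' schema.ldif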

Obtain a reference to the isvd-replica-1 pod. Run: kubectl -n namespace get pods, and it should contain an entry like this:

NAME                             READY   STATUS    RESTARTS   AGE
isvgimconfig-85df579f59-zhs2d    1/1     Running   0          20m
isvd-replica-1-bdc76fb4b-nf2hc   1/1     Running   0          45s

Upload the LDIF files to the isvd-replica-1 pod:

kubectl -n namespace cp user.ldif isvd-replica-1-bdc76fb4b-nf2hc:/tmp/user.ldif
kubectl -n namespace cp group.ldif isvd-replica-1-bdc76fb4b-nf2hc:/tmp/group.ldif
kubectl -n namespace cp schema.ldif isvd-replica-1-bdc76fb4b-nf2hc:/tmp/schema.ldif

Add the admin user:

kubectl -n namespace exec isvd-replica-1-bdc76fb4b-nf2hc -- idsldapadd -h localhost -p $PORT -D $ADMINDN -w $ADMINPWD -f /tmp/user.ldif -K /home/idsldap/idsslapd-idsldap/etc/server.kdb

Add the user to the admin group:

kubectl -n namespace exec isvd-replica-1-bdc76fb4b-nf2hc -- idsldapmodify -h localhost -p $PORT -D $ADMINDN -w $ADMINPWD -f /tmp/group.ldif -K /home/idsldap/idsslapd-idsldap/etc/server.kdb

Create the replication agreement:

kubectl -n namespace exec isvd-replica-1-bdc76fb4b-nf2hc -- isvd_manage_replica -ap -z -h isvd-replica-2 -p $PORT -i isvd-replica-2 -ph isvd-replica-1 -pp $PORT

Add the schema:

kubectl -n namespace exec isvd-replica-1-bdc76fb4b-nf2hc -- idsldapadd -h localhost -p $PORT -D "$ADMINDN" -w "$ADMINPWD" -f /tmp/schema.ldif -K /home/idsldap/idsslapd-idsldap/etc/server.kdb

Be sure to replace anything starting with a $ with the correct value. PORT is the ldaps port listed in values.yaml, which should be 9636. The primary replica must now be stopped to allow the seed job to copy the contents to the isvd-replica-2 volume.

kubectl -n namespace scale --replicas=0 deployment/isvd-replica-1

kubectl apply -f yaml/071-pvc-isvd2.yaml

kubectl apply -f yaml/400-job-isvdseed.yaml

Once the job completes (usually in less than 5 minutes), deploy the second replica.

kubectl apply -f yaml/106-service-isvd2.yaml

kubectl apply -f yaml/206-deployment-isvd2.yaml

Once it starts up, the replication agreement needs to be accepted. Obtain a reference to the isvd-replica-2 pod with kubectl -n namespace get pods

NAME                             READY   STATUS    RESTARTS   AGE
isvgimconfig-85df579f59-zhs2d    1/1     Running   0          30m
isvd-replica-2-agv56hn2b-ma1dj   1/1     Running   0          1m

kubectl -n namespace exec isvd-replica-2-agv56hn2b-ma1dj -- isvd_manage_replica -ar -z -h isvd-replica-2 -p $PORT -i isvd-replica-2 -s isvd-replica-1

Both replicas must now be restarted.

kubectl -n namespace scale --replicas=1 deployment/isvd-replica-1

kubectl -n namespace scale --replicas=0 deployment/isvd-replica-2

kubectl -n namespace scale --replicas=1 deployment/isvd-replica-2

Now the proxy can be deployed.

kubectl apply -f yaml/072-pvc-isvd-proxy.yaml

kubectl apply -f yaml/110-service-proxy.yaml

kubectl apply -f yaml/210-deployment-proxy.yaml

Once the proxy server has started, LDAP is deployed.

b. Deploy DB

Create tbscript.sh:

cd /var/lib/postgresql/data
mkdir isvgimdata
mkdir isvgimindexes

If you are deploying a single postgres instance, load the following yaml files:

kubectl apply -f yaml/040-config-isvgimdb.yaml

kubectl apply -f yaml/080-pvc-postgres.yaml

kubectl apply -f yaml/130-service-postgres.yaml

kubectl apply -f yaml/220-deployment-postgres.yaml

Once the pod has started, retrieve its name from kubectl -n namespace get pods and skip to the create tablespace step.

If you are deploying a postgres cluster, the kubegres operator is provided. Kubegres is an open-source operator that works with the default postgres image from Docker Hub. There are several other postgres operators on the market, which are not free and generally require the use of their specific postgres image. An operator is required for HA support because some process needs to monitor the postgres pods and connect to a backup server to promote it to primary when needed. Please be advised that kubegres runs in its own namespace, which it creates while installing. To install, load the following yaml files:

kubectl apply -f yaml/040-config-isvgimdb.yaml

kubectl apply -f yaml/405-operator-kubegres.yaml

kubectl apply -f yaml/410-crs-postgres.yaml

Once postgres-1-0, postgres-2-0, and postgres-3-0 pods have started, continue on to create the tablespaces.

The postgres tablespaces need to be created before we can configure a database. Load and run tbscript.sh on postgres-1-0 (and also on postgres-2-0 and postgres-3-0 if you installed a cluster).

kubectl -n namespace cp tbscript.sh postgres-1-0:/tmp/tbscript.sh

kubectl -n namespace exec postgres-1-0 -- /bin/bash /tmp/tbscript.sh
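If you installed the kubegres cluster, the same script must be run on all three pods; a bash loop such as the following (with namespace as a placeholder, matching the commands above) covers them in one pass:

for pod in postgres-1-0 postgres-2-0 postgres-3-0; do
  kubectl -n namespace cp tbscript.sh $pod:/tmp/tbscript.sh
  kubectl -n namespace exec $pod -- /bin/bash /tmp/tbscript.sh
done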

c. Deploy the RMI Dispatcher

To install the RMI dispatcher, load the following yaml files:

kubectl apply -f yaml/045-config-adapters.yaml

kubectl apply -f yaml/085-pvc-isvdi.yaml

kubectl apply -f yaml/135-service-isvdi.yaml

kubectl apply -f yaml/225-deployment-isvdi.yaml

Once the isvdi pod has started, it's ready.

4. Configure the data tier

a. Restart the isvgimconfig container

The data tier is configured to use SSL. However, the CA that created their certificates is not currently in the IVIG truststore. One option is to simply add it to the truststore, but the truststore is recreated with a different password on every restart of the container, so a simpler approach is to just follow the existing process.

If you did not do so earlier, please check config/config.yaml to make sure the isvgimRootCA.crt is specified in the truststore section:

server:
  truststore:
  - '@isvgimRootCA.crt'

If you only added it now, you will need to repackage config.yaml in the ConfigMap:

bin/createConfigs.sh setup

Then, make sure the latest has been loaded:

kubectl apply -f yaml/020-config-isvgimconfig.yaml

And finally, restart the pod:

kubectl -n namespace scale --replicas=0 deployment isvgimconfig

kubectl -n namespace scale --replicas=1 deployment isvgimconfig

Once the pod is Running and 1/1 Ready, you can continue.

b. Configure LDAP

Connect back into the isvgimconfig pod to continue the configuration. Run kubectl -n namespace get pods, then based on the name:

kubectl -n namespace exec -it isvgimconfig-85df579f59-zhs2d -- bash

/work/ldapConfig.sh install

When the pod was started, the config.yaml was read and the LDAP section was turned into /work/ldapConfig.properties. The ldapConfig.sh script calls the regular IM Java code that loads the LDAP schema. This code also updates the enRole.properties and enRoleLDAPConnection.properties files with the information from the ldapConfig.properties.

c. Configure DB

/work/dbConfig.sh install

Similarly, the DB section of config.yaml was turned into dbConfig.properties. This script calls the regular IM Java code to load the DB schema. It also updates enRoleDatabase.properties with the connection information, the encrypted admin credentials, and the tablespace information. The admin credentials can then be retrieved automatically during upgrades.

d. Collect artifacts
We need to transfer a few more files back to the host to be persisted in a ConfigMap.

kubectl -n namespace cp isvgimconfig-85df579f59-zhs2d:/work/config.yaml config/config.yaml
kubectl -n namespace cp isvgimconfig-85df579f59-zhs2d:/opt/ibm/wlp/usr/servers/defaultServer/config/data/enRole.properties data/enRole.properties
kubectl -n namespace cp isvgimconfig-85df579f59-zhs2d:/opt/ibm/wlp/usr/servers/defaultServer/config/data/enRoleLDAPConnection.properties data/enRoleLDAPConnection.properties
kubectl -n namespace cp isvgimconfig-85df579f59-zhs2d:/opt/ibm/wlp/usr/servers/defaultServer/config/data/enRoleDatabase.properties data/enRoleDatabase.properties

If you will be using the Risk Analytics module, also retrieve enRoleAnalytics.properties:

kubectl -n namespace cp isvgimconfig-85df579f59-zhs2d:/opt/ibm/wlp/usr/servers/defaultServer/config/data/enRoleAnalytics.properties data/enRoleAnalytics.properties

Then edit this file in the data directory to change:

analytics.enable=false

to

analytics.enable=true
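This edit can also be scripted, for example:

sed -i 's/^analytics.enable=false/analytics.enable=true/' data/enRoleAnalytics.properties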

You must also load the key into the database. Run the following command:

kubectl -n namespace exec isvgimconfig-85df579f59-zhs2d -- /bin/bash -c "/work/loadEnterpriseKey.sh $RISKKEY"

where $RISKKEY is the additional key you acquired from Passport Advantage.

If you have selected USER_VALUE_UNIT licensing, and you wish to define criteria for identifying External Users, add the following two lines to data/enRole.properties:

enrole.license.externaluser.objectclass=
enrole.license.externaluser.attribute=

You can specify a comma-delimited list of objectclasses that denote a Person record is an External User (e.g. extPerson,vendor), or you can specify an attribute=value pair that will exist on External User records (e.g. userType=External). If both are specified, only the objectclass definition will be used.
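For example, using the illustrations above, either form could be added to data/enRole.properties (only one is needed):

enrole.license.externaluser.objectclass=extPerson,vendor
enrole.license.externaluser.attribute=userType=External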

Then package them in ConfigMaps:

bin/createConfigs.sh setup

bin/createConfigs.sh

And redeploy them into the cluster:

kubectl apply -f yaml/020-config-isvgimconfig.yaml

kubectl apply -f yaml/015-config-isvgimdata.yaml

5. Deploy the IVIG application

With the data tier configured, the environment is now ready to deploy the IVIG application.
  1. Load the MQ ConfigMaps:

    kubectl apply -f yaml/011-config-mqcfg-ssl.yaml

    kubectl apply -f yaml/030-config-mqcfg.yaml

  2. Stop the isvgimconfig pod:

    kubectl delete -f yaml/200-deployment-isvgimconfig.yaml

  3. Deploy the IVIG yamls:
    
    kubectl apply -f yaml/075-pvc-mqshare.yaml
    kubectl apply -f yaml/100-service-isvgim.yaml
    kubectl apply -f yaml/120-service-mqlocal.yaml
    kubectl apply -f yaml/125-service-mqshare.yaml
    kubectl apply -f yaml/215-deployment-mqshare.yaml
    kubectl apply -f yaml/300-statefulset-isvgim.yaml
    
  4. Wait for the isvgim-0 pod to be Ready 4/4 (mqlocal, isvgim, and 2 busybox containers that expose the logs). Reaching this point indicates everything was done correctly and the application is working. However, there is a known issue with MQ: the first time a container starts with an uninitialized volume, it will only run in non-SSL mode. The pods need to be restarted once to come up with SSL support; they will support SSL on every start thereafter, so this is a one-time restart.
    
    kubectl -n namespace scale --replicas=0 deployment mqshare
    kubectl -n namespace scale --replicas=1 deployment mqshare
    kubectl -n namespace scale --replicas=0 sts isvgim
    kubectl -n namespace scale --replicas=1 sts isvgim
    

    Once it becomes Ready 4/4, it is ready to use.

If you have an Enterprise or Compliance Key, and would like to enable the Risk Analytics module, there are a few more yamls to load:

kubectl apply -f yaml/006-serviceaccount-ivigrisk.yaml
kubectl apply -f yaml/031-config-ivigrisk.yaml
kubectl apply -f yaml/090-pvc-riskengine.yaml
kubectl apply -f yaml/415-job-riskstart.yaml

The risk-start job will attempt to contact the Kubernetes API endpoint you specified in values.yaml as "clusterUrl" to spawn a "driver" pod. This pod will in turn spawn an "executor" pod. All 3 will remain running, but only the executor pod will be consuming resources as it works. If there is a failure to start, it might be due to resources. The default settings require 2 full CPU cores per node, which is not always available. You can check if this is an issue with:

kubectl get nodes

(for each node) kubectl describe node node_name

This will show you the resource requests compared to the resource limits. If it is a resource issue, you will either need to modify the config/analytics/store/config/mainJob.properties to lower the CPU requirement to 1, or increase the number of CPU cores available on your nodes.

6. Additional setup options

If you need to expose the internal LDAP or DB outside the Kubernetes cluster, two yaml files have been set up for this, but you can modify them as needed.

kubectl apply -f yaml/115-service-ldapExt.yaml

kubectl apply -f yaml/116-service-dbExt.yaml

These will open node ports to the various pods. The default mapping is:

30636 -> isvd-proxy
30637 -> isvd-replica-1
30638 -> isvd-replica-2
30543 -> postgres (primary)
30544 -> postgres (replica)

To best manage licenses, it is highly recommended to install the IBM License Server. If you selected PROCESSOR_VALUE_UNIT licensing, it will collect statistics automatically from Kubernetes based on the pod annotations. If you selected USER_VALUE_UNIT licensing, IVIG will update the License Server daily with user calculations. It installs into its own namespace (ibm-common-services), and includes an operator and the server instance.

More details about it can be found here:

https://www.ibm.com/docs/en/cloud-paks/foundational-services/3.23?topic=platforms-tracking-license-usage-stand-alone-containerized-software

The easiest way to install it is via its installation script:

https://raw.githubusercontent.com/IBM/ibm-licensing-operator/latest/common/scripts/ibm_licensing_operator_install.sh

But it can also be installed manually from the instructions on this page:

https://www.ibm.com/docs/en/cloud-paks/foundational-services/3.23?topic=software-offline-installation

With it installed, we need to collect its connection information and store it in the IVIG namespace. The following 4 commands reset the values in the IVIG namespace; only the last two need to be run for a new installation.

kubectl -n namespace delete --ignore-not-found=true secret ibm-licensing-upload-token

kubectl -n namespace delete --ignore-not-found=true cm ibm-licensing-upload-config

kubectl get secret ibm-licensing-upload-token -n ibm-common-services -oyaml | grep -v "namespace: ibm-common-services" | kubectl apply -n namespace -f -

kubectl get cm ibm-licensing-upload-config -n ibm-common-services -oyaml | grep -v "namespace: ibm-common-services" | kubectl apply -n namespace -f -

7. Applying updates

To apply fixpack updates, you must first pull the new image from icr.io and push it into your local repository. There is also an IMFixInstaller.sh script on FixCentral to help deploy the fixpack, but it relies on updateYaml.sh being able to deploy YAML files, which has been disabled here. The steps it would have taken are:
  1. a. Stop the existing isvgim pods. LDAPUpgrade and DBUpgrade cannot be run while IVIG is running.

    (1) kubectl -n namespace scale --replicas=0 sts isvgim

  2. b. Start the new image in config mode

    (1) Edit helm/values.yaml to change the tag of the isvgim image to the new release.

    (2) Run: bin/updateYaml.sh 201-deployment-isvgimconfig.yaml

    Note: There are both 200- and 201- versions. The 200- is for initial setup, while 201- is for upgrades because it mounts the existing keystore and data files.

    (3) Deploy the updated YAML: kubectl apply -f yaml/201-deployment-isvgimconfig.yaml

  3. c. Download the fix package

    (1) If it doesn't exist, create the <starter>/fixes directory

    (2) Find the config pod's name: kubectl -n namespace get pods

    (3) Copy the file: kubectl -n namespace cp PODNAME:/work/fixPackage.tgz fixes/fixPackage.tgz

    (4) Unpack it: tar zxf fixes/fixPackage.tgz -C fixes

    (5) Free up space: rm fixes/fixPackage.tgz

  4. d. Determine which fixes to install

    (1) Find your current level: cat <starter>/.installedFixes

    You should see something like this, 4 columns separated by tabs:

    ID      FIX FULL NAME    INSTALL DATE    USER
    0001    10.0.2.0_IF1     2024-03-31      system
    0002    10.0.2.1         2024-03-31      system
    0003    10.0.2.2         2024-03-31      system
    0004    10.0.2.3         2024-07-19      system
    0005    10.0.2.4         2024-09-30      system

    The ID corresponds to the directories that exist in the fixes directory. The full name is the more understandable version that the ID maps to. The install date and user indicate who ran the IMFixInstaller.sh script and when. However, in this scenario, we only see the fixes that existed when the product was installed, as indicated by "system" as the user. The date was when each fixpack was released. The highest ID, which should also be the last entry in the file, is your current version.

    (2) In the fixes directory, any directories there that do not appear in the .installedFixes file are the ones that need to be applied, IN ORDER.

  5. e. Apply the fixes

    (1) For each fix that needs to be installed, there is a corresponding applyFix.sh script in the directory. You can run it in dry-run mode to see what would happen during an actual upgrade. The script is designed to be called from IMFixInstaller.sh rather than manually, so it's not as user friendly as you might like. The format of the command is:

    applyFix.sh [0|1] [apply|rollback]

    The first parameter is whether to use dry-run mode; the second is whether to apply the fix or roll it back. There is no rollback in dry-run mode, but if there is an error when actually applying the fix, IMFixInstaller.sh calls the script again to roll back the changes. All of the file system changes can be rolled back, but any updates to LDAP or the DB are irreversible short of restoring from backup.

    In this case, we need to use: applyFix.sh 1 apply

    This will print out a list of the files it would modify. This step is not technically necessary, but it is easy to run and gives a good idea of what changes will be made.

    (2) In each specific fix directory, you will find two files, among others: filelist and copylist. filelist is the full list of modified files, including diffs, and they can all be found in the files directory. copylist is the list of files that can simply be copied to the starter area because they are either scripts or new files. These are files that would not have user customizations. The diff files are located in the diffs directory, and show what changed between the last fixpack release and this one.

    You need to copy the files in copylist into the starter area, and review the diffs to make the necessary changes in your yaml files. You should be able to apply the diffs directly using the patch command, but it can also be done manually with vi, emacs, or whatever text editor you prefer.
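    For example, to preview and then apply one of the provided diffs with patch (the fix ID, diff filename, and -p strip level shown here are illustrative; check the paths in the diff header to choose the right -p value):

    patch -p1 --dry-run < fixes/0006/diffs/values.yaml.diff
    patch -p1 < fixes/0006/diffs/values.yaml.diff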

    (3) If updateConfig.sh exists, it also must be applied. updateConfig.sh is a placeholder for any custom commands that need to be run. The fixpack process has been automated, but sometimes there are steps that need to be done in a specific way. Most updates will not include this file, but if it exists, it's important. You can either run the script to allow it to work, or you can review it and implement the necessary changes.

    For example, in the upgrade from 10.0.2.x to 11.0.0.0, new entries needed to be added to values.yaml to support the new Risk Analytics module. Updating existing lines is easy for the automated script, but adding new lines in specific places requires some custom code.

    (4) Update the image tags in values.yaml

    At the top of the applyFix.sh script is a line with the new image tags. e.g.

    IMAGES=" isvgim:11.0.0.0 isvdproxy:10.0.3.0_IF3 mq:9.4.0.6 kubegres:1.16"

    The format is <values.yaml attr>:<new tag>

    The line above would change these lines:

    isvgim: icr.io/isvg/identity-manager:10.0.2.4
    isvdproxy: icr.io/isvd/verify-directory-proxy:10.0.2.0
    mq: icr.io/isvg/mq:9.3.0.15
    kubegres: reactivetechio/kubegres:1.18

    to

    isvgim: icr.io/isvg/identity-manager:11.0.0.0
    isvdproxy: icr.io/isvd/verify-directory-proxy:10.0.3.0_IF3
    mq: icr.io/isvg/mq:9.4.0.6
    kubegres: reactivetechio/kubegres:1.16

    The name of the image is not touched, so images that come from a private repository continue to come from that private repository.
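    A sketch of scripting the tag update with sed, based on the example above (confirm the current tags in your values.yaml before running; matching only the image basename and tag preserves any private repository prefix):

    sed -i 's|identity-manager:10.0.2.4|identity-manager:11.0.0.0|' helm/values.yaml
    sed -i 's|verify-directory-proxy:10.0.2.0|verify-directory-proxy:10.0.3.0_IF3|' helm/values.yaml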

    (5) Update the templates

    Any changes to yamls or values.yaml need to be reflected in kubernetes. This involves recreating the ConfigMaps:

    bin/createConfigs.sh setup

    kubectl apply -f yaml/020-config-isvgimconfig.yaml

    bin/createConfigs.sh ldap

    kubectl apply -f yaml/025-config-isvgimldap.yaml

    bin/createConfigs.sh mq

    kubectl apply -f yaml/011-config-mqcfg-ssl.yaml

    kubectl apply -f yaml/030-config-mqcfg.yaml

    bin/createConfigs.sh isvdi

    kubectl apply -f yaml/045-config-adapters.yaml

    Not all of these might have changed in the fixpack, but it is safe to run them anyway: if nothing actually changed, Kubernetes will simply report that the resource is unchanged. For each of the pods you have deployed, you will also need to update their yaml. Below is a list of all of them:

    bin/updateYaml.sh 205-deployment-isvd.yaml

    bin/updateYaml.sh 206-deployment-isvd2.yaml

    bin/updateYaml.sh 210-deployment-proxy.yaml

    bin/updateYaml.sh 215-deployment-mqshare.yaml

    bin/updateYaml.sh 220-deployment-postgres.yaml

    bin/updateYaml.sh 225-deployment-isvdi.yaml

    bin/updateYaml.sh 405-operator-kubegres.yaml

    bin/updateYaml.sh 410-crd-postgres.yaml

    For any of the above that you updated, you must also load the changes into kubernetes:

    kubectl apply -f yaml/205-deployment-isvd.yaml

    (6) Repeat the above steps for any additional fixpacks you might need to apply.

    (7) Once all the file portions of the fixpacks have been installed, you can update the data tier.

    (a) Upgrade LDAP: bin/util/ldapUpgrade.sh --installer

    (b) Upgrade DB: bin/util/dbUpgrade.sh --installer

    (c) If upgrading from 10.0.2.x to 11.0.0.0, also run: bin/util/upgradeRoleHierarchy.sh

    - This is a one time event when moving from 10.0.2.x to 11.0.0.0.

    (8) Update installedFixes

    (a) Edit <starter>/.installedFixes to add entries for all the fixpacks you installed. Use tabs rather than spaces between the columns.
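    For example, if the next fix directory is 0006 and it maps to 11.0.0.0_IF1 (the ID, name, date, and userid shown here are illustrative), the new line would look like this, with the columns separated by tabs:

    0006    11.0.0.0_IF1    2025-01-15    admin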

  6. f. Post-install steps

    (1) Remove the isvgimconfig pod

    - kubectl delete -f yaml/201-deployment-isvgimconfig.yaml

    (2) Start the isvgim pods

    - bin/updateYaml.sh 300-statefulset-isvgim.yaml

    - kubectl apply -f yaml/300-statefulset-isvgim.yaml

    - kubectl -n namespace scale --replicas=1 sts isvgim

    Note: If you run more than one isvgim pod, use that number for the replicas instead of "1".