Customization overview

Overview of the various customizations supported by IBM Verify Identity Governance - Container.

Customization Directory Structure

IBM Verify Identity Governance - Container supports a wide range of customizations.

The first step is to run the getExtensions.sh script. The script creates a starter/custom directory, which contains the .war files (such as ISC_UI.war and itim_console.war) that you can use for customization.

The getExtensions.sh script also creates a starter/custom/jars directory. You should place any custom JAR files into this jars directory.
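For reference, the resulting layout looks something like the following sketch (the exact set of .war files depends on your release):

starter/
  custom/
    ISC_UI.war
    itim_console.war
    ...
    jars/          <-- place custom JAR files here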

Customizing JAR files

IBM Verify Identity Governance - Container can be customized by supplying custom jar files, subform definitions, and other UI modifications. While properties and certificates are persisted via ConfigMap objects, that approach will not work for most jar files because ConfigMaps are limited to 1MB. To load larger files, there are three options available:
1. Using the "kubectl cp" command

The format of this command is: kubectl -n your_namespace cp /path/to/file pod_name:/path/in/container
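For example, assuming a namespace of isvgim, a pod named isvgim-0, and a placeholder target path (the actual path depends on where IM expects your particular customization):

kubectl -n isvgim cp ./myExtension.jar isvgim-0:/path/in/container/myExtension.jar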

This approach can be useful when developing a new customization, as it allows for a quick turnaround. However, there are two major drawbacks. First, you need to know the path to copy to, which means understanding both the layout of IM in the container and where IM will look for your custom file. More importantly, the files will not survive a restart of the pod.

Thus, in a development environment you can copy in the files, restart the server process, and check the results, but this approach should never be used in production.

2. Creating a new image based on the existing ISVGIM image

Populate a directory of customizations as specified below (named files in this example), place the following Dockerfile in the directory above it, and run "docker build -t repo/isvgim:custom ." (do not forget the period at the end) to create your own image with your custom files. The Dockerfile should look something like:

# Start from the shipped ISVGIM image
FROM icr.io/isvg-im-eap/isvg-im-beta:beta6
# Copy the customization files to the location processed at startup
COPY ./files /tmp/isvgimcustom
EXPOSE 9443
CMD ./work/initIMContainer.sh
The initIMContainer.sh script handles copying the files from /tmp/isvgimcustom to where they need to be for IM to find them. Note that you will need to host this image in your own repository (the "repo" token in the "docker build" command), update the image used by the isvgim statefulset, and repeat this process for each new release of the ISVGIM container.
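Put together, the workflow looks something like the following sketch. The repository name and namespace are placeholders, and the statefulset and container are both assumed to be named isvgim, matching the shipped 300-statefulset-isvgim.yaml:

docker build -t repo/isvgim:custom .
docker push repo/isvgim:custom
kubectl -n your_namespace set image statefulset/isvgim isvgim=repo/isvgim:custom
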
3. Mounting a volume to the container

There are any number of ways to create a volume, with the best option being an NFS mount. The key points are that the volume needs to be arranged as described below, and that it is mounted to /tmp/isvgimcustom in the isvgim container of the isvgim statefulset. If you do not have access to cloud storage or NFS, one simple approach is to use local storage. This involves placing the custom files in a directory on each node, creating node-specific PersistentVolume (PV) objects referencing the local directory, and then creating a PersistentVolumeClaim to reference the PV. Whichever node the pod is scheduled on, Kubernetes will assign it the PV from that node.

To aid in this approach, example files have been created in the yaml directory: 900-storageclass-local.yaml, 901-pv-local.yaml, and 910-pvc-local.yaml.

The only file that needs to be modified is 901-pv-local.yaml. Start by running "kubectl get nodes". You will need a separate PV yaml file for each node, so you can copy the first one to use as a base: "cp 901-pv-local.yaml 902-pv-local.yaml", or whatever name you like.

There are three lines to potentially change in each of those files.
metadata.name

The example uses "local-pv-node1", but this value must be unique. You can use any unique value, but it is recommended to continue with "local-pv-node2", etc., or to use the actual node name instead of node1 or node2.

spec.local.path

This is the directory on the node that contains the custom files. It does not need to be the same on each node, though maintenance is a bit easier if it is.

nodeAffinity

This is the value specified for nodeAffinity at the bottom of the file. The example uses "node1", but this must be changed to the name of your actual node, as displayed in the NAME column of the "kubectl get nodes" output. You only need to define PVs for nodes that will run the ISVGIM pods. If in doubt, it is safe to create PVs for all nodes; some just might not be used (e.g. on control-plane nodes).
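As an illustration, a PV edited for a second node might look something like the following sketch. The capacity, access mode, reclaim policy, and storageClassName shown here are assumptions for illustration; keep whatever the shipped 901-pv-local.yaml specifies, and replace worker-node-2 with your actual node name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/customizations
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-2

One way to create the storageclass, PV, and PVC objects is a plain "kubectl apply -f" of each file, for example "kubectl apply -f 901-pv-local.yaml".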

When setting up the directory on each node, make sure it has sufficient permissions. For the example "/tmp/customizations", the commands are:

mkdir -p /tmp/customizations
chmod 777 /tmp/customizations 
chcon -Rh system_u:object_r:svirt_sandbox_file_t:s0 /tmp/customizations 
Note: The "chcon" command is only needed if SELinux is enabled.

You must copy the custom JAR files into the "jars" subdirectory (for example: /tmp/customizations/jars), so that they are available to IVIG.
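For example, with a hypothetical custom jar named myExtension.jar:

mkdir -p /tmp/customizations/jars
cp myExtension.jar /tmp/customizations/jars/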

After you have performed this step and created the storageclass, PV, and PVC objects, the last update is to the 300-statefulset-isvgim.yaml file. This change must be made in the helm/templates directory: during a fix pack installation the template is reprocessed and the version in the yaml directory is replaced, so changes made directly to the yaml file will be lost, while changes made to the template will be persisted.

In the spec.template.spec.volumes section, add another entry for the isvgim-custom PVC:

- name: custom-volume
  persistentVolumeClaim:
    claimName: isvgim-custom
Next, in the isvgim container section (spec.template.spec.containers, name: isvgim), add another entry to volumeMounts:

- name: custom-volume
  mountPath: /tmp/isvgimcustom
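
For orientation, here is a sketch of where both entries land within the statefulset spec. All surrounding fields are elided; only the custom-volume and isvgim-custom entries are additions, the rest comes from the shipped template:

spec:
  template:
    spec:
      containers:
        - name: isvgim
          volumeMounts:
            - name: custom-volume
              mountPath: /tmp/isvgimcustom
      volumes:
        - name: custom-volume
          persistentVolumeClaim:
            claimName: isvgim-custom
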
To load the changes into Kubernetes, use the following command:
./bin/updateYaml.sh 300-statefulset-isvgim.yaml
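
Afterwards, you can verify that the files are visible in the container; for example (assuming your namespace, and the isvgim-0 pod name produced by the statefulset):

kubectl -n your_namespace rollout status statefulset/isvgim
kubectl -n your_namespace exec isvgim-0 -- ls /tmp/isvgimcustom/jars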