Using ISVDI
Detailed procedure for using IBM Security Verify Directory Integrator v10 with an IBM Verify Identity Governance - Container deployment.
Overview
IBM Verify Identity Governance - Container supports a Dispatcher container, which allows easy setup of IBM Security Verify Directory Integrator (ISVDI) v10 adapters in the same Kubernetes cluster.
Deploying ISVDI
- ISVDI deployment during IBM Verify Identity Governance - Container installation
- When you begin the installation of IBM Verify Identity Governance - Container by using the configure.sh script, you are prompted whether to deploy the ISVDI container. To deploy this container, you must provide the license key, which is available on the IBM Passport Advantage portal.
If you choose not to deploy the ISVDI container during the installation of IBM Verify Identity Governance - Container or during an upgrade, you can easily deploy it later.
- ISVDI deployment after IBM Verify Identity Governance - Container installation
- You can choose to deploy the ISVDI container after you have installed IBM Verify Identity Governance - Container. In this case, you use the bin/sys/setupISVDI.sh script, as shown in the example that follows this list.
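For example, a minimal invocation, assuming that the script is run from the installation directory and prompts interactively for the license key, in the same way that configure.sh does during installation:
./bin/sys/setupISVDI.sh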
The default deployment name for the Dispatcher container is isvdi. This identifier is used in the name of the container, the service, and the persistent volume (PV) that is created to store the adapters. A given deployment can be scaled to multiple pods to handle a larger workload. For example:
kubectl -n <namespace> scale --replicas=5 deployment isvdi
While this is a powerful way to increase the capacity of the adapters, you must take into consideration a technical detail regarding the storage class. During installation, if there is only a single node in the cluster, the PVs are set up to use the access mode "ReadWriteOnce". The "Once" is slightly misleading because it represents how many nodes can access the PV, not how many pods. If there are multiple nodes, then IVIG requires a storage class that supports "ReadWriteMany", which allows pods on any node in the cluster to access the PV. NFS and rook-ceph are examples of storage classes that support ReadWriteMany. Without this requirement, scaling the pods would lead to errors, because pods on the additional nodes would not have access to the PV that holds the adapter artifacts.
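To check which access mode is in use, and which storage classes your cluster offers, you can use standard kubectl commands. This assumes that the PV is bound through a persistent volume claim (PVC) in the IVIG namespace, which is the usual pattern:
kubectl -n <namespace> get pvc (the ACCESS MODES column shows RWO for ReadWriteOnce or RWX for ReadWriteMany)
kubectl get storageclass (lists the storage classes that are available in the cluster)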
After the Dispatcher pod is set up and running, you can work with it by using the bin/adapterUtil.sh script. Download the required adapter .zip file from IBM Passport Advantage, and then load it into the pod by using this command:
adapterUtil.sh -loadAdapter path/to/Adapter.zip accept
Note that the "accept" parameter is required to acknowledge the acceptance of the adapter license. This command copies the necessary files to the IBM Verify Identity Governance - Container pod. To verify, use this command: adapterUtil.sh -listAdapters. Alternatively, you can view more details about a given adapter by using this command: adapterUtil.sh -infoAdapter adapterName, where adapterName is the adapter name as displayed by the -listAdapters option.
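For example, a typical end-to-end sequence, using a hypothetical adapter archive name (the actual file name, and the adapter name that -listAdapters displays, depend on the adapter that you download):
adapterUtil.sh -loadAdapter downloads/UnixLinuxAdapter.zip accept
adapterUtil.sh -listAdapters
adapterUtil.sh -infoAdapter UnixLinuxAdapter (where UnixLinuxAdapter is the name as displayed by -listAdapters)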
In IBM Verify Identity Governance - Container, you must load the profile for each adapter before you create a service for it. When you define a service, the path to the Dispatcher is: rmi://isvdi:1099/SDIDispatcher
Support for multiple deployments
IBM Verify Identity Governance - Container also supports multiple deployments, so you can have multiple sets of Dispatchers where each set handles a different group of adapters.
Be aware that while this option is available, it should be used only to segregate adapter profiles, or when a set of Dispatchers must be run differently for some reason (for example, different managing teams, network segmentation, or different logging settings). The standard Kubernetes scale command is the recommended approach to handle increased volume.
If you have multiple deployments, ensure that you provide the correct deployment name to the adapterUtil.sh script. Here is an example:
adapterUtil.sh -deploymentName unix-adapters -loadAdapter path/to/UnixLinuxAdapter.zip accept
- Example command for rollout restart:
kubectl -n isvgim rollout restart deployment isvdi (where isvgim is the namespace, and isvdi is the Dispatcher deployment name)
- Example command for scale:
kubectl -n isvgim scale --replicas=0 deployment isvdi
kubectl -n isvgim scale --replicas=3 deployment isvdi (where 3 is the number of pods that you want active)
To define a service when you have multiple deployments, you must specify the deployment name in the Dispatcher path.
For example: rmi://unix-adapters:1099/SDIDispatcher (where unix-adapters is the deployment name)
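To confirm which Dispatcher deployments exist in the namespace, and how many pods each one is running, you can list them with standard kubectl:
kubectl -n isvgim get deployments (where isvgim is the namespace, as in the earlier examples)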