Troubleshooting tips
Troubleshooting is a systematic approach to determine why something does not work as expected and how to resolve the problem. Certain common techniques can help with the task of troubleshooting. If you encounter issues or errors, try these steps before you contact Customer Support.
Diagnostic tools
- Run the getLogs.sh script. This script collects the IM application and Liberty logs in a tar/gzip file in the logs directory. Check the log files for any error messages and resolve those errors.
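For example, assuming that your current directory is the starter kit installation area, a minimal invocation is:
bin/getLogs.sh
The resulting archive is placed in the logs directory of the starter kit installation area.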
- You can also access the logs by using the kubectl logs facility. The name of the log container is logs-im.
Here is an example that shows how to retrieve the logs-im file:
kubectl -n <namespace> logs isvgim-0 logs-im
Here, <namespace> is the Kubernetes namespace that you specified in the config.yaml file during deployment.
- By default, the log files are in JSON format. You can change the format to XML.
- Open the enRoleLogging.properties file.
- Change the formatter.PDXML.className property as shown:
formatter.PDXML.className=com.ibm.itim.logging.LogXMLFormatter
Troubleshooting tips
- The IVIG and Liberty logs are output from the IBM Verify Identity Governance - Container pod as logs-im and logs-liberty respectively. They can be viewed with: kubectl -n your_namespace logs isvgim-0 -c logs-im. If you have multiple IBM Verify Identity Governance - Container pods, you might need to use isvgim-1, isvgim-2, and so on, depending on which pod handled the request you are interested in.
- If a pod has crashed, you can use the --previous option to retrieve the logs from the prior run.
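For example, assuming the same pod and container names as above:
kubectl -n your_namespace logs isvgim-0 -c logs-im --previous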
- Alternatively, the bin/getLogs.sh script will connect into each running IBM Verify Identity Governance - Container pod, tar up the entire logs directory, and place it in the logs directory of the starter kit installation area.
- If it is a Kubernetes problem, you can start with: kubectl -n your_namespace get pods
- If it is a problem with a pod, you might try: kubectl -n your_ns describe pod the_pod_name
- If an expected pod is missing, it is likely either a deployment or a statefulset. You can try: kubectl -n your_ns get deployments or kubectl -n your_ns get statefulsets and if any of those show a problem, you can describe them. For example: kubectl -n your_ns describe deployment isvd-replica-1
- You can view anything in the Kubernetes environment with "get" and "-o yaml". For example, if you wanted to see the current definition for the isvgimsetup configmap, you could use: kubectl -n your_ns get cm isvgimsetup -o yaml
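If you need only a specific portion of the object, kubectl also supports jsonpath output. For example, to print just the data section of the same configmap:
kubectl -n your_ns get cm isvgimsetup -o jsonpath='{.data}'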
Performance enhancement for Provisioning Policy
In certain scenarios, you can improve the performance of Provisioning Policy tasks by tuning one parameter in the enrolepolicies.properties file.
The policy.analysis.complete.async parameter in the enrolepolicies.properties file defines the execution mode for the provisioning policy partitioning completion task, and allows it to run either synchronously or asynchronously.
If your business process involves the evaluation of very large user populations, asynchronous processing of completion events can improve performance.
- Prerequisite
- Ensure that a valid event messaging queue entry is configured in the enrole.properties file.
- Updating the Provisioning Policy
- Perform the following steps.
- In your IBM Verify Identity Governance - Software Stack environment, navigate to <LibertyHome>\usr\servers\defaultServer\config\data
- Open the enrolepolicies.properties file.
- Set the value of the following parameter to true. By default, the value is false.
policy.analysis.complete.async
- Save your changes and close the enrolepolicies.properties file.
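After the change, the entry in the enrolepolicies.properties file looks like this:
policy.analysis.complete.async=true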
Troubleshooting: Uncontrolled growth of WAL files due to PostgreSQL WAL archiving configuration in IVIG Container v11
- Problem
- In an IBM Verify Identity Governance - Container v11 deployment with PostgreSQL as the database, the configuration of Write-Ahead Logging (WAL) archiving may lead to uncontrolled growth of WAL files, potentially consuming all the available disk space.
- Symptom
- You may notice a steady increase in disk usage in the containerized PostgreSQL data directory. The file system eventually becomes full, leading to database write failures or service disruptions. Log files indicate that WAL segments are not being archived or removed.
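To confirm that WAL files are accumulating, you can check the size of the pg_wal directory inside the PostgreSQL container. A sketch, assuming the pod name isvgimdb-0 and the default PostgreSQL data directory path (both may differ in your deployment):
kubectl -n your_namespace exec isvgimdb-0 -- du -sh /var/lib/postgresql/data/pg_wal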
- Cause
- The issue arises from the following configuration in the 040-config-isvgimdb.yaml file:
archive_mode = on
When the archive_mode parameter is enabled, it instructs PostgreSQL to retain the WAL files for archiving.
However, no archive_command or archive_library is specified; thus, PostgreSQL has no mechanism to actually archive or remove the obsolete WAL files.
As a result, WAL files continue to accumulate indefinitely.
- Environment
- This issue may occur in the following versions:
- IBM Verify Identity Governance Container version 11.0.0
- IBM Verify Identity Governance Container version 11.0.0 Interim Fix 1
- IBM Verify Identity Governance Container version 11.0.0 Interim Fix 2
- Problem Resolution
- To stop WAL archiving, perform the following steps:
- Open the <STARTER>/config/db/postgres.conf file.
- Update the following parameter as shown here:
archive_mode = off
- Next, execute this script: <STARTER>/bin/createConfigs.sh db
- Finally, restart the PostgreSQL pod for the changes to take effect.
This disables WAL archiving and allows PostgreSQL to manage WAL file cleanup automatically.
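For example, you can restart the pod by deleting it so that Kubernetes re-creates it. The pod name here is illustrative; use kubectl get pods to find yours:
kubectl -n your_namespace delete pod isvgimdb-0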
- Additional Information
- If you want to continue to use WAL archiving for backup and restore purposes, you can leave archiving enabled:
archive_mode = on
In this case, ensure that you configure the archiving feature correctly by referring to the PostgreSQL documentation.
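For example, the PostgreSQL documentation shows an archive_command of the following form, where the archive directory is a placeholder that you must adapt to your environment:
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'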