Troubleshooting
Problem
Discovering assets in Cloud Pak for Data can fail for various reasons. At times, these errors are caused by the kind of data being discovered, and fixing them only requires tuning the discovery engine with more suitable values. The following sections describe some of these errors and their probable solutions.
Identifying the Issue

These issues usually manifest in the UI as errors on the discovery results page, or as tooltips on warning and error icons.
To get more details about the problem, scan the logs. They can be obtained either by following the standard log collection process (diagnostics collection) or by manually retrieving specific logs from the relevant pods and the folders inside them.
Manual log collection can be done by running the commands below with an authenticated oc client.
Get and scan the logs from the iis-services pod:
$ oc get pods | grep iis
$ oc logs <iis-services pod name>
Get and scan the pod logs from the is-en-conductor-0 pod:
$ oc logs is-en-conductor-0
Get and scan the log files for the ASB agent and ODF from the is-en-conductor-0 pod:
$ oc rsh is-en-conductor-0
$ cd /opt/IBM/InformationServer/ASBNode/logs/
$ ls
Agent.out  AgentService.pid  asb-agent-0.out.lck  odfengine.log.0  odfengine.log.1  odfenginelog.out
Agent.pid  asb-agent-0.out  ODFAdmin.out  odfengine.log.0.lck
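For a quick first pass over these logs, a plain text search is usually enough. For example (the search patterns here are illustrative, not an exhaustive list of failure signatures):
$ oc logs <iis-services pod name> | grep -i -E "error|exception"
$ oc exec is-en-conductor-0 -- grep -i "fatal" /opt/IBM/InformationServer/ASBNode/logs/odfengine.log.0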
1. Discovery jobs fail with a "JVM could not be created" error
"Entry 41: pxbridge(6),0: The JVM could not be created. Error code:-4 (::createJVM, file CC_JNICommon.cpp, line 652)
Solution:
To ensure proper operation of the services, configure the container runtime's default PID limit so that containers can run up to 12288 processes. This must be done on all worker nodes.
On Docker, change the setting by appending --default-pids-limit=12288 to the OPTIONS= line in the /etc/sysconfig/docker file.
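As a sketch, the resulting line in /etc/sysconfig/docker could look like the following; any flag other than --default-pids-limit stands in for whatever options your file already contains:
OPTIONS="--log-driver=json-file --default-pids-limit=12288"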
On CRI-O, add pids_limit = 12288 under the [crio.runtime] section in the /etc/crio/crio.conf file.
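After the edit, the relevant section of /etc/crio/crio.conf should contain (other keys under [crio.runtime] omitted here):
[crio.runtime]
pids_limit = 12288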
To apply the changes, restart the container runtime daemon on each worker node:
# for CRI-O based installations
systemctl restart crio
# for Docker based installations
systemctl restart docker
Verify that the daemon restarted by running the command below:
# for CRI-O based installations
systemctl status crio
# for Docker based installations
systemctl status docker
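The restart and status commands must run on each worker node. If you do not have direct SSH access to the nodes, a debug shell opened through oc works as well; the node name below is illustrative (list the nodes with oc get nodes):
$ oc debug node/worker-0
sh-4.4# chroot /host
sh-4.4# systemctl restart crio
sh-4.4# systemctl status crio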
Once the changes are verified, restart the discovery job.
2. Discovery jobs fail with a "java.lang.OutOfMemoryError" error
pxbridge(0),0: Java runtime exception occurred: java.lang.OutOfMemoryError: Java heap space (java.lang.J9VMInternals$1::run, file J9VMInternals.java, line 183)
Solution:
This error is usually caused by the Java heap running out of memory; increasing the heap sizes through the iis-services pod should resolve it.
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ia.jdbc.connector.heapSize -value 2048
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ia.engine.javaStage.heapSize -value 1024
Verify the settings by running:
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -display -key com.ibm.iis.ia.jdbc.connector.heapSize
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -display -key com.ibm.iis.ia.engine.javaStage.heapSize
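If the update took effect, each display command prints the key together with its configured value, along these lines (the exact output format can vary by Information Server version):
com.ibm.iis.ia.jdbc.connector.heapSize=2048
com.ibm.iis.ia.engine.javaStage.heapSize=1024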
Once the changes are verified, restart the discovery job.
3. Discovery jobs fail with a "Data truncation" error
"name": "com.ibm.infosphere.ia.CADataStageJobService {ec1481df.fee6c3ac.n7i6iq3dr.jv1v8os.irds24.k6b86c41o33610jnng17k}",
"message": "DataStage job failed. Check the log for more details.\nDataStage job log:\nEntry 56: pxbridge(0),0: Failure during execution of operator logic.\nEntry 58: pxbridge(0),0: Fatal Error: The connector detected character data truncation for the link column post_mvvar2. The length of the value is 419 and the length of the column is 255
"message": "DataStage job failed. Check the log for more details.\nDataStage job log:\nEntry 56: pxbridge(0),0: Failure during execution of operator logic.\nEntry 58: pxbridge(0),0: Fatal Error: The connector detected character data truncation for the link column post_mvvar2. The length of the value is 419 and the length of the column is 255
Solution:
By default, the column length is set to 255; the solution is to increase it based on the length of the data value reported in the error.
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ia.jdbc.columns.length -value 5000
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ia.jdbc.long.columns.length -value 5000
Verify that the changes are successful:
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -display -key com.ibm.iis.ia.jdbc.columns.length
$ oc exec -it <iis-services pod name> -- /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -display -key com.ibm.iis.ia.jdbc.long.columns.length
Once the changes are verified, restart the discovery job.
4. Discovery jobs fail with a "too big to fit in a block" error
Entry 130: main_program: Step execution finished with status = FAILED.
Entry 225: pxbridge(0),1: Fatal Error: Virtual data set.; output of
"pxbridge(0)": the record is too big to fit in a block; the length requested
is: 410400, the max block length is: 131072.
Entry 226: node_node2: Player 1 terminated unexpectedly.
Entry 227: main_program: APT_PMsectionLeader(2, node2), player 1 - Unexpected
exit status
Solution:
Change the value of APT_DEFAULT_TRANSPORT_BLOCK_SIZE within the conductor pod is-en-conductor-0:
$ oc rsh is-en-conductor-0
$ . /opt/IBM/InformationServer/Server/DSEngine/dsenv;
$ /opt/IBM/InformationServer/Server/DSEngine/bin/dsadmin -envset APT_DEFAULT_TRANSPORT_BLOCK_SIZE -value "200000" ANALYZERPROJECT;
Verify that the change was successful:
$ /opt/IBM/InformationServer/Server/DSEngine/bin/dsadmin -listenv ANALYZERPROJECT | grep DEFAULT_TRANSPORT_BLOCK
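If the variable was set, the grep returns the name/value pair, along these lines (the value shown is the one set above):
APT_DEFAULT_TRANSPORT_BLOCK_SIZE=200000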
Once the changes are verified, restart the discovery job.
Document Location
Worldwide
[{"Type":"MASTER","Line of Business":{"code":"LOB76","label":"Data Platform"},"Business Unit":{"code":"BU048","label":"IBM Software"},"Product":{"code":"SSHGYS","label":"IBM Cloud Pak for Data"},"ARM Category":[{"code":"a8m3p000000UoQxAAK","label":"Administration-\u003EAssets"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"2.5.0;3.0.0;3.0.1"}]
Log InLog in to view more of this document
This document has the abstract of a technical article that is available to authorized users once you have logged on. Please use Log in button above to access the full document. After log in, if you do not have the right authorization for this document, there will be instructions on what to do next.
Was this topic helpful?
Document Information
Modified date:
24 August 2023
UID
ibm16205696