Troubleshooting integration servers and integration runtimes in the App Connect Dashboard
Review this information to help resolve issues when you deploy a Toolkit or Designer integration to an integration server or integration runtime in the App Connect Dashboard, or when the integration runs.
About this task
For advice about specific problems that can occur when you deploy or run the integration, see the following sections.
- Resolving a BAR file analysis error while creating a 13.0.1.0-r1 or later integration runtime
- Resolving resource usage issues on startup of a pod
- Checking the deployment status of an integration server or integration runtime
- Retrieving logs
- Enabling and downloading trace
- Resolving liveness probe failures for long running flows
- Resolving ValidatingAdmissionWebhook errors for BAR files
- Resolving MountVolume.SetUp failed for volume "contentservertls" errors for integration server or integration runtime deployments
Resolving a BAR file analysis error while creating a 13.0.1.0-r1 or later integration runtime
When you deploy a BAR file to create an integration runtime, you might see the following error if the types of flows in the BAR file cannot be identified when the BAR file is analyzed:
Error analysing bar file, review analyse output for further information:….
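As a first diagnostic step, it can help to confirm which flows the BAR file actually contains before you deploy it again. A BAR file is a zip archive, so you can list its contents locally; the file name MyIntegration.bar is a placeholder.

# List the artifacts packaged in the BAR file (for example, .msgflow files for
# Toolkit flows) to confirm what the analysis step is attempting to parse.
unzip -l MyIntegration.bar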
Procedure
To resolve this error, complete the following steps:
Resolving resource usage issues on startup of a pod
About this task
If a BAR file that you deploy for a Designer or Toolkit integration contains a complex or resource-intensive definition, you might notice that the integration server or integration runtime pod continually restarts on startup.
To resolve this issue, check the CPU and memory usage of the pod to establish whether the default resource limits need to be increased. For example, to check the resource usage from the Red Hat® OpenShift® web console, you can view the integration server or integration runtime pod details from the Details tab under Workloads > Pods, and then click the usage graph to view the metrics.

If the pod is close to or exceeding its resource limits, increase the limits to stop the restarts or to stop the pod from crashing.
Procedure
From the App Connect Dashboard, edit the integration server or integration runtime definition to adjust the resource limits for the pod's runtime container.
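If you prefer to work from the command line, the following minimal sketch shows one way to check the usage and raise the limits for an integration runtime. The name my-ir, the namespace ace-demo, and the label selector are placeholders, and the YAML path shown for the runtime container's limits is an assumption to verify against Integration Runtime reference: Custom resource values (for an integration server, the corresponding settings are expected under spec.pod.containers.runtime).

# Check the current CPU and memory usage of the pod (label selector is an assumption):
oc adm top pods -n ace-demo -l app.kubernetes.io/instance=my-ir
# Edit the integration runtime and raise the limits on the runtime container,
# for example (illustrative values only):
#   spec:
#     template:
#       spec:
#         containers:
#         - name: runtime
#           resources:
#             limits:
#               cpu: "1"
#               memory: 1Gi
oc edit integrationruntime my-ir -n ace-demo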
Checking the deployment status of an integration server or integration runtime
Procedure
To check the deployment status of an integration server or integration runtime, complete the following steps.
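If you have cluster access, you can also check the status from the command line. The following is a minimal sketch, assuming an integration runtime named my-ir in namespace ace-demo; the label selector is an assumption.

# Overall readiness as reported in the custom resource:
oc get integrationruntime my-ir -n ace-demo
# Status conditions and recent events, useful when the resource does not reach Ready:
oc describe integrationruntime my-ir -n ace-demo
# Pods that were created for the deployment:
oc get pods -n ace-demo -l app.kubernetes.io/instance=my-ir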
Retrieving logs
You can obtain user logs for an integration server or integration runtime from your cluster by running oc commands from the command line. You can also obtain logs for the IBM App Connect Operator and its custom resource instances in the same way.
If you need to send operational logs to IBM Support to aid with troubleshooting, see Gathering diagnostic information.
Procedure
To retrieve user logs for an integration server, complete the following steps:
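For example, the following sketch uses oc, assuming a namespace of ace-demo; the pod name, the operator deployment name, and the container name runtime are assumptions that you can verify with oc get pod -o jsonpath='{.spec.containers[*].name}'.

# Find the integration server or integration runtime pod:
oc get pods -n ace-demo
# Stream the user log from its main container (pod name is a placeholder):
oc logs my-is-7c9f8d5b4-abcde -c runtime -n ace-demo
# Logs for the IBM App Connect Operator can be retrieved in the same way from the
# namespace where the Operator is installed (deployment name is an assumption):
oc logs deployment/ibm-appconnect-operator -n openshift-operators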
Enabling and downloading trace
To aid with problem determination and troubleshooting, you can enable and then download user or service trace on a deployed integration server or integration runtime. Enabling trace is useful if you cannot get enough information about a particular problem from the entries that are available in the log.
Procedure
You can enable and manage trace as follows:
- From the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment, you can start (or enable) a trace on a deployed integration server or integration runtime by creating a trace object, and stop (or disable) a trace by deleting the trace object. You can also start and stop trace by using the server.conf.yaml file or an environment variable. For more information, see Trace reference.
- From an App Connect Dashboard instance, you can start a trace, stop a trace, download the trace log files, and clear the trace information from the logs by using the options menu on an integration server tile on the Servers page or an integration runtime tile on the Runtimes page. You can also start and stop trace by using the server.conf.yaml file or an environment variable (a server.conf.yaml sketch follows this list). For more information, see Enabling and managing trace for a deployed integration server or Enabling and managing trace for a deployed integration runtime.
- If you need to increase the download speed of the trace log files for an integration server or integration runtime, you can choose to allocate more CPU by updating the integration server or integration runtime custom resource. For more information, see Integration Server reference: Updating the custom resource settings for an instance or Integration Runtime reference: Updating the custom resource settings for an instance.
- If an integration server or integration runtime container is crashing, you can use an environment variable to prevent the container from stopping so that the trace can be collected as normal. For more information, see Collecting trace from an integration server or integration runtime container that is crashing.
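As a minimal sketch of the server.conf.yaml approach mentioned above, the following creates an overrides file that turns on service trace. The trace and traceSize property names are taken from the integration server's server.conf.yaml; check Trace reference for the exact settings that apply to your version. You would then supply the file to the integration server or integration runtime, for example through a Configuration object of type serverconf, before collecting the trace.

# Create a server.conf.yaml overrides file that enables service trace:
cat > server.conf.yaml <<'EOF'
trace: 'service'     # one of: none | service | diagnostic
traceSize: '1G'      # maximum size of each trace file
EOF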
Resolving liveness probe failures for long running flows
About this task
For long-running flows that take more than five minutes to complete, you might observe liveness probe failures for some of the containers in the integration server or integration runtime pods, because of the amount of time that the event loop requires to process the request.
To resolve this issue, adjust the following liveness probe values for the pod containers (except the runtime container) to values that prevent the probe from failing, as shown in the sketch after the parameter lists. The failureThreshold and timeoutSeconds settings are particularly relevant.
- Integration server:
In the listed parameters, * represents a container name such as connectors; for example, spec.pod.containers.connectors.livenessProbe.failureThreshold.
- spec.pod.containers.*.livenessProbe.failureThreshold
- spec.pod.containers.*.livenessProbe.initialDelaySeconds
- spec.pod.containers.*.livenessProbe.periodSeconds
- spec.pod.containers.*.livenessProbe.timeoutSeconds
- Integration runtime:
- spec.template.spec.containers[].name
- spec.template.spec.containers[].livenessProbe.failureThreshold
- spec.template.spec.containers[].livenessProbe.initialDelaySeconds
- spec.template.spec.containers[].livenessProbe.periodSeconds
- spec.template.spec.containers[].livenessProbe.timeoutSeconds
For more information about these parameters, see Integration Server reference: Custom resource values or Integration Runtime reference: Custom resource values.
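For example, the following sketch raises the probe thresholds on the connectors container of an integration runtime. The names my-ir and ace-demo are placeholders, and the values shown are illustrative rather than recommendations.

# Open the custom resource for editing:
oc edit integrationruntime my-ir -n ace-demo
# ...and adjust the liveness probe values for each non-runtime container, for example:
#   spec:
#     template:
#       spec:
#         containers:
#         - name: connectors
#           livenessProbe:
#             failureThreshold: 5
#             periodSeconds: 15
#             timeoutSeconds: 10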
Resolving ValidatingAdmissionWebhook errors for BAR files
When you deploy one or more BAR files to an integration server or integration runtime, a 13-second time limit is applied for downloading the BAR files (with any applicable credentials) to ensure that they are available.
Depending on the number or size of these BAR files, or the speed of your network connection (if the files are hosted remotely), the validation checks might exceed the 13-second time limit, which Red Hat OpenShift enforces for running webhooks. When this timeout occurs, you see the following error:
"admission plugin "ValidatingAdmissionWebhook" failed to complete validation in 13s" for field "undefined"
Procedure
To skip the validation checks and prevent the ValidatingAdmissionWebhook error, complete the following steps:
Resolving MountVolume.SetUp failed for volume "contentservertls" errors for integration server or integration runtime deployments
When you deploy an integration server or integration runtime, the deployment fails to complete if the BAR file is hosted on a content server that is incompatible with the Kind value in the custom resource (CR) that you are deploying.
A typical error that you might see is as follows:
MountVolume.SetUp failed for volume "contentservertls" : secret "dashboardName-dash.namespaceName" not found.
This error occurs in either of the following cases:
- You are deploying an integration server and manually specify a spec.barURL value for a BAR file in the content server of an App Connect Dashboard instance with an IntegrationRuntimes display mode.
- You are deploying an integration runtime and manually specify a spec.barURL value for a BAR file in the content server of a Dashboard instance with an IntegrationServers display mode.
BAR files that you upload to a Dashboard's content server can be used only to deploy integrations that match the Dashboard's display mode. The format of the BAR URL also differs slightly depending on the display mode.
- Format of a BAR URL in the content server of a Dashboard with a display mode of IntegrationServers:
  https://dashboardName-dash:3443/v1/directories/barFileStem?uniqueID
  Example: https://db-fd-nokeyclk-acelic-is-dash:3443/v1/directories/Customer_API?123456ef-f2eb-4680-9e2e-6a3de15f04e8
- Format of a BAR URL in the content server of a Dashboard with a display mode of IntegrationRuntimes:
  https://dashboardName-dash.namespaceName:3443/v1/directories/barFileStem?uniqueID
  Example: https://db-fd-nokeyclk-acelic-ir-dash.ace-test:3443/v1/directories/Customer_API?8abcdef7-bf93-4a95-aac6-e1f40709fca8
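To confirm which case applies, you can compare the Dashboard's display mode with the BAR URL that the failing deployment references. The following is a minimal sketch, assuming placeholder names and that the Dashboard custom resource exposes the display mode at spec.displayMode; verify the field against the Dashboard reference.

# Display mode of the Dashboard that hosts the BAR file:
oc get dashboard my-dashboard -n ace-demo -o jsonpath='{.spec.displayMode}{"\n"}'
# BAR URL referenced by the failing integration runtime (or integration server):
oc get integrationruntime my-ir -n ace-demo -o jsonpath='{.spec.barURL}{"\n"}'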