Troubleshooting errors in the App Connect Dashboard
Review this information to help resolve issues while using the App Connect Dashboard.
Enabling debug logging
To obtain detailed logging of interactions in the App Connect Dashboard, you can enable debug logging either while creating the Dashboard instance or by editing the custom resource (CR) settings after the Dashboard is created.
- To enable debug logging for the container logs, set the spec.logLevel parameter in the App Connect Dashboard CR to debug.
  You can subsequently retrieve the debug logs for the Dashboard UI container by running the following commands to obtain the Dashboard pod name and then write the pod logs to a file (for example, /tmp/dashboard.logs):
      PODNAME=$(kubectl get pods -n namespaceName -l release=dashboardName -o jsonpath='{.items[0].metadata.name}')
      kubectl logs $PODNAME -n namespaceName -c control-ui > /tmp/dashboard.logs
- To restore the default level of logging, set spec.logLevel to info.
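As a sketch, the logLevel setting sits at the top level of the Dashboard CR spec. A minimal CR fragment looks like this (the metadata.name and namespace values are placeholders, and the apiVersion assumes the v1beta1 API used by current Operator versions):

```yaml
apiVersion: appconnect.ibm.com/v1beta1
kind: Dashboard
metadata:
  name: dashboardName        # placeholder: your Dashboard instance name
  namespace: namespaceName   # placeholder: your namespace
spec:
  logLevel: debug            # set back to 'info' to restore default logging
```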
Resolving an s3Credentials configuration error
While attempting to access the Configuration page or Configuration panel from the App Connect Dashboard (as described in Configuration types for integration servers and integration runtimes), you might see the following s3Credentials configuration error:

    Error occurred while trying to load configurations: invalid configuration type: s3Credentials
    Message from the server: invalid configuration type: s3Credentials

This error is expected to occur only if IBM App Connect Operator 1.5.0 (or later) is installed in your cluster and the version (spec.version) of your App Connect Dashboard instance resolves to 11.0.0.12-r1 or earlier. Because support for the S3Credentials configuration type was introduced in IBM App Connect Operator 1.5.0 and App Connect Dashboard 12.0.1.0-r1, an error is emitted if an 11.0.0.12-r1 or earlier Dashboard detects an unsupported configuration object of type S3Credentials in its namespace. This configuration type provides support for Simple Storage Service (S3) compatible storage for Dashboards, as described in Dashboard reference: Storage.
To resolve this error, complete either of the following actions:
- Upgrade your existing 11.0.0.12-r1 or earlier App Connect Dashboard instance to a spec.version value that resolves to 12.0.1.0-r1 or later. For more information, see Upgrading your instances and spec.version values.
- Delete the configuration object of type S3Credentials from the namespace. Only take this action if you are confident that the configuration object is not being used.
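Before deleting anything, it can help to list which configuration objects of type S3Credentials exist in the namespace. The sketch below filters sample `oc get configurations` output; the object names and the TYPE column shown are hypothetical stand-ins, not output from your cluster:

```shell
#!/bin/sh
# Hypothetical output of `oc get configurations -n namespaceName`;
# names and columns are illustrative stand-ins only.
configs='NAME          TYPE
s3-conf       s3credentials
policy-conf   policyproject'

# Keep the header row plus any rows whose TYPE is s3credentials.
printf '%s\n' "$configs" | awk 'NR==1 || tolower($2)=="s3credentials"'
```

Only the s3-conf row (plus the header) survives the filter, which narrows the candidates for deletion.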
App Connect Dashboard crashes when hosting larger numbers of integration servers or integration runtimes
For Dashboard instances that are hosting approximately 60 (or more) integration servers or integration runtimes, you might see the following error after a small delay when you try to open a Dashboard:
    Error 3.5.0.0 (timestamp) x86_64

This error typically indicates that the Dashboard pod has crashed. To confirm, complete the following steps:
- Retrieve the list of pods in the namespace where the Dashboard is installed:
      oc get pods
  You should see output similar to this, with the Dashboard pod name given as dashboardName-dash-uniqueID. (The value in the STATUS field might vary.)
      NAME                                  READY   STATUS    RESTARTS   AGE
      ...
      feedee-dashbd-dash-6f8d5bcb74-869xp   2/2     Running   3          4d5h
      ...
- To obtain more detailed information about the Dashboard pod, run the following command:
      oc describe pod podName
  For example:
      oc describe pod feedee-dashbd-dash-6f8d5bcb74-869xp
- Review the output to see whether any out-of-memory (OOM) errors are reported by searching for the text OOMKilled.
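As a quick sketch of that check, you can pipe the describe output through grep. The sample text below is a hypothetical stand-in for real `oc describe pod` output:

```shell
#!/bin/sh
# Hypothetical fragment of `oc describe pod podName` output for a pod whose
# container was killed for exceeding its memory limit.
describe_output='Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137'

# On a real cluster you would instead run:
#   oc describe pod podName | grep OOMKilled
if printf '%s\n' "$describe_output" | grep -q 'OOMKilled'; then
  echo "OOM errors found"
fi
```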
If you see any OOM errors, this indicates that the CPU or memory needs to be increased. You can do so by updating the relevant settings in the Dashboard custom resource (CR):
- Use a command such as oc edit to partially update the CR, where instanceName is the Dashboard name (that is, the metadata.name value in the CR):
      oc edit dashboard instanceName
This command will automatically open the default text editor for your operating system.
- Update the spec.pod.containers.control-ui.resources.limits.cpu and spec.pod.containers.control-ui.resources.limits.memory values as follows. (These are the current default values, but your Dashboard CR might have lower values if it was created by using a Development sample or a CR from an earlier version.)
      spec:
        pod:
          containers:
            control-ui:
              resources:
                limits:
                  cpu: '1'
                  memory: 512Mi
- Save the YAML definition and close the text editor to apply the changes.
Note: If you continue to see OOM errors for the pod after you save, update the CR again to further increase the values. Tuning your CPU and memory in this way is recommended only if your App Connect Dashboard hosts up to 100 integration servers or integration runtimes. Do not run more than 100 integration servers or integration runtimes within a single namespace if you intend to use the App Connect Dashboard to monitor them.
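If you prefer a non-interactive update over oc edit, the same limits can be applied with a merge patch. This is a sketch only: the CR name instanceName and the raised values (2 CPU, 1Gi) are placeholders, and the oc command itself is shown as a comment because it needs a live cluster:

```shell
#!/bin/sh
# Build a merge patch that raises the control-ui limits; the values are examples.
PATCH='{"spec":{"pod":{"containers":{"control-ui":{"resources":{"limits":{"cpu":"2","memory":"1Gi"}}}}}}}'

# On a real cluster, apply it with (instanceName is a placeholder):
#   oc patch dashboard instanceName --type merge -p "$PATCH"
echo "$PATCH"
```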
App Connect Dashboard remains in a Pending state while waiting for its PVC to be ready
If a PersistentVolumeClaim (PVC) is in a Pending state, a Dashboard deployment can remain stuck as Pending with the following error:

    Waiting for PVC to be 'Ready'. Currently 'Pending'
For a Dashboard deployment to complete, its PVC must be in a bound state. A PVC is typically created and bound to the cluster quickly, but if it fails to bind, the PVC might get stuck in a Pending state, which in turn stops the Dashboard deployment from completing. If the PVC is shown as pending after a reasonable wait, you can investigate and resolve the issue as follows.
From the Red Hat OpenShift web console, complete the following steps to track the progress of the deployment and resolve the error:
- From the Dashboard tab in your IBM App Connect Operator deployment, click the Dashboard instance name and then check the messages in the Conditions section of the Details tab.
- If you see the message indicating that the PVC is currently pending, complete the following steps to investigate why its provisioning is not succeeding:
  - Click Storage > PersistentVolumeClaims.
  - From the PersistentVolumeClaims page, locate and then click the PVC name for the Dashboard. PVC names are in the format dashboardName-content.
  - From the PersistentVolumeClaim details page, click the Events tab. Then review the messages to help you determine why the PVC provisioning failed and resolve the error.
Alternatively, complete the following steps from the OpenShift CLI (oc):
- List the pods in the namespace where you are trying to deploy the Dashboard:
      oc project namespaceName
      oc get pods
  The status of the Dashboard pod should be shown as Pending; for example:
      NAME                                     READY   STATUS    RESTARTS   AGE
      db-01-quickstart-dash-5f87cc8dcd-2scnf   0/2     Pending   0          21h
- Obtain detailed information about the Dashboard pod to determine why it is in a Pending state:
      oc describe pod podName
  For example:
      oc describe pod db-01-quickstart-dash-5f87cc8dcd-2scnf
  You should see output indicating that provisioning failed for the PVC; for example:
      Name:         db-01-quickstart-dash-5f87cc8dcd-2scnf
      Namespace:    ace-rob
      Priority:     0
      ...
      Conditions:
        Type           Status
        PodScheduled   False
      Volumes:
        shared-certs:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  db-01-quickstart-dash
          Optional:    false
        ui-certs:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  db-01-quickstartui-cert
          Optional:    false
        content:
          Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
          ClaimName:  db-01-quickstart-content
          ReadOnly:   false
      ...
      Events:
        Type     Reason            Age   From               Message
        ----     ------            ----  ----               -------
        Warning  FailedScheduling  21h   default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "db-01-quickstart-content"
        Warning  FailedScheduling  21h   default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "db-01-quickstart-content"
  Notice that the ClaimName value for the PersistentVolumeClaim is given as dashboardName-content, where dashboardName is the metadata.name value in the Dashboard's custom resource.
- Check the status of the PVC:
      oc get pvc ClaimName
  For example:
      oc get pvc db-01-quickstart-content
  The PVC status should be shown as Pending; for example:
      NAME                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      db-01-quickstart-content   Pending                                      test           22h
- Obtain detailed information about the PVC to determine why it is in a Pending state:
      oc describe pvc ClaimName
  For example:
      oc describe pvc db-01-quickstart-content
  Review the output to help you determine why the PVC provisioning failed and then resolve the error; for example:
      Name:          db-01-quickstart-content
      Namespace:     ace-rob
      StorageClass:  test
      Status:        Pending
      Volume:
      ...
      VolumeMode:    Filesystem
      Mounted By:    db-01-quickstart-dash-5f87cc8dcd-2scnf
      Events:
        Type     Reason                Age                   From                         Message
        ----     ------                ----                  ----                         -------
        Normal   WaitForFirstConsumer  22h                   persistentvolume-controller  waiting for first consumer to be created before binding
        Warning  ProvisioningFailed    21h (x33 over 22h)    persistentvolume-controller  Failed to provision volume with StorageClass "test": failed to create volume: failed to create volume: see kube-controller-manager.log for details
        Normal   WaitForPodScheduled   21h (x41 over 22h)    persistentvolume-controller  waiting for pod db-01-quickstart-dash-5f87cc8dcd-2scnf to be scheduled
        Normal   WaitForPodScheduled   21h (x16 over 21h)    persistentvolume-controller  waiting for pod db-01-quickstart-dash-5f87cc8dcd-2scnf to be scheduled
        Warning  ProvisioningFailed    21h (x10 over 21h)    persistentvolume-controller  Failed to provision volume with StorageClass "test": failed to create volume: failed to create volume: see kube-controller-manager.log for details
        ...
        Normal   WaitForPodScheduled   76m (x30 over 113m)   persistentvolume-controller  waiting for pod db-01-quickstart-dash-5f87cc8dcd-2scnf to be scheduled
        Normal   WaitForPodScheduled   23m (x34 over 74m)    persistentvolume-controller  waiting for pod db-01-quickstart-dash-5f87cc8dcd-2scnf to be scheduled
        Warning  ProvisioningFailed    18m (x32 over 74m)    persistentvolume-controller  Failed to provision volume with StorageClass "test": failed to create volume: failed to create volume: see kube-controller-manager.log for details
        Normal   WaitForPodScheduled   6m42s (x16 over 16m)  persistentvolume-controller  waiting for pod db-01-quickstart-dash-5f87cc8dcd-2scnf to be scheduled
        Warning  ProvisioningFailed    1s (x15 over 16m)     persistentvolume-controller  Failed to provision volume with StorageClass "test": failed to create volume: failed to create volume: see kube-controller-manager.log for details
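To spot stuck claims at a glance across the namespace, you can filter `oc get pvc` output on the STATUS column. A sketch, using embedded sample output as a stand-in for a live cluster query (the other-claim row is hypothetical, added for contrast):

```shell
#!/bin/sh
# Sample `oc get pvc` output; the second data row is a hypothetical bound claim.
pvc_output='NAME                       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
db-01-quickstart-content   Pending                                             test           22h
other-claim                Bound     pv-0001   5Gi        RWO            gp2            3d'

# Keep the header row plus any claims whose STATUS is Pending.
printf '%s\n' "$pvc_output" | awk 'NR==1 || $2=="Pending"'
```

Only the header and the Pending claim remain, which tells you immediately which PVCs still need to bind before the Dashboard deployment can complete.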