Troubleshooting
Information about how to troubleshoot a problem with Custom Edition.
Adjusting log level for Instana components
To adjust the log level for Instana components, complete the following steps:
- Configure a component's log level in the CoreSpec. In the following example, the log level is changed to DEBUG for the butler component:

  apiVersion: instana.io/v1beta2
  kind: Core
  metadata:
    name: instana-core
    namespace: core
  spec:
    ...
    componentConfigs:
      - name: butler
        env:
          - name: COMPONENT_LOGLEVEL
            # Possible values are DEBUG, INFO, WARN, ERROR (not case-sensitive)
            value: DEBUG
- View the logs by running the following command:

  kubectl logs <component name> -n instana-core

  <component name> is the name of the component that you want to troubleshoot.
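Before you edit the CoreSpec, you can sanity-check a requested level against the allowed values noted in the YAML comment above (DEBUG, INFO, WARN, ERROR, not case-sensitive). The following shell function is an illustrative sketch, not part of Instana:

```shell
# Sketch: validate a log level against the values that COMPONENT_LOGLEVEL
# accepts. Matching is case-insensitive, as noted in the YAML comment above.
is_valid_loglevel() {
  case "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')" in
    DEBUG|INFO|WARN|ERROR) return 0 ;;
    *) return 1 ;;
  esac
}

is_valid_loglevel debug && echo "debug is valid"
is_valid_loglevel trace || echo "trace is not valid"
```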
Using kubectl-specific commands to debug and diagnose
The kubectl cluster-info dump command is a helpful tool for debugging and diagnosing Kubernetes clusters. It provides a detailed report on the current state of the cluster and its resources.
Configure the target namespace and the directory to output debugging information. In the following example, the namespace is instana-units and the directory is temp. When you run this command, kubectl generates YAML files for each resource in the instana-units namespace and saves those YAML files to the temp directory.
kubectl cluster-info dump --namespace instana-units --output-directory temp --output yaml
The following extract shows part of the content in the temp directory. In the instana-units subdirectory, you can find details about the daemonsets, deployments, events, pods, and other resources in .yaml files. Logs for each pod are in the corresponding instana-units/<pod name> subdirectory.
├── instana-units
│ ├── daemonsets.yaml
│ ├── deployments.yaml
│ ├── events.yaml
│ ├── pods.yaml
│ ├── replicasets.yaml
│ ├── replication-controllers.yaml
│ ├── services.yaml
│ ├── tu-instana-prod-appdata-legacy-converter-755bb474c7-xn4vg
│ │ └── logs.txt
│ ├── tu-instana-prod-appdata-processor-6b8f448584-nmvgl
│ │ └── logs.txt
│ ├── tu-instana-prod-filler-9485b85d-wj7pv
│ │ └── logs.txt
│ ├── tu-instana-prod-issue-tracker-bbd5f5d5f-98zxx
│ │ └── logs.txt
│ ├── tu-instana-prod-processor-fc956c46c-fxs5z
│ │ └── logs.txt
│ └── tu-instana-prod-ui-backend-89bccd9c5-8lp76
│ └── logs.txt
...
└── nodes.yaml
- You can choose another namespace, such as instana-core.
- If kubectl is not installed on your cluster, you can use the oc cluster-info dump command, which provides the same support as the kubectl cluster-info dump command.
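The dump output lends itself to scripting. The following helper is a hypothetical sketch (using only standard shell tools, not an Instana utility) that scans every logs.txt file in a dump directory for ERROR lines:

```shell
# Sketch: scan all pod logs in a `kubectl cluster-info dump` output directory
# (layout as shown in the tree above) and report files that contain ERROR lines.
scan_dump_logs() {
  find "$1" -name logs.txt 2>/dev/null | while read -r f; do
    n=$(grep -c ERROR "$f")
    if [ "$n" -gt 0 ]; then
      echo "$f: $n error line(s)"
    fi
  done
}

# Usage: scan_dump_logs temp
```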
For more troubleshooting commands, see the Red Hat OpenShift documentation: Troubleshooting clusters through kubectl and the Red Hat OpenShift CLI developer command reference.
Using internal backend API
You can use some internal component API endpoints to help you administer a self-hosted Instana backend. These API calls can be submitted from the cluster to the corresponding pods by using curl. Because the resources require authentication, you must first get valid credentials. The API credentials are stored in the instana-internal secret in the instana-core namespace. Two types of credentials exist: AdminAPIUser and ServiceAPIUser. To get the service credentials that are valid for an Instana installation, run the following commands:
kubectl get secret instana-internal -n instana-core --template='{{ index .data.serviceAPIUser | base64decode }}'
kubectl get secret instana-internal -n instana-core --template='{{ index .data.serviceAPIPassword | base64decode }}'
To get the credentials for the admin user, query .data.adminAPIUser and .data.adminAPIPassword.
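The values in the secret are base64-encoded; the base64decode template function in the commands above does the same job as decoding the raw .data field locally. A minimal illustration with a made-up value:

```shell
# The secret's .data fields are base64-encoded strings. base64decode in the
# go-template is equivalent to the local base64 -d command.
# Illustrative value only; not a real credential.
encoded='c2VydmljZUFQSVVzZXI='
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints "serviceAPIUser"
```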
These credentials can be used for the following procedures.
Resetting an Instana user password
To reset the password for an Instana user account, run the following command:
kubectl exec -it -n instana-core deploy/butler -- curl -X PUT http://localhost:8601/admin/authentication/{tenant}/reset/user -u {adminAPIUser}:{adminAPIPassword} -H 'Content-Type: application/json' -d '{"email":"{user}","pass":"{newPassword}"}'
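When you substitute real values for {user} and {newPassword}, special characters can break the single-quoted JSON body of the curl command. The following sketch (variable names are illustrative, not part of the API) builds the body with printf before passing it to curl -d:

```shell
# Sketch: build the JSON request body with printf so that the shell quoting of
# the documented curl command stays intact when values contain spaces.
EMAIL='user@example.com'   # illustrative values
NEWPASS='s3cret'
BODY=$(printf '{"email":"%s","pass":"%s"}' "$EMAIL" "$NEWPASS")
echo "$BODY"   # prints {"email":"user@example.com","pass":"s3cret"}
```

For values that themselves contain double quotes, a JSON-aware tool such as jq is safer than printf, because printf does not escape the values.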
Deactivating SSO provider configurations (LDAP/SAML/OIDC)
If an identity provider is used for Instana authentication, you can deactivate that provider by running the following command. Afterward, the internal Instana user accounts can be used for authentication.
kubectl exec -it -n instana-core deploy/butler -- curl -X PUT http://localhost:8601/admin/authentication/{tenant}/idp -u {adminAPIUser}:{adminAPIPassword}
Disabling 2FA on a user account
To deactivate 2FA (two-factor authentication) for an Instana user account, run the following command:
kubectl exec -it -n instana-core deploy/butler -- curl -X DELETE http://localhost:8601/admin/2fa/users/{email} -u {adminAPIUser}:{adminAPIPassword}
Verifying licenses
To verify all stored licenses, run the following command:
kubectl exec -it -n instana-core deploy/groundskeeper -- curl -X GET http://localhost:8600/license/list/{tenant}/{unit} -u {serviceAPIUser}:{serviceAPIPassword}
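If you need to check licenses for several tenant units, the {tenant}/{unit} placeholders can be filled in from a list. The loop below is an illustrative sketch (the tenant and unit names are made up); the documented kubectl exec command is shown as a comment:

```shell
# Sketch: split "tenant/unit" pairs (names are illustrative) and run the
# documented license check for each one.
for tu in prod/unit0 prod/unit1; do
  tenant=${tu%/*}   # text before the slash
  unit=${tu#*/}     # text after the slash
  echo "checking licenses for tenant=$tenant unit=$unit"
  # kubectl exec -it -n instana-core deploy/groundskeeper -- \
  #   curl -X GET "http://localhost:8600/license/list/$tenant/$unit" \
  #   -u "$serviceAPIUser:$serviceAPIPassword"
done
```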
Some metrics are not displayed on dashboards
If some metrics are not displayed on dashboards, the metric limit might be reached. By default, the metric limit is set to 3000.
Tip: You can see the current limit under maxMetrics in the filler/config.yaml file. The limit prevents individual entities from sending too many metrics, which would increase the storage and CPU that filler uses and the bandwidth that is needed to send metrics to the UI.
To increase the limit, add a properties block with config.max.metrics to the Unit custom resource as follows:

kind: Unit
...
spec:
  ...
  properties:
    - name: config.max.metrics
      value: "6000"
...
Instana backend becomes non‑functional when the Elasticsearch data disk exceeds 85% usage
Elasticsearch automatically switches its data store to read‑only mode when the disk that it uses exceeds 85% usage, which causes the Instana backend to stop functioning. Note: Other Instana disks do not trigger read‑only behavior at similar usage levels (even above 95%), which can make this issue appear confusing. To restore normal operations, do one of the following:
- Free up space on the Elasticsearch data disk.
- Increase the disk size that is allocated to Elasticsearch.
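To check how close a disk is to the 85% threshold, you can read the use percentage from df. The function below is a sketch; the Elasticsearch data path varies by installation, so the path in the usage comment is an assumption:

```shell
# Sketch: print the use% of the filesystem that holds a given path.
# df -P guarantees the portable one-line-per-filesystem output format,
# in which the fifth column is the capacity percentage.
disk_pct() {
  df -P "$1" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }'
}

# Example (path is an assumption; substitute your Elasticsearch data path):
# [ "$(disk_pct /var/lib/elasticsearch)" -ge 85 ] && echo "over the 85% threshold"
```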
License is invalid or missing
If the license is invalid or missing, the backend prevents agents from connecting.
When this occurs
- The imported license is invalid.
- The Instana Operator cannot apply the license to the Groundskeeper backend.
How to troubleshoot
- Verify that the Sales Key in the core secret matches the license strings in the unit secret. If they differ, re-download the license using the correct Sales Key.
- Check the Instana Operator logs for license import errors:
  kubectl logs -n instana-operator deployment/instana-operator --tail=100
- Check the Groundskeeper backend component, pod status, and logs:
  kubectl get pods -n instana-core | grep groundskeeper
- If the license still shows an invalid state, contact IBM Support.
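When scanning the pod list from the grep command above, a small filter for pods that are not in a healthy state can help. The awk helper below is an illustrative sketch over the standard kubectl get pods table output, not an Instana tool:

```shell
# Sketch: read `kubectl get pods` table output on stdin and print the name and
# status of every pod whose STATUS column is not Running or Completed.
unhealthy_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1, $3 }'
}

# Usage: kubectl get pods -n instana-core | unhealthy_pods
```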