Troubleshooting and debugging

Collect cluster information and debugging logs to troubleshoot issues with Standard Edition.

Adjusting the log level for Instana components

To adjust the log level for Instana components, complete the following steps:

  1. Edit the Core Config file, for example, $HOME/.stanctl/values/instana-core/custom-values.yaml.

  2. Configure a component’s log level in the Core or Unit CR. In the following example, the log level is changed to DEBUG for the butler component:

    componentConfigs:
      - name: butler
        env:
          - name: COMPONENT_LOGLEVEL
            # Possible values are DEBUG, INFO, WARN, ERROR (not case-sensitive)
            value: DEBUG
    
  3. Apply the custom values by running the following command:

    stanctl backend apply
    
  4. View the logs by running the following command:

    kubectl logs <component name> -n instana-core
    

    Replace <component name> with the name of the component that you want to troubleshoot.
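
The logs of a component that runs at DEBUG level can be noisy. A minimal filtering sketch, assuming the component writes the level name into each log line (the sample log format here is illustrative; check a few raw lines from your component first):

```shell
# Filter a log stream to a single level. Matching the bare level token is an
# assumption about the component's log format -- verify it for your component.
filter_by_level() {
  grep -i "$1"   # $1 = level name, e.g. DEBUG; reads log lines on stdin
}

# In practice, pipe the component logs through the filter:
#   kubectl logs <component name> -n instana-core | filter_by_level DEBUG
# Illustration with sample lines:
printf '2025-05-26 INFO starting\n2025-05-26 DEBUG cache miss\n' | filter_by_level DEBUG
```
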

Collecting information

Create an archive file with information about your cluster. You can use the information in the file to troubleshoot issues, or share the file with the support team.

The archive file collects the following information:

  • Container logs
  • Resource manifests (in YAML format)
  • stanctl logs
  • System information, including memory and CPU usage
  • Disk mounts and their usage
  • Open files (allocated, free, and maximum)
  • Backend logs

Use the following command to create the archive file:

stanctl debug

After you run the command, messages similar to the following output are displayed. When Done! appears, the archive file is ready.

./stanctl debug
⠼ Streaming container logs  [26s] ✓
⠸ Gathering resource manifests  [27s] ✓
⠋ Gathering stanctl config files  [0s] ✓
⠋ Gathering system information  [0s] ✓
⠹ Creating tar file  [0s] ✓

----------------------
Done!
Debug package -> debug_20231027111737
Compressed debug package -> debug_20231027111737.tar.gz
----------------------
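
Before you share the compressed package with support, you can list its contents without extracting it. A short sketch — a scratch archive stands in for the real one so the commands run anywhere; in practice, use the file name that stanctl debug prints after "Compressed debug package ->":

```shell
# Stand-in for the package that `stanctl debug` produces (illustrative names).
mkdir -p debug_example
echo 'sample log' > debug_example/stanctl.log
tar -czf debug_example.tar.gz debug_example

# List the archive contents without extracting -- useful for a quick sanity
# check of what is being shared:
tar -tzf debug_example.tar.gz
```
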

Troubleshooting

Use the following information to resolve common issues.

Instana agent is not displayed in the UI

After you delete the Instana agent that was configured for remote monitoring and install the Instana agent for self monitoring, the agent might not be displayed on the Instana UI.

The agent might be trying to connect to the remote Instana backend instead of the local Instana backend.

To resolve this issue, install the agent and specify the backend endpoint host and an agent key:

stanctl agent apply --agent-cluster-name <cluster-name> --agent-endpoint-host acceptor.instana-core --agent-endpoint-port 8600 --agent-zone-name <zone-name> --agent-key <agent-key-of-local-backend>

Kafka pods show CrashLoopBackOff status

Kafka pods do not restart after a shutdown of the Instana backend host. You might see a CrashLoopBackOff status of the Kafka pods.

To resolve the issue, restart the Instana backend.

  1. Shut down the backend.
    stanctl down
    
  2. Start the backend.
    stanctl up
    

After the backend is restarted, check the status of the Kafka pods.

kubectl get pods --all-namespaces | grep kafka

The Kafka pod status should show as Running.
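
Kafka pods can take a while to come back after the restart, so polling saves repeated manual checks. A hedged helper sketch — the kubectl pipeline in the comment is the real check; the retry logic itself is generic and runs without a cluster:

```shell
# Retry a command until it succeeds or the attempts run out.
wait_for() {
  tries=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$tries" ]; do
    "$@" && return 0
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# In practice, poll until the Kafka pods report Running (attempts and delay
# are illustrative values):
#   wait_for 30 10 sh -c 'kubectl get pods --all-namespaces | grep kafka | grep -q Running'
# Quick self-contained check that the helper works:
wait_for 3 0 true && echo "ready"
```
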

Scheduled Synthetic tests are not running after Instana backup and restore

After Instana backend and agent data are restored, the scheduled Synthetic tests are not running.

To resolve this issue, restart the synthetic-pop-controller pod on the cluster where it is installed.
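
One common way to restart the pod is to delete it so that its owning controller re-creates it. A hedged sketch of the find-then-delete flow — the namespace and pod name below are illustrative assumptions, and a stub kubectl function makes the sketch runnable without a cluster (remove the stub and the echo to run it for real):

```shell
# Stub kubectl for illustration only -- delete this function on a real cluster.
kubectl() { echo "instana-synthetic   synthetic-pop-controller-abc123"; }

# 1. Find the namespace and pod name (both vary by installation):
line=$(kubectl get pods --all-namespaces | grep synthetic-pop-controller)
ns=$(echo "$line" | awk '{print $1}')
pod=$(echo "$line" | awk '{print $2}')

# 2. Delete the pod; its controller re-creates it, which restarts it:
echo "kubectl delete pod $pod -n $ns"   # remove 'echo' to actually delete
```
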

Host agent cannot connect to the Instana backend on SLES hosts

After you install the host agent on the local host on SUSE Linux Enterprise Server (SLES) 15 SP5 hosts for self monitoring, the agent does not automatically connect to the Instana backend.

You must use the agent external URL to connect to the backend as a remote host.

Use the following command:

stanctl agent apply --agent-endpoint-host agent-acceptor.<base_domain> --agent-endpoint-port 8443

Standard Edition installation on RHEL 9.3 fails

Red Hat® Enterprise Linux® 9.3 uses iptables 1.8.8.

If you are installing Standard Edition on RHEL 9.3, the installation might fail due to iptables 1.8.8.

To work around the issue, upgrade your host to RHEL 9.4, which also upgrades iptables to version 1.8.10.
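
You can confirm whether a host is affected before upgrading. A sketch that classifies the version string printed by iptables --version (the string format is an assumption based on common iptables output):

```shell
# Classify an iptables version string; 1.8.8 is the known-bad version.
check_iptables() {
  case "$1" in
    *"v1.8.8"*) echo "affected: upgrade the host to RHEL 9.4 (iptables 1.8.10)" ;;
    *)          echo "ok" ;;
  esac
}

# In practice: check_iptables "$(iptables --version)"
check_iptables "iptables v1.8.8 (nf_tables)"
```
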

Instana backend upgrade fails due to corrupt Helm chart installation

The Instana backend upgrade fails after you run the stanctl backend apply command. You might see the following error:

Error: another operation (install/upgrade/rollback) is in progress

In the console.log file, you might see information similar to the following entries:

ts=2025-05-26T12:26:09Z level=INFO msg="upgrading Helm chart" name=instana-core release=instana-core version=1.8.1 namespace=instana-core
ts=2025-05-26T12:26:09Z level=DEBUG msg="preparing upgrade for instana-core"

This issue indicates a corrupt Helm chart installation of the current core chart. To reset it, complete the following steps:

  1. Delete the old Helm chart secret from the instana-core namespace.

    kubectl delete secret -n instana-core -l owner=helm
    
  2. Upgrade the backend.

    stanctl up

Contacting support

If you are unable to resolve the issue, contact IBM support. Provide the archive file that you created to the support team.