Changing the logging level

The logging level for the application can be changed to generate helpful information for identifying and solving issues with this application.

Logging level options are as follows:
ERROR
Shows only error messages. No warnings or informational messages are provided.
WARN
Shows only error and warning messages.
INFO (default)
Shows all error, warning, and general informational messages.
DEBUG
Shows all messages at the INFO level with extra messages related to application execution.
TRACE
Shows all messages at the DEBUG level with detailed application execution messages.
Warning: Selecting DEBUG or TRACE might result in a large amount of output and negatively impact performance. Do so only upon request from IBM® Software Support.

Docker on Linux Standalone Distributed Gateway

Changing the logging level for the standalone Distributed Gateway requires restarting the application with the --log-level flag. For example:
# Restarting to change the logging level.
# Substitute <log level> with the desired level.
sudo ./zapmctl start --log-level <log level>

# Restart the Distributed Gateway with DEBUG logging enabled
sudo ./zapmctl start --log-level DEBUG

Cluster Distributed Gateway

The method of changing the logging level for the cluster Distributed Gateway depends on whether Helm is used. In either case, the logging level for most microservices can be changed individually.

  • Installations using Helm
    Changing the logging level requires an update to the values.yaml file used during installation. The transaction-processor and ttg sections each accept an optional logLevel entry that takes one of the preceding logging levels as its value. To change the logging level, add the following line under the appropriate section or sections, where <log level> is replaced with the desired level:
    logLevel: <log level>
    For example, to change the logging level for the transaction-processor deployment (a similar sketch for the ttg section appears after this list):
    transaction-processor: 
      logLevel: "DEBUG"
      # The rest of the existing transaction-processor configuration comes after
    
    Once the values.yaml file is updated to include the new logging level, run the following command:
    helm upgrade --namespace <desired project> -f <config yaml> <name> <chart location>
    For example:
    helm upgrade --namespace ibm-zapm -f values.yaml zapm ./zapm-helm-chart-6.1.1-4.tgz
  • Installations using Kubernetes manifest files
    Changing the logging level requires updating environment variables in the deployment files for the Distributed Gateway. Each deployment is slightly different, but the environment variables already exist in each deployment file; only the value needs to be changed to reflect the desired logging level. Sketches of both kinds of change appear after this list.
    • TTG: The TTG differs slightly because it stores its configuration in a ConfigMap object and has two separate logging levels.
      • TTG_LOG_LEVEL updates the IBM Z APM Connect code's logging level.
      • APPD_SDK_LOGGING_LEVEL updates the AppDynamics SDK logging level.
    • Transaction Processor: Each deployment reads its logging level from the ZAPM_LOG_LEVEL environment variable, and each deployment's level is specified individually. Changing the value of ZAPM_LOG_LEVEL for just the connection-manager deployment does not change the logging level for any of the other components. In total, five components use the ZAPM_LOG_LEVEL environment variable and must each be updated:
      • connection-manager
      • event-partitioner
      • span-collector
      • span-factory
      • transaction-factory
    Once the updates to all YAML deployment files are made, the changes can be applied by running the following command:
    kubectl apply -f <path to deployment files>
    For example, if all of the deployment files are located in the current directory:
    kubectl apply -f ./
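
For Helm installations, the ttg section of values.yaml takes the same form as the transaction-processor example above. A minimal sketch, assuming the rest of your existing ttg configuration follows the entry:
ttg:
  logLevel: "DEBUG"
  # The rest of the existing ttg configuration comes after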
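
For manifest-file installations, the TTG change is an edit to two existing entries in its ConfigMap. The following fragment is an illustration only; the ConfigMap name and metadata are assumptions, but TTG_LOG_LEVEL and APPD_SDK_LOGGING_LEVEL are the entries to change:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ttg-config   # Assumed name; edit the ConfigMap shipped with your installation
data:
  TTG_LOG_LEVEL: "DEBUG"            # IBM Z APM Connect code logging level
  APPD_SDK_LOGGING_LEVEL: "DEBUG"   # AppDynamics SDK logging level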
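
Each transaction processor deployment file already contains the ZAPM_LOG_LEVEL environment variable; only its value changes. A sketch of the relevant fragment, using connection-manager as an example (the container name and surrounding fields are placeholders for whatever your deployment file already contains):
# Fragment of the connection-manager deployment file
spec:
  template:
    spec:
      containers:
        - name: connection-manager
          env:
            - name: ZAPM_LOG_LEVEL
              value: "DEBUG"   # Change only this value; repeat for the other four components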

Restarting the TTG

For both installation types, if the logging level for the TTG deployment is updated, all of the TTG pods must be manually restarted.

Take the following steps to restart the TTG pods.
  1. Find the names of all TTG pods, which follow the pattern ttg-<controllername>-N, where N is a number representing the pod replica number:
    kubectl get pods
  2. For each TTG pod, run the following command to stop and restart it:
    kubectl delete pod <TTG pod name>

This ensures that any changes made to the TTG ConfigMap are picked up by the TTG pods.
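
If the TTG pods are managed by a Kubernetes Deployment, a rolling restart is an equivalent alternative to deleting the pods one at a time. Substitute <TTG deployment name> with the name reported by kubectl get deployments:
kubectl rollout restart deployment <TTG deployment name>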