Instana AutoTrace webhook

The Instana AutoTrace webhook is a Kubernetes and Red Hat OpenShift-compatible admission controller mutating webhook. The webhook automatically configures Instana tracing on Node.js, .NET Core, Ruby, and Python applications that run across the entire Kubernetes or Red Hat OpenShift cluster. In addition, you can enable Instana tracing for IBM MQ and App Connect deployments that run in IBM Cloud Pak for Integration.

Limitations

  • The Instana AutoTrace webhook works only on new Kubernetes resources. After the webhook is installed, you must create new resources for the transformation to take effect. Delete your Pods, ReplicaSets, StatefulSets, Deployments, and DeploymentConfigs, and then create them again (for example, by using kubectl apply) for the Instana AutoTrace webhook to complete its configuration.
  • Only linux/amd64 Kubernetes nodes are supported.
  • Instana AutoTrace webhook does not automatically update the instrumentation that it installs. Therefore, Instana AutoTrace webhook is not enabled by default.
  • The transformation and instrumentation are not removed when the Instana AutoTrace webhook is uninstalled. For more information, see Removing the instrumentation.
  • If the Instana AutoTrace webhook is installed on IBM Cloud Paks (Cloud Pak for Business Automation and Cloud Pak for Integration), you must exclude Zen from instrumentation.

Supported runtimes

The Instana AutoTrace webhook supports the following technologies:

  • Node.js
  • .NET Core
  • Ruby
  • Python
  • NGINX

Only NGINX 1.19 or later is supported for automatic instrumentation. The Instana AutoTrace webhook does not support OpenResty yet.

Prerequisites

Before you install the Instana AutoTrace webhook on a Kubernetes-based cluster, confirm that the following prerequisites are met:

  • Kubernetes 1.16+
  • Red Hat OpenShift 4.5+
  • kubectl 1.16+
  • Helm 3.2+ (Some automation relies on Helm lookup functions)
  • Sufficient memory limits in the target pod for loading the instrumentation files. By default, the webhook loads 300MB of instrumentation files into the target pod. Optionally, you can reduce the required ephemeral storage by limiting which instrumentation files are loaded. For more information, see Minimize required ephemeral storage. Depending on the approach that you choose, make sure that you allot sufficient memory for the instrumentation files in the spec.template.spec.containers[x].resources.limits.memory attribute of the target deployment.
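As an illustration, the memory limit of a target deployment can be raised with a JSON patch. The deployment name, namespace, and value in this sketch are placeholders, and the patch assumes that the container already defines a memory limit:

```shell
# Hypothetical example: raise the memory limit of the first container in a
# deployment named "my-app" by 300Mi of headroom (here to a total of 812Mi)
# so that the instrumentation files fit. The "replace" op assumes that
# resources.limits.memory is already set on the container.
kubectl patch deployment my-app -n my-namespace --type json \
  -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "812Mi"}]'
```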

Installing the Instana AutoTrace webhook

Replace <download_key> in the following script with a valid Instana agent key or download key, and then run the script with administrator privileges for your cluster:

helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
  --repo https://agents.instana.io/helm instana-autotrace-webhook \
  --set webhook.imagePullCredentials.password=<download_key>

If you are installing on Red Hat OpenShift, you must specify the --set openshift.enabled=true option in the script.
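For example, the full installation command on Red Hat OpenShift becomes:

```shell
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
  --repo https://agents.instana.io/helm instana-autotrace-webhook \
  --set webhook.imagePullCredentials.password=<download_key> \
  --set openshift.enabled=true
```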

Installing the Instana AutoTrace webhook on IBM Cloud Paks

For instructions about how to install the Instana AutoTrace webhook on IBM Cloud Paks, see How to configure Instana AutoTrace webhook in a Cloud Pak Environment.

Configuring in an air-gapped environment

For air-gapped environments, you need to provide the instana-autotrace-webhook Helm chart, the instana-autotrace-webhook container image, and the instrumentation container image in your environment.

To download the latest release of the Helm chart to your current working directory, run the following command:

helm pull instana-autotrace-webhook --repo https://agents.instana.io/helm

To download the Helm chart to a different directory, add the -d <DESTINATION_PATH> option to the command.

To download the latest instana-autotrace-webhook image, run the following command:

docker pull containers.instana.io/instana/release/agent/instana-autotrace-webhook:latest

To download the latest instrumentation image, run the following command:

docker pull icr.io/instana/instrumentation:latest

Both container images must be available in your container registry, and the previously downloaded Helm chart archive must be available on the system that runs the helm install command.
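As a sketch, the two images can be retagged and pushed to a private registry. The registry host registry.example.com and the target repository paths are placeholders; replace them with your own registry:

```shell
# Retag both images for a hypothetical private registry.
docker tag containers.instana.io/instana/release/agent/instana-autotrace-webhook:latest \
  registry.example.com/instana/instana-autotrace-webhook:latest
docker tag icr.io/instana/instrumentation:latest \
  registry.example.com/instana/instrumentation:latest

# Push them so that the cluster nodes can pull them.
docker push registry.example.com/instana/instana-autotrace-webhook:latest
docker push registry.example.com/instana/instrumentation:latest
```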

To install the Instana AutoTrace webhook, run the following command:

helm upgrade --install --create-namespace \
  --namespace instana-autotrace-webhook \
  --set webhook.image=<INSTANA_AUTOTRACE_WEBHOOK_IMAGE_PATH> \
  --set autotrace.instrumentation.image=<INSTRUMENTATION_IMAGE_PATH> \
  instana-autotrace-webhook <PATH_TO_HELM_CHART_ARCHIVE>

Container registry authentication

The instrumentation image in the container registry is used as an initContainer in all application pods. If your container registry requires the imagePullSecret resource, it must be available in all application namespaces.
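A pull secret can be created per application namespace with kubectl. The secret name, registry host, and namespace in this sketch are placeholders:

```shell
# Hypothetical example: create a pull secret named "my-pull-secret" in one
# application namespace. Repeat for every namespace that runs instrumented
# pods, and reference the secret from the pods' imagePullSecrets.
kubectl create secret docker-registry my-pull-secret \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  -n my-app-namespace
```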

Verifying that the webhook works

To verify that the webhook works, complete the following steps:

  1. To verify that the instana-autotrace-webhook in the instana-autotrace-webhook namespace is running as expected, run the following command:

    kubectl get pods -n instana-autotrace-webhook
    

    Example result:

    NAME                                         READY   STATUS    RESTARTS   AGE
    instana-autotrace-webhook-7c5d5bf6df-82w7c   1/1     Running   0          12m
    
  2. Deploy a pod that runs a supported runtime, for example Node.js. If the Instana AutoTrace webhook is running, Instana AutoTrace is automatically enabled in the new pod, and the pod carries a label that indicates that the transformation was applied:

    kubectl get pod test-nodejs -n test-apps -o=jsonpath='{.metadata.labels.instana-autotrace-applied}'
    true
    

If you installed the Instana host agent by using the instana/agent Helm chart, the Node.js process appears in your Instana dashboard. For more information, see the Installing the Host Agent on Kubernetes documentation.

If you do not see the instana-autotrace-applied label on your containers, see Troubleshooting.

Updating Instana AutoTrace webhook and instrumentation

The Instana AutoTrace webhook does not have an automated way of updating the instrumentation that it installs. The instrumentation is delivered through the icr.io/instana/instrumentation image. Instana specifies a version tag in the values.yaml file to ensure that the instana-autotrace-webhook Helm chart is regularly updated to use the latest icr.io/instana/instrumentation image.

To update Instana AutoTrace webhook and instrumentation, complete the following steps:

  1. Remove the previous webhook installation by running the following command:

    helm uninstall --namespace instana-autotrace-webhook instana-autotrace-webhook
    
  2. Update the Helm chart repository on your local Helm installation by running the following command:

    helm repo update
    
  3. Install the instana-autotrace-webhook Helm chart deployment again by running the following command:

    helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
      --repo https://agents.instana.io/helm instana-autotrace-webhook \
      --set webhook.imagePullCredentials.password=<download_key> --set <initial-custom-flags>
    
  4. Restart the deployment to apply the new instrumentation to your workloads:

    kubectl rollout restart <your-deployment> -n <namespace>
    

Uninstalling the Instana AutoTrace webhook

To uninstall the Instana AutoTrace webhook, run the following command:

helm uninstall instana-autotrace-webhook \
  --namespace instana-autotrace-webhook \
  --no-hooks

After you run the helm uninstall command, the following output is displayed: release "instana-autotrace-webhook" uninstalled.

Verify that the Instana AutoTrace webhook was uninstalled correctly, either by clicking Kubernetes > Clusters in the Instana UI or by running the following command:

kubectl get pods --namespace instana-autotrace-webhook

If uninstallation is successful, the instana-autotrace-webhook pod no longer appears in the namespace.

Removing the instrumentation

To remove the AutoTrace webhook from deployed applications and prevent it from being applied to new applications, redeploy all higher-order resources that were formerly modified by the AutoTrace webhook. The redeployment ensures that all AutoTrace configuration (init containers and environment variables) is removed from the resource specifications and pod templates.

You can run the following commands to redeploy the higher-order resources:

  1. Delete the existing deployment by running the following command:

    kubectl delete deployment <deployment-name> -n <deployment-ns>
    
  2. Deploy the resources by running the following command:

    kubectl apply -f <initial-deployment-spec.yaml>
    

You can also redeploy the resources by using the operator, Helm chart, or other automation, depending on how you deployed your resources initially.

The kubectl rollout restart deployment command does not work because the AutoTrace webhook modifies not only the pods but also higher-order resources, such as ReplicaSets, StatefulSets, Deployments, and DeploymentConfigs.

If you cannot redeploy the application, you can remove the instrumentation by using the kubectl rollout undo command to roll back to a revision that doesn't include the webhook fields.

To roll back to a previous revision, complete the following steps:

  1. Check the history of the deployment by running the following command:

    kubectl rollout history deployment <deployment-name> -n <deployment-ns>
    
  2. Determine the revision to which the deployment must be rolled back by running the following command:

    kubectl rollout history deployment <deployment-name> -n <deployment-ns> --revision=<n>
    
  3. Roll back to the revision n that you require by running the following command:

    kubectl rollout undo deployment <deployment-name> -n <deployment-ns> --to-revision=<n>
    
  4. Verify that the new pods are created and the transformation fields are no longer present.

Configurations

Disabling tracers

You can disable tracers individually by using the --set autotrace.[technology].enabled=false option. The following technologies are available:

  • .Net Core (netcore)
  • Python (python)
  • Node.js (nodejs)
  • Ruby (ruby)

Example:

helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
  --repo https://agents.instana.io/helm instana-autotrace-webhook \
  --set webhook.imagePullCredentials.password=<download_key> \
  --set autotrace.ruby.enabled=false

Role-based access control

To deploy the AutoTrace webhook into a ServiceAccount guarded by a ClusterRole and matching ClusterRoleBinding, set the rbac.enabled=true flag when you are deploying the Helm chart.

In addition to the role-based access control, if you use pod security policies, add rbac.psp.enabled=true to the Helm arguments.

Pod Security Standards can also be enforced through the built-in Pod Security Admission controller. For more information about the Pod Security Admission, see Kubernetes documentation.

If the flags rbac.enabled=false and webhook.pod.hostNetwork=false are set in the Helm installation, you can run the AutoTrace webhook under the restricted Pod Security Standard by running the following command:

kubectl label --overwrite ns instana-autotrace-webhook pod-security.kubernetes.io/enforce=restricted

Container port

To be reachable from the Kubernetes apiserver, the AutoTrace webhook pod must be hosted on the host network, and the deployment is configured to achieve that transparently. By default, the container is bound to port 42650.

If port 42650 is already in use on the host, the AutoTrace webhook crashes because it cannot bind its port. You can change the port by using the webhook.pod.port property.
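For example, the port can be changed at installation time. The port number 42651 here is an arbitrary free port chosen for illustration:

```shell
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
  --repo https://agents.instana.io/helm instana-autotrace-webhook \
  --set webhook.imagePullCredentials.password=<download_key> \
  --set webhook.pod.port=42651
```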

Opt-in or opt-out

By default, the AutoTrace webhook instruments all containers in all pods. However, you can control which resources are instrumented. By setting the autotrace.opt_in=true value when you are deploying the Helm chart, the AutoTrace webhook modifies only pods, replica sets, stateful sets, daemon sets, and deployments that carry the instana-autotrace: "true" label.

Irrespective of the value of the autotrace.opt_in, the AutoTrace webhook does not touch pods that carry the instana-autotrace: "false" label.

The instana-autotrace: "false" label is respected in the metadata of DaemonSets, Deployments, DeploymentConfigs, ReplicaSets, and StatefulSets, as well as in nested pod templates and in stand-alone pods.
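As a sketch, the opt-in label can be added to a deployment's pod template with a merge patch. The deployment name and namespace are placeholders:

```shell
# Hypothetical example: with autotrace.opt_in=true, only labeled workloads
# are instrumented. Adding the label to the pod template also triggers a
# rollout, so the new pods pass through the webhook.
kubectl patch deployment my-app -n my-namespace --type merge \
  -p '{"spec": {"template": {"metadata": {"labels": {"instana-autotrace": "true"}}}}}'
```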

Ignoring namespaces

By using the autotrace.exclude.namespaces configuration, you can exclude entire namespaces from being auto-instrumented.

Resources that have the instana-autotrace: "true" label are instrumented regardless of namespace exclusion.

The instana-autotrace label is respected in the metadata of DaemonSets, Deployments, DeploymentConfigs, ReplicaSets, and StatefulSets, as well as in nested pod templates and in stand-alone pods.
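A minimal sketch of excluding namespaces at installation time follows. The namespace names are placeholders, and the list syntax assumes that autotrace.exclude.namespaces is a plain list in the chart's values; verify the exact values layout against the chart's Helm values before using it:

```shell
# Assumed values layout: autotrace.exclude.namespaces as a list of
# namespace names. The {a,b} form is Helm's --set list syntax.
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
  --repo https://agents.instana.io/helm instana-autotrace-webhook \
  --set webhook.imagePullCredentials.password=<download_key> \
  --set "autotrace.exclude.namespaces={dev-namespace,test-namespace}"
```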

Ignoring resources

Resources that have the instana-autotrace: "false" label are ignored regardless of other settings.

The instana-autotrace label is respected in the metadata of DaemonSets, Deployments, DeploymentConfigs, ReplicaSets, and StatefulSets, as well as in nested pod templates and in stand-alone pods.

NGINX and ingress-nginx

To activate the NGINX and ingress-nginx auto-instrumentation, you must opt in by setting autotrace.ingress_nginx.enabled=true. Read the troubleshooting section afterward, and make sure that the relevant objects are updated or re-created.

The AutoTrace webhook supports the ingress-nginx Kubernetes ingress controller 0.34.1 or later with a support policy of 45 days, and it is compatible with Helm chart 2.11.2 or later.

IBM MQ and ACE

To activate the IBM MQ and ACE auto-instrumentation, you must opt in by setting autotrace.ibmmq.enabled=true and autotrace.ace.enabled=true. The AutoTrace webhook supports only IBM MQ and ACE running in IBM Cloud Pak for Integration. Because IBM Cloud Pak for Integration runs on a Red Hat OpenShift cluster, you must also set openshift.enabled=true. To set up the Instana AutoTrace webhook with the IBM MQ and ACE auto-instrumentation enabled, run the following command:

helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
  --repo https://agents.instana.io/helm instana-autotrace-webhook \
  --set webhook.imagePullCredentials.password=<download_key> \
  --set openshift.enabled=true \
  --set autotrace.ibmmq.enabled=true \
  --set autotrace.ace.enabled=true

Node.js ECMAScript modules

Support for Node.js ECMAScript modules is experimental.

In AutoTrace webhook 1.272.2, a new configuration option autotrace.nodejs.application_type was added because of breaking changes introduced in Node.js 18.19.0. To ensure that you are using the latest version of the AutoTrace webhook, see the updates guide.

  • For Node.js 18.19.0 and later, set autotrace.nodejs.application_type to module_v2:

    helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
      --repo https://agents.instana.io/helm instana-autotrace-webhook \
      --set webhook.imagePullCredentials.password=<download_key> \
      --set autotrace.nodejs.application_type=module_v2
    
  • For versions before Node.js 18.19.0, set autotrace.nodejs.application_type to module_v1:

    helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
      --repo https://agents.instana.io/helm instana-autotrace-webhook \
      --set webhook.imagePullCredentials.password=<download_key> \
      --set autotrace.nodejs.application_type=module_v1
    
  • To revert to the default behavior, set autotrace.nodejs.application_type to commonjs or remove the configuration option entirely:

    helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
      --repo https://agents.instana.io/helm instana-autotrace-webhook \
      --set webhook.imagePullCredentials.password=<download_key> \
      --set autotrace.nodejs.application_type=commonjs
    

The previous configuration option autotrace.nodejs.esm is deprecated. Change to the new configuration option autotrace.nodejs.application_type.

Troubleshooting

If the Instana AutoTrace webhook does not affect your new Kubernetes resources, troubleshoot by completing the following steps:

Verifying that the Instana AutoTrace webhook is receiving requests

To verify that the Instana AutoTrace webhook is receiving requests, check the logs of the instana-autotrace-webhook pod by running the following command:

kubectl logs -l app.kubernetes.io/name=instana-autotrace-webhook -n instana-autotrace-webhook

In a functioning installation, logs similar to the following example appear:

14:41:37.590 INFO  |- [AdmissionReview 48556a1a-7d55-497b-aa9c-23634b089cd1] Applied transformation DefaultDeploymentTransformation to the Deployment 'test-netcore-glibc/test-apps'
14:41:37.588 INFO  |- [AdmissionReview 1d5877cf-7153-4a95-9bfb-de0af8351195] Applied transformation DefaultDeploymentTransformation to the Deployment 'test-nodejs-12/test-apps'

If you do not see such logs, a problem with Kubernetes setup might exist. Continue troubleshooting by checking the following section.

Checking the kube-apiserver logs

Check the logs of your kube-apiserver. These logs report on whether the Instana AutoTrace webhook is being started and provide information about the outcome of the execution.

Common issues

No network connectivity between kube-apiserver and the instana-autotrace-webhook pods

The most common issue is that the kube-apiserver cannot reach the worker nodes that are running the instana-autotrace-webhook pods due to security policies, which prevents the Instana AutoTrace webhook from working. In this case, the solution is to change the network settings so that the kube-apiserver can access the instana-autotrace-webhook pods. Review your network security policies to make sure that the kube-apiserver can initiate connections to and receive responses from the instana-autotrace-webhook pods. Instana cannot provide direct guidance for resolving this issue because the solution varies based on your policies and enforcement mechanisms.

kube-apiserver and the instana-autotrace-webhook pods cannot negotiate a TLS session

A more sporadic issue occurs when cryptography restrictions, specifically on which algorithms can be used for TLS, prevent the kube-apiserver from negotiating a TLS session with the instana-autotrace-webhook pod. In this case, open a ticket and tell Instana support which cryptography algorithms your clusters support.

Insufficient ephemeral storage

The initContainer that is added by the webhook pulls the image that contains the instrumentation files for all the supported technologies (Node.js, .NET Core, Ruby, Python, and NGINX) and copies the files to the pod. The files are stored in the emptyDir volume instana-instrumentation-volume under the volume mount path /opt/instana/instrumentation/. Because emptyDir volumes are stored on the local file system of the node, the instrumentation files increase the pod's ephemeral storage usage. The total size of the instrumentation files is around 300MB. See the following table for the storage requirements of each technology:

Technology                   Storage requirement
libinstana_init              5M (required for all technologies)
IBM MQ                       17M
Ruby                         151M
IBM App Connect Enterprise   9M
.NET Core                    4M
NGINX                        66M
Node.js                      32M
Python                       20M

If the pod does not have sufficient ephemeral storage to load these files, the pod enters a restart loop with the status OOMKilled or CrashLoopBackOff. To resolve this issue, modify the target deployment and increase spec.template.spec.containers[x].resources.limits.memory by 300MB so that all instrumentation files can be loaded. Alternatively, you can limit the required ephemeral storage by configuring the webhook to copy only the files of the necessary technologies. You can modify the webhook settings globally for the whole cluster by specifying the Helm chart flags or by modifying the values.yaml file. For example, use --set autotrace.instrumentation.manual.<technology>=true, where technology can be nodejs, netcore, python, ruby, or nginx.

Alternatively, you can limit the required ephemeral storage for each deployment or pod by setting the environment variable INSTANA_INSTRUMENT_<technology>=true in spec.template.spec.containers[x].env, where technology can be NODEJS, NETCORE, PYTHON, RUBY, or NGINX.
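For a single deployment, the environment variable can be added with kubectl. The deployment name and namespace in this sketch are placeholders:

```shell
# Hypothetical example: copy only the Node.js instrumentation files for one
# deployment. Setting the variable updates the pod template and triggers a
# rollout.
kubectl set env deployment/my-app -n my-namespace INSTANA_INSTRUMENT_NODEJS=true
```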

To include the instrumentation files for two or more technologies, specify the required Helm chart flags for each technology. For more information about the environment variables or Helm chart flags, see Helm values.
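For example, an installation that copies only the Node.js and Python instrumentation files cluster-wide, using the manual instrumentation flags named earlier, can look like this:

```shell
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
  --repo https://agents.instana.io/helm instana-autotrace-webhook \
  --set webhook.imagePullCredentials.password=<download_key> \
  --set autotrace.instrumentation.manual.nodejs=true \
  --set autotrace.instrumentation.manual.python=true
```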