Configuring AutoTrace webhook
You can configure Instana AutoTrace webhook according to your specific requirements. See the following list of available configuration options:
- Pinning AutoTrace Webhook version
- Disabling tracers
- Configuring role-based access control
- Setting container port
- Opting in or out of instrumentation
- Ignoring namespaces
- Ignoring resources
- Enabling webhook instrumentation for NGINX and ingress-nginx
- Enabling webhook instrumentation for IBM MQ and ACE
- Node.js ECMAScript modules
- Minimizing required ephemeral storage
Pinning AutoTrace Webhook version
Starting with AutoTrace Webhook version 1.295.7
With the webhook version 1.295.7, a new flag was introduced for pinning the AutoTrace Webhook version. When necessary, you can use the --set global.version=<version> flag to specify the version manually for the AutoTrace webhook deployment. For more information about all Instana AutoTrace webhook releases, see Instana autotrace webhook.
Previously, image versions were pinned by using SHA hashes, which made it difficult to identify and align versions across multiple images. Starting with webhook 1.295.7, you can use a single version value to control both images by setting the global.version flag. The flag is set by default. When set, global.version takes precedence and overrides any version or SHA that is defined in webhook.image and autotrace.instrumentation.image.
If global.version is not set, the values that are defined in webhook.image or autotrace.instrumentation.image are used.
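For example, an installation command that pins both images to a single release might look like the following. Replace <version> with the release that you want to pin:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set global.version=<version>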
Before AutoTrace Webhook version 1.295.7
In Helm charts earlier than version 1.295.7, you pin the AutoTrace Webhook version through the individual image flags for the webhook and the instrumentation image. For example, to pin version 1.294.0, use the two flags --set webhook.image="containers.instana.io/instana/release/agent/instana-autotrace-webhook:1.294.0" --set autotrace.instrumentation.image="icr.io/instana/instrumentation:1.294.0".
This approach is deprecated. Use the global.version flag instead.
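For example, a complete installation command that uses the deprecated image flags to pin version 1.294.0 might look like the following:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set webhook.image="containers.instana.io/instana/release/agent/instana-autotrace-webhook:1.294.0" \
--set autotrace.instrumentation.image="icr.io/instana/instrumentation:1.294.0"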
Disabling tracers
You can disable the tracing of technologies individually by adding the line --set autotrace.<technology>.enabled=false to the AutoTrace webhook installation command. Replace <technology> with the technology for which you want to disable tracing.
You can disable the tracers for the following technologies:
- .NET Core (netcore)
- Python (python)
- Node.js (nodejs)
- Ruby (ruby)
See the sample Helm command:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set autotrace.ruby.enabled=false
Configuring role-based access control
To deploy the AutoTrace webhook into a ServiceAccount that is guarded by a ClusterRole and a matching ClusterRoleBinding, set the rbac.enabled=true flag when you deploy the Helm chart.
In addition to the role-based access control, if you use pod security policies, add rbac.psp.enabled=true to the Helm arguments.
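For example, an installation command that enables both role-based access control and the pod security policy resources might look like the following:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set rbac.enabled=true \
--set rbac.psp.enabled=true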
You can also enforce Pod Security Standards through the built-in Pod Security Admission controller. For more information about the Pod Security Admission, see Kubernetes documentation.
If you set the flags rbac.enabled=false and webhook.pod.hostNetwork=false in the Helm installation, you can run the AutoTrace webhook with the restricted Pod Security Standard by running the following command:
kubectl label --overwrite ns instana-autotrace-webhook pod-security.kubernetes.io/enforce=restricted
Setting container port
The AutoTrace webhook pod must be hosted on the host network and configured properly so that it is reachable from the Kubernetes API server (apiserver).
By default, the container is bound to port 42650. If port 42650 is already in use, the AutoTrace webhook crashes. To avoid this issue, you can change the port by using the webhook.pod.port property.
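For example, to bind the container to a different port, pass the webhook.pod.port property during the Helm installation. Replace <port> with a port that is free on the node:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set webhook.pod.port=<port>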
Opting in or out of instrumentation
By default, the AutoTrace webhook instruments all containers in all pods.
However, you can have more control over which resources are instrumented. To specify that only selected resources are instrumented, first add the autotrace.opt_in=true argument during the Helm deployment.
Then, specify which resources must be instrumented by adding the instana-autotrace: "true" label to the required pods, replica sets, stateful sets, daemon sets, and deployments. The AutoTrace webhook uses this label to identify the required resources and mutates them. Additionally, you can set the label at the namespace level, in which case all resources within that namespace are instrumented.
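For example, the following commands show a minimal opt-in setup; my-app and my-namespace are placeholder names for a deployment and a namespace in your cluster:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set autotrace.opt_in=true
kubectl label deployment my-app instana-autotrace=true
kubectl label namespace my-namespace instana-autotrace=true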
Regardless of the value of the autotrace.opt_in setting, the AutoTrace webhook does not mutate pods or resources within a namespace that carry the instana-autotrace: "false" label.
The instana-autotrace: "false" label is respected in the metadata of DaemonSets, Deployments, DeploymentConfigs, ReplicaSets, StatefulSets, and namespaces, as well as in nested pod templates and in stand-alone pods.
Ignoring namespaces
You can exclude entire namespaces from being auto-instrumented by using the autotrace.exclude.namespaces configuration.
Resources that have the instana-autotrace: "true" label are instrumented regardless of the namespace exclusion.
The instana-autotrace label is respected in the metadata of DaemonSets, Deployments, DeploymentConfigs, ReplicaSets, and StatefulSets, as well as in nested pod templates and in stand-alone pods.
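For example, assuming that autotrace.exclude.namespaces accepts a list of namespace names (see Helm values for the exact format), excluding the hypothetical namespaces dev and test with the Helm list syntax might look like the following:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set 'autotrace.exclude.namespaces={dev,test}'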
Ignoring resources
You can exclude individual resources from auto-instrumentation by adding the instana-autotrace: "false" label. The AutoTrace webhook ignores resources with this label regardless of other settings.
The instana-autotrace label is respected in the metadata of DaemonSets, Deployments, DeploymentConfigs, ReplicaSets, and StatefulSets, as well as in nested pod templates and in stand-alone pods.
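For example, to exclude a single deployment from instrumentation (my-legacy-app is a placeholder name), add the label to its metadata:
kubectl label deployment my-legacy-app instana-autotrace=false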
Enabling webhook instrumentation for NGINX and ingress-nginx
To activate the NGINX and ingress-nginx auto-instrumentation, you must opt in by setting autotrace.ingress_nginx.enabled=true during the Helm deployment.
Before you proceed, see Troubleshooting AutoTrace webhook, and make sure that the relevant objects are updated or re-created.
The AutoTrace webhook supports the ingress-nginx Kubernetes ingress controller 0.34.1 or later with a support policy of 45 days, and is compatible with Helm chart 2.11.2 or later.
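For example, an installation command that enables the NGINX and ingress-nginx instrumentation might look like the following:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set autotrace.ingress_nginx.enabled=true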
Enabling webhook instrumentation for IBM MQ and ACE
To activate the IBM MQ and ACE auto-instrumentation, you must opt in by setting the flags autotrace.ibmmq.enabled=true and autotrace.ace.enabled=true. The AutoTrace webhook supports only IBM MQ and ACE that are running in IBM Cloud Pak for Integration.
Because IBM Cloud Pak for Integration runs on the Red Hat OpenShift cluster, you must also set openshift.enabled=true during the Helm deployment.
To set up Instana AutoTrace webhook with the IBM MQ and ACE auto-instrumentation that are enabled, enter the following command:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set openshift.enabled=true \
--set autotrace.ibmmq.enabled=true \
--set autotrace.ace.enabled=true
Node.js ECMAScript modules
Currently, support for ECMAScript modules is in an experimental phase.
In AutoTrace webhook 1.272.2, a new configuration option autotrace.nodejs.application_type was added because of the breaking changes that were introduced in Node.js 18.19.0.
To make sure that you are using the latest version of the AutoTrace webhook, see Updating AutoTrace webhook.
- For Node.js 18.19.0 and later, set autotrace.nodejs.application_type to module_v2:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set autotrace.nodejs.application_type=module_v2
- For versions earlier than Node.js 18.19.0, set autotrace.nodejs.application_type to module_v1:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set autotrace.nodejs.application_type=module_v1
- To revert to the default behavior or remove the configuration option entirely, set autotrace.nodejs.application_type to commonjs:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set autotrace.nodejs.application_type=commonjs
The previous configuration option, autotrace.nodejs.esm, is now deprecated. You must use the new configuration option autotrace.nodejs.application_type instead.
Minimizing required ephemeral storage
The webhook mutates the deployment and adds an initContainer. The initContainer pulls the image that contains the instrumentation files for all supported technologies (Node.js, .NET Core, Ruby, Python, and NGINX) and copies the files to a volume.
The files are stored in the emptyDir volume instana-instrumentation-volume under the volume mount path /opt/instana/instrumentation/. Because emptyDir volumes are stored on the local file system of the node, the instrumentation files increase the pod's ephemeral storage usage. The total size of the instrumentation files is around 300 MB. See the following table for the storage requirements of each technology:
| Technology | Storage requirement |
|---|---|
| libinstana_init (required for all technologies) | 5 MB |
| IBM MQ | 17 MB |
| Ruby | 151 MB |
| IBM App Connect Enterprise | 9 MB |
| .NET Core | 4 MB |
| NGINX | 66 MB |
| Node.js | 32 MB |
| Python | 20 MB |
In some cases, your cluster might use only certain technologies. To optimize, you can limit the files that are copied during the installation, which reduces ephemeral storage usage and improves the initContainer running time. Complete the following steps to limit the files to only the technologies that you require:
- Determine the technologies that are used in your cluster. In this example, Node.js and .NET Core are selected.
- Configure the webhook installation by explicitly enabling the required technologies by using Helm chart flags. The syntax is as follows:
--set autotrace.instrumentation.manual.<technology>=true
Replace <technology> with one of the following values:
  - nodejs
  - netcore
  - python
  - ruby
  - nginx
For example, to enable both Node.js and .NET Core, use the following flags (a complete installation command is shown after these steps):
--set autotrace.instrumentation.manual.nodejs=true --set autotrace.instrumentation.manual.netcore=true
- After the Helm chart installation succeeds, re-create your deployments so that the new configuration takes effect on the cluster workloads.
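For example, a complete installation command that limits the copied instrumentation files to Node.js and .NET Core might look like the following:
helm install --create-namespace --namespace instana-autotrace-webhook instana-autotrace-webhook \
--repo https://agents.instana.io/helm instana-autotrace-webhook \
--set webhook.imagePullCredentials.password=<download_key> \
--set autotrace.instrumentation.manual.nodejs=true \
--set autotrace.instrumentation.manual.netcore=true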
For more information on the environment variables or Helm chart flags, see Helm values.