Installing the host agent on Kubernetes

You can install the Instana host agent in an online or offline (air-gapped) environment in a Kubernetes cluster.

Supported Versions

  • Instana supports the current stable version of Kubernetes. For example, if the current stable version of Kubernetes is 1.26, the Kubernetes sensor is pinned to version 1.24. According to the Kubernetes version compatibility guarantee, Instana supports Kubernetes versions 1.22 to 1.26. However, the lowest two versions in that range are considered soft-deprecated.
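The version arithmetic above can be sketched as follows. This is an illustration only: the values mirror the 1.26 example in this section and are not a live lookup.

```shell
# Sketch: derive the example version range from the current stable
# Kubernetes minor version. The version numbers are illustrative.
stable="1.26"
major="${stable%%.*}"
minor="${stable##*.}"

echo "Sensor pinned to: ${major}.$((minor - 2))"
echo "Supported range:  ${major}.$((minor - 4)) to ${stable}"
echo "Soft-deprecated:  ${major}.$((minor - 4)), ${major}.$((minor - 3))"
```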

Installing the host agent in an air-gapped environment

You can install the Instana agent in an air-gapped environment, which has limited internet access, by using Helm, the Kubernetes package manager.

In the following example, the CentOS Linux distribution is used, but the installation steps are valid for all distributions.

Prerequisites

  • Linux CentOS 7 or 8
  • Docker
  • Helm
  • Minikube
  • Instana Helm charts

Procedure

  1. Install Docker by running the following commands:

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    sudo systemctl start docker
    
  2. Install Helm by running the following commands:

    sudo su
    yum install -y epel-release
    yum install -y snapd
    systemctl enable --now snapd.socket
    ln -s /var/lib/snapd/snap /snap
    snap install helm --classic
    PATH="$PATH:/snap/bin/"
    helm version
    
  3. Install Minikube by completing the following steps:

    This step serves as an example to show the target environment.

    1. Add the Kubernetes repository:

      cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
      enabled=1
      gpgcheck=1
      gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
      exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
      EOF
      
    2. Install the kubectl command line tool:

      sudo yum install -y kubectl
      
    3. Download the Minikube RPM package:

      curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
      
    4. Install the Minikube RPM package:

      sudo rpm -Uvh minikube-latest.x86_64.rpm
      
    5. Start a Kubernetes cluster by using Docker:

      minikube start --force --driver=docker
      
    6. Set up the Docker environment so that it can be used with Minikube:

      eval $(minikube docker-env)
      
    7. Check the Kubernetes cluster:

      kubectl cluster-info
      
  4. Pull the necessary Docker images for transfer to an air-gapped environment:

    1. Log in to the Instana Container Registry with the agent key:

      docker login https://containers.instana.io/v2 -u _ -p <agentKey>
      
    2. Pull the latest version of the Instana agent from the Instana Container Registry:

      docker pull containers.instana.io/instana/release/agent/static:latest
      
    3. Pull the latest version of the Instana Kubernetes sensor from the IBM Cloud Container Registry:

      docker pull icr.io/instana/k8sensor:latest
      
    4. Log out of the Docker registry:

      docker logout
      
    5. Convert the Docker images into .tar files:

      docker images
      docker tag <instanaAgentImageID> instana/agent
      docker save instana/agent > instana-agent.tar
      
      docker tag <k8SensorID> instana/k8sensor
      docker save instana/k8sensor > instana-k8sensor.tar
      
    6. Copy the files (instana-agent.tar and instana-k8sensor.tar) to the required host (the air-gapped system with all necessary prerequisites).

    7. Delete current images from the air-gapped host:

      docker rmi -f <instanaAgentImageID> <k8SensorID>
      
    8. Import images from .tar files (instana-agent.tar and instana-k8sensor.tar):

      docker load --input instana-agent.tar
      docker load --input instana-k8sensor.tar
      
  5. Run the Docker registry server and push images:

    docker run -d -p 5000:5000 --restart=always --name registry registry:2
    
    docker tag instana/agent:latest localhost:5000/instana-agent
    docker push localhost:5000/instana-agent
    
    docker tag instana/k8sensor:latest localhost:5000/instana-k8sensor
    docker push localhost:5000/instana-k8sensor
    
    # Now you can delete all images related to the agent:
    docker rmi -f <instanaAgentImageID> <k8SensorID>
    
  6. Pull Instana charts from the following repository:

    helm pull instana-agent --repo https://agents.instana.io/helm
    

    Check for the latest Instana agent helm chart file in your current directory.

  7. Deploy Instana agent into the Kubernetes cluster:

    helm upgrade --install --create-namespace \
      --namespace instana-agent \
      --set agent.key=<agentKey> \
      --set agent.endpointHost=ingress-red-saas.instana.io \
      --set agent.endpointPort=443 \
      --set cluster.name='mip-back-test' \
      --set zone.name='mip-gke-zone' \
      --set k8s_sensor.deployment.enabled=true \
      --set k8s_sensor.image.name=localhost:5000/instana-k8sensor \
      --set k8s_sensor.image.tag=latest \
      --set k8s_sensor.image.pullPolicy=IfNotPresent  \
      --set agent.image.name=localhost:5000/instana-agent \
      --set agent.image.tag=latest \
      --set agent.image.pullPolicy=IfNotPresent  \
      instana-agent instana-agent-1.2.61.tgz
    

    Enter <agentKey>, and edit <agent.endpointHost>, <agent.endpointPort>, <k8s_sensor.image.name>, and <agent.image.name>. Also, replace instana-agent-1.2.61.tgz with the latest Instana agent Helm chart file in your directory.

  8. Check your pods with the following command:

    kubectl get all -n instana-agent
    
  9. Check the monitored Kubernetes cluster in the Instana UI.
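As a quick readiness check after step 8, the pod statuses can be verified with a small script. This is a sketch: in a live cluster, the input would come from `kubectl get pods -n instana-agent --no-headers`; here, sample output with placeholder pod names stands in so that the logic is self-contained.

```shell
# Sample of what `kubectl get pods -n instana-agent --no-headers` may
# print; pod names are placeholders.
sample='instana-agent-abc12            1/1   Running   0   2m
instana-agent-def34            1/1   Running   0   2m
instana-agent-k8sensor-xyz89   1/1   Running   0   2m'

# Count pods whose STATUS column (field 3) is not "Running".
not_running=$(printf '%s\n' "$sample" | awk '$3 != "Running"' | wc -l)

if [ "$not_running" -eq 0 ]; then
  echo "All instana-agent pods are Running"
else
  echo "$not_running pod(s) are not Running"
fi
```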

Installing the host agent in an online environment

You can use several available methods to install the Instana agent onto a Kubernetes cluster. If you install the agent on a host manually, only containers and processes are monitored, and Kubernetes data is not collected. It is recommended that you install the agent by using the Operator. Alternative methods are the Helm chart or YAML file (DaemonSet) deployment.

Choosing the proper installation method

The recommended way to deploy the Instana agent into a Kubernetes cluster is to use the Operator, or the Helm chart deployment as an alternative. Both methods support all available configuration options, such as reporting to multiple backends. They give you full control and enable some options that are not available in the static YAML file. All available options are described in detail for the Helm chart, and these descriptions apply to the Operator CRD in the same way.

The YAML file installation is good for quick deployments, although it does not expose all the configuration options that the Instana agent provides when it runs in your Kubernetes cluster. Choose the YAML installation method if you do not need to apply further customization to your Instana agents and rely on manual kubectl handling.

Current versions of installation methods

New versions of the Operator, Helm chart, and YAML file are released fairly frequently. To keep up with the latest fixes, improvements, and new features, ensure that you are running the current version of the Operator, Helm chart, or YAML file.

To find the current version of the Operator, Helm chart, or YAML file, see the corresponding topics.

Install by using the Operator

Instana provides a Kubernetes operator to install and manage the Instana agent.

For configuration options that you can set by using the Instana Agent Custom Resource Definition and environment variables, see Operator configuration.

Prerequisites

To upgrade the Instana agent CRD from version 1.x of the Instana agent operator automatically, the following prerequisites are needed:

  • The automatic CRD upgrade from version 1.x of the operator depends on the third-party cert-manager operator, which must be present in the cluster. For information about how to install and configure the cert-manager operator, see the cert-manager documentation.

  • You must first upgrade the Instana agent to version 2.0.9, which includes a webhook conversion to convert existing agent CRs into the newer format. This webhook is removed after version 2.0.9. You can upgrade the Instana agent again to a newer version after the conversion is complete.

If you are reinstalling the host agent, then you must have cleanly uninstalled the agent, including deleting all the host agent's cluster level objects. For more information, see Uninstalling the host agent.

Install the Operator manually

Complete the prerequisites steps before you proceed with installing the operator.

  1. Deploy the Operator as follows. The command installs the latest Operator, and 1.x CRDs are not upgraded:

    kubectl apply -f https://github.com/instana/instana-agent-operator/releases/latest/download/instana-agent-operator.yaml
    

    The Operator should be up and running in the namespace instana-agent, waiting for an instana-agent custom resource to be created. Note that each legacy version 1.x.x of the instana-agent-operator.yaml references the same version of the instana/instana-agent-operator container image. The latest tag for the Instana Agent Operator image in DockerHub and the Red Hat Registry is not supported. To get a new version of the Instana Agent Operator, update to the latest Operator YAML from the Operator's GitHub Releases page as mentioned previously.

  2. Sign in to the Instana UI, and then select an option to display the agent catalog; for example, on the home page, click Deploy Agent.

    If you are starting a new trial instance of Instana, the agent catalog is displayed with a prompt to select a host agent to install.

  3. Click the tile Kubernetes - Operator

  4. Enter the cluster name and (optionally) the agent zone that you want the cluster to be part of.

    The cluster name (INSTANA_KUBERNETES_CLUSTER_NAME) is the customized name of the cluster monitored by this DaemonSet.

    The agent zone (INSTANA_ZONE) is used to customize the zone grouping displayed on the infrastructure map. It also sets the default name of the cluster.

  5. Create a custom resource YAML file by copying the YAML template provided on the UI page.

    The YAML template is pre-filled with your agent key, host agent endpoint, cluster name, and agent zone.

  6. Edit the custom resource YAML file.

    1. If you are installing the agent in a self-hosted environment, and the agent key does not have authority to download from the Instana public artifactory, add the download key as downloadKey: <your_download_key>. For example:

      agent:
        key: wPYpH7EGK0ucLaO0Nu7BYw
        downloadKey: m007YDoWNload6kE42yukg
        endpointHost: ...
      
    2. You can replace the following values:

      • agent.env (optional) can be used to specify environment variables for the agent, such as specifying the proxy configuration for the agent. For more possible environment values, see agent configuration. Here is an example:

        spec:
          agent:
            env:
              INSTANA_AGENT_TAGS: staging
        
      • agent.configuration_yaml can be used to specify a configuration.yaml configuration file. See the following example:

        spec:
          agent:
            configuration_yaml: |
              # Example of configuration yaml template
              # Host
              com.instana.plugin.host:
                tags:
                  - 'dev'
                  - 'app1'
        

        For more information about adapting configuration.yaml files, see Configuring host agents by using the agent configuration file.

    3. If you want to deploy the static host agent, configure the custom resource YAML file with the static agent image. To use the static host agent image, set agent.image.name to containers.instana.io/instana/release/agent/static. See the following example:

      spec:
        agent:
          image:
            name: containers.instana.io/instana/release/agent/static
      
  7. Apply the custom resource YAML file:

    kubectl apply -f instana-agent.customresource.yaml
    

    Where instana-agent.customresource.yaml is the name of your custom resource YAML file.

    The operator picks up the configuration from the custom resource YAML file and deploys the Instana agent.
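For reference, a minimal custom resource might look like the following sketch. All values are placeholders; the UI-generated template from step 5 fills in your real key, endpoint, cluster name, and zone.

```yaml
apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  zone:
    name: my-zone            # placeholder
  cluster:
    name: my-cluster         # placeholder
  agent:
    key: <your_agent_key>
    endpointHost: <your_host_agent_endpoint>
    endpointPort: "443"
```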

Uninstalling the host agent

  1. Uninstall the Instana host agent by removing the custom resource YAML file:

    kubectl delete -f instana-agent.customresource.yaml
    
  2. Uninstall the Operator by running the following command:

    kubectl delete -f https://github.com/instana/instana-agent-operator/releases/latest/download/instana-agent-operator.yaml
    
  3. Delete the host agent's cluster level objects. For example, you can identify the cluster level objects by running the following command:

    kubectl get clusterrole,clusterrolebinding -o name | egrep 'instana-agent|k8sensor'
    

Operator configuration

Custom resource values

The Instana Agent custom resource supports the exact same configuration as the Instana Helm Chart. For a detailed list of all the configuration parameters, see the Helm Chart configuration.

Set up TLS encryption for agent endpoint

TLS encryption can be added in two ways: either provide an existing secret, or provide a certificate and private key.

Using existing secret

You can use an existing secret of the type kubernetes.io/tls for TLS encryption. In this case, agent.tls.secretName must be provided in the custom resource YAML file.

Provide certificate and private key

Alternatively, a certificate and a private key can be used directly. The certificate and private key must be base64-encoded.

To use this variant, add the following two parameters to the custom resource YAML file:

  • agent.tls.certificate
  • agent.tls.key

If agent.tls.secretName is set, then agent.tls.certificate and agent.tls.key are ignored.
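Putting the two variants together, the TLS section of the custom resource might look like the following sketch. The values are placeholders; set either secretName or the certificate and key pair, not both.

```yaml
spec:
  agent:
    tls:
      # Variant 1: reference an existing kubernetes.io/tls secret.
      secretName: my-agent-tls        # placeholder
      # Variant 2: provide the material directly (base64-encoded).
      # If secretName is set, these two fields are ignored.
      # certificate: <YOUR_CERTIFICATE_BASE64_ENCODED>
      # key: <YOUR_PRIVATE_KEY_BASE64_ENCODED>
```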

Install by using the Helm Chart

With a DaemonSet, the Helm chart adds the Instana agent to all schedulable nodes in your cluster.

  1. Sign in to the Instana UI, and then select an option to display the agent catalog; for example, on the home page, click Deploy Agent.

    If you are starting a new trial instance of Instana, the agent catalog is displayed with a prompt to select a host agent to install.

  2. Click the tile Kubernetes - Helm chart

  3. Enter the cluster name and (optionally) the agent zone that you want the cluster to be part of.

    The cluster name (<your_cluster_name>) is the customized name of the cluster monitored by this DaemonSet.

    The agent zone (<your_zone_name>) is used to customize the zone grouping displayed on the infrastructure map.

    The agent deployment code is updated with the values that you provide. All of the other required parameters are pre-populated in the agent deployment code, which looks like the following:

    helm install instana-agent \
    --repo https://agents.instana.io/helm \
    --namespace instana-agent \
    --create-namespace \
    --set agent.key='<your_agent_key>' \
    --set agent.endpointHost='<your_host_agent_endpoint>' \
    --set agent.endpointPort=443 \
    --set cluster.name='<your_cluster_name>' \
    --set zone.name='<your_zone_name>' \
    instana-agent
    
  4. Copy and then run the agent deployment code with Helm 3.

    To configure the installation, you can specify the values on the command line by using the --set flag or can provide a YAML file with your values by using the -f flag.

    If you want to deploy the static host agent, set the flag --set agent.image.name=containers.instana.io/instana/release/agent/static in the agent deployment code.

    For a detailed list of all the configuration parameters, see Instana Helm Chart.

Setting up TLS encryption for agent endpoint

TLS encryption can be added in two ways: either provide an existing secret, or provide a certificate and private key during installation.

Using existing secret

You can use an existing secret of type kubernetes.io/tls. Only the secret name must be provided during installation with --set 'agent.tls.secretName=<YOUR_SECRET_NAME>'. The files from the provided secret are then included in the agent.

Provide certificate and private key

Alternatively, a certificate and a private key can be added during the installation. The certificate and private key must be base64-encoded.

To use this variant, run helm install with the following additional parameters:

--set 'agent.tls.certificate=<YOUR_CERTIFICATE_BASE64_ENCODED>'
--set 'agent.tls.key=<YOUR_PRIVATE_KEY_BASE64_ENCODED>'

If agent.tls.secretName is set, then agent.tls.certificate and agent.tls.key will be ignored.
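The base64 encoding can be prepared as in the following sketch. The tls.crt and tls.key file names are placeholders for your real PEM files; here, stand-in files are created so that the commands run standalone.

```shell
# Create stand-in PEM files (replace these with your real certificate
# and private key files).
printf 'dummy-cert' > tls.crt
printf 'dummy-key'  > tls.key

# base64-encode without line wrapping, as expected by the --set flags.
CERT_B64=$(base64 < tls.crt | tr -d '\n')
KEY_B64=$(base64 < tls.key | tr -d '\n')

echo "--set 'agent.tls.certificate=${CERT_B64}'"
echo "--set 'agent.tls.key=${KEY_B64}'"
```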

Instana agent service

The functionality described in this section is available with the Instana Agent Helm chart v1.2.7 and above, and requires Kubernetes 1.17 and above.

The Helm chart has a special configuration option called --set service.create=true. This option creates a Kubernetes Service that exposes the agent to other workloads in the cluster.

Uninstalling the host agent

To uninstall the Instana host agent that is installed by using the Helm Chart, run the following command:

helm uninstall instana-agent

All the resources that are related to the host agent are then removed.

Install as a DaemonSet

To install and configure the Instana host agent as a DaemonSet within your Kubernetes cluster, customize the instana-agent.yaml file to create the instana-agent namespace in which the DaemonSet is created. This enables you to tag agents for quick identification or to stop all of them by deleting the namespace.

  1. Sign in to the Instana UI, and then select an option to display the agent catalog; for example, on the home page, click Deploy Agent.

    If you are starting a new trial instance of Instana, the agent catalog is displayed with a prompt to select a host agent to install.

  2. Click the tile Kubernetes - YAML

  3. Enter the cluster name and (optionally) the agent zone that you want the cluster to be part of.

    The cluster name (<your_cluster_name>) is the customized name of the cluster monitored by this DaemonSet.

    The agent zone (<your_zone_name>) is used to customize the zone grouping displayed on the infrastructure map.

    The agent deployment code is updated with the values that you provide. All of the other required parameters are pre-populated in the agent deployment code.

  4. Download (or copy and save) the agent deployment code as a YAML file; for example: deployment.yaml.

  5. To install Instana within your Kubernetes Cluster, run the following command:

    kubectl apply -f deployment.yaml
    

    Where deployment.yaml is the name of the file that you created in the previous step.

    Note: If you make any more edits to the deployment.yaml file, you must recreate the DaemonSet. To apply changes, run the following commands:

    kubectl delete -f deployment.yaml
    kubectl apply -f deployment.yaml
    

RBAC

To deploy to Kubernetes versions earlier than 1.8 with RBAC enabled, replace rbac.authorization.k8s.io/v1 with rbac.authorization.k8s.io/v1beta1 as the RBAC API version.
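The substitution can be done with sed, as in the following sketch; a stand-in instana-agent.yaml with one relevant line is created so that the command runs standalone.

```shell
# Stand-in for the downloaded instana-agent.yaml (one relevant line).
printf 'apiVersion: rbac.authorization.k8s.io/v1\n' > instana-agent.yaml

# Replace the RBAC API version for pre-1.8 clusters; keeps a .bak backup.
sed -i.bak 's#rbac.authorization.k8s.io/v1#rbac.authorization.k8s.io/v1beta1#' instana-agent.yaml

cat instana-agent.yaml
```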

To grant your user the ability to create authorization roles, for example in GKE, run the following command:

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

If you don't have RBAC enabled, remove the ClusterRole and ClusterRoleBinding from the instana-agent.yaml file.

PodSecurityPolicy

To enable a PodSecurityPolicy for the Instana agent:

  1. Create a PodSecurityPolicy resource as defined in our Helm chart.
  2. Authorize that policy in the instana-agent ClusterRole. Note that RBAC has to be enabled with the ClusterRole and ClusterRoleBinding resources created as defined in the aforementioned instana-agent.yaml file.
  3. Enable the PodSecurityPolicy admission controller on your cluster. For existing clusters, it is recommended that you add and authorize policies before you enable the admission controller.

Pod Security Admission

The Instana host agent requires the privileged Pod Security Standard. To enforce the Pod Security Standard with the built-in Pod Security Admission controller, run the following command:

kubectl label --overwrite ns instana-agent pod-security.kubernetes.io/enforce=privileged
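Equivalently, the label can be set declaratively on the namespace manifest. This is a sketch that assumes you manage the instana-agent namespace yourself:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: instana-agent
  labels:
    pod-security.kubernetes.io/enforce: privileged
```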

Checking the status of the host agent

After you install the host agent, you can check the status of the host agent in the Instana UI or on the host. For more information, see Checking the status of the host agent.

Configure network access for monitored applications

Some types of applications need to reach out to the agent first. Currently, these applications are:

  • Node.js
  • Go
  • Ruby
  • Python
  • .NET Core

These applications need to know the IP address on which the agent listens. Because the agent listens on the host IP automatically, use the following Downward API snippet to pass that IP to the application pod in an environment variable:

spec:
  containers:
    - env:
        - name: INSTANA_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP

Monitor master nodes

By default, the agent does not get scheduled on Kubernetes master nodes because the deployment respects the default taint node-role.kubernetes.io/master:NoSchedule that is set on most master nodes. To override this behavior, add the following toleration to the agent DaemonSet:

kind: DaemonSet
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  template:
    ...
    spec:
      tolerations:
        - key: "node-role.kubernetes.io/master"
          effect: "NoSchedule"
          operator: "Exists"
      ...

For more direct control, install the agent separately on the master nodes. Contact support for advice about your environment.
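Note that newer Kubernetes versions taint control plane nodes with node-role.kubernetes.io/control-plane instead of the master taint. If you are unsure which taint your nodes carry, tolerating both is a reasonable sketch:

```yaml
tolerations:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
    operator: "Exists"
  - key: "node-role.kubernetes.io/control-plane"
    effect: "NoSchedule"
    operator: "Exists"
```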

Monitor Kubernetes NGINX Ingress

For guidelines on how to configure the Kubernetes NGINX Ingress and the agent for capturing NGINX metrics, see the Monitoring NGINX page. Tracing of the Kubernetes NGINX Ingress is also possible through the OpenTracing project; for guidelines on how to set that up, see Distributed Tracing for NGINX Ingress.

Secrets

Kubernetes has built-in support for storing and managing sensitive information. However, if you do not use that built-in capability but still need the ability to redact sensitive data in Kubernetes resources, the agent secrets configuration is extended to support that.

To enable sensitive data redaction for selected Kubernetes resources (specifically annotations and container environment variables), set the INSTANA_KUBERNETES_REDACT_SECRETS environment variable to true as shown in the following agent yaml snippet:

spec:
  containers:
    - env:
        - name: INSTANA_KUBERNETES_REDACT_SECRETS
          value: "true"

Then configure the agent with the desired list of secrets to match on as described in the agent secrets configuration.

Note that enabling this capability can decrease the performance of the Kubernetes sensor.
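For orientation, the matching secrets list in configuration.yaml follows the com.instana.secrets format; the matcher strategy and list entries below are illustrative, not mandatory values.

```yaml
# Sketch of an agent secrets configuration (configuration.yaml);
# matcher and list values are illustrative.
com.instana.secrets:
  matcher: 'contains-ignore-case'
  list:
    - 'key'
    - 'password'
    - 'secret'
```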

Report to multiple backends

To add additional backends by using Helm or the Operator, use the following examples.

Note that only the agent data is forwarded to the additional backends. The Kubernetes sensor and metrics data are not forwarded to multiple backends.

  • Using Helm:

       helm install instana-agent \
       --repo https://agents.instana.io/helm \
       --namespace instana-agent \
       --create-namespace \
       --set agent.key=my-key \
       --set agent.endpointHost='<your_host_agent_endpoint>' \
       --set agent.endpointPort=443 \
       --set cluster.name='<your_cluster_name>' \
       --set zone.name='zone-name' \
       --set "agent.additionalBackends[0].endpointHost=<your_host_agent_endpoint>" \
       --set "agent.additionalBackends[0].key=<your_agent_key>" \
       --set "agent.additionalBackends[0].endpointPort=443" \
       --set "agent.additionalBackends[1].endpointHost=<your_other_host_agent_endpoint>" \
       --set "agent.additionalBackends[1].key=<your_other_agent_key>" \
       --set "agent.additionalBackends[1].endpointPort=443" \
       instana-agent
    
  • Using the Operator:

      apiVersion: instana.io/v1
      kind: InstanaAgent
      metadata:
        name: instana-agent
        namespace: instana-agent
      spec:
        zone:
          name: zone-name
        cluster:
          name: cluster-name
        agent:
          key: <your_agent_key>
          endpointHost: <your_host_agent_endpoint>
          endpointPort: "443"
          env: {}
          additionalBackends:
            - endpointHost: <your_host_agent_endpoint>
              key: <your_agent_key>
              endpointPort: "443"
            - endpointHost: <your_other_host_agent_endpoint>
              key: <your_other_agent_key>
              endpointPort: "443"
          configuration_yaml: |
    

To enable reporting to multiple backends from a Kubernetes agent, see the Docker agent configuration.

Instana agent security considerations

Because the Instana agent needs to connect to application pods and to list and open its own ports on bridge networks on the node that it is deployed to, it requires host network access and host-level process ID lookup for infrastructure correlation. This is equivalent to the permissions granted in Linux and Unix host environments, and requires the following flags in the DaemonSet deployment:

  • privileged: true provides full access to /proc without overlay, and allows the agent to change UID and GID for JVM attachment and to access application namespaces.
  • hostPID: true provides host-level PIDs in /proc, which are required for infrastructure correlation.
  • hostNetwork: true provides access to host-level and bridge network interfaces.

In addition, when you deploy to OpenShift, the Security Context Constraint (SCC) is set to privileged to grant the aforementioned permissions.

The following cluster role rules are required by the Kubernetes sensor to detect all resources and applications that run in the Kubernetes cluster and to make sure that the cluster is monitored correctly:

rules:
  - nonResourceURLs:
      - "/version"
      - "/healthz"
    verbs: ["get"]
  - apiGroups: ["batch"]
    resources:
      - "jobs"
      - "cronjobs"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources:
      - "deployments"
      - "replicasets"
      - "ingresses"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - "deployments"
      - "replicasets"
      - "daemonsets"
      - "statefulsets"
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "events"
      - "services"
      - "endpoints"
      - "nodes"
      - "pods"
      - "replicationcontrollers"
      - "componentstatuses"
      - "resourcequotas"
      - "persistentvolumes"
      - "persistentvolumeclaims"
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - "endpoints"
    verbs: ["create", "update", "patch"]
  - apiGroups: ["networking.k8s.io"]
    resources:
      - "ingresses"
    verbs: ["get", "list", "watch"]

Example YAML file

A typical instana-agent.yaml file can be downloaded from the Instana public GitHub repository. It is rendered from the Helm chart with typical defaults. Individual properties are defined as dangling anchors, as laid out in the next step.

Download this file and view the latest changelog.

Troubleshooting agent deployment

If the agent installation is not successful at first, you can check the log messages and troubleshooting tips. If this troubleshooting section does not answer your questions, contact the IBM Instana support team with information about your experience so that we can help you and update our documentation accordingly.

For troubleshooting information that is general to all host agents, see Managing host agents / Troubleshooting.

If reinstalling the agent fails with the following message, delete the agent's cluster level objects before reinstalling the agent:

installation Instana Agent failed: rendered manifests contain a resource that already exists. Unable to continue with install: ...

For more information about deleting the agent's cluster level objects, see Uninstalling the agent.