Installing the host agent on OpenShift
- Installation methods
- Install by using the OpenShift command line
- Install by using the Helm Chart
- Install by using the Operator
- Customizing
- FAQ
Installation methods
The installation of the Instana Agent on OpenShift is similar to Kubernetes, but some extra security steps are required. There are several methods to install the instana-agent onto an OpenShift cluster: via YAML file (DaemonSet), Helm chart, or Operator.
Current versions of installation methods
New versions of the YAML file and the Operator are released fairly frequently. To keep up with the latest fixes, improvements, and new features, ensure that you are running the latest version of either the YAML file or the Operator. The current versions can be found in the changelogs linked in the sections below.
Install by using the OpenShift command line
The Instana Agent can be installed into OpenShift by creating an instana-agent.yaml manifest and applying it with the oc command-line tool, as shown after the manifest. A typical instana-agent.yaml file looks like the following:
For the latest changelog, see Changelog for OpenShift YAML file.
---
apiVersion: v1
kind: Namespace
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.29
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.29
---
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.29
type: Opaque
data:
  key: *agentKey # Replace this with your Instana agent key, encoded in base64
  downloadKey: ''
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.29
data:
  cluster_name: *clusterName
  configuration.yaml: |
    # Manual a-priori configuration. Configuration will only be used when the sensor
    # is actually installed by the agent.
    # The commented-out example values represent example configuration and are not
    # necessarily defaults. Defaults are usually 'absent' or mentioned separately.
    # Changes are hot reloaded unless otherwise mentioned.
    # It is possible to create files called 'configuration-abc.yaml' which are
    # merged with this file in file system order. So 'configuration-cde.yaml' comes
    # after 'configuration-abc.yaml'. Only nested structures are merged, values are
    # overwritten by subsequent configurations.
    # Secrets
    # To filter sensitive data from collection by the agent, all sensors respect
    # the following secrets configuration. If a key collected by a sensor matches
    # an entry from the list, the value is redacted.
    #com.instana.secrets:
    #  matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
    #  list:
    #    - 'key'
    #    - 'password'
    #    - 'secret'
    # Host
    #com.instana.plugin.host:
    #  tags:
    #    - 'dev'
    #    - 'app1'
    # Hardware & Zone
    #com.instana.plugin.generic.hardware:
    #  enabled: true # disabled by default
    #  availability-zone: 'zone'
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.29
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: instana-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: instana-agent
        app.kubernetes.io/version: 1.2.29
        instana/agent-mode: "APM"
      annotations: {}
    spec:
      serviceAccountName: instana-agent
      hostNetwork: true
      hostPID: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: instana-agent
          image: "instana/agent:latest"
          imagePullPolicy: Always
          env:
            - name: INSTANA_AGENT_LEADER_ELECTOR_PORT
              value: "42655"
            - name: INSTANA_ZONE
              value: *zoneName
            - name: INSTANA_KUBERNETES_CLUSTER_NAME
              valueFrom:
                configMapKeyRef:
                  name: instana-agent
                  key: cluster_name
            - name: INSTANA_AGENT_ENDPOINT
              value: *endpointHost
            - name: INSTANA_AGENT_ENDPOINT_PORT
              value: *endpointPort
            - name: INSTANA_AGENT_KEY
              valueFrom:
                secretKeyRef:
                  name: instana-agent
                  key: key
            - name: INSTANA_DOWNLOAD_KEY
              valueFrom:
                secretKeyRef:
                  name: instana-agent
                  key: downloadKey
                  optional: true
            - name: INSTANA_MVN_REPOSITORY_URL
              value: "https://artifact-public.instana.io"
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          securityContext:
            privileged: true
          volumeMounts:
            - name: dev
              mountPath: /dev
            - name: run
              mountPath: /run
            - name: var-run
              mountPath: /var/run
            - name: sys
              mountPath: /sys
            - name: var-log
              mountPath: /var/log
            - name: var-lib
              mountPath: /var/lib
            - name: var-data
              mountPath: /var/data
            - name: machine-id
              mountPath: /etc/machine-id
            - name: configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
          livenessProbe:
            httpGet:
              host: 127.0.0.1 # localhost because Pod has hostNetwork=true
              path: /status
              port: 42699
            initialDelaySeconds: 300 # startupProbe isn't available before K8s 1.16
            timeoutSeconds: 3
            periodSeconds: 10
            failureThreshold: 3
          resources:
            requests:
              memory: "512Mi"
              cpu: 0.5
            limits:
              memory: "768Mi"
              cpu: 1.5
          ports:
            - containerPort: 42699
        - name: leader-elector
          image: "instana/leader-elector:0.5.10"
          env:
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command:
            - "/busybox/sh"
            - "-c"
            - "sleep 12 && /app/server --election=instana --http=localhost:42655 --id=$(INSTANA_AGENT_POD_NAME)"
          resources:
            requests:
              cpu: 0.1
              memory: "64Mi"
          livenessProbe:
            httpGet: # Leader elector /health endpoint expects version 0.5.8 minimum, otherwise always returns 200 OK
              host: 127.0.0.1 # localhost because Pod has hostNetwork=true
              path: /health
              port: 42655
            initialDelaySeconds: 30
            timeoutSeconds: 3
            periodSeconds: 3
            failureThreshold: 3
          ports:
            - containerPort: 42655
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: run
          hostPath:
            path: /run
        - name: var-run
          hostPath:
            path: /var/run
        - name: sys
          hostPath:
            path: /sys
        - name: var-log
          hostPath:
            path: /var/log
        - name: var-lib
          hostPath:
            path: /var/lib
        - name: var-data
          hostPath:
            path: /var/data
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        - name: configuration
          configMap:
            name: instana-agent
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.29
rules:
  - nonResourceURLs:
      - "/version"
      - "/healthz"
    verbs: ["get"]
    apiGroups: []
    resources: []
  - apiGroups: ["batch"]
    resources:
      - "jobs"
      - "cronjobs"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources:
      - "deployments"
      - "replicasets"
      - "ingresses"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - "deployments"
      - "replicasets"
      - "daemonsets"
      - "statefulsets"
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "events"
      - "services"
      - "endpoints"
      - "nodes"
      - "pods"
      - "replicationcontrollers"
      - "componentstatuses"
      - "resourcequotas"
      - "persistentvolumes"
      - "persistentvolumeclaims"
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - "endpoints"
    verbs: ["create", "update", "patch"]
  - apiGroups: ["networking.k8s.io"]
    resources:
      - "ingresses"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps.openshift.io"]
    resources:
      - "deploymentconfigs"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["security.openshift.io"]
    resourceNames: ["privileged"]
    resources: ["securitycontextconstraints"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.29
subjects:
  - kind: ServiceAccount
    name: instana-agent
    namespace: instana-agent
roleRef:
  kind: ClusterRole
  name: instana-agent
  apiGroup: rbac.authorization.k8s.io
In the YAML file there are the following dangling anchors, which need to be replaced with actual values:
- *agentKey: Your Instana agent key for the cluster to which the generated data should be sent, encoded in base64: echo YOUR_INSTANA_AGENT_KEY | base64
- *endpointHost: The IP address or hostname associated with the installation.
- *endpointPort: The network port associated with the installation.
- *clusterName: The name to be assigned to your cluster in Instana.
- *zoneName: The agent zone to associate with the nodes of your cluster.
For additional details relating to the agent endpoints, refer to the Host Agent Configuration.
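After replacing the placeholders, apply the manifest with oc. A minimal sketch, assuming you saved the manifest as instana-agent.yaml in the current directory:

oc apply -f instana-agent.yaml

Re-running the same command after editing the file updates the deployed resources in place.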
Install by using the Helm Chart
The Instana Agent Helm chart version 1.2.0 and above supports OpenShift 4.x.
- Sign in to Instana, click More -> Agents -> Installing Instana Agents -> Kubernetes. From this page, you'll need your host agent endpoint and your agent key.
- From the Technology list, select Helm chart.
- Enter the cluster name and (optionally) the agent zone. The cluster name (INSTANA_KUBERNETES_CLUSTER_NAME) is the customized name of the cluster monitored by this DaemonSet. The agent zone (INSTANA_ZONE) is used to customize the zone grouping displayed on the infrastructure map. It also sets the default name of the cluster. All of the other required parameters are pre-populated.
- Run the following command with Helm 3:
kubectl create namespace instana-agent && \
helm install instana-agent \
  --namespace instana-agent \
  --repo https://agents.instana.io/helm \
  --set agent.key='<your agent key - as described above>' \
  --set agent.endpointHost='<your host agent endpoint - as described above>' \
  --set cluster.name='<your-cluster-name>' \
  --set zone.name='<your-zone-name>' \
  --set openshift=true \
  instana-agent
To configure the installation, specify the values on the command line using the --set flag, or provide a YAML file with your values using the -f flag.
For a detailed list of all the configuration parameters, please see our Instana Helm Chart.
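Instead of passing every value with --set, you can keep them in a file and pass it with -f. A minimal sketch of such a values file (the file name values.yaml is an assumption; the keys mirror the --set flags above):

agent:
  key: <your agent key>
  endpointHost: <your host agent endpoint>
cluster:
  name: <your-cluster-name>
zone:
  name: <your-zone-name>
openshift: true

helm install instana-agent --namespace instana-agent --repo https://agents.instana.io/helm -f values.yaml instana-agent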
Install by using the Operator
The installation of the operator on OpenShift is similar to Kubernetes, but with an additional installation method option and some prerequisites.
There are two ways to install the operator:
- Creating the required resources manually as outlined in Kubernetes
- Using the Operator Lifecycle Manager (OLM)
Perform the prerequisite steps before installing the operator by using one of the options mentioned above.
Prerequisites
You need to set up a project for the Instana agent and configure its permissions. Create the instana-agent project and set the policy permissions to ensure that the instana-agent service account is in the privileged security context:
oc login -u system:admin
oc new-project instana-agent
oc adm policy add-scc-to-user privileged -z instana-agent
Additionally, when installing the operator including the conversion webhook, cert-manager needs to be installed. See the cert-manager docs for how to install and configure it properly. Alternatively, every release includes an instana-agent-operator-no-conversion-webhook.yaml variant for installing the operator without the conversion webhook.
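For reference, cert-manager is typically installed by applying its release manifest; a sketch, with the version left as a placeholder (check the cert-manager docs for the current release):

oc apply -f https://github.com/cert-manager/cert-manager/releases/download/<version>/cert-manager.yaml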
Install Operator by using OLM
- Install the Instana agent operator from OperatorHub.io, the OpenShift Container Platform, or OKD.
- If you don't already have one, create the target namespace where the Instana agent should be installed. The agent does not need to run in the same namespace as the operator. Most users create a new namespace instana-agent for running the agents.
- Follow Step 4 in the Install Operator Manually section to create the custom resource for the agent and install it; a sketch of such a custom resource is shown after this list.
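A minimal sketch of what the agent custom resource can look like. The apiVersion, kind, and field names below mirror the Helm values used earlier; treat them as assumptions and consult the custom resource definition shipped with your operator release for the authoritative schema:

apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  zone:
    name: <your-zone-name>       # agent zone shown on the infrastructure map
  cluster:
    name: <your-cluster-name>    # cluster name shown in Instana
  agent:
    key: <your agent key>        # your Instana agent key
    endpointHost: <your host agent endpoint>
    endpointPort: "443"

Apply the resource with oc apply -f in the namespace created in the prerequisites.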
Operator configuration
These are the configuration options you can set via the Instana Agent Custom Resource Definition and environment variables.
Customizing
Depending on your OpenShift environment, you might need to do some customizing.
If you cannot pull images from the IBM Cloud Container Registry (icr.io), you need to add two image streams. Open the OpenShift Container Registry, go to the instana-agent namespace, and add the following image streams:

Name: instana-agent
Image: icr.io/instana/agent
The resulting image stream should be: docker-registry.default.svc:5000/instana-agent/instana-agent

Name: leader-elector
Image: icr.io/instana/leader-elector
The resulting image stream should be: docker-registry.default.svc:5000/instana-agent/leader-elector:0.5.10

Use the respective new image streams in the YAML.
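Equivalently, the image streams can be created from the command line with oc import-image; a sketch, assuming the instana-agent project from the prerequisites:

oc import-image instana-agent --from=icr.io/instana/agent --confirm -n instana-agent
oc import-image leader-elector:0.5.10 --from=icr.io/instana/leader-elector:0.5.10 --confirm -n instana-agent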
With the node-selector, you can specify where the instana-agent DaemonSet should be deployed. Note that every worker host should have an agent installed. If you configure the node-selector, check whether there are any conflicts with the project nodeSelector and the nodeSelector defined in instana-agent.yaml.
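As an illustration, a node selector goes into the pod template of the DaemonSet. A minimal sketch; the node-role.kubernetes.io/worker label is an assumption (verify the labels on your nodes with oc get nodes --show-labels):

spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""

Remember that any node excluded by the selector is left without an agent and therefore unmonitored.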
By using the ConfigMap, you can set up the agent configuration that is necessary for proper monitoring.
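For example, to tag all hosts of the cluster, you could uncomment and adapt the host sensor snippet from the configuration.yaml key in the ConfigMap shown earlier:

com.instana.plugin.host:
  tags:
    - 'dev'
    - 'app1'

As noted in the ConfigMap comments, such changes are hot reloaded by the agent.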
Secrets
See Kubernetes secrets for more details.
FAQ
Why is agent pod scheduling failing on OpenShift 3.9?
In OpenShift 3.9, applying a DaemonSet configuration can result in unscheduled agent pods. If you see error messages similar to the following:
Normal SuccessfulCreate 1m daemonset-controller Created pod: instana-agent-m6lwr
Normal SuccessfulCreate 1m daemonset-controller Created pod: instana-agent-vchgg
Warning FailedDaemonPod 1m daemonset-controller Found failed daemon pod instana-agent/instana-agent-vchgg on node node-1, will try to kill it
Warning FailedDaemonPod 1m daemonset-controller Found failed daemon pod instana-agent/instana-agent-m6lwr on node node-2, will try to kill it
Normal SuccessfulDelete 1m daemonset-controller Deleted pod: instana-agent-m6lwr
Normal SuccessfulDelete 1m daemonset-controller Deleted pod: instana-agent-vchgg
then you're missing an additional annotation that allows the instana-agent namespace to schedule pods:
oc annotate namespace instana-agent openshift.io/node-selector=""
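After adding the annotation, the DaemonSet controller reschedules the agent pods. You can verify the result with a standard status check:

oc get pods -n instana-agent -o wide

Every worker node should then show a running instana-agent pod.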