Installing the Instana backend
To install the Instana backend, create a Core object, create an associated Unit object, and enable any optional features that you need. Before you start these main steps, complete a few preparation steps to create namespaces and secrets.
- Prerequisites
- Preparation steps
- Creating a Core
- Creating a Unit and Tenant
- Enabling optional features
Tip: The Instana kubectl plug-in has a command to generate YAML templates for namespaces and custom resources to help you get started.
kubectl instana template --output-dir <dir>
Prerequisites
- The Instana kubectl plug-in must be installed.
- Required data stores must be up and running. See directions.
Preparation steps
Before you can create Core and Unit, you need to create namespaces and place a set of secrets in those namespaces so that you can pull Core and Unit images from the artifact-public.instana.io registry.
Creating namespaces
Core and Units must be installed in different namespaces. Each Core needs its own namespace. Multiple Units that belong to the same Core can be installed in the same namespace.
Namespace names can be freely chosen. The namespaces instana-core and instana-units are used in this guide.
The Instana Enterprise operator requires the label app.kubernetes.io/name to be present on each namespace, with the namespace name as its value. The Operator adds this label if it is missing, but it makes sense to add the label directly, especially when GitOps is used for deploying.
Create a YAML file with a name such as namespaces.yaml. See the following example:
apiVersion: v1
kind: Namespace
metadata:
  name: instana-core
  labels:
    app.kubernetes.io/name: instana-core
---
apiVersion: v1
kind: Namespace
metadata:
  name: instana-units
  labels:
    app.kubernetes.io/name: instana-units
Then, apply the file by running the following command:
kubectl apply -f namespaces.yaml
Creating image pull secrets
Unless you have your own Docker registry that mirrors artifact-public.instana.io and doesn't require pull secrets, you must create image pull secrets in the two namespaces that you created, in either of the following ways:
- Create the secret directly.

  kubectl create secret docker-registry instana-registry \
    --namespace=<namespace> \
    --docker-username=_ \
    --docker-password=<agent_key> \
    --docker-server=artifact-public.instana.io

  - Replace <namespace> with the name of the Core's namespace or Units' namespace that you just created.
  - Replace <agent_key> with your agent key.
- Create the YAML for the secret without applying it. The secret definition is printed instead of being created.

  kubectl create secret docker-registry instana-registry \
    --namespace=<namespace> \
    --docker-username=_ \
    --docker-password=<agent_key> \
    --docker-server=artifact-public.instana.io \
    --dry-run=client \
    --output=yaml

  Then, create the secret.

  kubectl create -f <secret-file-name.yaml> --namespace <namespace>

  - Replace <secret-file-name.yaml> with the YAML file name.
  - Replace <namespace> with the name of the Core's namespace or Units' namespace that you just created.
Downloading the license file
Instana requires a license based on your SalesKey for activation. To obtain this license file with the Instana kubectl plug-in, run the following command:
kubectl instana license download --sales-key <SalesKey>
Alternatively, if you need to manually download the license, run the following command:
curl "https://instana.io/onprem/license/download/v2/allValid?salesId=<your-SalesKey>" -o license.json
This license is part of the config.yaml file for Units, which is generated later.
Creating secrets
Secret values are not configured in the Core and Unit resources themselves; they must go into Kubernetes secrets.
For TLS certificates, a secret named instana-tls must be created in the Core's namespace.
For each Core and each Unit resource, a secret with the corresponding name must be created in the respective namespace. The secret must contain a config.yaml file, the structure of which resembles the CoreSpec or UnitSpec, respectively, holding the credentials.
Secret instana-tls
The secret instana-tls is required for ingress configuration and must be created in the Core's namespace.
Key | Value
---|---
tls.crt | The TLS certificate for the domain under which Instana is reachable. The CN (Common Name) must match the baseDomain that is configured in the CoreSpec.
tls.key | The TLS key.

This secret must be of type kubernetes.io/tls.
To create the secret instana-tls, complete the following steps:
- Create the san.conf file as shown in the following example:

  [req]
  default_bits = 4096
  prompt = no
  default_md = sha256
  x509_extensions = req_ext
  req_extensions = req_ext
  distinguished_name = dn

  [ dn ]
  C=<two_letter_country_code>
  ST=<site>
  L=<location>
  O=<organization>
  OU=<organizational_unit>
  emailAddress=<email_address>
  CN = <base_domain>

  [ req_ext ]
  subjectAltName = @alt_names

  [ alt_names ]
  DNS.1 = <base_domain>
  DNS.2 = agent-acceptor.<base_domain>
  DNS.3 = otlp-grpc.<base_domain>
  DNS.4 = otlp-http.<base_domain>
  DNS.5 = <unit_name>-<tenant_name>.<base_domain>

  Replace <base_domain> with the domain name where you want to install the Instana backend. Replace <two_letter_country_code>, <site>, <location>, <organization>, <organizational_unit>, <email_address>, <unit_name>, and <tenant_name> with your information.
- Generate a self-signed certificate.

  openssl req -x509 -newkey rsa:2048 -keyout tls.key -out tls.crt -days 365 -nodes -config san.conf

- Create a secret that holds the newly generated certificate.

  kubectl create secret tls instana-tls --namespace instana-core \
    --cert=path/to/tls.crt \
    --key=path/to/tls.key
Core Secret
Create a config.yaml file with the contents as shown in the following example. The structure of the file is exactly the same as that of the CoreSpec, which you create later.
Secret values must go into this Core Secret. Anything else goes into the CoreSpec.
# Diffie-Hellman parameters to use
dhParams: |
  -----BEGIN DH PARAMETERS-----
  <snip/>
  -----END DH PARAMETERS-----
# The repository password for accessing the Instana agent repository.
# Use the download key that you received from us
repositoryPassword: mydownloadkey
# The sales key you received from us
salesKey: mysaleskey
# Seed for creating crypto tokens. Pick a random 12 char string
tokenSecret: mytokensecret
# Configuration for raw spans storage
storageConfigs:
  rawSpans:
    # Required if using S3 or compatible storage bucket.
    # Credentials should be configured.
    # Not required if IRSA on EKS is used.
    s3Config:
      accessKeyId: ...
      secretAccessKey: ...
    # Required if using Google Cloud Storage.
    # Credentials should be configured.
    # Not required if GKE with workload identity is used.
    gcloudConfig:
      serviceAccountKey: ...
# SAML/OIDC configuration
serviceProviderConfig:
  # Password for the key/cert file
  keyPassword: mykeypass
  # The combined key/cert file
  pem: |
    -----BEGIN RSA PRIVATE KEY-----
    <snip/>
    -----END RSA PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    <snip/>
    -----END CERTIFICATE-----
# Required if a proxy is configured that needs authentication
proxyConfig:
  # Proxy user
  user: myproxyuser
  # Proxy password
  password: myproxypassword
emailConfig:
  # Required if SMTP is used for sending e-mails and authentication is required
  smtpConfig:
    user: mysmtpuser
    password: mysmtppassword
  # Required if Amazon SES is used for sending e-mails.
  # Credentials should be configured.
  # Not required if using IRSA on EKS.
  sesConfig:
    accessKeyId: ...
    secretAccessKey: ...
# Optional: You can add one or more custom CA certificates to the component trust stores
# in case internal systems (such as LDAP or alert receivers) that Instana talks to use a custom CA.
customCACert: |
  -----BEGIN CERTIFICATE-----
  <snip/>
  -----END CERTIFICATE-----
# Add more certificates if you need
# -----BEGIN CERTIFICATE-----
# <snip/>
# -----END CERTIFICATE-----
datastoreConfigs:
  kafkaConfig:
    adminUser: strimzi-kafka-user
    adminPassword: <RETRIEVED_FROM_SECRET>
    consumerUser: strimzi-kafka-user
    consumerPassword: <RETRIEVED_FROM_SECRET>
    producerUser: strimzi-kafka-user
    producerPassword: <RETRIEVED_FROM_SECRET>
  elasticsearchConfig:
    adminUser: elastic
    adminPassword: <RETRIEVED_FROM_SECRET>
    user: elastic
    password: <RETRIEVED_FROM_SECRET>
  postgresConfigs:
    - user: <username in Postgres data store>
      password: <RETRIEVED_FROM_SECRET>
      adminUser: <username in Postgres data store>
      adminPassword: <RETRIEVED_FROM_SECRET>
  cassandraConfigs:
    - user: instana-superuser
      password: <RETRIEVED_FROM_SECRET>
      adminUser: instana-superuser
      adminPassword: <RETRIEVED_FROM_SECRET>
  clickhouseConfigs:
    - user: clickhouse-user
      password: <USER_GENERATED_PASSWORD>
      adminUser: clickhouse-user
      adminPassword: <USER_GENERATED_PASSWORD>
Replace <username in Postgres data store> with the username that is specified in Creating a Postgres data store by using the CNPG Postgres Operator or Creating a Postgres data store by using the Zalando Postgres Operator.
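The tokenSecret field in the Core Secret above must be a random 12-character string. One way to generate such a value is a short Python sketch (any cryptographically secure random generator works; the alphabet choice here is an assumption):

```python
import secrets
import string

# Generate a random 12-character alphanumeric seed for tokenSecret.
# secrets (unlike random) draws from a CSPRNG, which is appropriate
# for a value used to derive crypto tokens.
alphabet = string.ascii_lowercase + string.digits
token_secret = "".join(secrets.choice(alphabet) for _ in range(12))
print(token_secret)
```

Paste the printed value into the tokenSecret field of the Core Secret's config.yaml.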
Notes:

- If you set up Instana data stores by using third-party Kubernetes Operators, you need to copy the datastoreConfigs part of the config.yaml file that is described in the Reviewing the final configuration file with usernames and passwords section to this Core Secret config.yaml file.

- Specify the Diffie-Hellman parameters. To generate Diffie-Hellman parameters, run the following command:

  openssl dhparam -out dhparams.pem 2048
- An encrypted key for signing or validating messages that are exchanged with the IDP must be configured. Unencrypted keys are not accepted.

  - Create the key by running the following command:

    openssl genrsa -aes128 -out key.pem 2048

  - Create the certificate by running the following command:

    openssl req -new -x509 -key key.pem -out cert.pem -days 365

  - Combine the two into a single file by running the following command:

    cat key.pem cert.pem > sp.pem
- You must create a secret with the same name in the Core's namespace. For example, if the Core object's name is instana-core, create the secret by running the following command:

  kubectl create secret generic instana-core --namespace instana-core --from-file=path/to/config.yaml
- You can concatenate multiple certificates in the config.yaml file. See the following example:

  customCACert: |
    -----BEGIN CERTIFICATE-----
    74ZaqWwi/JDwLnpi4HnW7h6OlM39I9qeKv1o9qbGUaXdgL+IkcJB4PVgCgeQKGlZ
    B3ng/iOFe47dTV6Dx3D5v3j7lxuihPJXwcHtRRjSD0GBH0IJeQAL2fK3rk4ldqhI
    FouqgoyjdONrV0YInRdNWzpl2Nxob33B/U4pwdvKVqDzWDk17+tZEdFvaoqzXFgt
    hGfnmDtNiGVSLrJjbH+lwN0JHVeUSZHQ0iTfHOna5f39ConGgwIkVDVsDjfYqAW1
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    1fm2rJVtuImDPnFMjH75d3SwYPyca+4ZLZwnXgMjE7PJFtUe0niEr40wsPuq4i5L
    B6LrAoGBAPl/PnwPBdst4AhbNn8FmxLje/DZWtpmZoyITBDq129KCM4xGjS3FyDY
    7l69VdaiiFDBXHVDQ6SxQ85z69rk45oGgaU0AVzOb+ZCfTocYb6/xcOVFhLc8h6E
    HzdlW/vjyvYij+o5hNAyo+2VV7y8DZ92V0fMaVsxQzcU+6vKy6VRAoGBAPK/Jnqg
    I0MWhP7zgWo4g+9TM67OxeYkXHV1UUEirjH7LQrMhomlJ7yEYUencfY1md/Fssl1
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    3LzEkVtMxHiusHVq7a7q8ZMMNEQpdSkPcJU7AoGBAIdw9gjU9IztBJbSbgpwweWu
    sP8W6C1qyfSEgRB/fEO6ec8EtnRjZFOo9m3xwOI6+Yrsn+fir7EtB41U3x8Oii8K
    2Koji7YJqnWvEMfGQ7CxP3jNRn9EvfNPCaG6rkbpX4zuMN48e0xzQwvx/G3FK/d6
    waqiKpCXFdwStIHUfS3bAoGAUzo5FBmlsJoBtn9fkiIBWV5a3Nkt6IF+yS+JkqzO
    AoIA0ICjJ+DabIUoqtpS9VQ0wcAgCo6T5SMrBWOJi7yVaFgMqfe3Sq5tochSI7DC
    -----END CERTIFICATE-----
You can add one or more custom CA certificates to the component trust stores in case internal systems, such as LDAP or alert receivers, that Instana talks to use a custom CA.
Unit Secret
Create the config-unit.yaml file with the contents as shown in the following example. The structure of the file is exactly the same as that of the UnitSpec, which you create later.
Secret values must go into this Unit Secret. Anything else goes into the UnitSpec.
# The initial user of this tenant unit with admin role, default admin@instana.local.
# Must be a valid e-mail address.
# NOTE:
# This only applies when setting up the tenant unit.
# Changes to this value won't have any effect.
initialAdminUser: myuser@example.com
# The initial admin password.
# NOTE:
# This is only used for the initial tenant unit setup.
# Changes to this value won't have any effect.
initialAdminPassword: mypass
# A list of Instana licenses. Multiple licenses may be specified.
licenses: [ "license1", "license2" ]
# A list of agent keys. Specifying multiple agent keys enables gradually rotating agent keys.
agentKeys:
- myagentkey
# The download key that you received from us (in the license e-mail, this is called initial agent key).
downloadKey: mydownloadkey
Notes:

- To configure the licenses field, use the license file that was downloaded in the Downloading the license file section. The file contains a JSON array, which you can use directly as the value:

  # cat /path/to/license.json
  ["abcdefghijklmnopqrstuvwxyz0123456789"]
- You must create a secret with the same name in the Unit's namespace. For example, if the Unit object's name is tenant0-unit0, create the secret by running the following command:

  kubectl create secret generic tenant0-unit0 --namespace instana-units --from-file=config.yaml=/path/to/config-unit.yaml
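As a sanity check before filling in the licenses field, the downloaded license file should parse as a JSON array of license strings. A minimal sketch (the helper name and file path are illustrative):

```python
import json

def read_licenses(path):
    """Parse a downloaded Instana license file.

    The file is expected to contain a JSON array of license strings,
    which can be used directly as the `licenses` value in the Unit Secret.
    """
    with open(path) as f:
        licenses = json.load(f)
    if not isinstance(licenses, list) or not all(isinstance(x, str) for x in licenses):
        raise ValueError("license file must be a JSON array of strings")
    return licenses
```

If this raises, the download was likely incomplete or the sales key was wrong; repeat the Downloading the license file step.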
Creating a Core
A Core represents shared components and is responsible for configuring data store access; as a result, most of the configuration happens here.
For more information, see API Reference.
A Core custom resource must have the version instana.io/v1beta2 and the kind Core. Create a core.yaml file. Configurations for Core go into the spec section.
apiVersion: instana.io/v1beta2
kind: Core
metadata:
  namespace: instana-core
  name: instana-core
spec:
  ...
Basic configuration
Configure the basic settings as follows:
apiVersion: instana.io/v1beta2
kind: Core
metadata:
  namespace: instana-core
  name: instana-core
spec:
  # The domain under which Instana is reachable
  baseDomain: <instana.example.com>
  # Depending on your cluster setup, you may need to specify an image pull secret.
  imagePullSecrets:
    - name: my-registry-secret
  # This configures an SMTP server for sending e-mails.
  # Alternatively, Amazon SES is supported. Please see API reference for details.
  emailConfig:
    smtpConfig:
      from: smtp@example.com
      host: smtp.example.com
      port: 465
      useSSL: true
CPU/Memory resources
The Operator uses a set of predefined resource profiles that determine the resources that are assigned to the individual component pods. The following profiles are available. By default, medium is used if nothing is configured.

- small
- medium
- large

spec:
  resourceProfile: large
Additionally, you can configure custom CPU and memory resources for each component pod. The following example uses the filler component. Replace filler with the component that you want to configure, and set the CPU and memory requests and limits to the totals that you want to provide.
spec:
  componentConfigs:
    - name: filler
      resources:
        requests:
          cpu: 2.5
          memory: 5000Mi
        limits:
          cpu: 4
          memory: 20000Mi
Agent Acceptor
The acceptor is the endpoint that Instana agents must reach to deliver traces and metrics to the Instana backend. The acceptor is usually a subdomain of the baseDomain that is configured in the Basic configuration section.
spec:
  agentAcceptorConfig:
    host: ingress.<instana.example.com>
    port: 443
Raw spans storage
You must use one of the following options to store raw spans data:
- S3 (or compatible)
- Google Cloud Storage (GCS)
- Azure Filesystem
- Filesystem
S3 (or compatible)
The S3 endpoint differs depending on the region.
To use S3 (or compatible) for storing raw spans data, configure Core as shown in the following example:
spec:
  storageConfigs:
    rawSpans:
      s3Config:
        # Endpoint address of the object storage.
        # Doesn't usually need to be set for S3.
        # S3 Endpoint Ref: https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_region
        endpoint:
        # Region.
        region: eu-central-1
        # Bucket name.
        bucket: mybucket
        # Prefix for the storage bucket.
        prefix: myspans
        # Storage class.
        storageClass: Standard
        # Bucket name for long-term storage.
        bucketLongTerm: mybucket
        # Prefix for the long-term storage bucket.
        prefixLongTerm: myspans-longterm
        # Storage class for objects that are written to the long-term bucket.
        storageClassLongTerm: Standard
  # If using IRSA, an appropriate IAM role should be provisioned,
  # including an IAM policy with sufficient privileges.
  serviceAccountAnnotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
The accessKeyId and secretAccessKey must go into the Core Secret as specified in the section Core Secret. Alternatively, IAM roles for service accounts (IRSA) are supported.
Google Cloud Storage
To use GCS for storing raw spans data, configure Core as follows:
spec:
  storageConfigs:
    rawSpans:
      gcloudConfig:
        # Endpoint address. Doesn't usually need to be set.
        endpoint:
        # Region.
        region: eu-central-1
        # Bucket name.
        bucket: mybucket
        # Prefix for the storage bucket.
        prefix: myspans
        # Storage class.
        storageClass: Standard
        # Bucket name for long-term storage.
        bucketLongTerm: mybucket
        # Prefix for the long-term storage bucket.
        prefixLongTerm: myspans-longterm
        # Storage class for objects that are written to the long-term bucket.
        storageClassLongTerm: Standard
  # If using Workload Identity
  serviceAccountAnnotations:
    iam.gke.io/gcp-service-account: rawspans@myproject.iam.gserviceaccount.com
The serviceAccountKey must go into the Core Secret as specified in the section Core Secret. Alternatively, Workload Identity is supported.
Azure Filesystem
Ensure that an Azure storage account is provisioned with access to the cluster. The Azure storage account key is required to provision the persistent volume.
To use Azure Filesystem for storing raw spans data, complete the following steps:
- Create a secret with the storage account name and key:

  kubectl create secret generic storage-account \
    --from-literal=azurestorageaccountname=<storage_account_name> \
    --from-literal=azurestorageaccountkey=<storage_account_key> \
    -n instana-core
- Create a persistent volume by creating pv.yaml:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: my-nfs-volume
  spec:
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteMany
    azureFile:
      secretName: storage-account
      shareName: <storage_account_name>
      readOnly: false
    persistentVolumeReclaimPolicy: Retain
- Configure Core as shown in the following example:

  spec:
    storageConfigs:
      rawSpans:
        pvcConfig:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 100Gi
          volumeName: my-nfs-volume
Filesystem
To use Filesystem for storing raw spans data, configure Core as follows:
spec:
  storageConfigs:
    rawSpans:
      pvcConfig:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 100Gi
        volumeName: my-nfs-volume
        storageClassName: ""
You must configure pvcConfig, which is a PersistentVolumeClaimSpec. The volume must have the access mode ReadWriteMany.
Data stores
You must configure the following data stores in the spec section of Core:
- cassandra
- postgres
- clickhouse
- elasticsearch
- kafka
Notes:
- As a minimum, addresses must be configured for each data store. See the following example:

  spec:
    datastoreConfigs:
      cassandraConfigs:
        - hosts:
            - <IP address or hostname>
      clickhouseConfigs:
        - hosts:
            - <IP address or hostname>
      postgresConfigs:
        - hosts:
            - <IP address or hostname>
      elasticsearchConfig:
        hosts:
          - <IP address or hostname>
      kafkaConfig:
        hosts:
          - <IP address or hostname>

  Replace <IP address or hostname> with the IP address or hostname of the data store.
- If data stores are installed in the same cluster by using third-party Operators, you can configure datastoreConfigs to connect to the data stores by using in-cluster DNS hostnames as shown in the following configuration example:

  spec:
    datastoreConfigs:
      cassandraConfigs:
        - authEnabled: true
          datacenter: cassandra
          hosts:
            - <IP address or hostname>
      clickhouseConfigs:
        - authEnabled: true
          clusterName: local
          hosts:
            - <IP address or hostname>
            - <IP address or hostname>
      elasticsearchConfig:
        authEnabled: true
        clusterName: instana
        hosts:
          - <IP address or hostname>
      kafkaConfig:
        saslMechanism: SCRAM-SHA-512
        authEnabled: true
        hosts:
          - <IP address or hostname>
      postgresConfigs:
        - authEnabled: true
          hosts:
            - <IP address or hostname>
- Ensure that the correct Cassandra data center name (default: cassandra) is configured in the cassandraConfigs section of the Core spec through the datacenter attribute.

- For Power architecture: Ensure that the correct Cassandra data center name (default: datacenter1) is configured in the cassandraConfigs section of the Core spec through the datacenter attribute.

- ClickHouse (default: local) and Elasticsearch (default: onprem_onprem) accept a cluster name through the clusterName attribute.

  Data stores allow a tcp port to be configured. Additionally, an http port can also be configured for clickhouse and elasticsearch.

  spec:
    datastoreConfigs:
      clickhouseConfigs:
        - clusterName: local
          hosts:
            - <IP address or hostname>
          ports:
            - name: tcp
              port: 9000
            - name: http
              port: 8123
If no ports are configured, the following default values apply:
Data store | Default ports
---|---
cassandra | tcp=9042
postgres | tcp=5432
clickhouse | tcp=9000, http=8123
elasticsearch | tcp=9300, http=9200
kafka | tcp=9092
Overwriting data retention defaults
Overwriting the default retention settings is optional and should be done only deliberately. These retention values are configured as properties in the CoreSpec.
Infrastructure metrics retention settings
The following retention properties are for the metric rollup tables, and the listed values are the defaults in seconds. A value of zero tells the system to not drop rollups of this time span. A zero value for smaller rollups can cause the disks to quickly fill up.
kind: Core
metadata:
  name: instana-core
  namespace: instana-core
spec:
  properties:
    - name: retention.metrics.rollup5
      value: "86400"
    - name: retention.metrics.rollup60
      value: "2678400"
    - name: retention.metrics.rollup300
      value: "8035200"
    - name: retention.metrics.rollup3600
      value: "34214400"
  ...
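As a quick sanity check on the default values, the retention periods above convert from seconds to days as in the following sketch:

```python
# Default rollup retention values from the CoreSpec, in seconds.
defaults = {
    "retention.metrics.rollup5": 86400,       # 5-second rollups
    "retention.metrics.rollup60": 2678400,    # 1-minute rollups
    "retention.metrics.rollup300": 8035200,   # 5-minute rollups
    "retention.metrics.rollup3600": 34214400, # 1-hour rollups
}

SECONDS_PER_DAY = 86400
for name, seconds in defaults.items():
    print(f"{name}: {seconds // SECONDS_PER_DAY} days")
# rollup5 keeps 1 day, rollup60 keeps 31 days,
# rollup300 keeps 93 days, rollup3600 keeps 396 days.
```

The finer the rollup, the shorter its retention, which is why setting a small rollup's retention to zero (never drop) fills disks quickly.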
Chart granularity is determined only by the size of the selected time range, so changing the metric data retention doesn't affect the granularity of metric data that is shown in Instana dashboards. These metric retention properties are listed for information only and should not be changed.
Application retention settings
Application perspectives and end-user monitoring (EUM) share the short-term data retention property, with the default of 7 days. Beyond the retention period of short-term data, the long-term data is still kept for 13 months and is not configurable.
Only values greater than or equal to 7 days are valid. Values that are less than 7 days lead to a failure of the entire system.
kind: Core
metadata:
  name: instana-core
  namespace: instana-core
spec:
  properties:
    - name: config.appdata.shortterm.retention.days
      value: "7"
  ...
See the following impact on application perspectives data:
Examples:

- config.appdata.shortterm.retention.days is changed from 7 days to 14 days: Old data is deleted once it is older than 7 days, and new data is deleted once it is older than 14 days.
- config.appdata.shortterm.retention.days is changed from 14 days to 7 days: Old data is deleted once it is older than 14 days, and new data is deleted once it is older than 7 days.
See the following impact on end-user monitoring data:
Examples:

- config.appdata.shortterm.retention.days is changed from 7 days to 14 days: Old and new data is deleted once it is 14 days old.
- config.appdata.shortterm.retention.days is changed from 14 days to 7 days: Old and new data is deleted once it is 7 days old.
Synthetic monitoring retention settings
Synthetic monitoring has a separate data retention property with a default of 60 days. Retention of both Synthetic test results and Synthetic test result details is controlled by this setting.
kind: Core
metadata:
  name: instana-core
  namespace: instana-core
spec:
  properties:
    - name: config.synthetics.retention.days
      value: "60"
  ...
See the following impact on Synthetic monitoring data:
Examples:

- config.synthetics.retention.days is changed from 60 days to 90 days: Data stored before the configuration change is deleted when it is older than 60 days, and data stored after the configuration change is deleted when it is older than 90 days.
- config.synthetics.retention.days is changed from 60 days to 7 days: Data stored before the configuration change is deleted when it is older than 60 days, and data stored after the configuration change is deleted once it is older than 7 days.
Applying Core configurations
After your configuration file for Core is ready, apply it as follows:
kubectl apply -f path/to/core.yaml
Check whether all pods are running before proceeding as follows:
kubectl get pods -n instana-core
Creating a Unit and Tenant
Creating a Unit and a Tenant is straightforward.
To create a Unit and Tenant, create a Unit YAML file such as unit.yaml as shown in the following example:
apiVersion: instana.io/v1beta2
kind: Unit
metadata:
  namespace: instana-units
  name: tenant0-unit0
spec:
  # Must refer to the name of the associated Core object that was created previously
  coreName: instana-core
  # Must refer to the namespace of the associated Core object that was created previously
  coreNamespace: instana-core
  # The name of the tenant.
  # Tenant name must match the regular expression pattern ^[a-z][a-z0-9]*$:
  # it must begin with a lowercase alphabetical character, can consist only of
  # lowercase alphanumeric characters, and must not exceed 15 characters.
  tenantName: tenant0
  # The name of the unit within the tenant
  unitName: unit0
  # The same rules apply as for Cores. May be omitted. Default is 'medium'.
  resourceProfile: large
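The tenant-name rules in the comments above amount to a single regular-expression check plus a length cap. A small sketch for validating candidate names before applying the Unit (the function name is illustrative):

```python
import re

# ^[a-z][a-z0-9]*$ : lowercase letter first, lowercase alphanumerics after.
TENANT_NAME_RE = re.compile(r"^[a-z][a-z0-9]*$")

def is_valid_tenant_name(name: str) -> bool:
    """Check a tenant name against the pattern and the 15-character limit."""
    return len(name) <= 15 and TENANT_NAME_RE.fullmatch(name) is not None

print(is_valid_tenant_name("tenant0"))  # True
print(is_valid_tenant_name("Tenant0"))  # False: uppercase not allowed
print(is_valid_tenant_name("0tenant"))  # False: must begin with a letter
```

Validating the name up front avoids a rejected custom resource when you apply unit.yaml.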
After your configuration is ready, apply your settings as follows:
kubectl apply -f path/to/unit.yaml
Check whether all pods are running before proceeding as follows:
kubectl get pods -n instana-units
Enabling optional features
Some features are not enabled in a self-hosted backend by default. You can enable such features by using additional configuration options.
For more information about the optional features and how to enable them, see Enabling optional features.