Installing the Instana Backend
To configure the Instana backend, you complete several main steps: create a Core object, create an associated Unit object, and enable optional features. Before you start the main steps, you complete several preparation steps to create namespaces and secrets.
- Prerequisites
- Preparation steps
- Creating a Core
- Creating a Unit
- Optional Features
Tip: The kubectl plug-in has a command to generate YAML templates for namespaces and custom resources to help you get started.
kubectl instana template --output-dir <dir>
Prerequisites
- The Instana kubectl plug-in must be installed.
- The required datastores must be up and running. See the directions.
Preparation steps
Before you can create the Core and Unit, you need to create namespaces and place image pull secrets in them so that the Core and Unit images can be pulled from the artifact-public.instana.io registry.
Creating namespaces
Core and Units must be installed in different namespaces. Each Core needs its own namespace. Multiple Units that belong to the same Core can be installed in the same namespace.
Namespace names can be freely chosen. You will use instana-core and instana-units in this guide.
The Instana operator requires the label app.kubernetes.io/name to be present on each namespace, with the namespace name as its value. The operator adds this label if it is missing, but it makes sense to set it directly, especially when deployments are managed through GitOps.
apiVersion: v1
kind: Namespace
metadata:
name: instana-core
labels:
app.kubernetes.io/name: instana-core
---
apiVersion: v1
kind: Namespace
metadata:
name: instana-units
labels:
app.kubernetes.io/name: instana-units
Save this to a file, such as namespaces.yaml, and apply it.
kubectl apply -f namespaces.yaml
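You can verify that the namespaces exist and carry the expected labels as follows:
kubectl get namespace instana-core instana-units --show-labels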
Image pull secrets
Unless you have your own Docker registry that mirrors artifact-public.instana.io and doesn't require pull secrets, you need to create image pull secrets in the two namespaces that you created:
# Create the secret
kubectl create secret docker-registry instana-registry \
--namespace=<namespace> \
--docker-username=_ \
--docker-password=<agent_key> \
--docker-server=artifact-public.instana.io
# Create the YAML for the secret without applying
kubectl create secret docker-registry instana-registry \
--namespace=<namespace> \
--docker-username=_ \
--docker-password=<agent_key> \
--docker-server=artifact-public.instana.io \
--dry-run=client \
--output=yaml
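Because the same secret is needed in both namespaces, a small shell loop avoids repeating the command (a sketch; substitute your real agent key for <agent_key>):
# Create the pull secret in both namespaces
for ns in instana-core instana-units; do
  kubectl create secret docker-registry instana-registry \
    --namespace="$ns" \
    --docker-username=_ \
    --docker-password=<agent_key> \
    --docker-server=artifact-public.instana.io
done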
Downloading the license file
Instana requires a license, based on your SalesKey, for activation. The license file can be obtained by using the kubectl plug-in:
kubectl instana license download --sales-key <SalesKey>
Alternatively, if you need to download the license manually, you can run the following command:
curl "https://instana.io/onprem/license/download/v2/allValid?salesId=<your-SalesKey>" -o license.json
This license key is part of the config.yaml file for Units, which will be generated later.
Creating secrets
Secret values are not configured in the Core and Unit resources; they must go into Kubernetes secrets.
For TLS certificates, a secret named instana-tls must be created in the Core namespace.
For each Core and each Unit resource, a secret with the corresponding name must be created in the respective namespace. That secret must contain a config.yaml file, whose structure resembles the CoreSpec or UnitSpec, respectively, with credentials added.
Secret instana-tls
The instana-tls secret is required for ingress configuration, and must be created in the Core namespace.
Key | Value |
---|---|
tls.crt | The TLS certificate for the domain under which Instana is reachable. The CN (Common Name) must match the baseDomain that is configured in the CoreSpec. |
tls.key | The TLS key. |
Note: This secret must be of type kubernetes.io/tls.
See the following example:
# Generate a self-signed certificate
openssl req -x509 -newkey rsa:2048 -keyout tls.key -out tls.crt -days 365 -nodes -subj "/CN=<FQDN-Of-Your-Instana>"
# Create a secret holding the newly generated certificate
kubectl create secret tls instana-tls --namespace instana-core \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
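You can double-check that the certificate's CN matches the baseDomain you intend to configure, and see its expiry date, with openssl:
# Print the subject (CN) and expiry date of the certificate
openssl x509 -in tls.crt -noout -subject -enddate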
Core Secret
Create a config.yaml file with contents as shown in the following example. The structure of the file is exactly the same as that of the CoreSpec, which you will create later.
Only secret values go into this secret; everything else goes into the CoreSpec.
# Diffie-Hellman parameters to use
dhParams: |
-----BEGIN DH PARAMETERS-----
<snip/>
-----END DH PARAMETERS-----
# The download key you received from us
downloadKey: mydownloadkey
# The sales key you received from us
salesKey: mysaleskey
# Seed for creating crypto tokens. Pick a random 12 char string
tokenSecret: mytokensecret
# Configuration for raw spans storage
storageConfigs:
rawSpans:
# Required if using S3 or compatible storage bucket.
# Credentials should be configured.
# Not required if IRSA on EKS is used.
s3Config:
accessKeyId: ...
secretAccessKey: ...
# Required if using Google Cloud Storage.
# Credentials should be configured.
# Not required if GKE with workload identity is used.
gcloudConfig:
serviceAccountKey: ...
# SAML/OIDC configuration
serviceProviderConfig:
# Password for the key/cert file
keyPassword: mykeypass
# The combined key/cert file
pem: |
-----BEGIN RSA PRIVATE KEY-----
<snip/>
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
<snip/>
-----END CERTIFICATE-----
# Required if a proxy is configured that needs authentication
proxyConfig:
# Proxy user
user: myproxyuser
# Proxy password
password: myproxypassword
emailConfig:
# Required if SMTP is used for sending e-mails and authentication is required
smtpConfig:
user: mysmtpuser
password: mysmtppassword
# Required if Amazon SES is used for sending e-mail.
# Credentials should be configured.
# Not required if using IRSA on EKS.
sesConfig:
accessKeyId: ...
secretAccessKey: ...
# Optional custom CA certificate to be added to component trust stores
# in case internal systems Instana talks to (e.g. LDAP or alert receivers) use a custom CA.
customCACert: |
-----BEGIN CERTIFICATE-----
<snip/>
-----END CERTIFICATE-----
# Add more certificates if needed
# -----BEGIN CERTIFICATE-----
# <snip/>
# -----END CERTIFICATE-----
Diffie-Hellman parameters can be generated by using the following command:
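# Generate 2048-bit Diffie-Hellman parameters; the output file name is an example
openssl dhparam -out dhparams.pem 2048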
An encrypted key for signing/validating messages that are exchanged with the IdP must be configured; unencrypted keys are not accepted.
The following commands can be used to create a combined key/cert file:
# Create the key
openssl genrsa -aes128 -out key.pem 2048
# Create the certificate
openssl req -new -x509 -key key.pem -out cert.pem -days 365
# Combine the two into a single file
cat key.pem cert.pem > sp.pem
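You can verify that the private key in the combined file is encrypted; the PEM headers of an encrypted key contain the word ENCRYPTED:
grep -q ENCRYPTED sp.pem && echo "key is encrypted"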
Assuming the Core object's name is instana-core, create a secret with the same name in the Core's namespace.
kubectl create secret generic instana-core --namespace instana-core --from-file=path/to/config.yaml
Unit Secret
Create a config.yaml file with contents as shown in the following example. The structure of the file is exactly the same as that of the UnitSpec, which you will create later.
Only secret values go into this secret; everything else goes into the UnitSpec.
# The initial user of this tenant unit with admin role, default admin@instana.local.
# Must be a valid e-mail address.
# NOTE:
# This only applies when setting up the tenant unit.
# Changes to this value won't have any effect.
initialAdminUser: myuser@example.com
# The initial admin password.
# NOTE:
# This is only used for the initial tenant unit setup.
# Changes to this value won't have any effect.
initialAdminPassword: mypass
# The Instana license. Can be a plain text string or a JSON array encoded as string. Deprecated. Use 'licenses' instead. Will no longer be supported in release 243.
license: mylicensestring # This would also work: '["mylicensestring"]'
# A list of Instana licenses. Multiple licenses may be specified.
licenses: [ "license1", "license2" ]
# A list of agent keys. Specifying multiple agent keys enables gradually rotating agent keys.
agentKeys:
- myagentkey
Assuming the Unit object's name is tenant0-unit0, create a secret with the same name in the Unit's namespace.
kubectl create secret generic tenant0-unit0 --namespace instana-units --from-file=path/to/config.yaml
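You can confirm that both secrets are in place before proceeding:
kubectl get secret instana-core --namespace instana-core
kubectl get secret tenant0-unit0 --namespace instana-units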
Creating a Core
A Core represents shared components and is responsible for configuring datastore access. As a result, most of the configuration happens here.
See API Reference for details.
A Core custom resource must have version instana.io/v1beta2 and kind Core. Configuration for the Core goes into the spec section.
apiVersion: instana.io/v1beta2
kind: Core
metadata:
namespace: instana-core
name: instana-core
spec:
...
Basic configuration
Start with some basic configuration:
spec:
# The domain under which Instana is reachable
baseDomain: instana.example.com
# Depending on your cluster setup, you may need to specify an image pull secret.
imagePullSecrets:
- name: my-registry-secret
# This configures an SMTP server for sending e-mails.
# Alternatively, Amazon SES is supported. Please see API reference for details.
emailConfig:
smtpConfig:
from: smtp@example.com
host: smtp.example.com
port: 465
useSSL: true
# The operator can install network policies for restricting network traffic
# to what's required only. By default, network policies are disabled.
# Set this to true if you want to enable them. We suggest you leave this turned off
# initially until you've made sure everything works.
enableNetworkPolicies: true
CPU/Memory resources
The operator uses a set of predefined resource profiles that determine the resources assigned to the individual component pods. The following profiles are available; if nothing is configured, medium is used by default.
small
medium
large
xlarge
xxlarge
spec:
resourceProfile: large
Additionally, you can configure custom CPU and memory resources for each component pod. The following example uses the filler component; replace filler with the component that you want to tune, and set the CPU and memory requests and limits to the totals that you want to provide.
spec:
componentConfigs:
- name: filler
resources:
requests:
cpu: 2.5
memory: 5000Mi
limits:
cpu: 4
memory: 20000Mi
Agent Acceptor
The acceptor is the endpoint that agents need to reach to deliver traces and metrics to Instana. This is usually a subdomain of the baseDomain that was configured previously.
spec:
agentAcceptorConfig:
host: ingress.instana.example.com
port: 443
Raw spans storage
Three options are provided for storing raw spans data: S3 (or compatible), Google Cloud Storage, or Filesystem. One of these options must be configured.
S3 (or compatible)
spec:
storageConfigs:
rawSpans:
s3Config:
# Endpoint address of the object storage.
# Doesn't usually need to be set for S3.
endpoint:
# Region.
region: eu-central-1
# Bucket name.
bucket: mybucket
# Prefix for the storage bucket.
prefix: myspans
# Storage class.
storageClass: Standard
# Bucket name for long-term storage.
bucketLongTerm: mybucket
# Prefix for the long-term storage bucket.
prefixLongTerm: myspans-longterm
# Storage class for objects that are written to the long-term bucket.
storageClassLongTerm: Standard
# If using IRSA
serviceAccountAnnotations:
eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
accessKeyId and secretAccessKey must go into the Core secret as described in the Core Secret section. Alternatively, IAM roles for service accounts (IRSA) are supported.
Google Cloud Storage (GCS)
spec:
storageConfigs:
rawSpans:
gcloudConfig:
# Endpoint address. Doesn't usually need to be set for GCS.
endpoint:
# Region.
region: eu-central-1
# Bucket name.
bucket: mybucket
# Prefix for the storage bucket.
prefix: myspans
# Storage class.
storageClass: Standard
# Bucket name for long-term storage.
bucketLongTerm: mybucket
# Prefix for the long-term storage bucket.
prefixLongTerm: myspans-longterm
# Storage class for objects that are written to the long-term bucket.
storageClassLongTerm: Standard
# If using Workload Identity
serviceAccountAnnotations:
iam.gke.io/gcp-service-account: rawspans@myproject.iam.gserviceaccount.com
The serviceAccountKey must go into the Core secret as described in the Core Secret section. Alternatively, Workload Identity is supported.
Filesystem
You must configure pvcConfig, which is a PersistentVolumeClaimSpec. The volume must have the access mode ReadWriteMany.
spec:
storageConfigs:
rawSpans:
pvcConfig:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
volumeName: my-nfs-volume
storageClassName: ""
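The referenced volume must already exist in the cluster and support ReadWriteMany. The following is a minimal sketch of a matching NFS-backed PersistentVolume; the server address and export path are placeholders, and any ReadWriteMany-capable storage works equally well:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-nfs-volume
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  # NFS is one example of a ReadWriteMany-capable backend
  nfs:
    server: <nfs-server-address>
    path: /exports/instana-spans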
Datastores
The following datastores must be configured:
cassandra
postgres
clickhouse
elasticsearch
kafka
Note: Currently, the following restrictions apply:
- Datastore hosts must have a single NIC.
- Datastore addresses must be resolvable to IP addresses.
At a minimum, host addresses must be configured for each datastore.
spec:
...
datastoreConfigs:
cassandraConfigs:
- hosts:
- 10.164.0.2
clickhouseConfigs:
- hosts:
- 10.164.0.2
postgresConfigs:
- hosts:
- 10.164.0.2
elasticsearchConfig:
hosts:
- 10.164.0.2
kafkaConfig:
hosts:
- 10.164.0.2
...
ClickHouse (default: local) and Elasticsearch (default: onprem_onprem) require a cluster name. If your cluster names don't match the defaults, you can configure different names.
All databases require a tcp port to be configured. Additionally, clickhouse and elasticsearch also need an http port.
If no ports are configured, the following defaults apply:
Datastore | Default Ports |
---|---|
cassandra | tcp=9042 |
postgres | tcp=5432 |
clickhouse | tcp=9000, http=8123 |
elasticsearch | tcp=9300, http=9200 |
kafka | tcp=9092 |
spec:
datastoreConfigs:
clickhouseConfigs:
- clusterName: local
hosts:
- 10.164.0.2
ports:
- name: tcp
port: 9000
- name: http
port: 8123
schemas:
- application
- logs
Overwrite data retention defaults
Overwriting the default retention settings is optional, and should be done deliberately. These values are configured as properties in the Core spec.
Infra metrics retention settings
The following retention properties apply to the metric rollup tables; the listed values are the defaults in seconds. A value of zero tells the system never to drop rollups for that time span. Note that a zero value for the smaller rollups can quickly fill up the disks.
kind: Core
metadata:
name: instana-core
namespace: instana-core
spec:
...
properties:
- name: retention.metrics.rollup5
value: "86400"
- name: retention.metrics.rollup60
value: "2678400"
- name: retention.metrics.rollup300
value: "8035200"
- name: retention.metrics.rollup3600
value: "34214400"
...
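For reference, these defaults correspond to 1 day (86400 s) of 5-second rollups, 31 days (2678400 s) of 1-minute rollups, 93 days (8035200 s) of 5-minute rollups, and 396 days (34214400 s) of 1-hour rollups.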
Application retention settings
Application Perspectives and End User Monitoring share the short-term data retention property, which defaults to 7 days. Beyond the short-term retention period, long-term data is still kept for 13 months; this period is not configurable.
Note: Only values greater than or equal to 7 days are valid. Values that are less than 7 days lead to a failure of the entire system.
kind: Core
metadata:
name: instana-core
namespace: instana-core
spec:
...
properties:
- name: config.appdata.shortterm.retention.days
value: "7"
...
See the following impact on Application Perspectives data:
Examples:
- If config.appdata.shortterm.retention.days is changed from 7 days to 14 days, old data is deleted once it is older than 7 days, and new data is deleted once it is older than 14 days.
- If config.appdata.shortterm.retention.days is changed from 14 days to 7 days, old data is deleted once it is older than 14 days, and new data is deleted once it is older than 7 days.
See the following impact on End User Monitoring data:
Examples:
- If config.appdata.shortterm.retention.days is changed from 7 days to 14 days, old and new data is deleted once it is 14 days old.
- If config.appdata.shortterm.retention.days is changed from 14 days to 7 days, old and new data is deleted once it is 7 days old.
Synthetic Monitoring retention settings
Synthetic Monitoring has a separate data retention property, which defaults to 7 days. Retention of both Synthetic test results and Synthetic test result details is controlled by this setting.
kind: Core
metadata:
name: instana-core
namespace: instana-core
spec:
...
properties:
- name: config.synthetics.retention.days
value: "7"
...
See the following impact on Synthetic Monitoring data:
Examples:
- If config.synthetics.retention.days is changed from 7 days to 14 days, data stored before the configuration change is deleted once it is older than 7 days, and data stored after the change is deleted once it is older than 14 days.
- If config.synthetics.retention.days is changed from 14 days to 7 days, data stored before the configuration change is deleted once it is older than 14 days, and data stored after the change is deleted once it is older than 7 days.
Applying your settings
After your configuration file for Core is ready, you can apply it as follows:
# Apply the core.yaml configuration
kubectl apply -f path/to/core.yaml
# Check if all pods are running before proceeding
kubectl get pods -n instana-core
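If you prefer to block until all pods are ready instead of polling, kubectl wait can be used (a sketch; adjust the timeout to your environment, and run the same check against instana-units after creating Units):
kubectl wait pods --all --for=condition=Ready --namespace instana-core --timeout=600s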
Creating a Unit
Configuring a Unit is straightforward.
apiVersion: instana.io/v1beta2
kind: Unit
metadata:
namespace: instana-units
name: tenant0-unit0
spec:
# Must refer to the name of the associated Core object we created above
coreName: instana-core
# Must refer to the namespace of the associated Core object we created above
coreNamespace: instana-core
# The name of the tenant
tenantName: tenant0
# The name of the unit within the tenant
unitName: unit0
# The same rules apply as for Cores. May be omitted. Default is 'medium'
resourceProfile: large
After your configuration is ready, you need to apply your settings as follows:
# Apply the unit.yaml configuration
kubectl apply -f path/to/unit.yaml
# Check if all pods are running before proceeding
kubectl get pods -n instana-units
Optional Features
Logging (beta)
Logging is an addition to the operator setup. It is available as an open beta and must be explicitly enabled by using a feature flag in the Core spec.
Note: Logging uses a new ClickHouse schema (logs). It is highly recommended to use a separate ClickHouse cluster/host.
instana-console datastore host
For this purpose, the second host needs to be set up with a settings.hcl file as follows:
type = "dual-clickhouse"
host_name = "<IP-accessible-from-the-k8s-cluster>"
dir {
metrics = "" // data dir for metrics
traces = "/mnt/traces" // data dir for log data
data = "/mnt/data" // data dir for any other data
logs = "/var/log/instana" // log dir
}
docker_repository {
base_url = "artifact-public.instana.io"
username = "_"
password = "<Your-agent-key>"
}
Operator Core spec
On the operator side, the Core spec needs two ClickHouse entries and a feature flag:
apiVersion: instana.io/v1beta2
kind: Core
metadata:
name: instana-core
namespace: instana-core
spec:
...
clickhouseConfigs:
- clusterName: local
hosts:
- <main DB hostname or IP>
schemas:
- application
- clusterName: local
hosts:
- <second DB hostname or IP>
schemas:
- logs
...
featureFlags:
- name: feature.logging.enabled
enabled: true
...
BeeInstana Metrics Pipeline (beta)
This feature adds a new data pipeline, with additional Instana backend components and a datastore (BeeInstana), to store infrastructure metrics. Based on this data, the following further features can be activated in the product:
- Custom Infrastructure Dashboards
- Infrastructure Explore
Run and configure BeeInstana via instana-console host
To run BeeInstana as an additional Docker container on the instana-console database host, the following feature flag must be added to the settings.hcl file for new installations, or to the /root/.instana/effective-settings.hcl file for existing installations.
...
feature "beeinstana" {
enabled=true
}
...
After that, the change must still be applied to the system with an instana datastores update. After the restart, an additional container named aggregator should appear. This service must be accessible from the Instana backend via port 9998.
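The update is applied on the datastore host as follows:
instana datastores update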
Note: Ensure that there is enough free space in the storage location for the metric data.
The estimated disk size for 1 million metrics can be derived from the following table:
Period in seconds | Retention in days | Disk size in GB |
---|---|---|
10 | 1 | 72.72 |
60 | 30 | 418.88 |
300 | 90 | 289.35 |
3600 | 395 | 191.49 |
Now the BeeInstana aggregator must be made known to the operator in the Core spec, and the following feature flags must be enabled.
apiVersion: instana.io/v1beta2
kind: Core
metadata:
name: instana-core
namespace: instana-core
spec:
...
datastoreConfigs:
beeInstanaConfig:
hosts:
- <main-db-hostname/ip>
...
featureFlags:
- name: feature.beeinstana.infra.metrics.enabled
enabled: true
- name: feature.infra.explore.presentation.enabled
enabled: true
...
Run and configure BeeInstana via Operator
For self-hosted Instana environments on Kubernetes with large metric loads, deploying BeeInstana by using the BeeInstana operator is recommended. See the documentation for using BeeInstana on Kubernetes.
In the Instana operator, the following changes must now be made to the Core spec.
apiVersion: instana.io/v1beta2
kind: Core
metadata:
name: instana-core
namespace: instana-core
spec:
...
datastoreConfigs:
beeInstanaConfig:
hosts:
- <beeinstana-operator-service-name>
clustered: true
...
featureFlags:
- name: feature.beeinstana.infra.metrics.enabled
enabled: true
- name: feature.infra.explore.presentation.enabled
enabled: true
...
Synthetics Monitoring (beta)
This feature adds support for Synthetic Monitoring in the self-hosted operator. Enabling it lets the operator deploy the Synthetic Monitoring components, which a Synthetics Point-of-Presence (PoP) agent can then use to run Synthetic tests against the Instana installation.
Configure external storage for Synthetics Monitoring
Before enabling Synthetic Monitoring, external storage in the form of cloud storage buckets needs to be configured in the storageConfigs section of the Core spec.
Currently, Google Cloud Storage and S3 (or compatible) buckets are supported.
For more information about the storageConfigs spec for Synthetics, see the storageConfigs reference.
Enable Synthetics components using feature flag
To enable the Synthetics components, set the following feature flag under the featureFlags configuration in the Core spec.
featureFlags:
...
- name: feature.synthetics.enabled
enabled: true