Installing the Kubernetes One-Machine Sandbox
Beginning with the release of 6.1.1 fix pack 3, a separate set of Kubernetes Sandbox deployment files is shipped with the installation package. Follow this guide to deploy the Distributed Gateway for Proof of Concept or evaluation purposes.
Before you begin
Prerequisites:
You need to use CNCF Kubernetes for the Sandbox deployment. For now, this has been developed and tested for Kubernetes, NOT OpenShift.
Minimum recommended resources: 16 GB of memory and 8 CPU cores.
This approach assumes you are using a SaaS Instana backend. If your backend is on-premises, additional work is involved. See the Helm Configuration table below.
Key Considerations:
Deployment file defaults were designed for simplicity and a POC mindset.
No TLS to start with. It is recommended that you first get the POC working without TLS, and then follow the instructions for production to apply TLS to your configuration.
Pods are scaled down to a single instance to minimize the required resources. In most POCs, resource availability is limited and the workload demands are not substantial. You can use a one-node cluster approach where the main node and worker node are the same machine.
Logging is set to debug. Without the debug level set, you cannot verify whether traffic is correctly moving through the Distributed Gateway. During a POC, it is recommended to leave debug on.
Pre-configuration:
Get the INSTANA_ENDPOINT_URL and INSTANA_AGENT_KEY from the Instana UI/AWS Lambda configuration. To do this, follow the instructions. IMPORTANT: do not just copy the endpoint from the Instana agent because it WILL NOT work.
Install Helm. It is highly recommended that you use Helm for configuration. Helm is an industry-standard tool that helps simplify the configuration, deployment, and maintenance of the Distributed Gateway deployment.
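As a quick optional check that Helm is available before you continue, you can run the standard commands below. The install script URL is the one published in the Helm documentation; see that documentation for other installation methods.
# Verify that Helm is installed and on the PATH
helm version
# If Helm is not installed, one common way is Helm's official install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh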
Procedure
I. Create a Kubernetes namespace.
Kubernetes namespaces are used to logically group pods for the application. It is recommended to use a separate Kubernetes namespace for Z APM Connect Distributed Gateway for higher security and better organization. You can name the namespace anything, but in the examples below, the namespace name ibm-zapm is used.
Create a namespace by using the following command for your environment.
kubectl create namespace ibm-zapm
To use kubectl, you need to specify a namespace for all commands using the following pattern:
kubectl -n <namespace name>
Tip: To switch Kubernetes to a default namespace, you can use the following command:
kubectl config set-context --current --namespace=<namespace name>
This allows you to omit the -n <namespace name> flag in subsequent kubectl commands.
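As an optional check (standard kubectl commands, using the ibm-zapm name from the example above), you can confirm that the namespace exists and see which namespace your current context defaults to:
# Confirm the namespace was created
kubectl get namespace ibm-zapm
# Show the default namespace of the current context (no output means "default")
kubectl config view --minify | grep namespace: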
II. Extract the tar file.
Extract the downloaded installation package and add all the Z APM Connect DG images to the cluster's image registry.
Download 6.1.1-TIV-INSTANA-FP00007-cluster-amd64.tar.gz or 6.1.1-TIV-INSTANA-FP00007-cluster-s390x.tar.gz from IBM Fix Central for x86 or IBM Fix Central for s390x depending on which platform you are using, and move it to the desired installation location.
Run the following command to extract it to the local repository.
tar -xf 6.1.1-TIV-INSTANA-FP00007-cluster-amd64.tar.gz
or
tar -xf 6.1.1-TIV-INSTANA-FP00007-cluster-s390x.tar.gz
After extraction, the following files are generated in the directory. All images within the images directory need to be added to the cluster's image registry.
├── production
│   ├── helm-deployment
│   │   ├── values.yaml
│   │   ├── ioz-helm-chart-6.1.1-7.tgz
│   ├── manifests-deployment
│   │   ├── instana-exporter.yaml (Instana Use Only)
│   │   ├── kafka.yaml
│   │   ├── redis.yaml
│   │   ├── transaction-processor.yaml
├── sandbox
│   ├── deployZapmImagesAcrossCluster.sh
│   ├── helm-deployment
│   │   ├── values.yaml
│   │   ├── ioz-helm-chart-6.1.1-7.tgz
│   ├── manifests-deployment
│   │   ├── instana-exporter.yaml (Instana Use Only)
│   │   ├── kafka.yaml
│   │   ├── redis.yaml
│   │   ├── transaction-processor.yaml
├── images
│   ├── zapm-instana-exporter+6.1.1-7.tar (Instana Use Only)
│   ├── kafka+3.7.0.tar
│   ├── redis+6.2.14.tar
│   ├── zapm-test+6.1.1-7.tar
│   ├── zapm-transaction-processor+6.1.1-7.tar
├── saveZapmLogs.sh
Note: Kubernetes requires that the images be pulled from a repository. You need to provide a path to each image in a repository during its configuration. If you don't have a repository, use the deployZapmImagesAcrossCluster.sh script. This simplifies the process and removes the need for a separate image repository during PoC setup.
Load each of the images found in the images directory to the local repository by using the script.
Make the script executable:
chmod 777 ./sandbox/deployZapmImagesAcrossCluster.sh
Run the command:
./sandbox/deployZapmImagesAcrossCluster.sh images
The script detects which container engine (docker or podman) you have installed and uses it. When you use a local repository, the images need to be loaded on each node in the cluster, so the script prompts you to load the images to each node. If you are using a one-node cluster, you can answer n (no) when prompted for additional nodes in your cluster.
After you run the script, to see your images in your local repository, you can run:
podman images
or
docker images
Using this approach, the path to your images is now localhost. The default configuration for the sandbox yamls is already set to pull the images from localhost.
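If you prefer not to use the script, a minimal sketch of loading the images by hand with podman follows (docker load works the same way). The tar file names come from the images directory listed above; in a multi-node cluster you would repeat these commands on every node, which is what the script automates for you.
# Load each image archive from the images directory into the local image store
podman load -i images/kafka+3.7.0.tar
podman load -i images/redis+6.2.14.tar
podman load -i images/zapm-transaction-processor+6.1.1-7.tar
podman load -i images/zapm-instana-exporter+6.1.1-7.tar
podman load -i images/zapm-test+6.1.1-7.tar
# Confirm the images are now available locally
podman images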
III. Populate the deployment files.
You can either edit the values file and deploy with Helm (if Helm is installed), or manually edit the yaml manifest files and apply them to the cluster.
Option 1: Helm Configuration:
Locate the values.yaml file.
For the sandbox, you can find this file in the installation package with this path:
/sandbox/helm-deployment/values.yaml
Then, edit the values.yaml file. If you are following the sandbox approach, for your initial deployment, you only need to edit the following fields:
Tips: The fields in the Parameter column represent the location and name of each parameter. The dots (.) indicate the scope of where to find the object. For instance, to locate the parameter statefulsets.transactionProcessor.advertisedHostname, navigate to the following structure (a fuller sketch is shown after the table below):
statefulsets:
transactionProcessor:
advertisedHostname:
| Parameter | Description | Value to set |
|---|---|---|
| statefulsets.transactionProcessor.advertisedHostname | FQDN (Fully Qualified Domain Name) of the master node of the Kubernetes cluster. | Replace with the hostname of your master node. |
| statefulsets.instanaExporter.servers.name | Name of the Instana backend, used for differentiation only (arbitrary but must be lower case). | Provide any name to identify your Instana backend. |
| statefulsets.instanaExporter.servers.hostEndpoint | INSTANA_ENDPOINT_URL for the serverless ingestion for the Instana server. See Instana server for details. | Add the Instana endpoint. Do not use the endpoint from the agent. Follow the pre-configuration instructions to get the endpoint from AWS Lambda. |
| statefulsets.instanaExporter.servers.agentKey | INSTANA_AGENT_KEY for the Instana server. See Instana server for details. | Add the agent key. This can be found in the same location as the endpoint, in the Instana UI. |
| statefulsets.instanaExporter.servers.deployment | Type of Instana server. | If you are using SaaS, replace <saas> with saas. If you are using self-hosted, there are additional security steps; refer to Installing Z APM Connect DG for Proof of Concept evaluation. |
| deployments.kafka.advertisedHostname | FQDN (Fully Qualified Domain Name) of the master node of the Kubernetes cluster. | Replace <localhost> with the hostname of your master node. |
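A minimal sketch of how these parameters sit in values.yaml, with placeholder values. The exact structure (for example, whether servers is a list) and any additional fields come from the shipped sandbox values.yaml, so follow that file's layout; the hostnames, server name, endpoint, and key below are placeholders only.
statefulsets:
  transactionProcessor:
    advertisedHostname: master-node.example.com   # placeholder: FQDN of your master node
  instanaExporter:
    servers:
      name: my-instana-backend                    # arbitrary, must be lower case
      hostEndpoint: <INSTANA_ENDPOINT_URL>        # serverless ingestion endpoint, NOT the agent endpoint
      agentKey: <INSTANA_AGENT_KEY>
      deployment: saas                            # or the self-hosted value described above
deployments:
  kafka:
    advertisedHostname: master-node.example.com   # placeholder: FQDN of your master node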
Option 2: Manifests Configuration
If you do not want to install Helm, you can refer to the documentation for manifests configuration and use the sandbox manifest files. Detailed sandbox instructions are not provided at this time.
IV. Start Z APM Connect Distributed Gateway
You can start Z APM Connect Distributed Gateway (Z APM Connect DG) either by using Helm or using the manual yaml deployments, depending on how you populated the deployment files.
Option 1: Starting with Helm
First time running helm
Change directories to:
/sandbox/helm-deployment
Run helm install:
helm install --namespace ibm-zapm -f values.yaml zapm ./ioz-helm-chart-6.1.1-7.tgz
NOTE: If something goes wrong, correct it and then follow the instructions below to update values.yaml.
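To confirm that the release was created, you can use standard Helm commands with the zapm release name and ibm-zapm namespace from the install command above:
# List releases in the namespace; zapm should show a deployed status
helm list --namespace ibm-zapm
# Show details and any notes for the zapm release
helm status zapm --namespace ibm-zapm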
Check if pods are running
kubectl get pods --namespace ibm-zapm
Tips: If you have configured Kubernetes to use a default namespace as described in step I, there is no need to include --namespace ibm-zapm in every command. You can check the pods without specifying it:
kubectl get pods
Update values.yaml (every time after the first time)
Run this command if you need to update values.yaml and subsequently re-deploy pods.
helm upgrade --namespace ibm-zapm -f values.yaml zapm ./ioz-helm-chart-6.1.1-7.tgz
Option 2: Starting with .yaml deployment files
To start by using the manual .yaml deployments, navigate to the manifests-deployment directory and run the following command to install your deployment.
kubectl apply -f ./
After applying the changes, you can check the status of the components by running the following command.
kubectl get pods
Result
It may take a few minutes for everything to start fully. When the pod status changes to Running, Z APM Connect DG is started. If any pod reports that it is unhealthy or crashing, running the kubectl describe pod <podName> command can give further details on why the pod is not starting properly.
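For example, assuming the ibm-zapm namespace and a pod name copied from the kubectl get pods output, you can inspect a pod that is not starting and review its log output (debug level is on by default in the sandbox):
# Show events and container state for the pod
kubectl describe pod <podName> --namespace ibm-zapm
# Show the pod's log output
kubectl logs <podName> --namespace ibm-zapm
# Follow the log live while traffic flows through the gateway
kubectl logs -f <podName> --namespace ibm-zapm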
Information: Both the production and sandbox folders contain helm and manifest deployment configuration files. The main difference between these two folders is the default configuration values.
The production yamls use defaults that are intended for robust environments and reflect considerations appropriate for a production environment.
The default values for the sandbox aim to simplify the entire deployment process. They assume that you are running a POC or test environment with limited hardware and smaller amounts of traffic.
If you are transitioning from a PoC to a production environment, you can start with the /production/helm-deployment/values.yaml file and reference the values you used from sandbox/helm-deployment/values.yaml.
Tips: You can keep more than one values.yaml file and switch between them by running helm upgrade, as shown below.
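For example, assuming a second file named values-production.yaml (the file name is illustrative), switching the running release to it is a single helm upgrade:
# Re-deploy the zapm release using a different values file
helm upgrade --namespace ibm-zapm -f values-production.yaml zapm ./ioz-helm-chart-6.1.1-7.tgz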