Installing the Kubernetes One-Machine Sandbox
Beginning with the release of 6.1.1 fix pack 3, a separate set of Kubernetes sandbox deployment files is shipped with the installation package. Follow this guide to deploy the Distributed Gateway for proof-of-concept (POC) or evaluation purposes.
Before you begin
Prerequisites:
You need to use CNCF Kubernetes for the sandbox deployment. For now, the sandbox has been developed and tested on Kubernetes only, NOT OpenShift.
Minimum recommended resources: 16 GB of memory and 8 cores.
Key Considerations:
The deployment file defaults are designed for simplicity and a POC mindset.
No TLS to start with. It is recommended that you first get the POC working without TLS, and then follow the production instructions to apply TLS to your configuration.
Pods are scaled down to a single instance to minimize the required resources. In most POCs, resource availability is limited and the workload demands are not substantial. You can use a single-node cluster approach where the master node and worker node are the same machine.
Logging is set to debug. Without the debug level set, you cannot verify whether traffic is moving correctly through the Distributed Gateway. During a POC, it is recommended to leave debug logging on.
Pre-configuration:
Get the connection and AppDynamics Controller information required during installation and initial configuration. See Required configuration information for the AppDynamics solution.
Install Helm. It is highly recommended that you use Helm for configuration. Helm is an industry-standard tool that simplifies the configuration, deployment, and maintenance of the Distributed Gateway deployment.
Procedure
I. Create a Kubernetes namespace.
Kubernetes namespaces are used to logically group pods for the application. It is recommended to use a separate Kubernetes namespace for Z APM Connect Distributed Gateway for higher security and better organization. You can name the namespace anything, but the examples below use the namespace name ibm-zapm.
Create a namespace by using the following command for your environment.
kubectl create namespace ibm-zapm
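For example, you can verify that the namespace exists before you continue:
kubectl get namespace ibm-zapm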
To use kubectl, you need to specify a namespace for all commands by using the following pattern:
kubectl -n <namespace name>
Tip: To make a namespace the default for the current context, you can use the following command:
kubectl config set-context --current --namespace=<namespace name>
This allows you to omit the -n <namespace name> flag in subsequent kubectl commands.
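For example, to confirm which namespace is currently the default for your context, you can run the following command (the output is empty if no default namespace has been set):
kubectl config view --minify --output 'jsonpath={..namespace}'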
II. Extract the tar file.
Extract the downloaded installation package and add all the Z APM Connect DG images to the cluster's image registry.
Download 6.1.1-TIV-ZAPM-FP00004-cluster-amd64.tar.gz from IBM Fix Central.
Run the following command to extract it to the local repository.
tar -xf 6.1.1-TIV-ZAPM-FP00004-cluster-amd64.tar.gz
After extraction, the following files are generated in the directory. All images within the images directory need to be added to the cluster's image registry.
├── production
│   ├── helm-deployment
│   │   ├── values.yaml
│   │   ├── zapm-helm-chart-6.1.1-4.tgz
│   ├── manifests-deployment
│   │   ├── instana-exporter.yaml (Instana Use Only)
│   │   ├── kafka.yaml
│   │   ├── redis.yaml
│   │   ├── transaction-processor.yaml
│   │   ├── ttg.yaml (AppDynamics Use Only)
├── sandbox
│   ├── deployZapmImagesAcrossCluster.sh
│   ├── helm-deployment
│   │   ├── values.yaml
│   │   ├── zapm-helm-chart-6.1.1-4.tgz
│   ├── manifests-deployment
│   │   ├── instana-exporter.yaml (Instana Use Only)
│   │   ├── kafka.yaml
│   │   ├── redis.yaml
│   │   ├── transaction-processor.yaml
│   │   ├── ttg.yaml (AppDynamics Use Only)
├── images
│   ├── zapm-instana-exporter+6.1.1-4.tar (Instana Use Only)
│   ├── kafka+3.5.1.tar
│   ├── redis+6.2.13.tar
│   ├── zapm-test+6.1.1-4.tar
│   ├── zapm-transaction-processor+6.1.1-4.tar
│   ├── zapm-ttg+6.1.1-4.tar (AppDynamics Use Only)
├── saveZapmLogs.sh
Note: Kubernetes requires that the images be pulled from a repository. You need to provide a path to each image in a repository during its configuration. If you don't have a repository, use the deployZapmImagesAcrossCluster.sh script. This simplifies the process and removes the need for a separate image repository during PoC setup. Load each of the images found in the images directory into the local repository by using the script.
Make the script executable:
chmod 777 ./sandbox/deployZapmImagesAcrossCluster.sh
Run the command:
./sandbox/deployZapmImagesAcrossCluster.sh images
The script detects whether docker or podman is installed and uses that tool. When you use a local repository, the images need to be loaded onto each node in the cluster, so the script prompts you to load the images to each node. If you are using a single-node cluster, you can answer n (no) when prompted for additional nodes in your cluster.
After you run the script, to see your images in your local repository, you can run:
podman images
OR
docker images
Using this approach, the path to your images is now localhost. The default configuration for the sandbox yamls is already set to pull the images from localhost, so if you use this script with the sandbox deployment yamls, you are all set.
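If you prefer not to use the script, a minimal manual alternative is to load each image archive yourself with podman load or docker load. For example (podman shown; the docker syntax is the same):
podman load -i images/kafka+3.5.1.tar
Repeat this for every .tar file in the images directory, on every node in the cluster.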
III. Populate the deployment files.
You can either use Helm and edit the values file (if Helm is installed), or manually edit the yaml manifest files and apply them to the cluster.
Option 1: Helm Configuration:
Locate the values.yaml file. For the sandbox, you can find this file in the installation package at the following path:
/sandbox/helm-deployment/values.yaml
Then, edit the values.yaml file. If you are following the sandbox approach, for your initial deployment you only need to edit the ttg section of the values.yaml file and the parameters listed in the table below.
Tip: Yaml files require two-space indentation, and errors can occur if they are not properly formatted. It is highly recommended that you edit yaml files in a code editor with a yaml plugin that lets you search, show line numbers, and display spaces and special characters.
To edit the ttg section and provide the configuration details relevant to your deployment, you can refer to Table 3, Configuration parameters for the ttg section of the values.yaml file, in Populating deployment files.
Then, edit the parameters listed in the Parameter column of the following table. For example, to locate the parameter statefulsets.transactionProcessor.advertisedHostname, navigate to the following structure:
statefulsets:
  transactionProcessor:
    advertisedHostname:
Parameter | Description | Field |
---|---|---|
statefulsets.transactionProcessor.advertisedHostname | FQDN (Fully Qualified Domain Name) of the master node of the Kubernetes cluster. | Replace with the hostname of your master node. |
deployments.kafka.advertisedHostname | FQDN (Fully Qualified Domain Name) of the master node of the Kubernetes cluster. | Replace <localhost> with the hostname of your master node. |
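A quick way to find these parameters in the file is to search for them. For example, assuming you are in the root of the extracted installation package:
grep -n "advertisedHostname" sandbox/helm-deployment/values.yaml
The line numbers in the output show exactly where to replace the hostname values.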
Option 2: Manifests Configuration
If you don't want to install Helm, you can refer to the documentation for Manifests configuration and use the sandbox manifest files. Detailed sandbox instructions are not provided at this time.
IV. Start Z APM Connect Distributed Gateway
You can start Z APM Connect Distributed Gateway (Z APM Connect DG) either by using Helm or using the manual yaml deployments, depending on how you populated the deployment files.
Option 1: Starting with Helm
First time running helm
Change directories to:
/sandbox/helm-deployment
Run helm install:
helm install --namespace ibm-zapm -f values.yaml zapm ./zapm-helm-chart-6.1.1-4.tgz
NOTE: If something goes wrong, correct it and then follow the instructions below to update values.yaml.
Check if pods are running
kubectl get pods --namespace ibm-zapm
Tips: If you configured ibm-zapm as the default namespace as described in step I, there is no need to include --namespace ibm-zapm in every command. You can check the pods without specifying it:
kubectl get pods
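You can also add the -w flag to watch the pods until they all reach the Running status (press Ctrl+C to stop watching):
kubectl get pods -w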
Update values.yaml (after the first install)
Run this command if you need to update values.yaml and subsequently re-deploy pods.
helm upgrade --namespace ibm-zapm -f values.yaml zapm ./zapm-helm-chart-6.1.1-4.tgz
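To confirm which revision of the release is currently deployed after an upgrade, you can use the standard Helm commands (zapm is the release name used in the install command above):
helm status zapm --namespace ibm-zapm
helm history zapm --namespace ibm-zapm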
Option 2: Starting with .yaml deployment files
To start by using the manual .yaml deployments, navigate to the manifests-deployment directory and run the following command to install your deployment.
kubectl apply -f ./
After applying the changes, you can check the status of the components by running the following command.
kubectl get pods
Result
It may take a few minutes for everything to start fully. When the pod status changes from a starting state to Running, Z APM Connect DG is started. If any pod reports that it is unhealthy or crashing, running the kubectl describe pod <podName> command can give further details on why the pod is not starting properly.
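For example, assuming a pod named zapm-transaction-processor-0 (the exact pod names in your cluster may differ), you can inspect its events and then its container logs, which are often the fastest way to find the root cause:
kubectl describe pod zapm-transaction-processor-0
kubectl logs zapm-transaction-processor-0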
Information: Both the production and sandbox folders contain helm and manifest deployment configuration files. The main difference between these two folders is the default configuration values.
The production yamls use defaults that are intended for robust environments and reflect considerations that are best suited for a production environment.
The default values for the sandbox aim to simplify the entire deployment process. They assume that you are running a POC or test environment with limited hardware and smaller amounts of traffic.
If you are transitioning from a PoC to a production environment, you can start with the /production/helm-deployment/values.yaml file and reference the values you used from sandbox/helm-deployment/values.yaml.
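For example, to see exactly which defaults differ between the two files, you can compare them from the root of the extracted installation package:
diff production/helm-deployment/values.yaml sandbox/helm-deployment/values.yaml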
Tips: You can keep more than one values.yaml file and switch between them by running helm upgrade.
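For example, assuming you keep a second file named values-production.yaml (the file name here is only an illustration), you can switch to it with:
helm upgrade --namespace ibm-zapm -f values-production.yaml zapm ./zapm-helm-chart-6.1.1-4.tgz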