We introduced the sample microservices application, Stan’s Robot Shop, in a previous post. In this article, we’ll walk through deploying that application to a Kubernetes cluster and installing the IBM Instana™ agent. Stan’s Robot Shop already includes all the extra setup required, so every piece of infrastructure technology will be automatically discovered and every request will be traced end to end.

Sample application install

First, choose your environment. I’ll be using Google Kubernetes Engine (GKE); you can also use minikube to run everything locally on your laptop. Once your Kubernetes cluster is running, you’ll install the IBM Instana agent and then deploy the Stan’s Robot Shop application.

Installing and configuring Kubernetes

To work with Kubernetes, install a copy of kubectl locally on your machine. If you’re using GKE, you’ll also need a copy of the Google Cloud SDK (gcloud) installed locally.

From the GKE dashboard, create a basic cluster of 3 nodes. Click the Connect button, then copy and paste the command to a shell prompt. This process will configure the kubectl command to work against your newly created cluster. Test that everything is working so far:

$ kubectl cluster-info

Great, you have a running Kubernetes cluster and you can talk to it through kubectl.

IBM Instana agent

A deployment descriptor file is included with Stan’s Robot Shop code. You can download it through git:

$ git clone https://github.com/instana/robot-shop

If you don’t have git installed, go to the GitHub page and click the Download link on the top right. This step will download a zip file; expand it into a directory.

Under the Stan’s Robot Shop project directory, there’s a subdirectory named instana. This subdirectory is where you’ll find the YAML file that describes the agent deployment. You’ll need to edit it to configure your unique agent key, which must be Base64 encoded. Get your unique agent key from the IBM Instana dashboard under Management Portal:

$ echo -n "your unique agent key" | base64
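As a concrete illustration, here’s the encoding step with a placeholder key (“my-agent-key” is a stand-in; substitute your real key from the dashboard):

```shell
# Encode a placeholder agent key. The -n flag stops echo from appending
# a trailing newline, which would otherwise corrupt the encoded value.
echo -n "my-agent-key" | base64
# prints: bXktYWdlbnQta2V5
```

Paste the resulting string into the agent key field in the data section of the secret in instana-agent.yaml.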

Save the changes to the deployment. Then deploy the agent:

$ kubectl create -f instana-agent.yaml
namespace "instana-agent" created
serviceaccount "instana-admin" created
secret "instana-agent-secret" created
daemonset "instana-agent" created

The agent is deployed as a DaemonSet in its own namespace and is configured to run only on nodes with the label agent=instana. There’s a helper script, label.sh, which applies this label to all nodes.
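Under the hood, the DaemonSet pins the agent pods to labelled nodes with a nodeSelector. A minimal sketch of the relevant fragment (field layout assumed from the standard Kubernetes DaemonSet spec; check instana-agent.yaml for the exact contents):

```yaml
# Fragment of a DaemonSet spec: schedule agent pods only on nodes
# carrying the label agent=instana
spec:
  template:
    spec:
      nodeSelector:
        agent: instana
```

The label.sh helper effectively runs kubectl label node <node> agent=instana for each node in the cluster.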

$ ./label.sh

The agent will take a moment or two to start and then report into the IBM Instana backend, after which it will appear in the IBM Instana dashboard:

Excellent work. The IBM Instana dashboard now shows the empty Kubernetes cluster. Although we haven’t yet deployed the Stan’s Robot Shop application, you can see that the IBM Instana agent has already discovered a number of running containers; these are system containers, analogous to the system processes of an operating system.

Stan’s Robot Shop: Sample microservices application

All the deployment and service definition files for deploying the Stan’s Robot Shop to Kubernetes are included in the source under the K8s directory.

If you’re using GKE, edit the deployment file for the web service (K8s/web-service.yaml) and change the service type from NodePort to LoadBalancer. No changes are required if you’re using minikube.

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f ../docker-compose.yaml
    kompose.version: 1.8.0 (0c0c027)
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: web
spec:
  type: LoadBalancer
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
      nodePort: 30080
  selector:
    io.kompose.service: web
status:
  loadBalancer: {}

Create a separate namespace to put the application in and deploy the application:

$ kubectl create namespace robot-shop
$ kubectl -n robot-shop create -f K8s
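If you prefer declarative manifests, the namespace step can equivalently be expressed as a small YAML file (a standard Kubernetes Namespace object, not something shipped with the shop):

```yaml
# Equivalent of `kubectl create namespace robot-shop`;
# apply with `kubectl create -f namespace.yaml`
apiVersion: v1
kind: Namespace
metadata:
  name: robot-shop
```
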

It will take a few minutes for Kubernetes to download all the images and create the pods to start running the application. As the pods are created and the images start running, the IBM Instana agent will automatically discover them and dynamically load the matching sensor to start monitoring the technology. You can watch this process happen in real time on the IBM Instana infrastructure dashboard.

If you’re running through minikube, Stan’s Robot Shop will be available through the IP address of your minikube instance.

$ minikube ip

The above command will print out the IP address of your minikube instance; open http://<minikube ip>:30080/ in your browser.

If you’re using GKE, select Discovery & load balancing from the left menu, and then click on the web service. This step will bring up the service details.

Click the External endpoints link to open the shop in your browser. You’re an APM rock star now. You have just deployed a modern containerized microservices application with Kubernetes—with full monitoring. Don’t tell your boss how easy it really is with the IBM Instana platform; you’ll shatter the illusion.

Sample application load generation

You can click around the application to generate some traffic—don’t worry, you won’t actually purchase anything. There’s also a separate load generation utility under the load-gen directory. It runs locally through a Docker image: edit load-gen.sh, set the HOST environment variable to the URL of your deployed shop, then save and run the script.
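The edit to load-gen.sh amounts to pointing one variable at your cluster; a sketch (the exact URL depends on your environment, and the value below is just an example):

```shell
# In load-gen.sh: point the load generator at the deployed shop.
# Use http://<minikube ip>:30080/ for minikube, or the external
# endpoint of the web service on GKE.
HOST="http://<minikube ip>:30080/"
```
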

As some load is put through the application, the IBM Instana platform will automatically trace every request end to end and build the service map.


Monitoring a modern containerized, orchestrated microservices application isn’t that difficult when you have Stan helping you. Watch for future posts when we cover topics such as end-user monitoring (EUM) and integrating OpenTracing spans.

Get started with IBM Instana and sign up for the free trial

