Deploy Logistics Wizard microservices with Kubernetes and Istio

Logistics Wizard is one of our featured samples. It is a reimagined supply chain optimization system for the 21st century. We’ve covered this application in several posts in the past.

When we started working on this application, we picked Cloud Foundry as the deployment platform for the microservices. For Logistics Wizard, Cloud Foundry remains the right approach. But as new deployment options were added in IBM Cloud, we have been considering alternative architectures. The most recent major change to Logistics Wizard was the use of OpenWhisk to expose a weather-based recommendation service.

In this post, we look at the changes we made to deploy the ERP and Controller services to IBM Bluemix Container Service.

The IBM Bluemix Container Service provides a native Kubernetes experience. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Put simply, you can deploy pretty much any kind of application in Kubernetes. Another component we have integrated is Istio. Istio is an open platform to connect, manage, and secure microservices. Niklas has a quick introduction to Istio to get you up to speed.

Dockerizing the ERP and Controller services

The first step to deploy our ERP and Controller services in Kubernetes is to turn them into Docker images. Since the ERP service is a Node.js app, turning it into a Docker image is quite straightforward:

  • start from the Node.js Docker image,
  • add the application files,
  • install the application dependencies,
  • expose the application port,
  • and specify the command to run.
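Put together, those steps translate into a minimal Dockerfile along these lines (a sketch: the base image tag, port, and start command are assumptions, not necessarily the project's actual file):

```dockerfile
# Start from the Node.js Docker image
FROM node:6

# Add the application files
WORKDIR /app
COPY . /app

# Install the application dependencies
RUN npm install --production

# Expose the application port (assumed to be 8080 here)
EXPOSE 8080

# Specify the command to run
CMD ["npm", "start"]
```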

Then it’s a matter of providing a new script for the toolchain to build the Docker image and to deploy the service to Kubernetes.

In the initial architecture, the ERP service is deployed as a Cloud Foundry application. It reads its database credentials from the VCAP_SERVICES environment variable. With Kubernetes, we use the exact same code. Kubernetes has the concept of secrets to hold sensitive information, such as credentials. Secrets are made available to your Kubernetes deployments either as a file mount or as an environment variable. We picked the latter:

bx cf create-service elephantsql turtle logistics-wizard-erp-db
bx cf create-service-key logistics-wizard-erp-db for-kube

# grab the credentials - ignoring the first debug logs of cf command
POSTGRES_CREDENTIALS_JSON=`cf service-key logistics-wizard-erp-db for-kube | tail -n+3`

# inject VCAP_SERVICES in the environment, to be picked up by the datasources.local.js
VCAP_SERVICES='
{
  "elephantsql": [
    {
      "name": "logistics-wizard-erp-db",
      "label": "elephantsql",
      "plan": "turtle",
      "credentials":'$POSTGRES_CREDENTIALS_JSON'
    }
  ]
}'
kubectl delete secret lw-erp-env --ignore-not-found
kubectl create secret generic lw-erp-env --from-literal=VCAP_SERVICES="${VCAP_SERVICES}"

In this excerpt from the toolchain script, we:

  • create a database service,
  • create a set of credentials for this service,
  • retrieve the credentials,
  • build a VCAP_SERVICES variable injecting these credentials as Cloud Foundry would do,
  • create a secret in Kubernetes to hold the ERP service environment.

With this approach, no changes are required in the ERP code itself.
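Because the script builds VCAP_SERVICES by string concatenation, a quick local sanity check can catch malformed JSON before the secret is created. A sketch, using placeholder credentials (in the toolchain they come from `cf service-key`):

```shell
# Placeholder credentials; the real values come from `cf service-key`
POSTGRES_CREDENTIALS_JSON='{"uri":"postgres://user:pass@host:5432/db"}'

# Assemble VCAP_SERVICES exactly as the toolchain script does
VCAP_SERVICES='
{
  "elephantsql": [
    {
      "name": "logistics-wizard-erp-db",
      "label": "elephantsql",
      "plan": "turtle",
      "credentials": '$POSTGRES_CREDENTIALS_JSON'
    }
  ]
}'

# Fail fast if the concatenation produced invalid JSON
echo "$VCAP_SERVICES" | python -m json.tool > /dev/null && echo "VCAP_SERVICES is valid JSON"
```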

Later in the Kubernetes deployment file, we bind this secret to the ERP container:

        env:
          - name: VCAP_SERVICES
            valueFrom:
              secretKeyRef:
                name: lw-erp-env
                key: VCAP_SERVICES

When running, the ERP service code finds the VCAP_SERVICES environment variable as if it were running in Cloud Foundry and parses it to get the database credentials. We use the same approach for the Controller service, starting from the Python Docker image this time.

Connect the dots

The services need to talk to each other: the recommendation service and the web user interface communicate with the Controller service, and the Controller service communicates with the ERP service.

Given they are running in the Kubernetes cluster, the Controller service and the ERP service can reference each other by name thanks to the Kubernetes DNS service. Therefore, in the Dockerfile for the Controller, we find:

ENV ERP_SERVICE http://lw-erp:8080

where lw-erp is the name of the ERP service as defined in its Kubernetes deployment file.
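For reference, a minimal Service definition exposing the ERP deployment under the lw-erp name could look like the following (a sketch: the selector label is an assumption and may differ from the project's actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lw-erp        # the DNS name used in ERP_SERVICE
spec:
  selector:
    app: lw-erp       # assumed pod label
  ports:
    - port: 8080      # matches the port in http://lw-erp:8080
      targetPort: 8080
```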

The recommendation service and the web user interface communicate with the ERP service through the Controller service. They use the CONTROLLER_SERVICE environment variable to retrieve the Controller service location. The recommendation service passes the value to its OpenWhisk actions, the web user interface injects the value during its build phase.

In both cases, the CONTROLLER_SERVICE value is built by inspecting the Kubernetes cluster and looking for the Istio ingress configuration (hostIP and port):

export CONTROLLER_SERVICE=http://$(kubectl get po -l istio=ingress -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')/lw/controller

Automating the deployment

Logistics Wizard comes with a toolchain to automate the deployment of the microservices as Cloud Foundry apps. For Kubernetes, we created an alternative toolchain. This leaves the choice to deploy everything in Cloud Foundry or a mix of Cloud Foundry and Kubernetes.

This toolchain builds the Docker images of the ERP and Controller services and deploys them to Kubernetes. It uses the newly introduced Kubernetes deployer job type, where you can specify a Bluemix API key and a Kubernetes cluster name to be used by the job.

The new toolchain can be found in this GitHub project:

Open the Logistics Wizard with Kubernetes toolchain

Once the toolchain has deployed the services and the web user interface, you can go through the Logistics Wizard walkthrough.

Istio comes with add-ons for Prometheus and Grafana, which give us a nice default dashboard to monitor Istio-enabled services. As you go through the walkthrough, you can look at the dashboard provided by Istio. You will need to forward the dashboard port:

kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000

and then point your browser to http://localhost:3000/dashboard/db/istio-dashboard.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter: @L2FProd.
