Deploy Logistics Wizard microservices with Kubernetes and Istio

Logistics Wizard is one of our featured samples. It is a reimagined supply chain optimization system for the 21st century. We’ve covered this application in several posts in the past.

When we started working on this application, we picked Cloud Foundry as the deployment platform for the microservices, and for Logistics Wizard it remains the right approach. But as new deployment options were added to IBM Cloud, we have been considering an alternative architecture. The last major change to Logistics Wizard introduced OpenWhisk to expose a weather-based recommendation service.

In this post, we look at the changes we made to deploy the ERP and Controller services to IBM Bluemix Container Service.

The IBM Bluemix Container Service provides a native Kubernetes experience. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. Put simply, you can deploy pretty much any kind of application in Kubernetes. Another component we have integrated is Istio, an open platform to connect, manage, and secure microservices. Niklas has a quick introduction to Istio to get you up to speed.

Dockerizing the ERP and Controller services

The first step to deploy our ERP and Controller services in Kubernetes is to turn them into Docker images. Since the ERP service is a Node.js app, it is quite straightforward to turn into a Docker image (see the sketch after this list):

  • start from the Node.js Docker image,
  • add the application files,
  • install the application dependencies,
  • expose the application port,
  • and specify the command to run.
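
For illustration, here is a minimal Dockerfile sketch along those lines (the base image tag, file layout, and port are assumptions; the actual Dockerfile in the repository may differ):

# start from the Node.js Docker image (the tag is an assumption)
FROM node:6

# add the application files
WORKDIR /usr/src/app
COPY . .

# install the application dependencies
RUN npm install --production

# expose the application port (assuming the ERP listens on 8080)
EXPOSE 8080

# specify the command to run
CMD ["npm", "start"]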

Then we provide a new script for the toolchain to build the Docker image and deploy the service to Kubernetes.

In the initial architecture, the ERP service is deployed as a Cloud Foundry application and reads its database credentials from the VCAP_SERVICES environment variable. With Kubernetes, we use the exact same code. Kubernetes has the concept of secrets to hold sensitive information, such as credentials. Secrets are made available to your Kubernetes deployments either as file mounts or as environment variables. We picked the latter:

bx cf create-service elephantsql turtle logistics-wizard-erp-db
bx cf create-service-key logistics-wizard-erp-db for-kube

# grab the credentials - skipping the first lines of the command output to keep only the JSON
POSTGRES_CREDENTIALS_JSON=`bx cf service-key logistics-wizard-erp-db for-kube | tail -n+3`

# inject VCAP_SERVICES in the environment, to be picked up by the datasources.local.js
VCAP_SERVICES='
{
  "elephantsql": [
    {
      "name": "logistics-wizard-erp-db",
      "label": "elephantsql",
      "plan": "turtle",
      "credentials":'$POSTGRES_CREDENTIALS_JSON'
    }
  ]
}'
# recreate the secret, ignoring the error if it does not exist yet
kubectl delete secret lw-erp-env --ignore-not-found
kubectl create secret generic lw-erp-env --from-literal=VCAP_SERVICES="${VCAP_SERVICES}"

In this excerpt from the toolchain script, we:

  • create a database service,
  • create a set of credentials for this service,
  • retrieve the credentials,
  • build a VCAP_SERVICES variable injecting these credentials as Cloud Foundry would do,
  • create a secret in Kubernetes to hold the ERP service environment.
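
To double-check the result, you can describe the secret; kubectl then lists the keys and value sizes without printing the values themselves:

kubectl describe secret lw-erp-env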

With this approach, no changes are required in the ERP code itself.
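
For reference, the lookup in the ERP’s datasources configuration boils down to something like this sketch (simplified; the credential key names and the actual datasources.local.js in the repository may differ):

// datasources.local.js (sketch) - read the Cloud Foundry-style environment
var vcapServices = JSON.parse(process.env.VCAP_SERVICES || '{}');
var credentials = vcapServices.elephantsql[0].credentials;

module.exports = {
  db: {
    connector: 'postgresql',
    // ElephantSQL credentials typically include a connection URI
    url: credentials.uri
  }
};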

Later in the Kubernetes deployment file, we bind this secret to the ERP container:

        env:
          - name: VCAP_SERVICES
            valueFrom:
              secretKeyRef:
                name: lw-erp-env
                key: VCAP_SERVICES

When running, the ERP service code finds the VCAP_SERVICES environment variable as if it were running in Cloud Foundry and parses it to get the database credentials. We use the same approach for the Controller service, this time starting from the Python Docker image.

Connect the dots

The services need to talk to each other: the recommendation service and the web user interface communicate with the Controller service, and the Controller service with the ERP service.

Given that they run in the same Kubernetes cluster, the Controller service and the ERP service can reference each other by name thanks to the Kubernetes DNS service. Therefore, in the Dockerfile for the Controller, we find:

ENV ERP_SERVICE http://lw-erp:8080

where lw-erp is the name of the ERP service as defined in its Kubernetes deployment file.
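
For reference, a minimal Service definition backing that name could look like this sketch (the label selector and port mapping are assumptions based on the deployment):

apiVersion: v1
kind: Service
metadata:
  name: lw-erp
spec:
  selector:
    app: lw-erp
  ports:
    - port: 8080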

The recommendation service and the web user interface communicate with the ERP service through the Controller service. They use the CONTROLLER_SERVICE environment variable to retrieve the Controller service location: the recommendation service passes the value to its OpenWhisk actions, and the web user interface injects the value during its build phase.

In both cases, the CONTROLLER_SERVICE value is built by inspecting the Kubernetes cluster and looking for the Istio ingress configuration (hostIP and port):

export CONTROLLER_SERVICE=http://$(kubectl get po -l istio=ingress -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')/lw/controller
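
As an example of consuming this value, a deployment script could pass it to an OpenWhisk action as a default parameter, along these lines (the action name and parameter name here are hypothetical):

bx wsk action update lw-recommend recommend.js --param controller_service "$CONTROLLER_SERVICE"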

Automating the deployment

Logistics Wizard comes with a toolchain to automate the deployment of the microservices as Cloud Foundry apps. For Kubernetes, we created an alternative toolchain, leaving you the choice of deploying everything in Cloud Foundry or a mix of Cloud Foundry and Kubernetes.

This toolchain builds the Docker images of the ERP and Controller services and deploys them to Kubernetes. It uses the newly introduced Kubernetes deployer job type, where you can specify a Bluemix API key and a Kubernetes cluster name to be used by the job.

The new toolchain can be found in this GitHub project:

Open the Logistics Wizard with Kubernetes toolchain

Once the toolchain has deployed the services and the web user interface, you can go through the Logistics Wizard walkthrough.

Istio comes with addons for Prometheus and Grafana, which give us a nice default dashboard to monitor Istio-enabled services. As you go through the walkthrough, you can look at this dashboard. First forward the dashboard port:

kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000

then point your browser to http://localhost:3000/dashboard/db/istio-dashboard.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter: @L2FProd.
