Practical Advice: Key decisions when building Microservices apps in Java with Kubernetes
5 min read
By: Gang Chen
As a Java developer or solution architect, your leadership team may have asked you questions like:
Do we leverage the advantages of the cloud (e.g., centralized management, compute efficiency, scalability, security, ease of maintenance)?
To encourage reuse and development efficiency, are we promoting the Microservices approach?
Are we using Docker-based container technology to simplify our deployment and scalability strategy?
If you're like me, the answer is: “Of course! We are working on it.”
Recently my team of solution engineers faced these very questions while developing a sample storefront application as part of the Garage Method Reference Architecture for Microservices. This application, internally called BlueCompute, was designed following Microservices principles (if interested, you can read more details of its design and peruse the implementation on GitHub in the project ibm-cloud-architecture/refarch-cloudnative).
In addition to following Microservices principles, we also decided to use Kubernetes as our container orchestration platform. Along the way, we made key design and architectural decisions that I’d like to share in question/answer format:
We believe considering these design decisions early in the project will prove helpful when building such a system using Kubernetes services on the IBM Bluemix platform.
Q1: As a Java developer, which framework should I use as the foundation to build Microservices style applications?
One of the popular choices for Java developers is Spring Boot. Its standalone application approach fits well with the Microservices modular principle. Running as a single process allows a Spring Boot application to be easily packaged and distributed as a Docker container. MicroProfile is another emerging framework, particularly favored among Enterprise Java developers.
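To illustrate how neatly the single-process model maps to a container, here is a minimal Dockerfile sketch; the base image and JAR name are assumptions, not part of the BlueCompute project:

```dockerfile
# Minimal Dockerfile for a Spring Boot fat JAR (image and file names are illustrative)
FROM openjdk:8-jre-alpine
COPY target/order-service.jar /app/order-service.jar
# Spring Boot runs as a single process, which maps cleanly to one container
ENTRYPOINT ["java", "-jar", "/app/order-service.jar"]
```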
Q2: Where should I run the Kubernetes platform?
Kubernetes can run on your development laptop, on VMs or bare-metal servers in your own datacenter, or on a cloud provider. Obviously, this decision has to align with your organization’s overall cloud and hosting strategy. If you are looking for a managed Kubernetes environment (or Kubernetes as a Service), the IBM Bluemix container service is a good choice. If you are looking to host a managed Kubernetes service in your own datacenter for security and compliance reasons, then you can choose IBM Spectrum Conductor for Containers or OpenShift. Of course, you can roll your own by installing and configuring a Kubernetes environment on your datacenter or cloud-based infrastructure as a service.
Depending on your situation, you may end up with a combination of different approaches—for example, running the development/test environment on a public cloud with a managed Kubernetes service while deploying the production workload to an on-premises Kubernetes platform. This hybrid approach takes advantage of the portability of Docker containers and Kubernetes. Personally, I’d use managed Kubernetes services as much as I can, removing one more task from my plate.
Q3: How do I handle application service registry and discovery?
To benefit from the many Microservices you create in a distributed, elastic cloud environment, a proper service registration and discovery strategy is critical to allow them to communicate with each other or to be consumed. As a pioneer in cloud-native Microservices implementation, Netflix built and released a set of opinionated libraries known as Netflix OSS—Eureka, Zuul, Ribbon, and others—which are well integrated with the Spring framework. Typically, services register with Eureka, and Ribbon/Zuul handle service discovery and basic load balancing. However, that approach dates back nearly five years.
Today, Kubernetes natively supports service registry, discovery and load balancing. You no longer need Netflix OSS for these tasks. Instead, you design the system the Kubernetes way. You can leverage the Kubernetes Service type to expose your Microservices; they are then automatically registered with the Kubernetes system via a virtual IP or DNS entry (I’ll elaborate on this decision in the next question/answer). Discovery happens through the familiar DNS lookup style. For Services backed by multiple Pods (containers), Kubernetes provides built-in load balancing to distribute the workload among them.
Q4: Should I use Kubernetes virtual IP or DNS based service lookup?
As noted in the prior question/answer, we decided that service registry and lookup should be done the Kubernetes way. There are two options for you: environment variables or DNS. Environment variables are enabled out of the box by Kubernetes. Under the covers, Kubernetes exposes the service virtual IP within the cluster so that other Pods/Services can invoke the target service via this environment variable entry. The downside of this approach is that the creation order of dependent services matters: a Pod only sees environment variables for Services that existed before the Pod started. And in general, relying on IP addresses is not a good idea.
A better solution is the DNS-based approach. Let’s say I have an “Order” microservice and name it order-service in the Service’s YAML definition. The front end web service can simply invoke the Order service via the REST endpoint http://order-service/id. This abstracts the application from the underlying infrastructure. To use the DNS approach, you need to install the Kubernetes DNS add-on if you are building your own Kubernetes infrastructure. But if you are using Kubernetes as a Service, such as with the IBM Bluemix Container service, the DNS service is automatically enabled in your cluster.
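A Service definition along these lines would make the Order microservice resolvable as order-service inside the cluster; the labels and ports here are illustrative assumptions, not the BlueCompute values:

```yaml
# Illustrative Service definition; name, labels, and ports are assumptions
apiVersion: v1
kind: Service
metadata:
  name: order-service   # becomes the DNS name inside the cluster
spec:
  selector:
    app: order          # routes traffic to Pods labeled app: order
  ports:
  - port: 80            # port the Service exposes
    targetPort: 8080    # port the Spring Boot container listens on
```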
Q5: Kubernetes handles application resiliency, but what about gracefully handling failures of dependent services?
Kubernetes ensures application resiliency, or fault tolerance, through its built-in self-recovery mechanism. Through ReplicaSets, Kubernetes continually drives your application Pods (containers) toward the desired state. For example, at any given time, there are always 3 order service Pods running in my cluster.
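As a sketch, a Deployment like the following (names and image are assumptions for illustration) declares that desired state, and Kubernetes replaces any Pod that dies:

```yaml
# Illustrative Deployment keeping three order-service Pods running;
# the image name and labels are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3           # Kubernetes restores this count if a Pod fails
  selector:
    matchLabels:
      app: order
  template:
    metadata:
      labels:
        app: order
    spec:
      containers:
      - name: order
        image: registry.example.com/order-service:1.0
```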
This is cool and taken care of by Kubernetes for a service. But you still need to design your Microservices to react to dependency failures—for example, if a downstream service is down, or if there’s a database or storage outage. If this happens, you don’t want the service issue to impact the entire application by blocking resources or bringing down the entire user experience. You want to gracefully degrade or fail safe. For a Java-based implementation, I recommend the Netflix Hystrix library, which provides commands to implement failure-handling patterns like Circuit Breaker and Bulkhead, as well as a dashboard to view the system’s overall health. It integrates well with Spring Boot based container applications; the Hystrix dashboard itself can be easily packaged and deployed as a container and managed by Kubernetes.
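To make the Circuit Breaker idea concrete, here is a deliberately simplified sketch in plain Java. This is not Hystrix’s actual API (Hystrix wraps calls in HystrixCommand objects and adds thread isolation and metrics); the class and names here are hypothetical:

```java
import java.util.function.Supplier;

// Simplified sketch of the Circuit Breaker pattern (not Hystrix's real API).
class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Run the primary call; once failures reach the threshold, the circuit
    // "opens" and we fail fast to the fallback without touching the dependency.
    <T> T call(Supplier<T> primary, Supplier<T> fallback) {
        if (consecutiveFailures >= failureThreshold) {
            return fallback.get(); // circuit open: degrade gracefully
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0; // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3);
        for (int i = 0; i < 5; i++) {
            // The "order service" is down, so every call degrades to a cached response
            String r = breaker.call(
                () -> { throw new RuntimeException("order-service down"); },
                () -> "cached-order-list");
            System.out.println(r);
        }
    }
}
```

After three consecutive failures the breaker stops calling the failing dependency at all and serves the fallback immediately, which is the behavior that keeps one broken service from dragging down the whole application.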
Q6: How do I expose the services for external consumption? LoadBalancer or Ingress or NodePort?
You need to determine how your client-side application, whether a Web 2.0 or mobile app, can access your Microservices hosted on the Kubernetes cluster. More than likely, you do not want to expose your core business logic or data Microservices directly to the Internet—instead, you should build a BFF (Back-end for Front-end) or an API gateway that in turn determines which back end service to consume. Under this design, access to the back end data microservices is typically only through the Kubernetes internal network. The front end tier (BFF or API gateway) becomes the Internet-facing component.
Then the question becomes how to expose this front end service tier for external consumption. You have two choices in Kubernetes: LoadBalancer or Ingress (there is a third option—NodePort—but I don’t recommend it for production use).
Services of type LoadBalancer provide an externally-accessible IP address that sends traffic to the correct port on your cluster nodes. LoadBalancer is easy to enable and allows your client app to access the REST API via a service-defined HTTP path. The drawbacks of LoadBalancer are that it consumes more IPs (a billable resource) for your application, relies on the cloud vendor's implementation, and lacks a central entry point to handle common tasks such as TLS termination.
The other approach is Ingress. Rather than creating a LoadBalancer service for each service that you want to expose to the public, Ingress provides a unique public route that lets you forward public requests to services inside your cluster based on their individual paths. This approach allows a central entry point to your application, so you can easily add DNS routing and global load balancing across multiple Kubernetes regions. It is a promising solution, but it is still in its early stages—the Ingress controller is not yet flexible enough to handle complex application/service routing paths, TLS termination, filtering of incoming requests, and so on.
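A basic Ingress resource looks roughly like the following; the host, paths, and service names are illustrative assumptions, and the API version reflects the beta status of the resource at the time of writing:

```yaml
# Illustrative Ingress routing two paths to two internal Services;
# host, paths, and service names are assumptions
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: storefront-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /orders
        backend:
          serviceName: order-service
          servicePort: 80
      - path: /catalog
        backend:
          serviceName: catalog-service
          servicePort: 80
```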
In our implementation, we ended up deploying an additional NGINX service exposed through Ingress. Inside NGINX, we apply the processing mentioned above. This might be a temporary solution while Ingress matures.
Q7: Which tool should I use for continuous integration and deployment (CI/CD) in Kubernetes cluster?
Now you are ready to code, but wait! There’s one more decision. Wouldn’t you want to do automated continuous integration and deployment (CI/CD)? If so, you'd better have a strategy before you start.
CI/CD is a generic term; for a Microservices application in a Kubernetes cluster, it includes automating the Spring Boot application build, building the Docker image, pushing the image to the Docker registry, and deploying Kubernetes Services or Pods. The flow typically starts when the developer commits a code change, or a fix to an issue, to a source control system such as Git.
This was one of the easiest decisions to make—we just used Jenkins as the tool. We eventually settled on an open source Jenkins Kubernetes plugin (jenkinsci/kubernetes-plugin on GitHub). The key benefit of this approach is that we can easily run the Jenkins build master and slave nodes as Kubernetes components. This saves us from installing and maintaining a separate Jenkins environment. Again, one less task for us is good.
With this tool, the CI/CD flow goes:
Developer commits a code change in GitHub
A Git webhook triggers a build on the Jenkins master deployed in your Kubernetes cluster via the plugin
The master deploys a Jenkins slave Pod in your Kubernetes cluster
The Jenkins slave Pod kicks off the build stage—building the Spring Boot app, building the Docker image, and pushing it to the Docker registry.
Upon finishing the build stage, the Jenkins master deploys another Jenkins slave Pod that kicks off the deploy stage.
The deploy stage (defined in a Jenkinsfile) deploys your Microservices as Kubernetes Services.
All of these steps are automated.
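The flow above can be sketched as a scripted Jenkinsfile using the kubernetes-plugin’s podTemplate and container steps; this is a much simplified sketch, and all labels, images, and registry paths are assumptions rather than the actual BlueCompute pipeline:

```groovy
// Simplified Jenkinsfile sketch for the jenkinsci/kubernetes-plugin;
// labels, images, and registry paths are illustrative
podTemplate(label: 'build-pod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'docker', image: 'docker', ttyEnabled: true, command: 'cat')
]) {
    node('build-pod') {
        stage('Build') {
            checkout scm
            container('maven') {
                sh 'mvn -B package'   // build the Spring Boot jar
            }
            container('docker') {
                sh 'docker build -t registry.example.com/order-service:${BUILD_NUMBER} .'
                sh 'docker push registry.example.com/order-service:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            // roll out the Service/Deployment definitions to the cluster
            sh 'kubectl apply -f kubernetes/order-service.yaml'
        }
    }
}
```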
To see how we put these decisions into action, try our reference implementation application built on top of the IBM Bluemix Container service. Our sample storefront application was built with a Microservices approach and implements all the design principles described in this post. The readme of the ibm-cloud-architecture/refarch-cloudnative project on GitHub includes an overview of the architecture and instructions for downloading the code and installing it in your own Bluemix environment.