OpenShift is a Kubernetes distribution from Red Hat, similar to IBM Cloud Private, that is loaded with features to make developers’ lives easier. Features such as strict security policies and built-in logging and monitoring make OpenShift a well-rounded, production-ready platform, saving you the trouble of cobbling these capabilities together yourself on vanilla Kubernetes.
However, there is one key feature that Kubernetes supports and OpenShift doesn’t (at least officially): the ability to deploy Helm charts.
Helm is the official package manager for Kubernetes. It uses a sophisticated template engine and package versioning that is more flexible than OpenShift templates. In addition, the Helm community has contributed numerous production-tested Helm charts for common applications like Jenkins, Redis, and MySQL. IBM Cloud Private, a Kubernetes-based enterprise platform for containers, has full support for Helm and its community charts. It leverages Helm to create a UI-based catalog system that makes it easier to reuse the community charts. The catalog also lets you install/uninstall Helm charts with just a couple of clicks, making it much easier to install an entire software stack.
Some community Helm charts deploy containers with privileged access, which is not supported by OpenShift. IBM Cloud Private’s flexible Pod Security Policies, on the other hand, let you choose the level of privilege your containers have, based on your requirements.
But all is not lost for OpenShift fans, as there are workarounds you can use that won’t compromise best practices or security. That said, if you want the ability to run Helm charts like those for IBM Middleware on OpenShift without workarounds, I recommend trying out IBM Cloud Private on OpenShift, as it leverages the best of both IBM Cloud Private and OpenShift.
That option aside, if you are going the pure OpenShift route, this guide will walk you through converting existing Helm charts into OpenShift-compatible YAML files.
If you’re an operations engineer or developer familiar with Kubernetes, Helm, and OpenShift, and you are interested in deploying the contents of existing Helm charts on OpenShift, this recipe will save you time. You will be able to leverage the hard work of the Helm community while maintaining container best practices, rather than creating the equivalent OpenShift templates on your own.
Below are the four steps to deploy the contents of an existing Helm chart into an OpenShift cluster:
1. Convert existing Docker images to run as non-root.
2. Generate OpenShift-compatible YAML resource files from existing Helm charts.
3. Deploy the resource files into an OpenShift project.
4. Expose the services using OpenShift Routes.
The first half of this guide explains the steps for applying container best practices to a `Dockerfile` so it will work on OpenShift. The second half applies the guidelines of the first half to a specific example, the IBM Microservices Reference Architecture Helm charts (known as `bluecompute-ce`), converting its existing Helm charts to OpenShift-compatible YAML files.
Note: Assuming you have installed and are familiar with the tools in the following section, you should allow 30–45 minutes to complete this how-to.
You will need a basic knowledge of Docker containers, Kubernetes, Helm, and OpenShift. You’ll also need the following resources and command-line tools:
OpenShift cluster: Deploy a local OpenShift cluster using Minishift; the Minishift installation should also install the OpenShift CLI, `oc`.
kubectl, the Kubernetes CLI
helm, the Kubernetes package manager CLI: Follow the instructions here to install it on your platform.
The following sections explain the steps taken to modify your `Dockerfile` and Helm charts to run as a non-root user, a container best practice recommended on Kubernetes-based platforms like OpenShift and IBM Cloud Private.
OpenShift enforces security best practices for containers out of the box. Some of these security practices include requiring Docker images to run as non-root and disallowing privileged containers, which can be harmful to the OpenShift cluster if they are compromised. This section explains how to make a Spring Boot-based `Dockerfile` run as non-root.
Let’s look at the `Dockerfile` for `bluecompute-ce`’s inventory service for reference. You don’t need to worry about what the service does; we are only concerned with how the `Dockerfile` packages the code and how to make it run as non-root.
Note: To learn more about `Dockerfile` options and their syntax, refer to the official documentation.
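The full original Dockerfile isn’t reproduced in this copy of the guide; the following is a condensed sketch of its two-stage structure. The paths, script name, and port are illustrative assumptions, not the exact contents of the original file:

```dockerfile
# Stage 1: build and test the application with the official Gradle image
FROM gradle:4.9.0-jdk8-alpine AS builder
COPY --chown=gradle:gradle . /home/gradle/app
WORKDIR /home/gradle/app
# Downloads dependencies, compiles, runs tests, and produces the jar
RUN gradle build

# Stage 2: package only the generated jar into a slim JRE-only image
FROM openjdk:8-jre-alpine
ENV APP_HOME=/app
COPY --from=builder /home/gradle/app/build/libs/*.jar $APP_HOME/app.jar
COPY scripts/startup.sh $APP_HOME/startup.sh
EXPOSE 8080
ENTRYPOINT ["sh", "/app/startup.sh"]
```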
Notice that this is a multi-stage `Dockerfile` with two stages, one per `FROM` instruction. In the first stage, we use the official `gradle:4.9.0-jdk8-alpine` Docker image to download Gradle dependencies, then build and test the application’s code to generate a jar file. In the second stage, we use the official `openjdk:8-jre-alpine` Docker image to copy over the generated jar file from the previous stage, copy over the Docker entry-point script from source, and expose the application ports.
Note: The main benefit of the multi-stage approach is a smaller final Docker image, since the build toolchain stays behind in the first stage. To learn more about the benefits of a multi-stage `Dockerfile`, read the official documentation.
The above `Dockerfile` is fairly standard for Spring Boot services, and it is what we used for all the services in our Microservices Reference Architecture application. The only thing that remains to make the `Dockerfile` compatible with OpenShift security policies (and follow container best practices in general) is to create a non-root user to run the application process, which the base `openjdk` Docker image doesn’t do by default. To do so, add the following lines before the `EXPOSE` instruction:
```dockerfile
RUN adduser -u 2000 -G root -D blue && \
    chown -R 2000:0 $APP_HOME && \
    chmod -R u+x $APP_HOME/app.jar
USER 2000
```
Here is a quick breakdown of the above commands:
The `adduser -u 2000 -G root -D blue` command creates the `blue` user with a user ID of `2000` and adds it to the root group (not to be confused with “sudoers”). OpenShift requires a numeric user in the `USER` declaration instead of the user name. This allows OpenShift to validate the authority the image is attempting to run with and to prevent running images that try to run as root (as mentioned in the OpenShift-Specific Guidelines).
The `chown -R 2000:0 $APP_HOME` command changes the ownership of the `APP_HOME` folder to user `2000` (created above) and group `0` (the root group). This allows an arbitrary numeric user, assigned by OpenShift when launching the container, to start the application process.
The `chmod -R u+x $APP_HOME/app.jar` command assigns execution permissions for the application jar file to the user, which can be an arbitrary numeric user in the root group.
Fortunately, those are all the changes required to get this Docker image to run on OpenShift. Finally, here is a snippet of the complete `Dockerfile`:
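The snippet below sketches what the second stage looks like with the non-root additions in place; as before, the paths, script name, and port are illustrative assumptions:

```dockerfile
# Second stage, now with the non-root user additions
FROM openjdk:8-jre-alpine
ENV APP_HOME=/app
COPY --from=builder /home/gradle/app/build/libs/*.jar $APP_HOME/app.jar
COPY scripts/startup.sh $APP_HOME/startup.sh

# Create the non-root user, hand it the app folder, and make the jar executable
RUN adduser -u 2000 -G root -D blue && \
    chown -R 2000:0 $APP_HOME && \
    chmod -R u+x $APP_HOME/app.jar
USER 2000

EXPOSE 8080
ENTRYPOINT ["sh", "/app/startup.sh"]
```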
Now that the `Dockerfile` no longer requires root privileges, all that remains is to build the Docker image and push it to a Docker registry, such as IBM Cloud Container Registry or Docker Hub. Do this with the following commands:
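The exact commands weren’t preserved in this copy; a sketch, assuming you are in the directory containing the `Dockerfile` and that the image is named `inventory` (the image name is illustrative):

```bash
# Build the image with the new "openshift" tag, then push it to your registry
docker build -t ${REGISTRY_LOCATION}/inventory:openshift .
docker push ${REGISTRY_LOCATION}/inventory:openshift
```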
Where `${REGISTRY_LOCATION}` is the location of your Docker registry and `openshift` is the new tag value for the image.
Before generating YAML from the Helm charts, we have to update the Helm charts with the newly built Docker image. Simply edit the `values.yaml` file in the chart and change the image’s `tag` value to that of the new Docker image. For example, here is an excerpt of the `values.yaml` file for `bluecompute-ce`’s `inventory` Helm chart:
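The original excerpt isn’t reproduced here; a sketch of the relevant fields, where the repository value is an illustrative assumption:

```yaml
image:
  repository: ibmcase/bluecompute-inventory
  tag: 0.6.0
  pullPolicy: Always
```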
The `image.repository` field represents the Docker image location for this chart (Docker Hub in this case) and the `image.tag` field represents the Docker image’s tag. To update the tag value to `openshift`, just replace the `0.6.0` value in `image.tag` with `openshift`, which will result in the following YAML:
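Continuing the sketch above, with the same illustrative repository value:

```yaml
image:
  repository: ibmcase/bluecompute-inventory
  tag: openshift
  pullPolicy: Always
```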
Generally speaking, this is all you need to update a Helm chart with an OpenShift-compatible, non-root Docker image. Most community Helm charts don’t have complicated configurations that require root privileges. In the charts that do, the privileges are usually required by one-off init containers that perform administrative tasks on the container hosts. The usual workaround is to remove those containers from the charts and perform those actions on the hosts yourself before deploying the charts. However, as with anything in software engineering, the changes you have to make depend on the chart and the workload itself, and must be addressed individually.
With that caveat in mind, let’s go over how we adapted the `bluecompute-ce` application to run its processes as non-root in order to work on OpenShift.
BlueCompute (known as `bluecompute-ce`) is IBM’s cloud-native microservices reference architecture, used to demonstrate how clients can easily deploy and run a complex microservices application on Kubernetes-based platforms such as IBM Cloud Kubernetes Service and IBM Cloud Private, which are public- and private-cloud-based, respectively. Its architecture is depicted in the following diagram:
Each microservice has its own Git repository, which contains not only the application source but also its respective `Dockerfile` and Helm chart. All of the datastore Helm charts were taken from the community Helm chart catalog to demonstrate that existing applications can leverage community-made Helm charts for datastores.
Lastly, to make deploying the entire application easier, we created the `bluecompute-ce` Helm chart, which declares all the individual Helm charts (including the community ones) as dependencies. This lets us deploy all of them at once with a single command. Deploying the application this way takes 2-5 minutes, compared to the 30-45 minutes it used to take us before we adopted Kubernetes.
As you can see from the diagram animation above, the application architecture doesn’t change much when deploying it to OpenShift (check out the original architecture here). The main change is using an OpenShift Route to expose the web application outside the cluster instead of a Kubernetes Ingress or NodePort. Also, instead of kube-dns for service discovery, OpenShift uses CoreDNS, which we as developers don’t need to be concerned with.
Now that we understand the basic architecture, let’s cover the steps to deploy `bluecompute-ce` into OpenShift, following the steps presented in the previous section.
As mentioned before, OpenShift enforces security best practices for containers out of the box, such as requiring Docker images to run as non-root and disallowing privileged containers, which can be harmful to the OpenShift cluster if they are compromised.
We looked at what it would take for the `bluecompute-ce` services to adopt container best practices; below is a summary of our findings:
OpenJDK: Although the BlueCompute Docker images were built with official `openjdk` images, the images run as the root user. We needed to edit each `Dockerfile` to run as non-root, as explained in a previous section.
Elasticsearch: The community Helm chart for Elasticsearch has two init containers that increase the virtual memory `max_map_count` and disable memory swapping before starting the Elasticsearch service. These operations require the init containers to be privileged, which is not allowed in OpenShift. Since this is a community chart, there is nothing we can do to edit the chart directly. Therefore, in a later section, we explain how to eliminate the need for these containers after generating YAML files from the Helm charts. Note that the actual `elasticsearch` Docker image (not the init container one) runs as non-root, so there is no need to rebuild it.
Databases: The Docker images for the `mysql`, `couchdb`, and `mariadb` Helm charts already run as non-root, so no changes are required for these.
Luckily, the changes to be made were quite simple: we only had to convert all the `bluecompute-ce` services’ Docker images to run as non-root, as previously discussed. The changes were made to each `Dockerfile` in the Spring Boot services (`inventory`, `catalog`, `customer`, `orders`, and `auth`). The `web` service adopted the same concepts, but tailored for Node.js.
With these changes checked into each service’s Git repository, let’s proceed with deploying the services to OpenShift.
Now that we have covered the changes for `bluecompute-ce` to adopt not just OpenShift compatibility but container best practices in general, it’s time to deploy `bluecompute-ce` into OpenShift.
Note: This section assumes that you have a working OpenShift cluster.
We are not going to deploy the Helm charts themselves into OpenShift. Though it is possible to deploy Helm charts into OpenShift, according to the OpenShift blog “Getting started with Helm on OpenShift,” it is not the official and supported way of deploying workloads into OpenShift.
There is also the option of converting the Helm charts into OpenShift Templates, but that would require a lot of tedious work that’s beyond the scope of this guide.
The main goal is to get `bluecompute-ce` deployed into an OpenShift cluster as effortlessly as possible while following as many best practices as possible. So, in this guide, we will convert the `bluecompute-ce` Helm chart and all of its dependency charts into regular Kubernetes YAML files using the `helm template` command, then deploy those files into the OpenShift cluster.
Create the `bluecompute` project in OpenShift
First, log in to the OpenShift cluster:
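A sketch of the login command; the server URL and credentials are placeholders for your own cluster’s values:

```bash
oc login https://YOUR_CLUSTER_DOMAIN.com:8443 --username=YOUR_USERNAME --password=YOUR_PASSWORD
```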
Next, create a new project (OpenShift parlance for a Kubernetes `namespace`) to deploy the `bluecompute-ce` YAML files:
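For example, using the project name the rest of this guide assumes:

```bash
oc new-project bluecompute
```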
Now, generate the Kubernetes YAML files from the Helm Charts. To do so, run the following commands:
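The exact commands weren’t preserved in this copy; below is a sketch using Helm v2 syntax, assuming the `bluecompute-ce` chart is available locally and the output folder is named `bluecompute-os`:

```bash
# Render the bluecompute-ce chart and all dependency charts into plain
# Kubernetes YAML files under the bluecompute-os folder
helm template bluecompute-ce \
  --name bluecompute \
  --namespace bluecompute \
  --set web.service.type=ClusterIP \
  --output-dir bluecompute-os
```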
Where:
`--namespace` represents the Kubernetes namespace used to render the YAML files. This means that the generated YAML files will have the `bluecompute` OpenShift project hardcoded.
`web.service.type` is changed to `ClusterIP` because we will use an OpenShift Route to expose the web app in a later section.
`--name` is just the Helm release name, which Helm takes into account when naming resources. In this case, we use `bluecompute`.
The above commands generated YAML files that can be deployed into OpenShift. However, there is still some work to be done. The next section explains the steps.
Before deploying the `bluecompute-ce` YAML files, we must remove the init containers that come from the Elasticsearch community Helm chart. These containers increase the virtual memory `max_map_count` and disable memory swapping before starting the Elasticsearch service, both of which require privileged access.
Since privileged containers are not allowed in OpenShift, and performing those operations manually is beyond the scope of this guide, we will remove the containers so the Elasticsearch containers can start. Depending on your environment, if Elasticsearch crashes or fails to start, you may need to increase the virtual memory and disable swapping manually on each OpenShift node, as explained by Elastic in Virtual Memory and Enable bootstrap.memory_lock.
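For reference, these are the kinds of commands the init containers run, which you could instead apply on each node yourself; this is a sketch, so consult the Elastic documentation linked above before changing node settings:

```bash
# Raise the mmap count limit that Elasticsearch requires
sudo sysctl -w vm.max_map_count=262144

# Disable swapping so the Elasticsearch heap is never paged out
sudo swapoff -a
```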
With that out of the way, let’s go ahead and remove the privileged init containers from the Elasticsearch YAML files. To do so, assuming you exported the `bluecompute-ce` YAML files to the `bluecompute-os` folder, open each of the three generated Elasticsearch YAML files (typically the client, master, and data resources), delete the `initContainers` section, then save the files.
Make sure to delete the entirety of the `initContainers` section, including both the `sysctl` and `chown` containers and all their respective settings, if present.
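For orientation, the section to remove looks roughly like the following; this is a sketch based on the community Elasticsearch chart, and the exact images and commands may differ in your generated files:

```yaml
initContainers:
  # Privileged container that raises vm.max_map_count on the host
  - name: sysctl
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
```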
Now you should be ready to deploy all of `bluecompute-ce` into the `bluecompute` OpenShift project!
To deploy the `bluecompute-ce` YAML files, use the command below:
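A sketch, assuming the generated files live in the `bluecompute-os` folder:

```bash
# Recursively apply every generated YAML file in the output folder
oc apply --recursive --filename bluecompute-os
```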
Voilà, you have deployed all of `bluecompute-ce` into an OpenShift cluster! To confirm that everything is up and running, run the following command:
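For example, scoping to the project created earlier:

```bash
oc get pods --namespace bluecompute
```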
You may need to run the above command multiple times to get an updated status for all pods. Once every pod reports a Running status (or Completed, for any one-off jobs), all the pods are up and running.
Now that all of the pods are available, it’s time to expose the web service outside the OpenShift cluster in order to access the web application through a web browser.
OpenShift makes exposing the web service outside the cluster very easy: the following command creates an OpenShift Route, which is essentially OpenShift’s version of a Kubernetes Ingress:
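A sketch, assuming the web application’s Kubernetes service is named `web`:

```bash
oc expose service web
```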
Now that the service is exposed with a route, retrieve the web route URL using the following command:
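For example:

```bash
oc get route web
```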
You should see an output with the route URL similar to the following:
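The HOST/PORT column contains the route URL; the output is roughly of this shape (a sketch, not verbatim output):

```
NAME   HOST/PORT                                  SERVICES   PORT
web    web-bluecompute.YOUR_CLUSTER_DOMAIN.com   web        http
```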
Where `YOUR_CLUSTER_DOMAIN.com` is the OpenShift cluster’s domain name and `web-bluecompute` is the CNAME created for the web route.
To validate the application, open a browser window and enter the route URL from above. You should see the web application’s home page, as shown below.
You can reference these instructions to validate the web application’s functionality. You should be able to browse the catalog, log in, place orders, and see those orders listed in your profile (once you are logged in).
To clean up, run the following commands:
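A sketch of the cleanup; deleting the project removes every resource deployed into it:

```bash
# Delete the deployed resources, then remove the project itself
oc delete --recursive --filename bluecompute-os
oc delete project bluecompute
```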
At the time of writing, Helm charts are not officially supported by OpenShift; therefore, the above approach is the closest you will get to deploying workloads that originated from Helm charts into OpenShift while leveraging container best practices.
If you still require the ability to run Helm charts on OpenShift, such as the IBM Middleware Helm charts, I recommend trying out IBM Cloud Private on OpenShift.
If you already adopted IBM Cloud Private on OpenShift and/or are curious about managing Kubernetes workloads across a Hybrid Cloud architecture, check out IBM Multicloud Manager. The tutorial Manage an IBM Cloud Private on Red Hat OpenShift cluster by using Multicloud Manager explains how.
If you are set on the pure OpenShift path, I encourage you to start converting the Kubernetes files we generated into OpenShift Templates, which are the closest thing to a Helm chart you will get in the OpenShift ecosystem.