In this post, we present the IBM Cloud Operator—a Kubernetes operator for deploying managed services on the IBM Cloud.
The IBM Cloud Operator can be used to build portable Kubernetes applications that use managed services from the IBM Cloud.
A brief background on container-based technologies and managing workloads
Container-based technologies have revolutionized the way modern applications are developed, deployed, and operated. These changes have accelerated the adoption of DevOps practices. Containers provide portability and the ability to develop and test code locally, package it together with all the required dependencies, and ship it anywhere.
With this revolution came the next great challenge—how to effectively deploy and manage the new container workloads. Kubernetes has become the leading container orchestration system thanks to its extensibility, scalable design, robustness, and healthy community.
With Kubernetes, you can now deploy, orchestrate, and manage multiple microservices and create portable deployments that can run consistently on development, staging, and production environments.
Kubernetes relies on declarative yaml specifications of desired state, and it uses a set of controllers that run continuously to match the actual state to the desired state. Controllers manage the lifecycle of resources and bring them back to health when there is a deviation from the desired state.
Applications and managed cloud services
There are many cases where an application and all of its dependencies can be deployed in Kubernetes; however, there are also several cases where an application requires managed cloud services. For example, I may need a database and I may want to rely on a managed service operated by my cloud provider, which provides backup, restore, and security compliance.
Similarly, I may want to leverage AI services that are available on the cloud and charged per usage, such as IBM watsonx Assistant. In this case, things become a little more complicated—I would need to create an instance of the required cloud service (or get a reference to an existing service), generate credentials (e.g., an API key and endpoint for access), and then store those in a Kubernetes secret so that my code can access it.
Those steps typically require manual interaction with the cloud via the CLI or UI. They could be automated, but the result is usually brittle scripting code or documentation that defeats the overall goal of a portable application.
Truly portable Kubernetes applications
So what is a truly portable Kubernetes application? It is an application where all the components—including those that are external to Kubernetes, like cloud-managed services—can be described with declarative yaml files. These can be simply deployed to Kubernetes without having to rely on complex scripts to provision external services and credentials.
With a truly portable description of my application and dependent external services, I could provide my set of Kubernetes yaml templates to another developer or operations team, who simply need to run kubectl create to deploy the application and its dependencies without additional human interaction or external scripting.
What is the magic that makes it possible to describe external managed cloud services as Kubernetes yaml? The answer lies in the extensibility of the Kubernetes API, leading to the emergence of Kubernetes operators.
What are operators?
One of the reasons Kubernetes has been so successful as the leading container orchestration project is its extensibility. The introduction of Custom Resources has allowed developers to extend the Kubernetes API to manage resources beyond native objects, such as pods and services.
Furthermore, the Kubernetes Go Client provides a powerful library to write controllers for Custom Resources. Controllers implement closed-loop control logic that runs continuously to reconcile the desired state of a resource with the observed state.
Operators combine application-specific controllers and related custom resources that codify domain-specific knowledge to manage the lifecycle of a resource. The first published operators focused on stateful services running in Kubernetes. In recent years, however, the scope of operators has become broader, and there is now a growing community building operators for a wide variety of use cases. For example, OperatorHub.io provides a catalog of community operators handling many different kinds of software and services.
There are many reasons why operators can be appealing for developers. If a developer is already using Kubernetes to deploy and manage applications or larger solutions, operators provide a consistent resource model to define and manage all of the different components of an application. For example, if an application needs an etcd database, a developer just needs to install the etcd operator and create an EtcdCluster custom resource, as sketched below. The etcd operator will then take care of deploying and managing the etcd cluster for the application, including day-2 operations, such as backup and restore.
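A minimal EtcdCluster resource might look like the following (this sketch follows the community etcd operator's documented example; the cluster name, size, and version are illustrative):

```yaml
# Request a 3-node etcd cluster from the etcd operator.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3
  version: "3.2.13"
```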
Since operators rely on custom resources, which are Kubernetes API extensions, all the existing tooling for Kubernetes works out of the box. There is no need to learn new tools or practices; the same Kubernetes CLI (kubectl) can be used to create, update, or delete pods and custom resources. Role-Based Access Control (RBAC) and Admission Controllers operate the same way for custom resources as well.
For more background on operators, see “Kubernetes Operators Explained.”
The IBM Cloud Operator
The IBM Cloud Catalog provides a broad range of capabilities, including AI, machine learning, data storage, data analytics, integration, messaging, weather, and the Internet of Things. As mentioned earlier, in order to use one of these services, a developer needs to create a service instance by provisioning it from the catalog, using either the CLI or the UI.
With the IBM Cloud Operator, you can now adopt a Kubernetes-native approach to provision and configure services from the IBM Cloud Catalog as part of your Kubernetes application. The operator provides two custom resources: Service and Binding.

Service creates an instance of any service from the IBM Cloud Catalog, while Binding automates the creation of credentials for services and corresponding secrets in Kubernetes. When the user deploys the yaml for a Service and its Binding, the service is instantiated on IBM Cloud and its credentials are stored as a Kubernetes secret, all done automatically. The operator can, optionally, ensure the health of the external services by periodically checking on them. If either the service or its stored credentials are deleted manually, they are automatically recreated on the next health check.
The IBM Cloud Operator also provides the ability to bind to an existing service; in this case, the lifecycle of the service is managed outside of the operator. Therefore, if the service is deleted, it will not be automatically recreated.
Instantiating a Watson Translator managed service on IBM Public Cloud
Below is a showcase of instantiating a Watson Translator managed service on IBM Public Cloud. The Service yaml specifies the type of service and the plan—in this case, the free Lite plan. It also specifies that self-healing is enabled, meaning that if the service is somehow manually deleted from the cloud, the operator will bring it back to life. We used the Kui graphical terminal for the visualizations.
We can also verify on our IBM Cloud account that the service has been provisioned.
Now, let’s try it!
Installing operators from OperatorHub.io
OperatorHub.io provides a catalog of community operators that can be easily installed on any OpenShift 4 cluster. But what if you have an OpenShift 3.11 or upstream Kubernetes cluster? You can still access and install operators from OperatorHub by installing the Operator Lifecycle Manager (OLM) on your cluster as follows:
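(The snippet below follows the install instructions published with OLM releases; the version number is only an example, so check the OLM releases page for the current one.)

```sh
# Install the Operator Lifecycle Manager (OLM) on a vanilla Kubernetes cluster.
# Replace v0.22.0 with the latest release listed on the OLM releases page.
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
```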
Installing the IBM Cloud Operator
Before installing the IBM Cloud Operator from the OperatorHub.io catalog, you need to perform a couple of steps. Since you’ll be provisioning IBM Cloud services, you need an account on IBM Cloud and the IBM Cloud CLI. Once you have your account and the CLI installed, log in to your IBM Cloud account with the IBM Cloud CLI:
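```sh
# Log in to your IBM Cloud account (add --sso if your account uses a federated ID).
ibmcloud login
```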
Set a target Cloud Foundry Org and Space in IBM Cloud with the command:
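```sh
# Target a Cloud Foundry org and space interactively,
# or pass them explicitly: ibmcloud target -o <ORG> -s <SPACE>
ibmcloud target --cf
```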
Check if your default resource group, org, and space are all set with the following command:
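```sh
# Show the currently targeted account, resource group, region, org, and space.
ibmcloud target
```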
If the default resource group is not set, or if you need to use a different resource group, you can set it with the following command:
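```sh
# Target the resource group the operator should use (replace the placeholder).
ibmcloud target -g <RESOURCE_GROUP>
```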
You can then generate a default configuration and secret with your IBM Cloud API key for the operator with the script:
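(The path below points to the configuration script shipped in the operator's GitHub repository at the time of writing; check the project README in case it has moved.)

```sh
# Generate the operator's default config map and a secret containing an IBM Cloud API key.
curl -sL https://raw.githubusercontent.com/IBM/cloud-operators/master/hack/config-operator.sh | bash
```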
You can now install the operator from the catalog with OLM. The catalog provides a URL for the resources to install for each operator; the IBM Cloud Operator can be installed with:
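(The URL below follows the install link shown on the operator's OperatorHub.io page; use the exact URL from that page for the current version.)

```sh
# Install the IBM Cloud Operator through OLM using the resources published on OperatorHub.io.
kubectl create -f https://operatorhub.io/install/ibmcloud-operator.yaml
```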
On OpenShift 4 clusters, the OpenShift console provides access to the catalog and one-click install directly from the catalog.
Once the operator is installed, you can create an instance of an IBM public cloud service using the following custom resource:
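(The template below follows the resource format in the operator documentation; the metadata name is arbitrary, and the apiVersion may differ in newer releases.)

```yaml
apiVersion: ibmcloud.ibm.com/v1alpha1
kind: Service
metadata:
  name: myservice
spec:
  plan: <PLAN>
  serviceClass: <SERVICE_CLASS>
```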
To find the value for <SERVICE_CLASS>, you can list the names of all IBM public cloud services with the command:
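(Assuming a recent version of the IBM Cloud CLI:)

```sh
# List the services (service classes) available in the IBM Cloud catalog.
ibmcloud catalog service-marketplace
```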
Once you find the <SERVICE_CLASS> name, you can list the available plans to select a <PLAN> with the command:
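```sh
# Show the details, including available plans, for a given service class.
ibmcloud catalog service <SERVICE_CLASS>
```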
For example, to create an instance of the Watson Translator Service, you can use the following custom resource:
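(This sketch follows the Watson Language Translator example in the operator documentation; the metadata name is arbitrary.)

```yaml
apiVersion: ibmcloud.ibm.com/v1alpha1
kind: Service
metadata:
  name: mytranslator
spec:
  plan: lite
  serviceClass: language-translator
```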
And then you can create credentials and a secret for the service with the following Binding resource:
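(The Binding references the Service by its metadata name; the operator then generates credentials and stores them in a secret, typically named after the Binding.)

```yaml
apiVersion: ibmcloud.ibm.com/v1alpha1
kind: Binding
metadata:
  name: mytranslator-binding
spec:
  serviceName: mytranslator
```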
True application portability with the IBM Cloud Operator
With the IBM Cloud Operator, you can now rely on true portability for your application. If you have an application that, for example, requires the Watson Language Translator service, you can just deploy the Service and Binding resources along with any other application resources (e.g., deployments, config maps, roles, etc.). And since the IBM Cloud Operator by default picks up your account-specific context (e.g., resource group, region, etc.), your set of templates is portable and can easily be deployed in different contexts. You can feed those templates to a DevOps pipeline or share them with other developers in your organization, who can bring up the whole application and its dependencies simply with kubectl create.
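For example, if all of the templates live in a single directory (the directory name here is just for illustration):

```sh
# Create the managed service, its binding, and the rest of the application in one shot.
kubectl create -f ./manifests/
```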
Learn more about the IBM Cloud Operator
There is more to the IBM Cloud Operator than we could describe in this post. To learn about more advanced features, such as self-healing and the ability to bind to existing services, check out the IBM Cloud Operator project documentation.