July 13, 2021 By Budi Darmawan 7 min read

In this post, we discuss the development and packaging of operators.

With all the hoopla about operators in Kubernetes, what are they? A quick Google search retrieved this definition:

“A Kubernetes operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is both deployed on Kubernetes and managed using the Kubernetes API (application programming interface) and kubectl tooling.”

So, an operator is a mechanism to do something (install, build, manage an application, etc.) in Kubernetes. In my previous post, “Demystifying Operator Deployment in OpenShift,” I discussed the OperatorHub and the deployment process of an operator. In this post, I will discuss operator development and packaging.

Operator processing overview.

An operator runs in a deployment-based pod and manages a specific Custom Resource Definition (CRD). It runs in a loop, continuously watching the custom resources of that definition. For each custom resource discovered in the namespace, the operator pod invokes the reconcile() method. The typical flow of the method is to read the spec of the custom resource, perform actions or check the cluster based on that spec information and then write the evaluation results to the status section of the resource.
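The read-spec, act, write-status loop described above can be sketched in plain Go. This is a conceptual illustration only, with hypothetical types and no Kubernetes dependencies; the actions and stages mirror the example developed later in this post:

```go
package main

import "fmt"

// Hypothetical stand-ins for the spec and status sections of a custom resource.
type Spec struct{ Action string }
type Status struct{ Stage string }

type CustomResource struct {
	Spec   Spec
	Status Status
}

// reconcile mirrors the typical loop body: inspect the desired state
// in Spec, take action and record the outcome in Status.
func reconcile(cr *CustomResource) {
	stage := "Initial"
	switch cr.Spec.Action {
	case "Check":
		// ... verify cluster state ...
		stage = "Ready"
	case "Install":
		// ... install the workload ...
		stage = "Ready"
	case "Upgrade":
		// ... attempt an upgrade ...
		stage = "Failed"
	}
	cr.Status.Stage = stage
}

func main() {
	cr := &CustomResource{Spec: Spec{Action: "Install"}}
	reconcile(cr)
	fmt.Println(cr.Status.Stage) // prints "Ready"
}
```

In a real operator, the spec read and status write are API calls against the cluster, and the framework re-invokes the function whenever the watched resource changes.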

Now, let’s look into more detail on how to build an operator.

Development

Let’s start with how operators are constructed. For that, you will most likely start with Operator SDK, a complete software development tool to build operators that are based on Go, Ansible or Helm. This article focuses on the Go-based operator, as it is the one that provides the most functionality.

First, you initialize a template scaffolding for your operator. For this, you must provide a domain (the qualifier for your operator, similar to your base DNS) and a Git repository path:

$ operator-sdk init --domain cloud.ibm.com --repo github.com/vbudi000/operator
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.6.3
Update go.mod:
$ go mod tidy
Running make:
$ make
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
/home/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go

The result of the init is a skeleton of configuration files with kustomization.yaml files, such as the following:

Scaffolding created by operator-sdk init.

With that scaffolding created, you then must define the main content of the operator (i.e., the operator API). The API consists of the Custom Resource Definition (CRD) and the operator controller program:

$ operator-sdk create api --group=cloud --version=v1alpha1 --kind=Operator1 --resource --controller
Writing scaffold for you to edit...
api/v1alpha1/operator1_types.go
controllers/operator1_controller.go
Running make:
$ make
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
/home/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go

As indicated in the output above, the generated files are as follows:

  • api/<version>/<kind>_types.go: Defines the structure of the CRD
  • controllers/<kind>_controller.go: Defines the processing logic

Define CRD

The CRD is defined in the api/<version>/<kind>_types.go file. The main object of the operator, called Operator1, is defined as follows:

type Operator1 struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Spec   Operator1Spec   `json:"spec,omitempty"`
        Status Operator1Status `json:"status,omitempty"`
}

The development activity mainly adds the CRD fields in the Operator1Spec and Operator1Status structs, which correspond to the spec: and status: sections of the CRD in the YAML definition. The field definitions can be annotated with kubebuilder directives to control generation, validation and processing specific to the fields (see the kubebuilder markers documentation for more info).

The following is an example for the fields:

// Operator1Spec defines the desired state of Operator1
type Operator1Spec struct {
        Foo        string           `json:"foo,omitempty"`
        // +kubebuilder:validation:Enum=Check;Install;Upgrade
        Action     string           `json:"action"`
}

// Operator1Status defines the observed state of Operator1
type Operator1Status struct {
        Bar string `json:"bar,omitempty"`
        // +kubebuilder:validation:Enum=Initial;Ready;Failed
        Stage string `json:"stage"`
}

Once the CRD structure is finalized, you can validate and generate the YAML file to define the CRD:

$ make generate
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
/home/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."

$ make manifests
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
/home/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases

The generated manifest is a YAML file in config/crd/bases/<group>.<domain>_<type>s.yaml. It is the Custom Resource Definition YAML that you can load into Kubernetes, allowing you to create objects to be managed by the operator.
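Once that CRD is applied to the cluster, a user can create instances of it. A minimal custom resource might look like the following sketch, based on the group (cloud), domain (cloud.ibm.com), version and kind used above; the metadata name and field values are illustrative:

```yaml
apiVersion: cloud.cloud.ibm.com/v1alpha1
kind: Operator1
metadata:
  name: operator1-sample
spec:
  foo: example
  action: Install
```

Each such object triggers a reconcile call in the operator controller.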

Define the controller

The controller is a Go program, and the program resides in controllers/<kind>_controller.go. In the code, the following block is the main content that you must modify — the Reconcile function:

// +kubebuilder:rbac:groups=cloud.cloud.ibm.com,resources=operator1s,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cloud.cloud.ibm.com,resources=operator1s/status,verbs=get;update;patch

func (r *Operator1Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
        _ = context.Background()
        _ = r.Log.WithValues("operator1", req.NamespacedName)

        // your logic here

        return ctrl.Result{}, nil
}

As discussed above, the Reconcile function is called for each occurrence of the Custom Resource. The logic flow of the Reconcile function is to read in the Custom Resource (primarily from the spec field), perform its processing and write out to the status field. A simple Reconcile function that checks the action field and writes to the stage field is shown below:

// Fetch the custom resource instance for this request
instance := &cloudv1alpha1.Operator1{}
if err := r.Get(context.TODO(), req.NamespacedName, instance); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
}
action := instance.Spec.Action
stage := "Initial"
switch action {
case "Check":
        // check
        stage = "Ready"
case "Install":
        // install
        stage = "Ready"
case "Upgrade":
        // upgrade
        stage = "Failed"
}
instance.Status.Stage = stage
if err := r.Status().Update(context.TODO(), instance); err != nil {
        return ctrl.Result{}, err
}

With the controller logic defined, the container image that runs the operator controller can be built. Another set of make commands can be run by specifying the target image name:

$ make docker-build IMG=test/operator1:v0.01
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
/home/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/home/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
mkdir -p /home/operator/testbin
test -f /home/operator/testbin/setup-envtest.sh || curl -sSLo /home/operator/testbin/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.6.3/hack/setup-envtest.sh
source /home/operator/testbin/setup-envtest.sh; fetch_envtest_tools /home/operator/testbin; setup_envtest_env /home/operator/testbin; go test ./... -coverprofile cover.out
Using cached envtest tools from /home/operator/testbin
setting up env vars
?    github.com/vbudi000/operator [no test files]
?    github.com/vbudi000/operator/api/v1alpha1 [no test files]
ok   github.com/vbudi000/operator/controllers 8.042s coverage: 0.0% of statements
docker build . -t test/operator1:v0.01
[+] Building 0.2s (17/17) FINISHED
=> [internal] load build definition from Dockerfile
=> => transferring dockerfile: 37B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> [internal] load metadata for gcr.io/distroless/static:nonroot
=> [internal] load metadata for docker.io/library/golang:1.13
=> [internal] load build context
=> => transferring context: 3.69kB
=> [builder 1/9] FROM docker.io/library/golang:1.13
=> [stage-1 1/3] FROM gcr.io/distroless/static:nonroot
=> CACHED [builder 2/9] WORKDIR /workspace
=> CACHED [builder 3/9] COPY go.mod go.mod
=> CACHED [builder 4/9] COPY go.sum go.sum
=> CACHED [builder 5/9] RUN go mod download
=> CACHED [builder 6/9] COPY main.go main.go
=> CACHED [builder 7/9] COPY api/ api/
=> CACHED [builder 8/9] COPY controllers/ controllers/
=> CACHED [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go
=> CACHED [stage-1 2/3] COPY --from=builder /workspace/manager .
=> exporting to image
=> => exporting layers
=> => writing image sha256:42c13589022a91432f48240dd5e89b360bdb7e640891a96796876846c3fc4611
=> => naming to docker.io/test/operator1:v0.01
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

The result is a container image stored in your local machine’s registry. You can push it up to a container registry so that the cluster can run your operator.

Operator packaging

To package the operator, you use a combination of the operator-sdk and the operator package manager tool, opm (see the Operator Lifecycle Manager documentation for more info).

To clarify this process, let’s define the following image terminology:

  • Operator runtime: The runtime image you create for running the operator controller process (created using the make docker-build command).
  • Operator bundle: The container image that contains the manifests to install and activate the operator in a Kubernetes cluster, including roles, role-binding, manager, CRD and others.
  • Operator catalog: The container image that has pointers and lists all operator bundles that are provided in this catalog.

To generate the operator bundle, run the make bundle and bundle-build targets:

$ make bundle bundle-build BUNDLE_IMG=test/operator1-bundle:v0.01 IMG=test/operator1:v0.01
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.3.0
/home/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
operator-sdk generate kustomize manifests -q

Display name for the operator (required):
> Sample Operator 1
Description for the operator (required):
> Sample Demonstration of Operator
Provider's name for the operator (required):
> IBM
Any relevant URL for the provider name (optional):
>
Comma-separated list of keywords for your operator (required):
> sample,operator
Comma-separated list of maintainers and their emails (e.g. 'name1:email1, name2:email2') (required):
> foo@bar.com
cd config/manager && /usr/local/bin/kustomize edit set image controller=test/operator1:v0.01
/usr/local/bin/kustomize build config/manifests | operator-sdk generate bundle -q --overwrite --version 0.0.1
INFO[0000] Building annotations.yaml
INFO[0000] Writing annotations.yaml in /home/operator/bundle/metadata
INFO[0000] Building Dockerfile
INFO[0000] Writing bundle.Dockerfile in /home/operator
operator-sdk bundle validate ./bundle
INFO[0000] Found annotations file                        bundle-dir=bundle container-tool=docker
INFO[0000] Could not find optional dependencies file     bundle-dir=bundle container-tool=docker
INFO[0000] All validation tests have completed successfully
docker build -f bundle.Dockerfile -t test/operator1-bundle:v0.01 .
[+] Building 0.5s (7/7) FINISHED
 => [internal] load build definition from bundle.Dockerfile
 => => transferring dockerfile: 859B
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load build context
 => => transferring context: 9.49kB
 => [1/3] COPY bundle/manifests /manifests/
 => [2/3] COPY bundle/metadata /metadata/
 => [3/3] COPY bundle/tests/scorecard /tests/scorecard/
 => exporting to image
 => => exporting layers
 => => writing image sha256:108d882278d74ed882be6f0c614bd71b8fa4
 => => naming to docker.io/test/operator1-bundle:v0.01
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

The bundle image contains the YAML manifests from the bundle path, which are generated using the kustomize tool from the config directory.

Once you have the operator bundle image created and pushed into an image repository, you add the bundle into an operator catalog (which you can then import into OperatorHub):

$ opm index add --bundles registry/test/operator1-bundle:v0.01 --tag registry/test/myregistry:v0.01 --build-tool docker
INFO[0000] building the index                            bundles="[registry/test/operator1-bundle:v0.01]"
INFO[0005] resolved name: docker.io/vbudi/operator1-bundle:v0.01
INFO[0005] fetched digest="sha256:5e3f2f0bfefe616ad0f93ab536827"
INFO[0005] fetched digest="sha256:87d39e63de0a25170fc8a200ba5ba"
INFO[0005] fetched digest="sha256:d6b1a31370746a31d0286316c437b"
INFO[0005] fetched digest="sha256:108d882278d74ed882be6f0c614bd"
INFO[0005] fetched digest="sha256:e81445e9adf09ec39103799da0650"
INFO[0008] unpacking layer: {application/vnd.docker.image.rootfs.diff.tar.gzip sha256:d6b1a31370746a31d0286316c437b 2272 [] map[] <nil>}
INFO[0008] unpacking layer: {application/vnd.docker.image.rootfs.diff.tar.gzip sha256:e81445e9adf09ec39103799da0650 362 [] map[] <nil>}
INFO[0008] unpacking layer: {application/vnd.docker.image.rootfs.diff.tar.gzip sha256:87d39e63de0a25170fc8a200ba5ba 443 [] map[] <nil>}
INFO[0008] Could not find optional dependencies file     dir=bundle_tmp142493219 file=bundle_tmp142493219/metadata load=annotations
INFO[0008] found csv, loading bundle                     dir=bundle_tmp142493219 file=bundle_tmp142493219/manifests load=bundle
INFO[0008] loading bundle file                           dir=bundle_tmp142493219/manifests file=2-controller-manager-metrics-service_v1_service.yaml load=bundle
INFO[0008] loading bundle file                           dir=bundle_tmp142493219/manifests file=2-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml load=bundle
INFO[0008] loading bundle file                           dir=bundle_tmp142493219/manifests file=2.clusterserviceversion.yaml load=bundle
INFO[0008] loading bundle file                           dir=bundle_tmp142493219/manifests file=cloud.cloud.ibm.com_operator1s.yaml load=bundle
INFO[0008] Generating dockerfile                         bundles="[registry/test/operator1-bundle:v0.01]"
INFO[0008] writing dockerfile: index.Dockerfile247929097  bundles="[registry/test/operator1-bundle:v0.01]"
INFO[0008] running docker build                          bundles="[registry/test/operator1-bundle:v0.01]"
INFO[0008] [docker build -f index.Dockerfile247929097 -t registry/test/myregistry:v0.01 .]  bundles="[registry/test/operator1-bundle:v0.01]"

When the operator catalog image has been built successfully and pushed to a registry, you can add a CatalogSource entry to OperatorHub and start installing your operator. See “Demystifying Operator Deployment in OpenShift” for further instructions.

For example, a CatalogSource entry can be created from the OpenShift console under Administration > Cluster Settings > Global Configuration > OperatorHub > Sources by clicking Create Catalog Source.
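Alternatively, the same CatalogSource can be applied as a YAML manifest. Here is a minimal sketch; the name and displayName are assumptions, and the image refers to the catalog index built with opm above:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry/test/myregistry:v0.01
  displayName: My Operator Catalog
```

OperatorHub polls the catalog image referenced here and lists the bundles it contains.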

Once the Catalog Source is READY, you can see the operator bundle in the OperatorHub.

Summary

In this article, we defined and built a very simple operator, loaded it into a registry and made it available in OperatorHub on a Red Hat OpenShift environment.

Learn more about IBM Garage.

