GitOps has been around for a few years, but it has gained traction recently because of containers and the complexity of deploying and managing container runtime environments consistently.
What is the problem that GitOps is attempting to solve? It automates software operations so that enterprises can improve their software engineering practice. It enables application teams to release more frequently and operate cloud-native applications more effectively.
This blog explores whether GitOps can be applied to edge topologies, especially for creating CI/CD pipelines that can deploy applications to far edge devices. To reiterate, edge encompasses everything from far edge devices to the cloud, with the enterprise edge and network edge along the way.
GitOps is a DevOps practice that uses Git as the single source of truth where the desired configuration state is stored. The focus is on operations automation driven from Git repositories. Although Git is in the name, it is not the only repository that can be used; it is the interfaces that Git provides that make the automation possible. GitOps also uses information extracted from build metadata to determine which packages to build when a particular code change triggers the pipeline:
At its core, the GitOps model uses the controller pattern. This is further aided by the operator pattern from a Kubernetes or OpenShift perspective, wherein operators are software extensions that use custom resources to manage applications and their components.
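To make the controller and operator patterns concrete, here is a minimal sketch of a custom resource; the API group, kind and fields are hypothetical, but this is the kind of declarative object an operator would watch and reconcile against:

```yaml
# Hypothetical custom resource; the API group, kind and fields are illustrative only.
apiVersion: example.com/v1alpha1
kind: EdgeApp
metadata:
  name: sensor-gateway
spec:
  version: "1.4.2"       # desired application version
  replicas: 3            # desired number of instances
  exposeMetrics: true    # desired behavior the operator should configure
```

A controller watching EdgeApp objects would continuously compare this declared spec with what is actually running and act to close any gap.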
We would be remiss not to mention Argo CD, a tool built for GitOps workflows. Argo CD is an open-source, declarative continuous delivery (CD) tool for Kubernetes applications. Implemented as a Kubernetes controller, Argo CD continuously monitors running application definitions and configurations, comparing the current, live state on the cluster against the desired state defined in a Git repository.
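As a rough illustration of how that desired state is declared, below is a minimal Argo CD Application manifest; the repository URL, path and namespaces are placeholders, and on a plain Argo CD install the Application would typically live in the argocd namespace rather than openshift-gitops:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: openshift-gitops      # default namespace of the OpenShift GitOps instance
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git   # placeholder Git repository
    targetRevision: main
    path: apps/sample-app                                    # folder holding the manifests
  destination:
    server: https://kubernetes.default.svc                   # the cluster Argo CD runs on
    namespace: sample-app
```

Argo CD renders whatever it finds under path at targetRevision and reports any drift between that rendered state and the live objects in the destination namespace.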
But GitOps is not a single product, plugin or platform. GitOps workflows help teams manage IT infrastructure through processes they already use in application development. To borrow from a GitLab blog, GitOps requires three core components: GitOps = IaC + PRs or MRs + CI/CD
Red Hat OpenShift operators simplify the installation and automated orchestration of complex workloads. They encode human operational logic to manage services running as Kubernetes-native applications, making day-2 operations easier. An operator is a piece of software running in a pod on the cluster, interacting with the Kubernetes API server. In effect, an OpenShift operator is a custom, application-specific controller.
Red Hat OpenShift makes it easy for developers to adopt GitOps by providing the necessary operators. Once deployed, they can be viewed under the Installed Operators section in the OpenShift Console. The Red Hat OpenShift GitOps operator is based on the upstream Argo CD project, and the Red Hat OpenShift Pipelines operator, which is also deployed, is based on the upstream Tekton project. See Figure 3:
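The console is one way to get the operator; on clusters that are themselves managed declaratively, it can instead be installed with an Operator Lifecycle Manager Subscription. A minimal sketch, assuming the commonly used channel name (channels vary by OpenShift release):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators        # the operator is installed cluster-wide
spec:
  channel: latest                       # assumption: channel names differ by release
  name: openshift-gitops-operator       # package name in the catalog
  source: redhat-operators              # Red Hat operator catalog
  sourceNamespace: openshift-marketplace
```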
The operators and related APIs can then be used to kick off one or more GitOps pipelines that deploy to different environments, pulling the desired configuration from Git. Environments could be the usual dev, test and prod, but they can also span geographical environments like the enterprise cloud, the telco network or edge computing nodes.
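A common way to keep those per-environment differences in Git is a Kustomize base shared by all environments plus one overlay per environment, which Argo CD (and therefore OpenShift GitOps) can render directly. A hypothetical prod overlay might look like this, assuming a recent Kustomize version and a made-up repository layout:

```yaml
# environments/prod/kustomization.yaml  (hypothetical repository layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sample-app-prod      # prod gets its own namespace
resources:
- ../../base                    # manifests shared by dev, test and prod
patches:
- path: replica-count.yaml      # prod-only patch, e.g. a higher replica count
```

Pointing an Application's path at environments/dev, environments/test or environments/prod then selects the corresponding configuration.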
The deployment resources are classified into three areas: infrastructure, services and applications. These areas make it easy to separate and manage the deployment of related resources:
In a previous blog, we discussed DevOps in the edge computing domain; here, we take a look at how GitOps can be applied there. We alluded to the three edges in edge computing:
There is also the cloud or the enterprise data center. Let’s take an in-depth look at these areas. Along with the edge environments, Figure 4 also depicts the three GitOps areas: infrastructure, services and applications.
Edge computing is driving a proliferation of OpenShift and Kubernetes clusters across IT environments, with the potential to reach a massive scale of hundreds to thousands of deployments per customer. The result is that enterprise IT departments must manage multiple independent or cooperating container runtime clusters running on-premises and/or on public clouds.
Ensuring that clusters share the same desired state, and being able to roll a change out (or back) across multiple clouds, is a major benefit that GitOps provides to edge- and IoT-based businesses.
The GitOps paradigm is also applicable at the network edge, since one of the major challenges communication service providers (CSPs) face is the orchestration, automation and management of their networks. While 5G is a boon to consumers, software-defined networks (SDNs), network slicing with different bandwidths and the demand for faster deployment have created challenges for telco providers.
An automated deployment pipeline is one way that CSPs can bring services to customers faster. Having a central repository and a declarative approach to provisioning container infrastructure means faster time to market for new features and change requests. Such a paradigm helps with the provisioning of Virtual Network Functions (VNFs) and Cloud-Native Network Functions (CNFs) at the network edge, and the containerization of network components makes it possible to manage such functions this way. Lastly, because all configuration activity is logged and stored in Git, changes can be tracked, which is critical for compliance and audit purposes. There are a couple of related blogs from WeaveWorks in the references:
GitOps allows organizations to deploy to multiple targets simultaneously and enables fine-grained rollouts. This is extremely useful when deploying applications to hundreds or even tens of thousands of edge nodes that come in different shapes and form factors and use varied communication protocols, especially if the edge nodes are small edge clusters running on an Intel NUC or NVIDIA Jetson.
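One hedged sketch of that fan-out uses Argo CD's ApplicationSet controller (bundled with recent Argo CD and OpenShift GitOps releases): a cluster generator stamps out one Application per registered cluster that carries a given label. The label, repository and paths below are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: edge-workload
  namespace: openshift-gitops
spec:
  generators:
  - clusters:                          # every cluster registered with Argo CD...
      selector:
        matchLabels:
          edge-tier: far-edge          # ...that carries this (hypothetical) label
  template:
    metadata:
      name: 'edge-workload-{{name}}'   # {{name}} and {{server}} come from the generator
    spec:
      project: default
      source:
        repoURL: https://github.com/example/edge-config.git   # placeholder repository
        targetRevision: main
        path: apps/edge-workload
      destination:
        server: '{{server}}'
        namespace: edge-workload
```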
The GitOps framework is beneficial for deploying applications with the Git repository as the single source of truth. ITOps teams look for autonomous deployment, management and operation of applications on edge nodes, which is facilitated by Red Hat OpenShift operators.
The benefit of GitOps is obvious at the network edge and the enterprise edge. The far edge devices present a different challenge because the storage and compute capacity of some of these devices is not large enough to host GitOps services and run applications.
Lightweight Kubernetes distributions, such as K3s and k0s, are meant for IoT and edge use cases. The ability to deploy a lightweight Kubernetes distribution on an edge device allows us to run a GitOps tool like Argo CD there. The device can then adopt the pull model of polling a Git repository for the desired state and synchronizing it with the live state of its local cluster.
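As a minimal sketch of that pull model, assuming Argo CD is installed in the argocd namespace on the device's local cluster and a made-up repository layout, an Application with automated sync keeps the device reconciling itself against Git without anything pushing in from outside:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: device-workload
  namespace: argocd                     # local Argo CD instance on the K3s/k0s cluster
spec:
  project: default
  source:
    repoURL: https://github.com/example/edge-config.git   # placeholder repository
    targetRevision: main
    path: devices/site-01                                  # placeholder per-device folder
  destination:
    server: https://kubernetes.default.svc                 # the device's own cluster
    namespace: device-workload
  syncPolicy:
    automated:
      prune: true        # remove objects that are deleted from Git
      selfHeal: true     # revert manual drift on the device back to the Git state
```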
By using GitOps, you resolve the issues of infrastructure and application configuration sprawl. The built-in GitOps operator in Red Hat OpenShift makes it easy to implement an Argo CD-driven pipeline. Customers of IBM Cloud Paks, including IBM Cloud Pak for Network Automation, can make use of Red Hat operators to install resources and employ the GitOps framework to automate and control the deployment process.
The IBM Cloud Native Toolkit is a great starting point. It is an open-source collection of assets that enable application development and ops deployment.
Special thanks to Hollis Chui and Kavitha Bade for reviewing the article.