In the last couple of years, we have seen more and more enterprises adopt GitOps, a DevOps paradigm that uses Git as the single source of truth for operations.

GitOps has been around for a few years now, but it has gained traction recently because of the rise of containers and the complexity of consistently deploying and managing container runtime environments.

What problem is GitOps attempting to solve? It automates software operations so that enterprises can improve their software engineering practice: application teams can release more frequently and operate cloud-native applications more reliably.

This blog will explore if GitOps can be applied to edge topologies — especially creating CI/CD pipelines that can deploy applications to far edge devices. To reiterate, edge encompasses far edge devices all the way to the cloud, with enterprise edge and network edge along the way.

Please make sure to check out all the installments in this series of blog posts on edge computing.

What is GitOps?

GitOps is a DevOps practice that uses Git as the single source of truth, where the desired configuration state is stored. The focus is on operations automation driven from Git repositories. Although Git is in the name, it is not the only repository that can be used; it is the interfaces that Git provides that enable the automation of operations. GitOps also uses information extracted from build metadata to determine which packages need to be rebuilt when a particular code change is pushed:

Figure 1. GitOps overview.

At its core, the GitOps model uses the controller pattern. From a Kubernetes or OpenShift perspective, this is further aided by the operator pattern, wherein operators are software extensions that use custom resources to manage applications and their components.

We would be remiss not to mention Argo CD, a tool that helps implement GitOps workflows. Argo CD is an open-source, declarative continuous delivery (CD) tool for applications. Implemented as a Kubernetes controller, Argo CD continuously monitors running application definitions and configurations, comparing the current, live state on the cluster against the desired state defined in a Git repository.
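To make this concrete, the desired state is typically declared to Argo CD with an Application custom resource that points at a Git repository and a target cluster. The sketch below is illustrative; the application name, repository URL and path are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook                 # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config.git   # hypothetical Git repo
    targetRevision: main
    path: manifests               # directory in the repo holding the desired state
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs on
    namespace: guestbook
```

Once this resource is applied, the Argo CD controller continuously compares the manifests under `manifests/` in Git against the live objects in the `guestbook` namespace and reports any divergence.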

But GitOps is not a single product, plugin or platform. GitOps workflows help teams manage IT infrastructure through processes they already use in application development. To borrow from a GitLab blog, GitOps requires three core components: GitOps = IaC + PRs or MRs + CI/CD

  • IaC: Infrastructure as Code (IaC) is the practice of keeping all infrastructure configuration stored as code. GitOps uses a Git repository as the single source of truth for infrastructure definitions, and Git tracks every change to that configuration, providing a complete history.
  • PRs or MRs: GitOps uses pull requests (PRs) or merge requests (MRs) as the change mechanism for all infrastructure updates. This is where teams can collaborate via reviews and comments and where formal approvals take place.
  • CI/CD: GitOps automates infrastructure updates using a Git workflow with continuous integration (CI) and continuous delivery (CD). When new code is merged, the CI/CD pipeline enacts the change in the environment. Any configuration drift, such as manual changes or errors, is overwritten by GitOps automation so the environment converges on the desired state defined in Git, thus providing continuous operations (CO):

    Figure 2. CI/CD/CO
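The drift correction described above maps directly onto Argo CD's automated sync policy. A sketch of the relevant stanza, with a hypothetical application name and the source and destination omitted for brevity:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-app                  # hypothetical application name
  namespace: argocd
spec:
  # source and destination omitted for brevity
  syncPolicy:
    automated:
      prune: true       # delete live resources that were removed from Git
      selfHeal: true    # revert manual changes made directly on the cluster
```

With `selfHeal` and `prune` enabled, the controller keeps converging the environment on whatever is merged into Git, which is the continuous-operations behavior shown in Figure 2.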

GitOps in Red Hat OpenShift

Red Hat OpenShift operators simplify the installation and automated orchestration of complex workloads. They help encode human operational logic to manage services running as Kubernetes-native applications, making day-2 operations easier. The operator is a piece of software running in a pod on the cluster, interacting with the Kubernetes API server. An OpenShift operator is essentially a custom controller and can be, in effect, an application-specific controller.

GitOps operator

Red Hat OpenShift makes it easy for developers wanting to use GitOps by providing the necessary operators. Once deployed, they can be viewed under the Installed Operators section in the OpenShift Console. The Red Hat OpenShift GitOps operator is based on the upstream Argo CD project, and the Red Hat OpenShift Pipelines operator, which is also deployed, is based on the upstream Tekton project. See Figure 3:
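Although the operator is usually installed from OperatorHub in the console, it can also be installed declaratively through an Operator Lifecycle Manager (OLM) Subscription, which itself can live in Git. A sketch, noting that the channel name varies by OpenShift release:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest                   # channel names vary by OpenShift release
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Applying this manifest causes OLM to install and keep the GitOps operator up to date on the subscribed channel.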

Figure 3. GitOps-related operators in Red Hat OpenShift.

The operators and related APIs can then be used to kick off one or more GitOps pipelines that can deploy to different environments pulling the desired configuration outcome from Git. Environments could be the usual dev, test and prod but can also span geographical environments like the enterprise cloud, telco network or edge computing nodes.
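One way to drive the same configuration into several such environments is Argo CD's ApplicationSet resource, which stamps out one Application per target from a template. The sketch below is hypothetical; the repository, cluster API endpoints and paths are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: env-apps                    # hypothetical name
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: dev
            url: https://dev-cluster.example.com:6443    # hypothetical cluster endpoints
          - env: prod
            url: https://prod-cluster.example.com:6443
  template:
    metadata:
      name: 'myapp-{{env}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/app-config.git   # hypothetical repo
        targetRevision: main
        path: 'envs/{{env}}'        # per-environment overlay in the repo
      destination:
        server: '{{url}}'
        namespace: myapp
```

Adding a new environment is then just another element in the generator list, committed through the usual pull-request flow.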

The deployment resources are classified into three areas: infrastructure, services and applications. These areas make it easy to separate and manage the deployment of related resources:

  • Infrastructure is where the required namespaces and storage resources are defined.
  • Services is where the various operators needed to set up the service instances are described.
  • Applications is where the applications to be deployed are enumerated.
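One way to mirror this separation in the Git repository itself is a directory per area. The layout below is a hypothetical sketch, not a prescribed structure:

```
gitops-repo/
├── infrastructure/        # namespaces, storage classes, persistent volume claims
│   ├── namespaces.yaml
│   └── storage.yaml
├── services/              # operator subscriptions and operator instances
│   ├── subscriptions/
│   └── instances/
└── applications/          # the applications to be deployed
    ├── app-1/
    └── app-2/
```

Keeping the three areas in separate directories lets each be synced, reviewed and rolled back independently.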

GitOps in edge computing

In a previous blog, we discussed DevOps in the edge computing domain; here, we take a look at how GitOps can be applied in edge computing. We alluded to the three edges in edge computing:

  • Enterprise edge
  • Network edge
  • Device edge (or far edge)

There is also the cloud or the enterprise data center. Let’s take an in-depth look at these areas. Along with the edge environments, Figure 4 also depicts the three GitOps areas: infrastructure, services and applications.

Cloud/enterprise data center

Edge computing is driving a proliferation of OpenShift and Kubernetes clusters in IT centers, with the potential to reach a massive scale of hundreds to thousands of deployments per customer. The result is that enterprise IT departments must manage multiple independent or cooperative container runtime clusters running on-prem and/or on public clouds.

Ensuring clusters have the same desired state — rolling out a change and rolling back a change on multiple clouds — is a major benefit that GitOps provides to edge- and IoT-based businesses.

Network edge

The GitOps paradigm is applicable at the network edge, where one of the major challenges communication service providers (CSPs) face is the orchestration, automation and management of their networks. While 5G is a boon to consumers, software-defined networks (SDNs), network slicing with different bandwidths and faster deployment cycles have created challenges for telco providers.

An automated deployment pipeline is one way that CSPs can bring services to customers faster. Having a central repository and a declarative approach to provisioning container infrastructure means faster time to market for new features and change requests. Such a paradigm also helps with the provisioning of virtual network functions (VNFs) and cloud-native network functions (CNFs) at the network edge; containerization of network components makes it possible to manage such functions declaratively. Lastly, because all configuration activity is logged and stored in Git, changes can be tracked, which is critical for compliance and audit purposes. There are a couple of related blogs from Weaveworks in the references.

Figure 4. GitOps in edge computing.

Enterprise edge

GitOps allows organizations to deploy to multiple targets simultaneously and to roll out fine-grained deployments. This is extremely useful when deploying applications to hundreds or even tens of thousands of edge nodes, which come in different shapes and form factors and use varied communication protocols — especially if the edge nodes are small edge clusters based on an Intel NUC or NVIDIA Jetson.

The GitOps framework can be beneficial in deploying applications with the Git repository as the single source of truth. ITOps teams look for autonomous application deployment, management and operations of edge nodes, which is facilitated by the use of Red Hat OpenShift operators.

Device edge (or far edge)

The benefit of GitOps is obvious at the network edge and the enterprise edge. Far edge devices present a different challenge: the storage and compute capacity of some of these devices is not large enough to host GitOps services alongside running applications.

Lightweight Kubernetes distributions, such as K3s and K0s, are designed for IoT and edge use cases. Deploying a lightweight Kubernetes distribution on an edge device allows us to run a GitOps tool like Argo CD there. The device(s) can then adopt the pull model of polling a Git repository for the desired state and synchronizing it to the live state of the cluster.
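On such a device, the pull model is expressed the same way as on a full cluster: an Argo CD instance on the local K3s cluster polls Git and syncs into itself. A hypothetical sketch, where the repository URL and path are placeholders for a per-site configuration directory:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-device-app             # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/edge-config.git   # hypothetical repo
    targetRevision: main
    path: devices/site-a            # hypothetical per-device configuration directory
  destination:
    server: https://kubernetes.default.svc   # the local K3s cluster itself
    namespace: edge-workloads
  syncPolicy:
    automated:
      selfHeal: true    # keep the device converged even after manual changes
```

Because the device pulls from Git rather than being pushed to, it can tolerate intermittent connectivity and simply re-converge when the link returns.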


By using GitOps, you resolve the issues of infrastructure and application configuration sprawl. The built-in GitOps operator in Red Hat OpenShift makes it easy to implement an Argo CD-driven pipeline. Customers of IBM Cloud Paks, including IBM Cloud Pak for Network Automation, can make use of Red Hat operators to install resources and employ the GitOps framework to automate and control the deployment process.

The IBM Cloud Native Toolkit is a great starting point. It is an open-source collection of assets that enable application development and ops deployment.

Special thanks to Hollis Chui and Kavitha Bade for reviewing the article.
