DevOps at the Edge

How do DevOps and Edge computing interact?

Several people have pointed out, rightfully so, that Edge computing blogs mainly deal with the operational side of things, such as device registration and operations, network management, etc.

This fourth blog in the series on Edge computing touches on DevOps in an attempt to answer questions like the following:

  • Is there any coding to be done when it comes to Edge computing solutions?
  • What do Edge applications look like?
  • Is there a programming model for Edge applications?

If the distributed architecture of Edge computing means running applications and services on those Edge devices, there has to be a methodology to code, test, deploy, and run those apps.

Non-functional requirements (NFRs)

The non-functional requirements (NFRs) for DevOps or DevSecOps are well known—continuous integration, continuous delivery, continuous testing, and continuous deployment, all in a secure framework.

There are a few nuances when it comes to developing and deploying Edge applications and solutions: scale, the variety of devices, application footprint, operating speed, and disconnected operation.

While any DevOps practice needs a framework for delivering consistent, reliable software to devices securely and quickly, the scaling factor is unique to the Edge. Imagine the second-largest bank in the United States—which has 16,220 ATMs across the country—wanting to apply an update to all of its ATMs.

There is a plethora of Edge devices. They come in different shapes, sizes, makes, and models from hundreds of manufacturers. Coding to the lowest common denominator makes things interesting. Some are audio devices, some are visual devices, while others have audio-visual capabilities. But the mantra developers have to keep in mind is “write once, deploy everywhere.”
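
In practice, "write once, deploy everywhere" often comes down to building multi-architecture container images, so the same image tag works on x86 Edge servers and ARM Edge devices alike. A minimal sketch using Docker Buildx; the builder name, registry, and image name below are illustrative:

```shell
# Create (and select) a Buildx builder; requires Docker with the Buildx plugin.
docker buildx create --use --name edgebuilder

# Build one image tag for several CPU architectures and push it to a registry.
# Each device later pulls the variant matching its own architecture.
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t registry.example.com/edge/helloworld:1.0.0 \
  --push .
```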

Then, there is the size of Edge applications. We have all heard of cloud-native development when dealing with the cloud. With Edge, we are looking at Edge-native applications, which tend to have a smaller footprint. And, more importantly, they have to operate at very high speeds, especially when it comes to inferencing at the Edge. Note that there are hardware accelerators that significantly improve the performance of inference applications on the Edge.

Finally, the Edge, unlike the cloud, can be unstable and even disconnected by design. There can be many points of failure in an Edge solution. “Keep it simple” is not a cliché, but a rule of Edge-native applications since they have to be ready to scale back to the cloud at any point. Any and all data at the Edge should be considered ephemeral.
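
Disconnected operation usually shows up in application code as retry logic. A minimal sketch in shell of a retry helper with exponential backoff; the curl upload at the end is a hypothetical example of pushing ephemeral Edge data back to the cloud:

```shell
#!/bin/sh
# retry MAX CMD [ARGS...]: run CMD until it succeeds, backing off 1s, 2s, 4s...
# and giving up after MAX attempts.
retry() {
  max=$1; shift
  delay=1
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Hypothetical usage: upload cached readings once connectivity returns.
# retry 5 curl -fsS -X POST -d @readings.json https://example.com/ingest
```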

Tools, toolchains, and frameworks

Providing tools and a framework for securely delivering consistent and reliable software as fast as possible to all connected devices will be key. The previous blog in this series, “Architecting at the Edge,” showed the IBM Edge Computing Reference Architecture.

[Figure: IBM Edge Computing Reference Architecture]

In it, you will see that Linux and Docker containers are the most common technologies for Edge-based applications, along with Kubernetes for container orchestration.

One could use any toolchain that deploys to Kubernetes or Docker (examples can be found here). A Git repository, an Eclipse or Web IDE, and a delivery pipeline with Jenkins and Terraform would be the main components in the toolchain.

There are two very distinct endpoints in an Edge computing solution—Edge servers and Edge devices. We envision two toolchains. The first toolchain would be used to deploy the Edge server infrastructure.

A sample DevOps toolchain for deploying Edge server infrastructure.


The second toolchain would have at least two pipelines: one would deploy applications to the Edge servers (which are Kubernetes-based container platforms), and the other would deploy to Edge devices based on the ARM architecture (running Docker-based applications).

A sample DevOps toolchain for deploying Edge applications.


Edge applications

Let’s look at developing our first Edge app—you guessed it, a Hello World service, running on a device like a Raspberry Pi. You will need a Docker Hub ID and access to GitHub. The following are the high-level deployment steps:

  • Set up the IBM Edge infrastructure
    • Install the Edge Exchange and an Agreement Bot (agbot)
  • Develop the Edge application
    • Build, test, push, and publish the service to the IBM Edge Exchange
  • Deploy the Edge application
    • Register the Edge node to run this pattern
    • Install the agent on the Edge device and configure it to point to the Edge Exchange
    • The device will make an agreement with one of the Edge Exchange agbots
    • Run the pattern and observe the output
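
With the IBM Edge/Open Horizon `hzn` CLI, the steps above map roughly to the commands below. This is a sketch: exact flags, org names, and file paths vary by release, and the service and pattern definition files are assumed to already exist in ./horizon.

```shell
# On the development machine: publish the service and a pattern to the Exchange.
hzn exchange service publish -f horizon/service.definition.json
hzn exchange pattern publish -f horizon/pattern.json

# On the Edge device: register the node against the published pattern.
hzn register -p IBM/pattern-ibm.helloworld

# Watch the agent negotiate an agreement with an agbot, then check the service.
hzn agreement list
hzn service list
```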

Given that Hello World is a simple application, it would only be deployed to the Edge devices. If it were an Artificial Intelligence (AI) application, the full-blown version would be deployed on the Edge server while a leaner version would be deployed on the Edge devices.

The code for the simple Hello World service is shown below. It outputs a line that says “Hello World” every three seconds:

#!/bin/sh
# Very simple sample Edge service.
while true; do
  echo "$HZN_DEVICE_ID says: Hello World!!"
  sleep 3
done

The detailed steps and code for the HelloWorld service can be found on GitHub.

The link to instructions on how to deploy workloads to the Edge can be found in the references section.

Edge policies

Policies are rules or constraints that give Edge node owners, service code developers, and deployment owners much finer control over where Edge services are placed. Policies can be used to restrict where a service runs, for example, by requiring a particular hardware setup: CPU/GPU constraints, memory constraints, or specific sensors, actuators, and other peripheral devices.
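
As a sketch, a node policy is a small JSON document of properties and constraints that the agent registers with the Exchange. The property names and values below are made up for illustration; `hzn policy update` is part of the Open Horizon CLI:

```shell
# Write a node policy advertising a camera sensor and a GPU (illustrative values).
cat > node.policy.json <<'EOF'
{
  "properties": [
    { "name": "sensor", "value": "camera" },
    { "name": "gpu", "value": true }
  ],
  "constraints": []
}
EOF

# Apply it to this node (commented out: requires a configured hzn agent):
# hzn policy update -f node.policy.json
```

A deployment policy could then target such nodes with a constraint expression along the lines of `sensor == camera AND gpu == true`.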

With the Exchange created, the Service coded, the Node registered, and Policies specified, the device—in this case the Raspberry Pi—should be continuously transmitting the Hello World message as shown below:

Aug 15 18:21:21 raspberrypi workload-58c94e71fece2d994e187d07b6bd179cc798fa0b79ddfe8c017c6fbc4cd9a47f_ibm.helloworld[452]: mynode says: Hello World!!
Aug 15 18:21:24 raspberrypi workload-58c94e71fece2d994e187d07b6bd179cc798fa0b79ddfe8c017c6fbc4cd9a47f_ibm.helloworld[452]: mynode says: Hello World!!
Aug 15 18:21:27 raspberrypi workload-58c94e71fece2d994e187d07b6bd179cc798fa0b79ddfe8c017c6fbc4cd9a47f_ibm.helloworld[452]: mynode says: Hello World!!

The IBM Cloud architecture center offers up many hybrid cloud and multicloud reference architectures, including the newly published Edge computing reference architecture.

For more information on Edge computing, see the first three parts of this series.

Thanks to David Booz and Steven Cotugno for reviewing the article.
