Clusters at the Edge


Most of the earlier blogs in this ongoing series on edge computing talked about devices.

Devices of all kinds: audio devices, video devices, IoT devices, and various sensors and actuators. This post discusses containers deployed on small edge clusters that act as edge nodes, that is, Kubernetes clusters running on Raspberry Pi-class machines or small form factors like the Intel NUC with enough compute and storage. Specifically, we look at clusters running a Kubernetes-based downstream distribution, such as K3s or MicroK8s, or a slimmed-down three-node Red Hat OpenShift Container Platform (OCP) deployment, serving as an edge node in an on-premises location far from a data center.

Please make sure to check out all the installments in this series of blog posts on edge computing:

Why do we need an edge cluster?

Computation-intensive applications require computing and storage capacity for data processing that resource-constrained IoT devices cannot provide. Clusters at the edge give operations teams a dynamic, robust environment that meets requirements for storage, computation, low latency, high performance, and high bandwidth. In addition, some highly available on-premises shared services, such as edge cloud deployments, require the scalability that Kubernetes clusters are designed to provide.

An edge cluster can often serve as a logical resourcing boundary for a business. High-end edge devices are expensive to invest in, and numerous legacy fixed-function or dedicated devices may already have been deployed and budgeted for. Edge cluster technology offers businesses a way to modernize and future-proof their applications using an edge-native approach: such devices are connected to a small-footprint edge cluster running a device management or IoT platform solution. The edge cluster is then managed and operated like an edge device, as described in a previous blog post, “Policies at the Edge.”

Clusters at the edge offer the following key benefits:

  • Scalable: Customers can easily add capacity to the cluster based on demand. The cluster can also be set up to auto-scale, reducing the operations team's time and effort.
  • Distributed: Insights and actions come faster because more of the data is processed and analyzed on the edge cluster, avoiding the expense and delay of sending data to the cloud.
  • Compliant: Processing data at the edge cluster also helps meet regulatory compliance, data residency, and data isolation requirements.
  • Secure: Communication between the application services hosted on the cluster is secured.
  • Highly available: Failover options on the cluster, and the ability to create new nodes as the need arises, increase reliability and keep applications running smoothly.
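As a sketch of the auto-scaling point above, a Kubernetes HorizontalPodAutoscaler can grow and shrink a workload on the edge cluster based on demand. The deployment name and thresholds below are illustrative, not taken from any particular store architecture:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pos-sync-hpa            # hypothetical store workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pos-sync              # the Deployment to scale
  minReplicas: 1
  maxReplicas: 4                # stay within the small cluster's capacity
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods when average CPU exceeds 80%
```

Capping maxReplicas low matters more at the edge than in the cloud: a three-node cluster has a hard ceiling on capacity, so the autoscaler's job is to absorb bursts, not to grow without bound.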

Examples of use cases requiring edge clusters

Let's walk through an example in the retail industry. A retailer wants to pull a specific product off the store shelf or otherwise make it unavailable to purchase due to a safety recall. They would need to update their central inventory system and push out that update to take effect in multiple stores.

An edge cluster in the store, coupled with IoT devices, cameras, and sensors, is well suited to support this scenario. While the store inventory system flags the particular SKU (Stock Keeping Unit) as withdrawn, the store manager is also notified to pull the physical product(s) from the shelves. Simultaneously, the Point of Sale (POS) system is updated to invalidate that particular product barcode. A well-architected edge cluster solution would enable such quick action, at scale, without costly delays or human errors.

Figure 1 shows the edge cluster deployed in a typical grid-layout store with security and inventory cameras, POS systems, an entry sensor, and temperature sensors in freezers:

Figure 1: Store edge cluster architecture.

Let's look at another example, this one in the transportation sector. A cargo ship that has limited connectivity to the network carries hundreds of reefers. Reefer containers are, simply put, large refrigerators carried by container ships to move temperature-sensitive goods such as meat, vegetables, and pharmaceuticals without spoilage. The contents determine the temperature to be maintained inside these reefers.

The reefer containers not only have to maintain a stable temperature inside but also control humidity and promote adequate airflow. Monitoring and managing the thermostats, fans, and other vital componentry of the reefer is best done by an edge cluster, even in the limited connectivity scenario onboard the ship. This configuration will also allow on-vessel monitoring and alerting without requiring cloud-based infrastructure. When the ship reaches a port, the edge cluster would reconnect with the edge hub on the shipping dock or in the cloud. Managing these reefers and other devices at scale, with little to no human intervention, is only possible via a well-architected edge cluster solution.

Prepping the edge cluster

An edge cluster can be small enough to fit on an available shelf in a confined space such as a quick-service restaurant (QSR), a train, an ambulance, a kiosk, a store, a warehouse, or a production floor. We can then deploy workloads relevant to that environment: video detection applications, temperature-sensing applications, ticketing applications, mission-critical voice services, or even AR/VR applications.

As a representative example of a modest Kubernetes cluster installation for the scenarios above, here are some details about implementing it with K3s (https://k3s.io/). Keep in mind that you could also use a similar Kubernetes distribution, such as Minikube or MicroK8s. The following shows a basic one-master, two-worker K3s cluster setup, with the minimal set of commands that need to be run on each node.

K3S_URL is the IP address and port of the master node, where 6443 is the default HTTPS listen port; the workers use it, together with the node token, to join the cluster. Note that, by default, K3s uses containerd as its container runtime, not Docker.
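The node token printed by the master has a recognizable structure: the K10-prefixed portion is a hash of the server's cluster CA certificate, which joining agents use to verify they are talking to the right server, and the final field is the shared join secret. A quick shell illustration, using the token from the commands that follow:

```shell
# K3s node token format: K10<hash-of-server-CA-cert>::server:<join-secret>
TOKEN="K10e1d3513c6e47d402450465d7726ee6ac1240d62dc11521726aba73461e230bbe::server:79917a97f717e29d5858a6d45b5adccd"
CA_HASH="${TOKEN%%::*}"    # everything before '::' -- pins the server CA
SECRET="${TOKEN##*:}"      # everything after the last ':' -- the join secret
echo "CA hash: ${CA_HASH}"
echo "Secret:  ${SECRET}"
```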

Master

```shell
# Install the K3s server on the master node.
# (Do not set K3S_URL here; setting it makes the installer run in agent mode.)
curl -sfL https://get.k3s.io | sh -

# Print the node token that the workers will use to join:
sudo cat /var/lib/rancher/k3s/server/node-token
# K10e1d3513c6e47d402450465d7726ee6ac1240d62dc11521726aba73461e230bbe::server:79917a97f717e29d5858a6d45b5adccd
```

Worker 1

```shell
# Point the agent at the master and supply the join token printed above
export K3S_KUBECONFIG_MODE="644"
export K3S_URL="https://192.168.0.248:6443"
export K3S_TOKEN="K10e1d3513c6e47d402450465d7726ee6ac1240d62dc11521726aba73461e230bbe::server:79917a97f717e29d5858a6d45b5adccd"

# Install the K3s agent and join the cluster
curl -sfL https://get.k3s.io | sh -
```

Worker 2

```shell
# Same steps as worker 1
export K3S_KUBECONFIG_MODE="644"
export K3S_URL="https://192.168.0.248:6443"
export K3S_TOKEN="K10e1d3513c6e47d402450465d7726ee6ac1240d62dc11521726aba73461e230bbe::server:79917a97f717e29d5858a6d45b5adccd"

curl -sfL https://get.k3s.io | sh -
```
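Once both workers have joined, the cluster can be verified from the master. K3s bundles kubectl; the node names below assume the hosts are called worker1 and worker2:

```shell
# List the nodes; all three should report Ready after a minute or so
sudo k3s kubectl get nodes

# Optionally label the workers so they show a role in 'get nodes' output
sudo k3s kubectl label node worker1 node-role.kubernetes.io/worker=worker
sudo k3s kubectl label node worker2 node-role.kubernetes.io/worker=worker

# Smoke-test the cluster with a throwaway deployment
sudo k3s kubectl create deployment hello --image=nginx
sudo k3s kubectl get pods -o wide
```

These commands require the live cluster built above, so run them on the master node itself.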

Completing the edge cluster

From an IBM Edge Application Manager (IEAM) perspective, an edge cluster is similar to an edge device because both have an edge agent installed on them. Figure 2 shows the high-level components of an edge cluster:

Figure 2: Edge cluster architecture.

Edge clusters are basically edge nodes that happen to be Kubernetes-based clusters. So how does an edge cluster fit into the management model? Each edge node (in this case, the edge cluster) is registered with the exchange under the edge cluster owner's organization. The registration consists of an ID and a security token that apply only to that edge cluster. An autonomous agent process runs on the edge cluster and enforces policies set by the edge cluster owner. At the same time, autonomous agreement bots (or agbots) are assigned deployment policies to deploy services to these edge clusters.
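As an illustrative sketch of that registration flow, the commands below use the hzn CLI from the Open Horizon project that IEAM is based on; the property names and file name are hypothetical, and the exchange credentials are assumed to already be set in the environment:

```shell
# node.policy.json -- hypothetical properties describing this cluster
cat > node.policy.json <<'EOF'
{
  "properties": [
    { "name": "example.location", "value": "store-0042" },
    { "name": "example.isCluster", "value": "true" }
  ],
  "constraints": []
}
EOF

# Register the edge cluster's agent with the exchange using its node ID
# and token, attaching the node policy so that matching deployment
# policies can target this cluster
hzn register --policy node.policy.json
```

The agbots then compare these node properties against each deployment policy's constraints; when they match, an agreement is formed and the service is deployed to the cluster.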

The above describes steps in the IBM Edge Application Manager product, which allows for deployment of edge services on an edge cluster, via a Kubernetes operator, thereby enabling the same autonomous deployment mechanisms used with edge devices. This means that the full power of Kubernetes as a container management platform is available for edge services.

The product knowledge center details the steps to install an edge agent on an edge cluster. After that, pertinent applications can be installed on the edge cluster. As you might have guessed, IEAM supports Kubernetes, K3s, Minikube, MicroK8s, and Red Hat OpenShift.

This blog described the unique business value provided by deploying clusters at the edge — not necessarily the far edge, but edge nodes in remote on-premises locations, nonetheless. To reiterate, edge node clustering capabilities help you manage and deploy workloads from a management hub cluster to remote instances of OCP or other Kubernetes-based clusters.

An edge cluster enables use cases at the edge that require co-location of compute with business operations, or that require more scalability, availability, and compute capability than an edge device can support. Furthermore, because of their proximity, edge clusters commonly provide application services that support the services running on nearby edge devices.

A note about Podman

Podman (short for Pod Manager) is a newer, open source container management tool for developing, managing, and running Open Container Initiative (OCI) containers, and it is well suited for edge environments. Podman is provided as part of the libpod library and can be used to create and maintain containers. While it can run Docker-format container images, it currently runs only on Linux-based operating systems. You can learn more about Podman on the project's website.
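For example, Podman can run a container and then emit a Kubernetes manifest for it, which makes it a convenient bridge between single-device and cluster deployments. The container name below is arbitrary; podman run and podman generate kube are standard Podman commands:

```shell
# Run an OCI container with Podman (daemonless, can run rootless)
podman run -d --name web -p 8080:80 nginx

# Generate a Kubernetes YAML manifest from the running container,
# ready to be applied to an edge cluster
podman generate kube web > web.yaml

# Stop and remove the container when done
podman rm -f web
```

These commands assume Podman is installed on the node and can pull the nginx image from a registry.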

The IBM Cloud architecture center offers up many hybrid and multicloud reference architectures. Look for the edge computing reference architecture here.

You can also view the newly published, edge-related automotive reference architecture.

Special thanks to Joe Pearson and David Booz for reviewing the article.

