Most of the earlier blogs in this ongoing series on edge computing talked about devices.

Devices of all kinds: audio devices, video devices, IoT-type devices, and various sensors and actuators. This post discusses containers deployed on small edge clusters that act as edge nodes; that is, Kubernetes clusters running on Raspberry Pi-class machines or in small form factors like the Intel NUC, with sufficient compute and storage. Specifically, clusters running a Kubernetes-based downstream distribution, such as K3s or MicroK8s, or a slimmed-down three-node Red Hat OpenShift Container Platform (OCP) cluster, serving as an edge node in an on-premises location far from a data center.

Please make sure to check out all the installments in this series of blog posts on edge computing.

Why do we need an edge cluster?

Computation-intensive applications require computing and storage capacity that IoT devices, which are often resource-constrained, cannot provide. Clusters at the edge give operations teams a dynamic, robust environment that meets requirements for storage, computation, low latency, high performance, and high bandwidth. In addition, some highly available on-premises shared services, such as edge cloud deployments, require the scalability that Kubernetes clusters are designed to provide.

An edge cluster can often be a logical resourcing boundary for a business. High-end edge devices are expensive investments, and numerous legacy fixed-function or dedicated devices may already be deployed and budgeted for. Edge cluster technology offers businesses a way to modernize and future-proof their applications using an edge-native approach: such devices are connected to a small-footprint edge cluster running a device management or IoT platform solution. The edge cluster is then managed and operated like an edge device, as described in a previous blog post, “Policies at the Edge.”

Clusters at the edge offer the following key benefits:

  • Scalable: Customers can easily add capacity to the cluster based on demand, and the cluster can be set up to auto-scale, reducing the operations team’s time and effort (see the sketch after this list).
  • Distributed: Faster insight and action, because more of the data is processed and analyzed on the edge cluster, avoiding the expense and delay of sending it to the cloud.
  • Compliant: Processing data at the edge cluster also helps meet regulatory compliance, data residency, and data isolation requirements.
  • Secure: Communication between all the application servers hosted on the cluster is secured.
  • Highly available: Failover and failback options on the cluster, plus the ability to create new nodes as the need arises, increase reliability and keep applications running smoothly.
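
As a minimal sketch of the auto-scaling point above, the following commands create a HorizontalPodAutoscaler for a workload running on the edge cluster. The deployment name inventory-service and the thresholds are hypothetical; any deployment on the cluster could be targeted the same way:

    # Scale the (hypothetical) inventory-service deployment between 2 and 5
    # replicas when average CPU utilization exceeds 80%.
    kubectl autoscale deployment inventory-service --min=2 --max=5 --cpu-percent=80

    # Inspect the resulting HorizontalPodAutoscaler.
    kubectl get hpa inventory-service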

Examples of use cases requiring edge clusters

Let’s walk through an example in the retail industry. A retailer wants to pull a specific product off the store shelf or otherwise make it unavailable to purchase due to a safety recall. They would need to update their central inventory system and push out that update to take effect in multiple stores.

An edge cluster in the store, coupled with IoT devices, cameras, and sensors, is well suited to support this scenario. While the store inventory system flags the particular SKU (Stock Keeping Unit) as withdrawn, the store manager is also notified to pull the physical product(s) from the shelves. Simultaneously, the Point of Sale (POS) system is updated to invalidate that particular product barcode. A well-architected edge cluster solution would enable such quick action, at scale, without costly delays or human errors.
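
As a rough sketch of how such a recall could be pushed to each store’s edge cluster, the commands below update a ConfigMap that an in-store POS service is assumed to watch. The names recalled-skus and pos, and the SKU value, are hypothetical:

    # Mark SKU 00012345 as withdrawn in a ConfigMap the POS service watches.
    kubectl create configmap recalled-skus \
      --from-literal=00012345=withdrawn \
      --dry-run=client -o yaml | kubectl apply -f -

    # If the POS service only reads the ConfigMap at startup, restart it
    # so the recall takes effect immediately.
    kubectl rollout restart deployment/pos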

Figure 1 shows the edge cluster deployed in a typical grid-layout store with security and inventory cameras, POS systems, an entry sensor, and temperature sensors in freezers:

Figure 1: Store edge cluster architecture.

Let’s look at another example, this one in the transportation sector. A cargo ship with limited network connectivity carries hundreds of reefers. Reefer containers are, simply put, large refrigerators carried by container ships to move temperature-sensitive goods such as meat, vegetables, and pharmaceuticals without spoilage. The contents determine the temperature to be maintained inside each reefer.

The reefer containers not only have to maintain a stable internal temperature but also control humidity and promote adequate airflow. Monitoring and managing the thermostats, fans, and other vital componentry of the reefers is best done by an edge cluster, even in the limited-connectivity scenario onboard the ship. This configuration also allows on-vessel monitoring and alerting without requiring cloud-based infrastructure. When the ship reaches a port, the edge cluster reconnects with the edge hub on the shipping dock or in the cloud. Managing these reefers and other devices at scale, with little to no human intervention, is only possible via a well-architected edge cluster solution.
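
Purely as an illustration, a monitoring job on the ship’s edge cluster might poll each reefer’s controller and raise a local alert when the temperature drifts above its setpoint, with no cloud connectivity required. The endpoint reefer-042.local, its API path, and the setpoint are hypothetical:

    #!/bin/sh
    # Hypothetical reefer controller endpoint and frozen-cargo setpoint.
    ENDPOINT="http://reefer-042.local/api/temperature"
    MAX_TEMP="-18"   # degrees Celsius

    # Poll once a minute; alert locally via syslog if the reading is too warm.
    while true; do
      TEMP=$(curl -sf "$ENDPOINT") || TEMP=""
      if [ -n "$TEMP" ] && [ "$(echo "$TEMP > $MAX_TEMP" | bc)" = "1" ]; then
        logger -p user.alert "reefer-042 at ${TEMP}C, above ${MAX_TEMP}C setpoint"
      fi
      sleep 60
    done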

Prepping the edge cluster

An edge cluster can be small enough to fit on an available shelf in a confined space such as a QSR (quick-service restaurant), a train, an ambulance, a kiosk, a store, a warehouse, or a production floor. We can then deploy workloads relevant to that environment: video detection applications, temperature-sensing applications, ticketing applications, mission-critical voice services, or even AR/VR applications.

As a representative example of a modest Kubernetes cluster installation for the scenarios above, here is how to implement one using K3s (https://k3s.io/). Keep in mind that you could also use similar Kubernetes distributions such as Minikube or MicroK8s. The setup below is a basic one-master, two-worker K3s cluster, showing the minimal set of commands that need to be run on each node of the three-node topology:

K3S_URL is the master node’s IP address and port, where 6443 is the default HTTPS listen port. K3S_URL and K3S_TOKEN are set only on the workers; if K3S_URL is set when the install script runs, K3s installs in agent mode rather than server mode. Note that, by default, K3s uses containerd as its container runtime, not Docker.

Master

    # Install the K3s server (master).
    curl -sfL https://get.k3s.io | sh -

    # Print the node token that the workers use to join the cluster.
    sudo cat /var/lib/rancher/k3s/server/node-token
    # Example output:
    # K10e1d3513c6e47d402450465d7726ee6ac1240d62dc11521726aba73461e230bbe::server:79917a97f717e29d5858a6d45b5adccd

Worker 1

    # Point the agent at the master’s API server and supply the join token.
    export K3S_KUBECONFIG_MODE="644"
    export K3S_URL="https://192.168.0.248:6443"
    export K3S_TOKEN="K10e1d3513c6e47d402450465d7726ee6ac1240d62dc11521726aba73461e230bbe::server:79917a97f717e29d5858a6d45b5adccd"

    # Install the K3s agent and join the cluster.
    curl -sfL https://get.k3s.io | sh -

Worker 2

    # Run the same commands as on Worker 1.
    export K3S_KUBECONFIG_MODE="644"
    export K3S_URL="https://192.168.0.248:6443"
    export K3S_TOKEN="K10e1d3513c6e47d402450465d7726ee6ac1240d62dc11521726aba73461e230bbe::server:79917a97f717e29d5858a6d45b5adccd"
    curl -sfL https://get.k3s.io | sh -
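
Once both workers have joined, you can verify the three-node topology from the master. K3s bundles kubectl, so nothing beyond the setup above is assumed:

    # Run on the master; all three nodes should report STATUS "Ready".
    sudo k3s kubectl get nodes -o wide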

Completing the edge cluster

From an IBM Edge Application Manager (IEAM) perspective, an edge cluster is similar to an edge device because both have an edge agent installed on them. Figure 2 shows the high-level components of an edge cluster:

Figure 2: Edge cluster architecture.

Edge clusters are basically edge nodes that are Kubernetes-based clusters. So, how is a small cluster managed as an edge node? Each edge node (in this case, the edge cluster) is registered with the exchange under the edge cluster owner’s organization. The registration consists of an ID and a security token that apply only to that edge cluster. An autonomous agent process runs on the edge cluster and enforces policies set by the edge cluster owner. Simultaneously, autonomous agreement bots (or agbots) are assigned deployment policies to deploy services to these edge clusters.

These steps are part of the IBM Edge Application Manager product, which deploys edge services on an edge cluster via a Kubernetes operator, thereby enabling the same autonomous deployment mechanisms used with edge devices. This means that the full power of Kubernetes as a container management platform is available for edge services.

The product knowledge center details the steps to install an edge agent on an edge cluster, after which pertinent applications can be installed on it. As you might have guessed, IEAM supports Kubernetes, K3s, Minikube, MicroK8s, and Red Hat OpenShift.
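
As a rough sketch of the registration flow described above, and assuming the hzn CLI from Open Horizon (the open-source project underlying IEAM) is available alongside the installed agent, registering the cluster node against a node policy might look like the following. The organization, credentials, and policy file name are placeholders:

    # Placeholders: substitute your organization, credentials, and policy file.
    export HZN_ORG_ID="myorg"
    export HZN_EXCHANGE_USER_AUTH="user:password"

    # Register this edge cluster node with a node policy; agbots then match
    # deployment policies against it and deploy services autonomously.
    hzn register --policy node-policy.json

    # List the agreements (service deployments) formed for this node.
    hzn agreement list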

This blog described the unique business value provided by deploying clusters at the edge — not necessarily the far edge, but edge nodes in remote on-premises locations, nonetheless. To reiterate, edge node clustering capabilities help you manage and deploy workloads from a management hub cluster to remote instances of OCP or other Kubernetes-based clusters.

An edge cluster enables use cases at the edge that require co-location of compute with business operations, or that require more scalability, availability, and compute capability than an edge device can support. Furthermore, it is not uncommon for edge clusters to provide application services needed to support services running on edge devices, given their proximity to those devices.

A note about Podman

There is a newer, open-source container management tool for developing, managing, and running Open Container Initiative (OCI) containers. Called Podman (short for Pod Manager), it is an option well suited for edge clusters. Podman is built on the libpod library and can be used to create and maintain containers. It can run Docker-built container images, but it currently runs only on Linux-based operating systems. You can learn more about Podman here.
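
As a brief illustration of why Podman pairs well with Kubernetes-based edge clusters, the commands below run an OCI container and then generate a Kubernetes manifest from it, ready to apply to a cluster. The container name and image are just examples:

    # Run an OCI container (rootless by default in Podman).
    podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

    # Generate a Kubernetes YAML manifest from the running container,
    # which can then be deployed with: kubectl apply -f web.yaml
    podman generate kube web > web.yaml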

The IBM Cloud architecture center offers up many hybrid and multicloud reference architectures. Look for the edge computing reference architecture here.

You can also view the newly published, edge-related automotive reference architecture.

Special thanks to Joe Pearson and David Booz for reviewing the article.
