IBM Edge Computing for Servers

Edge computing is about placing enterprise applications closer to where data is created.

Overview

Edge computing is an important emerging paradigm that can expand your operating model by virtualizing your cloud beyond a data center or cloud computing center. Edge computing moves application workloads from a centralized location to remote locations, such as factory floors, warehouses, distribution centers, retail stores, transportation centers, and more. Essentially, edge computing provides you with the capability to move application workloads anywhere that computing is needed outside of your data centers and cloud hosting environment.

IBM Edge Computing for Servers provides you with edge computing features to help you manage and deploy workloads from a hub cluster to remote instances of IBM Cloud Private or other Kubernetes-based clusters.

IBM Edge Computing for Servers uses IBM Multicloud Manager to control the deployment of containerized workloads to the edge servers, gateways, and devices that are hosted by IBM Cloud Private clusters at remote locations.
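Conceptually, the controller on the hub cluster decides where a containerized workload runs, and the result at each selected location is an ordinary Kubernetes Deployment on the remote edge cluster. The following sketch is illustrative only and does not use the IBM Multicloud Manager placement API; it shows an equivalent rollout expressed directly with the Kubernetes Python client, and the context name, namespace, and image are assumptions for the example.

    # Illustrative sketch: create the kind of containerized workload that the hub
    # cluster would place on a remote Kubernetes-based edge cluster.
    # The kubeconfig context "edge-factory-01", the "edge-apps" namespace, and the
    # image name are assumptions for this example.
    from kubernetes import client, config

    # Target the remote edge cluster directly through its kubeconfig context.
    config.load_kube_config(context="edge-factory-01")

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="sensor-analytics"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "sensor-analytics"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "sensor-analytics"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="sensor-analytics",
                            image="registry.example.com/edge/sensor-analytics:1.0",
                        )
                    ]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="edge-apps", body=deployment)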

IBM Edge Computing for Servers also includes support for an edge computing profile. This profile can reduce the resource usage of IBM Cloud Private when it is installed to host a remote edge server. The profile installs only the minimum services that are needed for robust remote management of these server environments and of the enterprise-critical applications that you host there. With this profile, you can still authenticate users, collect log and event data, and deploy workloads to a single worker node or to a set of clustered worker nodes.
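One simple way to gauge the footprint of an edge server that was installed with this profile is to list the management pods that are actually running on it and compare that list with a full IBM Cloud Private installation. The following sketch uses the Kubernetes Python client; the kube-system namespace and the context name are assumptions based on a default layout and might differ in your installation.

    # Illustrative sketch: list the management pods that run on an edge server so
    # that you can compare its footprint with a full IBM Cloud Private cluster.
    # The kube-system namespace and the context name are assumptions.
    from kubernetes import client, config

    config.load_kube_config(context="edge-factory-01")

    pods = client.CoreV1Api().list_namespaced_pod("kube-system")
    for pod in pods.items:
        print(pod.metadata.name, pod.status.phase)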

Benefits of edge computing

For more information about IBM edge computing solutions, see IBM solutions for 5G and edge computing.

Examples

Edge computing is about bringing your work close to where data is created, and where actions need to be taken. For example, if you operate a factory, your factory floor equipment can include sensors for recording any number of data points that provide details about how your plant is operating. The sensors can record the number of parts that are being assembled per hour, the time that is required for a stacker to return to its starting position, or the operating temperature of a fabricating machine. The information from these data points can be beneficial in helping you determine whether you are operating at peak efficiency, identify the quality levels that you are achieving, or predict when a machine is likely to fail and require preventive maintenance.

In another example, if you have workers in remote locations whose jobs expose them to hazardous conditions, such as hot or loud environments, exhaust or production fumes, or heavy machinery, you might need to monitor those environmental conditions. You can collect information from various sources and process it at the remote locations. Supervisors can use the data from this monitoring to determine when to instruct workers to take breaks or rehydrate, or when to shut down equipment.

In a further example, you can use video cameras to monitor properties: to track foot traffic into retail stores, restaurants, or entertainment venues, to record acts of vandalism or other unwanted activities, or to recognize emergency conditions. If you also collect data from the videos, you can use edge computing to process video analytics locally to help your workers respond more quickly to opportunities and incidents. Restaurant workers can better estimate how much food to prepare, retail managers can determine whether to open additional check-out counters, and security personnel can respond faster to emergencies or alert first responders.

In all of these cases, sending the recorded data to a cloud computing center or data center can add latency to the data processing. This loss of time can have negative consequences when you are trying to respond to critical situations or opportunities.

If the recorded data does not require any special or time-sensitive processing, you can also incur substantial network and storage costs by unnecessarily sending that routine data to a central location.

If any of the collected data is sensitive, such as personal information, you also increase the risk of exposing that data each time you move it away from the location where it was created.

Additionally, if any of your network connections are unreliable, you also run the risk of interrupting critical operations.

Architecture

The goal of edge computing is to apply the disciplines that were developed for hybrid cloud computing to the remote operation of edge computing facilities. IBM Edge Computing for Servers is designed for that purpose.

A typical deployment of IBM Edge Computing for Servers includes an instance of IBM Cloud Private that is installed in your data center to serve as the hub cluster. This IBM Cloud Private instance hosts an IBM Multicloud Manager controller within the hub cluster. The hub cluster is where the management of all of your remote edge servers occurs. IBM Edge Computing for Servers uses IBM Multicloud Manager to manage and deploy workloads from the hub cluster to the remote Kubernetes-based edge servers when remote operations are required.

These edge servers can be installed in remote on-premises locations to make your application workloads local to where your critical business operations physically occur, such as at your factories, warehouses, retail outlets, distribution centers, and more. An instance of IBM Cloud Private and an IBM Multicloud Manager Klusterlet are needed at each of the remote locations where you want to host an edge server. The IBM Multicloud Manager Klusterlet is used to remotely manage the edge servers.
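From the hub cluster, you can check which edge clusters have registered a klusterlet and are available for management. The following sketch assumes that the hub exposes its managed clusters through the Kubernetes cluster registry API (group clusterregistry.k8s.io, version v1alpha1, resource clusters); verify the group, version, and resource names against your IBM Multicloud Manager release before you rely on them.

    # Illustrative sketch: from the hub cluster, list the edge clusters whose
    # klusterlets are registered with IBM Multicloud Manager.
    # The API group, version, and plural below are assumptions; confirm them for
    # your IBM Multicloud Manager release.
    from kubernetes import client, config

    config.load_kube_config(context="hub-cluster")  # assumed context name

    clusters = client.CustomObjectsApi().list_cluster_custom_object(
        group="clusterregistry.k8s.io", version="v1alpha1", plural="clusters"
    )
    for item in clusters.get("items", []):
        name = item["metadata"]["name"]
        conditions = item.get("status", {}).get("conditions", [])
        latest = conditions[-1]["type"] if conditions else "Unknown"
        print(name, latest)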

The following diagram depicts the high-level topology for a typical edge computing setup that uses IBM Cloud Private and IBM Multicloud Manager:

IBM Edge Computing for Servers topology

The following diagram shows the typical high-level architecture for an IBM Edge Computing for Servers system:

IBM Edge Computing for Servers architecture


Concepts

edge computing: A distributed computing model that takes advantage of compute capacity that is available outside of traditional and cloud data centers. An edge computing model places a workload closer to where the associated data is created and where actions are taken in response to analysis of that data. Placing data and workloads on edge devices reduces latency, lowers demands on network bandwidth, increases the privacy of sensitive information, and enables operations during network disruptions.

edge device: A piece of equipment, such as an assembly machine on a factory floor, an ATM, an intelligent camera, or an automobile, that has integrated compute capacity on which meaningful work can be performed and data collected or produced.

edge gateway: An edge server that has services that perform network functions such as protocol translation, network termination, tunneling, firewall protection, or wireless connections. An edge gateway serves as the connection point between an edge device or edge server and the cloud or a larger network.

edge node: Any edge device, edge server, or edge gateway where edge computing takes place.

edge server: A computer in a remote operations facility that runs enterprise application workloads and shared services. An edge server can be used to connect to an edge device, connect to another edge server, or serve as an edge gateway for connecting to the cloud or to a larger network.

edge service: A service that is designed specifically to be deployed on an edge server, edge gateway, or edge device. Visual recognition, acoustic insights, and speech recognition are all examples of potential edge services.

edge workload: Any service, microservice, or piece of software that does meaningful work when it runs on an edge node.