What does it mean to deploy an Edge computing solution, where does it make sense, and what are the common requirements?
The previous blog post (“Rounding out the Edges”) in this three-part series on Edge computing introduced a plethora of terms: NFV (network function virtualization), SDN (software-defined networks), MEC (multi-access edge compute), NaaS (Networking-as-a-Service), and more.
Now, it’s time to bring all these components together and create a deployable architecture.
For even more background on Edge computing, see the following video from Rob High, IBM Fellow and Vice President and CTO of IBM Edge Computing:
Edge servers and devices
The Edge server is a Kubernetes-based platform on 1GB+ hardware that can scale from tens to thousands of servers. It can schedule, manage, and distribute workloads on devices at remote and sometimes disconnected locations, like an oil rig, a warehouse, an airplane, or a cruise ship.
Edge devices can communicate amongst themselves as well as back to the Edge server. Examples of Edge devices—and there are millions of them—include video cameras, sensors, drones, industrial robots, and other ‘applied for purpose’ devices.
The challenge lies in managing and monitoring all of these devices, especially at scale and in dynamic environments. In Figure 1, the small outermost circles depict devices. These are typically controlled by the Edge servers (depicted as medium-sized circles) or can communicate directly with the large Edge management node.
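To make the management problem concrete, here is a minimal sketch of one piece of it: an Edge server tracking heartbeats from its fleet of devices and flagging the ones that have gone quiet. The class name, device IDs, and the 60-second timeout are all illustrative assumptions, not part of any specific product.

```python
import time

STALE_AFTER_S = 60  # hypothetical heartbeat timeout, tune per deployment


class DeviceRegistry:
    """Tracks last-seen heartbeats from edge devices (illustrative sketch)."""

    def __init__(self):
        self._last_seen = {}  # device_id -> timestamp of last heartbeat

    def heartbeat(self, device_id, now=None):
        """Record a heartbeat from a device's agent."""
        self._last_seen[device_id] = now if now is not None else time.time()

    def stale_devices(self, now=None):
        """Return devices that have not reported within STALE_AFTER_S seconds."""
        now = now if now is not None else time.time()
        return [d for d, t in self._last_seen.items() if now - t > STALE_AFTER_S]
```

A real Edge management node would layer authentication, retries, and alerting on top of this, but the core bookkeeping is the same: remember when each device last checked in, and surface the stragglers.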
It is important to note that these Edge devices not only run the applications but also have intelligent compute resources, which allows AI functions and visual or acoustic analytics to be done at the point where things are happening and data is captured. These workloads can be Docker-based apps capable of running on many types of hardware.
We have previously described these as systems with situational awareness: sense-making systems that compose technologies and turn insights into action to assist humans. This is the underlying idea behind Edge computing patterns. Businesses that better understand the people, assets, and work at the Edge will make better decisions.
What kind of applications can these devices actually host and run? The obvious answer is small-footprint apps. Deciding which business logic should run in the cloud and which needs to be moved out to the Edge becomes an important question. In this particular context, you will hear about Edge workloads.
The term "Edge workload" is often used as a synonym for an Edge service. More generally, an Edge workload is any bit of software functionality that has utility when running on an Edge node. Most often, this functionality is delivered as a microservice but it can have other forms as well. Visual recognition, acoustic insights, and speech recognition are all examples of Edge services.
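Since Edge workloads are most often delivered as microservices, here is a minimal sketch of what one might look like on a device: a tiny HTTP service wrapping a local inference function. The `classify` function and its threshold are stand-ins for a real model; everything here uses only the Python standard library.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread


def classify(reading):
    """Stand-in for a real model; flags readings above a hypothetical threshold."""
    return "anomaly" if reading > 75.0 else "normal"


class EdgeServiceHandler(BaseHTTPRequestHandler):
    """Accepts a JSON reading via POST and returns a classification."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps({"label": classify(body["value"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the device's console quiet


def serve(port=0):
    """Start the service on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), EdgeServiceHandler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In practice, a service like this would be packaged as a container image and scheduled onto devices by the Edge server, but the shape—a small, self-contained endpoint doing inference close to the data—is the essence of an Edge workload.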
Edge solutions are usually multi-layered distributed architectures encompassing and balancing the workload between the Edge layer, the Edge cloud or Edge network, and the Enterprise layer. Furthermore, when we talk about the Edge, there are the Edge devices and the local Edge servers.
You will notice that Edge computing architectures are an expansion of IoT (Internet of Things) architectures and use terms like OT for operational technology. Specifically, you will see the convergence of IT (Information Technology) and OT environments (i.e., the integration of the Internet, computers, and Wi-Fi access points into industrial networks). The transformation and convergence of IT and OT technologies has potential to deliver enormous value over the next decade.
That said, not all Edge computing layers are needed in every scenario. In a disconnected scenario, for example (as on an oil rig platform, or wherever Cloud connectivity is unavailable), Edge devices can and will continue to operate since they only have to communicate with the Edge servers.
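A common pattern for surviving these disconnected stretches is store-and-forward: the device buffers readings locally while the uplink is down and flushes them once connectivity returns. The sketch below is a simplified, assumed implementation (the class and parameter names are illustrative); a bounded buffer drops the oldest readings first when storage runs out.

```python
from collections import deque


class StoreAndForward:
    """Buffers readings while the uplink to the Edge server is down (sketch)."""

    def __init__(self, send, max_buffered=1000):
        self._send = send  # callable that uploads one reading to the Edge server
        self._buffer = deque(maxlen=max_buffered)  # oldest readings dropped first

    def record(self, reading, connected):
        """Send immediately when connected; otherwise queue locally."""
        if connected:
            self.flush()
            self._send(reading)
        else:
            self._buffer.append(reading)

    def flush(self):
        """Drain any backlog accumulated while disconnected, oldest first."""
        while self._buffer:
            self._send(self._buffer.popleft())
```

The key design choice is the bounded buffer: a device with 1GB of local storage cannot queue forever, so it must decide what to discard when a disconnection outlasts its capacity.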
Edge is IoT at enormous scale: as the number of devices and the volume of data they generate grow, current centralized architectures will run into bandwidth bottlenecks and latency issues.
Edge computing addresses these challenges, as you know, by moving the processing to the edge of the network. With that in mind, Figure 2 shows a high-level layout of Edge computing architecture. From left to right, the four layers or regions depict the following:
- Edge devices: The Edge and IoT devices are equipped to run analytics, apply AI rules, and even store some data locally to support operations at the Edge. The devices can handle analysis and real-time inferencing without involving the Edge server or enterprise layer. Driven by economic considerations and form factors, an Edge device typically has limited compute resources. It is common to find Edge devices with ARM or x86 class CPUs with one or two cores, 128 MB of memory, and perhaps 1GB of local persistent storage.
- Edge servers: Edge servers are used to deploy apps to the devices. They are in constant communication with the devices by using agents installed on each of the devices. These Edge servers maintain a pulse on the plethora of devices, and if something more than inferencing is needed, data from the devices is sent to the Edge server for further analysis. These are general-purpose racked computers located in a remote operations facility, like a factory, retail store, hotel, distribution center, or bank. They could have 8, 16, or more cores of compute capacity, 16GB of memory, and several hundred GBs of local storage.
- Edge Cloud: New networking technologies have resulted in the Edge Cloud (or micro data center), which can be viewed as a local cloud for devices to communicate with. Telecommunication companies might call it the Edge network. It reduces the distance that data from the devices must travel and thus decreases latency and addresses bandwidth issues, especially with the advent of 5G. This region also offers more analytical capabilities and additional storage for analytical and data models.
- Enterprise hybrid multicloud: This region offers the classic enterprise-level model storage and management, device management, and especially enterprise-level analytics and dashboards. This can be hosted in the Cloud or in an on-premises data center.
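One way to think about the placement question raised earlier—which logic runs in the cloud and which at the Edge—is as a resource fit: run each workload in the leftmost layer that can host it. The sketch below encodes this rule using memory figures loosely based on the ones above; the Edge-cloud and enterprise capacities are assumptions for illustration.

```python
# Capacities in MB of memory, loosely based on the figures in the text.
# The edge-cloud and enterprise values are illustrative assumptions.
LAYERS = [
    ("edge device", 128),
    ("edge server", 16 * 1024),
    ("edge cloud", 256 * 1024),
    ("enterprise", float("inf")),
]


def place(workload_mb):
    """Return the leftmost (closest-to-the-data) layer that can host the workload."""
    for name, capacity_mb in LAYERS:
        if workload_mb <= capacity_mb:
            return name
    raise ValueError("workload exceeds all layer capacities")
```

Real placement decisions weigh far more than memory—latency requirements, data gravity, privacy, and connectivity all matter—but "as far left as it fits" captures the bias toward the Edge that the architecture encourages.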
Edge reference architecture
Now that we know the core set of components that might be needed in an Edge solution, we can attempt to create an Edge computing reference architecture as shown in Figure 3.
Notice the push towards moving compute and storage components to the left of the diagram (i.e., the Edge) and away from the data center. This movement to the “Edge” of the network, away from the data center and closer to the user, cuts down on the amount of time it takes to exchange information and make decisions, when compared with traditional centralized cloud computing.
You will also notice the same software components duplicated in each layer or region; this addresses the earlier question regarding applications. A simplistic view is to envision the footprint of those deployed applications as being small, medium, and large when going from left to right.
While you can do real-time inferencing at the Edge devices, rule-based analytics, machine learning, and model improvement have to be done at the layers to the right—like the Edge server or even the Enterprise. Similarly, there is limited storage on the Edge devices, so AI and data models are stored in the Edge server or the Edge Cloud.
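This split can be sketched as tiered inference: the device answers with its small local model when it is confident, and escalates ambiguous samples to the heavier model on the Edge server. Both models below are trivial stand-ins, and the confidence threshold is an assumed tuning parameter.

```python
def device_infer(sample):
    """Cheap local model (stand-in): returns (label, confidence in [0, 1])."""
    score = sample.get("vibration", 0.0)
    label = "fault" if score > 0.8 else "ok"
    return label, min(abs(score - 0.5) * 2, 1.0)  # far from 0.5 -> more confident


def server_infer(sample):
    """Heavier model hosted on the Edge server (stand-in)."""
    return "fault" if sample.get("vibration", 0.0) > 0.6 else "ok"


def classify(sample, confidence_floor=0.7):
    """Answer locally when confident; otherwise escalate to the Edge server."""
    label, confidence = device_infer(sample)
    if confidence >= confidence_floor:
        return label, "device"
    return server_infer(sample), "edge server"
```

The payoff is bandwidth and latency: the clear-cut majority of samples never leave the device, and only the uncertain ones consume a round trip to the server.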
While some would like to keep devices (specifically IoT devices) dumb, I think otherwise: let’s see how much intelligence we can put into those tiny devices. The challenge is the management and maintenance of those devices.
The IBM Cloud architecture center offers up many hybrid and multicloud reference architectures. Look for the addition of a new Edge Cloud reference architecture soon that addresses specific industry solutions.
We would like to know what you think. For more information on Edge computing, see the first two parts of this series and a few other important references:
- Part 1: “Cloud at the Edge”
- Part 2: “Rounding Out the Edges”
- Emerging Innovation Spaces dimension
- Cloud adoption
- Keep the Edges Dumb: Business Logic for IoT Sensors
Thanks to Ryan Anderson for reviewing the article.