Part two of a two-part series examining these important cloud computing architectures.
My previous post described the differences between the following cloud environments:
- Hybrid cloud
- Multicloud
- Distributed cloud
In general, I see these three patterns as stepping stones as customers progress through their cloud transformation. Taking advantage of the flexibility and consistency of distributed cloud is a natural progression from hybrid cloud and multicloud, and building on distributed cloud, customers can gain additional efficiency by utilizing the edge.
The next step: Edge computing
Edge computing fits in across hybrid cloud and multicloud and is enhanced by distributed cloud. At its core, edge computing means running servers closer to where data is created.
To understand edge computing, consider where data comes from. While data primarily lives in the cloud, where applications run alongside operations tools, analytics, and more, the data itself is created by users interacting with devices. Even a simple action like tapping an application on your phone generates data.
The following video provides a deeper dive on edge computing:
How edge devices and edge servers work together
Edge devices drive edge computing, and they're already widespread and multiplying fast. To get a sense of the scale, imagine you run a delivery company and visit a warehouse where employees use the following devices:
- Shipping equipment monitors
- Camera monitors
- Mobile phones
All of these edge devices collect data that is accessible digitally. Edge devices differ from edge servers, which are pieces of IT equipment designed for edge computing. In our example, edge servers could sit in the warehouse, collecting and processing data from those edge devices.
Using an edge server, rather than forwarding that data to the cloud for processing and sending results back to the edge devices, gives you the following advantages:
- Process data where it is created
- Reduce overall latency
- Separate responsibilities
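The first two advantages can be sketched in a few lines: the edge server aggregates raw device readings locally and sends only a compact summary (plus any alerts) upstream, instead of shipping every reading to the cloud. The device names, reading format, and threshold below are illustrative, not from any specific product.

```python
from statistics import mean

def summarize_readings(readings, threshold=75.0):
    """Aggregate raw device readings on the edge server and keep only
    what the cloud actually needs: a small summary plus out-of-range alerts."""
    alerts = [r for r in readings if r["value"] > threshold]
    return {
        "count": len(readings),
        "avg": mean(r["value"] for r in readings),
        "alerts": alerts,
    }

# Hypothetical raw readings from edge devices in the warehouse
raw = [
    {"device": "dock-3-scanner", "value": 42.0},
    {"device": "freezer-cam-1", "value": 81.5},
    {"device": "dock-1-scanner", "value": 55.0},
]

summary = summarize_readings(raw)
# Only the compact summary crosses the network, not every raw reading,
# which is where the latency and bandwidth savings come from.
```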
Many executives at telecommunications companies are looking closely at edge computing for these reasons: much of their computation needs to happen at the edge, such as at cell towers. In the warehouse example, where edge devices are generating all that data, you could set up a Kubernetes cluster on an edge server running within that warehouse.
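As a purely illustrative sketch of that warehouse setup, a Kubernetes Deployment for a telemetry-processing workload on the edge cluster might look like the following. The image name, labels, and node-selector key are assumptions for the example, not part of any specific product:

```yaml
# Hypothetical Deployment for a telemetry processor on the warehouse's
# edge cluster; image, labels, and node-selector key are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: warehouse-telemetry
  labels:
    app: warehouse-telemetry
spec:
  replicas: 2
  selector:
    matchLabels:
      app: warehouse-telemetry
  template:
    metadata:
      labels:
        app: warehouse-telemetry
    spec:
      nodeSelector:
        node-role/edge: "true"   # schedule only onto edge nodes (assumed label)
      containers:
      - name: processor
        image: registry.example.com/telemetry-processor:1.0
        resources:
          limits:
            cpu: "500m"
            memory: 256Mi
```

The same manifest shape works on any conformant Kubernetes cluster, which is what makes the consistency story in the next section possible.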
How distributed cloud plays a part in edge computing
Now assume your delivery company is scaling quickly, with multiple warehouses and millions of edge devices. Each warehouse has its own edge server. How do you maintain all of those servers and devices individually? An ops administrator can’t perform that task easily, especially if the setup involves “bring your own” or “manage your own” clusters as you grow in scale.
With distributed cloud, a single control plane allows you to gain consistency across container-based platforms, like Kubernetes and Red Hat OpenShift. For example, you could register raw resources — such as VM hosts at the edge — and then use a centralized operational environment to deploy Kubernetes clusters on demand.
For the edge computing use case, this is a lifesaver. You can manage ever-growing edge environments from a single control plane, even though the infrastructure is “distributed” across multiple “edges.” Servers sitting in your warehouses get the same operational consistency as your cloud-based Kubernetes clusters.
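That single-control-plane idea can be sketched as a reconciliation loop: the control plane holds one desired state and pushes it to every registered edge cluster that has drifted. The cluster names and in-memory "apply" step below are illustrative; a real distributed-cloud service handles authentication, drift detection, and staged rollout.

```python
# One desired state, defined centrally (illustrative values)
DESIRED_STATE = {"app": "warehouse-telemetry", "version": "1.0", "replicas": 2}

# Hypothetical registry of edge clusters and their current (actual) state
edge_clusters = {
    "warehouse-east": {"app": "warehouse-telemetry", "version": "0.9", "replicas": 2},
    "warehouse-west": DESIRED_STATE.copy(),   # already up to date
    "warehouse-north": {},                    # freshly registered, nothing deployed
}

def reconcile(clusters, desired):
    """Bring every registered cluster to the same desired state and
    return the names of the clusters that needed an update."""
    updated = []
    for name, actual in clusters.items():
        if actual != desired:
            clusters[name] = desired.copy()
            updated.append(name)
    return updated

changed = reconcile(edge_clusters, DESIRED_STATE)
# After the loop, every warehouse runs the same configuration, no matter
# how many edges the infrastructure is distributed across.
```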
That consistency is a big reason to incorporate distributed cloud with edge computing. You can certainly conduct edge computing without distributed cloud, but you’ll pay the price with significant overhead.
Distributed cloud is ideal not only for edge computing, but also alongside hybrid cloud, multicloud, or both. To learn more about IBM’s distributed cloud offering, IBM Cloud Satellite, click the link below.