What do oil platforms, automobiles, and smart phones have in common? It may not be obvious, but they are in fact all important examples of edge computing.

The concept of edge computing has evolved significantly over the last half century. In the past, an oil platform in the North Sea might have offered an excellent example of the "edge" of an organization's IT infrastructure or internal network. The North Sea lies between the UK and northern Europe and boasts 184 offshore rigs, the highest number of any region in the world.[1] Because network bandwidth may be minimal and connectivity intermittent at best in the remote, stormy North Sea, some compute resources sit on the oil platforms themselves, far from company headquarters. But building a full data center in the North Sea isn't necessary; it would just waste money and resources.

Traditionally, branch offices, factories, remote operational locations, research stations, and similar environments were all common examples of edge locations. But the advent of new technologies and architectures such as smart devices and the Internet of Things (IoT) is ushering in a whole new paradigm of edge computing. Now cars, for example, have essentially become edge locations. And one of the most common objects on the planet today, the smart phone, is an edge device.

Interestingly, artificial intelligence (AI) is poised to drive an explosion in edge computing demand. Imagine when the edge-located car is an AI-powered autonomous driving (AD) vehicle. On top of all the telemetry cars already produce and broadcast to home base, an AI/AD system is expected to generate terabytes of data per vehicle per day, and hundreds of exabytes across entire AD initiatives.

Such data streams will overwhelm IoT backbones. Add to that flood the increased traffic and heightened user expectations that come with the rollout of new 5G networks. Bandwidth may increase substantially with 5G, but demand and usage are set to grow even faster.

Edge computing offers a powerful strategy to help alleviate future network congestion driven by new technologies such as AI, the IoT, and 5G. What if edge devices didn't call home? Or at least called the neighbors first? What if a new breed of edge installations evolved, where intermediate compute resources intercepted much of the raw data streaming in from AD vehicles, smart phones, and rich-media augmented reality games and entertainment clients?

Perhaps not so different from North Sea oil platforms or remote office locations, this new breed of IT infrastructure edge installations would provide the initial data processing resources much closer to the individual AD vehicle, smart phone, and gaming console. Local computing resources could help manage automobile traffic flows or provide augmented reality data for smart phones, for example, without the need to “phone home” to a central location.
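The core idea of processing data near its source can be sketched in a few lines. The function and threshold below are hypothetical, purely illustrative names; the point is that an edge node forwards only a compact summary (plus any urgent outliers) instead of every raw sample, cutting upstream traffic dramatically.

```python
from statistics import mean

def summarize_telemetry(readings, alert_threshold=90.0):
    """Aggregate raw sensor readings at an edge node.

    Returns a compact summary to forward upstream, plus any
    individual readings that exceed the alert threshold and
    therefore warrant immediate transmission.
    """
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
    }
    alerts = [r for r in readings if r > alert_threshold]
    return summary, alerts

# A batch of raw sensor samples (hypothetical values)
raw = [72.1, 73.4, 95.2, 71.8, 70.9]
summary, alerts = summarize_telemetry(raw)
# Only the three-field summary and the single outlier travel
# to headquarters, instead of all five raw samples.
```

In a real deployment, the aggregation window, the set of statistics, and the alert rules would be tuned to the application, but the bandwidth-saving pattern is the same: reduce locally, transmit selectively.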

How would we benefit? Clearly, network traffic, and the congestion and contention it causes, would be reduced. Data needed for analytics-driven business decision-making back at corporate HQ would still be available, just not on as short a fuse. Just as importantly, local processing would slash the response times of nearby client applications, leading to better end-user experiences. In fact, the business success of the 5G rollout may hinge largely on the quality of those experiences. When movies download faster, when AD vehicles inspire confidence, when game avatars move fluidly, when shopping apps respond instantly, end users are much happier, and they buy more.

But the 21st century versions of edge computing won't happen by magic, though the results may seem almost magical. Edge compute solutions will still struggle against the same sorts of challenges faced in the North Sea. Cost is always a big concern. And cost is always tied to efficiency, which is often linked to scalability and flexibility. Performance can't be sacrificed. And who will be out there on the edge managing remote installations?

Find the answer in our blog post on IBM Storage solutions for the edge.
