A closer look at edge analytics—the process of collecting and analyzing data from IoT-type devices and then creating actionable insights in real time.

This is the eighth installment of a blog series on edge computing. In a prior post, we talked about machine learning modeling at the edge and mentioned how machine learning (ML) models are built and deployed to edge nodes. But what about the video feeds and other unstructured data being generated by all those Internet of Things (IoT) devices? Can all that data be analyzed, and can results be produced in real time? How is that done? If the data cannot be analyzed at the edge in real time, where is it sent, in what format, and how quickly can it be analyzed? Finally, does that data need to be stored, and, if so, where and why? This blog post attempts to answer such questions about what some call “edge analytics” or “AI at the edge.”


What is edge analytics?

Edge analytics is simply the process of collecting data, analyzing it, and creating actionable insights in real time, directly from the IoT devices that generate the data. Some might argue that this is just edge computing; in fact, edge analytics takes things to the next level, wherein more data is captured and more complex analytics are performed before quick actions are taken. Edge computing is akin to the if/then construct in software programming; edge analytics takes the what-if approach.

Artificial intelligence (AI) purists would say that edge analytics deals with prediction (inference): applying the knowledge captured in a trained neural network model to infer a result from new data.
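To make the distinction concrete, here is a minimal sketch in Python. The threshold rule and the model object are hypothetical placeholders rather than any specific product's API; the point is the contrast between a fixed if/then rule and a trained model scoring a what-if outcome.

```python
# Hypothetical sketch: a fixed if/then rule vs. model inference.
# The model object is a placeholder for any deployed, trained model.

SENSOR_LIMIT_C = 85.0  # threshold chosen at design time

def edge_computing_rule(temperature_c: float) -> bool:
    """If/then construct: act only when a known limit is crossed."""
    return temperature_c > SENSOR_LIMIT_C

def edge_analytics_inference(recent_readings: list, model) -> bool:
    """What-if approach: a trained model scores the recent window of
    readings and predicts whether trouble is likely, even before any
    single reading crosses a fixed limit."""
    probability = model.predict_failure_probability(recent_readings)
    return probability > 0.9  # illustrative cutoff
```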

Where should data be analyzed?

The fact is that data generation is outpacing network capacity. So, we have to be intelligent about what data to analyze, what data to send to the cloud for storage, and—most importantly—where the data should be analyzed. While the easiest answer to these questions is “it depends,” there are business and technical factors that shape the recommendation.

Two factors dictate the answer: how critical it is to analyze the data in real time, and whether additional analysis needs to be done with that data later. Then there is the storage requirement (or lack of one) to meet business and jurisdictional compliance obligations.

Some say that the cloud is not a great place for real-time analytics. Sending all the data to the cloud is not the answer, because most of the data stored in the cloud is never analyzed; it ends up in some database or bit bucket and just stays there.

Taking the example of a remote camera capturing video, some of the pros and cons of analytics on the edge vs. analytics on the server are captured in the table below:

Analytics depends on situational awareness

Situational awareness is the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status. That definition is borrowed from Wikipedia, and the three levels of situational awareness are shown in the graphic below. Given that time is the most important aspect of situational awareness, we can say, by extension, that time is a driving force for analytics, especially analytics at the edge:

Figure 1: Three levels of situational awareness.

Events at the edge entail analyzing what the camera is seeing or what the sensor is sensing in real time so that decisions can be made quickly and immediate action can be taken. When two cars are on a collision path, there is no time to send the information to the cloud or notify someone; the consequences of staying on the current path can be envisioned, and a collision can be avoided by taking immediate action. When a smart camera watching a painting robot in an auto manufacturing plant sees the wrong amount of paint being applied to a car body part, it necessitates a corrective action. All this is possible only with pre-built models deployed on such devices or systems.

But what about new or hitherto un-envisioned situations? In construction zones, cameras can be trained to detect someone not wearing a hard hat and either sound an alarm or notify the site supervisor. Entry sensors can detect whether people are wearing a badge or carrying weapons. During a crisis like a pandemic, we would want those same devices to detect health-related items like face masks and gloves.

The existing models would have to be enhanced, or new ML models deployed, so that those edge devices can detect and analyze such situations and take the necessary action. The resulting action is programmable and depends on the specific situation: alarms could be activated, the appropriate personnel could be notified, or people could be barred from entering. That is the power of edge analytics.
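To illustrate the shape of such a deployment, the snippet below sketches an inference-and-action loop for the hard-hat scenario. Everything in it is a hypothetical stand-in (the detector, the alarm, and the frame source) for whatever model runtime and actuators an actual deployment would use.

```python
import random
import time

def detect_missing_hard_hats(frame) -> int:
    """Placeholder for on-device inference with a deployed ML model;
    returns the number of people detected without a hard hat."""
    return random.choice([0, 0, 0, 1])  # fake result for illustration

def sound_alarm() -> None:
    print("ALARM: person without a hard hat detected")

def run_edge_loop(frames) -> None:
    for frame in frames:
        if detect_missing_hard_hats(frame):
            sound_alarm()        # act locally, with no cloud round trip
        time.sleep(0.1)          # ~10 frames per second

if __name__ == "__main__":
    run_edge_loop(range(50))     # stand-in for a live camera feed
```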

Edge analytics: What and how

Issuing an alert when a device reaches a certain threshold is rather simple; the true value lies in producing a visual analysis of multiple data variables in real time and finding predictive meaning in the data stream. This can help businesses identify potential outliers or issues they need to drill into and analyze further.
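As a rough sketch of what finding meaning in a stream beyond a fixed threshold can look like, the snippet below flags outliers using a rolling mean and standard deviation. The window size and z-score cutoff are arbitrary illustrative choices, not recommendations.

```python
import statistics
from collections import deque

def stream_outliers(readings, window_size=50, z_cutoff=3.0):
    """Yield readings that deviate sharply from the recent window,
    instead of comparing each reading to one fixed threshold."""
    window = deque(maxlen=window_size)
    for value in readings:
        if len(window) >= 10:  # wait for enough history
            mean = statistics.fmean(window)
            stdev = statistics.pstdev(window)
            if stdev > 0 and abs(value - mean) / stdev > z_cutoff:
                yield value    # candidate outlier worth drilling into
        window.append(value)

# Example: a steady, slightly noisy signal with one spike at the end.
print(list(stream_outliers([20.0, 20.2, 19.9, 20.1, 20.0] * 10 + [35.0])))
```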

Edge analytics is not always visual—there are many other data-producing facets like shock and vibration analysis, noise detection, temperature sensing, pressure gauges, flow meters, and audio and tone analysis. Collision avoidance systems in cars, for instance, often rely on sensors rather than cameras. While edge analytics applications need to work on edge devices that can be constrained in memory, processing power, or communication, these devices would typically be hooked up to an edge server or gateway where the containerized applications run.

Different protocols are used to transmit data from the devices to the server or gateway (a hop typically known as the first mile). These are some of the common protocols, though not a comprehensive set:

  • HTTP/HTTPS: Hypertext Transfer Protocol/Secure are stateless communications protocols that are the foundation of the Internet.
  • MQTT: Message Queuing Telemetry Transport is a lightweight publish/subscribe machine-to-machine messaging protocol (see the sketch after this list).
  • RTSP: Real-Time Streaming Protocol is a stateful protocol used to control the delivery of real-time video streams, commonly by IP cameras.
  • Streams over HTTP: One of many HTTP-based adaptive streaming protocols, such as HLS or MPEG-DASH.
  • WebRTC: A combination of standards, protocols, and JavaScript and HTML5 APIs that enables real-time communications.
  • Zigbee: A wireless technology that uses a packet-based radio protocol intended for low-cost, battery-operated devices in industrial settings.
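
As a concrete first-mile example, here is a minimal sketch of publishing a sensor reading over MQTT with the Eclipse Paho Python client (pip install paho-mqtt). The broker hostname, port, and topic are hypothetical placeholders; the sketch assumes a broker is reachable on the default MQTT port.

```python
import json
import time

# Minimal MQTT publish sketch using the Eclipse Paho client.
import paho.mqtt.publish as publish

reading = {"sensor_id": "line-3-temp", "celsius": 71.4, "ts": time.time()}

publish.single(
    topic="factory/line3/temperature",   # hypothetical topic
    payload=json.dumps(reading),
    hostname="edge-gateway.local",       # hypothetical broker address
    port=1883,                           # default MQTT port
)
```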

The software stack will vary depending on the use case for a particular industry but, broadly speaking, edge analytics topologies usually involve a combination of products. At the far edge, there would be visual, audio, or sensory devices—some capable of running a containerized inference model. They would send data to an inference server, possibly running IBM Visual Insights and IBM Edge Application Manager. Non-visual data would be sent to an event backbone using IBM Event Streams or Apache Kafka. In the next layer over, software products like IBM Watson that train and retrain models, plus middleware like IBM Cloud Pak for Data and AI, could aggregate, cleanse, and analyze the data.
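To illustrate the event-backbone hop, here is a hedged sketch using the kafka-python package (pip install kafka-python). The bootstrap server and topic names are placeholders; IBM Event Streams exposes Kafka-compatible endpoints, so similar client code would apply with the appropriate credentials and security settings.

```python
import json

# Sketch: forwarding non-visual edge data to a Kafka-compatible
# event backbone. Endpoint and topic names are hypothetical.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="event-backbone.local:9092",  # hypothetical endpoint
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("edge.sensor.readings", {"sensor_id": "line-3-temp", "celsius": 71.4})
producer.flush()  # ensure the reading actually leaves the edge gateway
```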

Keep in mind the situational awareness graphic shown above; from perception to action, edge analytics has to operate in real time. The block architecture diagram below shows the various components in play, with latency times in milliseconds between the different layers:

Figure 2: Edge analytics component architecture.

Edge analytics: When and where

It turns out that humans are highly tuned; at a cognitive level, we operate in the millisecond range (and sometimes in microseconds). So, responses and decisions by machines and devices have to come close to that and cannot afford the 100 or 500 milliseconds it can take to send data to the cloud and back.

One of the key requirements of edge analytics is to improve computing experiences by lowering the latency of responses. The other aspect is scalability. The ever-growing number of sensors and network devices will generate more and more data, increasing the strain on central data analytics resources. Edge analytics enables organizations to scale out their processing and analytics capabilities by decentralizing them to the locations where the data is actually collected.

Lastly, edge analytics is not a replacement for central data analytics; the two can and will complement each other in delivering data insights. As noted earlier, there are certain scenarios where edge analytics is preferred, and there are others where central data modeling and analytics is the better answer because latency is acceptable and detailed analysis is required. The main goal of edge analytics is to provide business insights in real time, or as close to real time as possible.

Learn more

The IBM Cloud architecture center offers many hybrid and multicloud reference architectures, including the edge computing reference architecture. You can also view the newly published, edge-related automotive reference architecture.

Please make sure to check out all the installments in this series of blog posts on edge computing, along with the additional resources.

Thank you to David Booz for reviewing the article and Andy Gibbs for providing the inspiration for the block architecture diagram.
