Cloud computing moves to the edge in 2016


The year 2016 will be exciting in terms of applied technologies. We see many technologies maturing and moving from lab exercises to real-world business technologies that solve real customer problems – especially in the areas of digital transformation, APIs, cloud, analytics, and the Internet of Things (IoT).

In particular, we see the following areas evolving faster than others:

Year of the Edge (Decentralization of Cloud)

Cloud has become the ubiquitous digital platform for many enterprises in their quest to provide a single, unified digital platform. Integrating core IT with shadow IT has been the main focus of the last few years, but in 2016 we anticipate the next step in this process: companies are starting to move from central cloud platforms toward the edge – that is, toward decentralizing the cloud. This is partly because, with the proliferation of IoT devices, operations technologies (OT) and decision intelligence need to be closer to the field than to the central platform.

Cloud has become the massive, centralized infrastructure that serves as the control point for compute power, storage, process, integration, and decision making for many corporations. But as IoT proliferates, we need not only to account for billions of devices sitting at the edge, but also to provide the quicker processing and decision-making capabilities that operations technologies require. Areas of low or no Internet connectivity need to be self-sufficient, enabling faster decision making based on localized and/or regionalized data intelligence.

An IDC study estimates that, by 2020, we will have 40+ zettabytes of data. IDC also predicts that by 2020, about 10 percent of the world’s data will be produced by edge devices. Unprecedented and massive data collection, storage, and intelligence needs will drive a major demand for speed at the edge. Services need to be connected to clients, whether human or machine, with very low latency, yet must retain the ability to provide holistic intelligence. In 2016, the expansion of the cloud – moving part of its capabilities to the edge – will happen.

Thanks to the emergence of microservices, containers, and APIs, it is now easy to run smaller, self-contained, purpose-driven services that target only the functions needed at the edge. Container portability and the massive adoption of Linux will enable thicker, monolithic services that previously ran centrally to be “re-shaped” into collections of smaller, purpose-driven microservices. Each of these can be deployed and run at the edge on demand. Spark is an excellent example: it is focused on real-time streaming analytics, which is a natural “edge service.”
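To make the idea of a small, purpose-driven edge service concrete, here is a minimal sketch in plain Python (not actual Spark code; the class name and window rule are illustrative assumptions). It keeps a sliding window of recent sensor readings and answers locally, with no round trip to the central cloud:

```python
from collections import deque

class EdgeWindowAverager:
    """A purpose-driven edge service: keeps a sliding window of the
    most recent sensor readings and reports their average locally,
    without a round trip to the central cloud."""

    def __init__(self, window_size=5):
        # deque(maxlen=...) silently evicts the oldest reading
        self.window = deque(maxlen=window_size)

    def ingest(self, reading):
        self.window.append(reading)
        return self.average()

    def average(self):
        return sum(self.window) / len(self.window) if self.window else 0.0

# Simulated stream of temperature readings arriving at the edge
service = EdgeWindowAverager(window_size=3)
for reading in [20.0, 22.0, 24.0, 30.0]:
    latest = service.ingest(reading)
print(round(latest, 2))  # average of the last 3 readings
```

A real deployment would package such a service in a container and run it on the edge gateway; a framework like Spark Streaming plays the same role at larger scale.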

M2M Communications Will Move to the Main Stage

The proliferation of billions of smart devices at the edge will drive direct machine-to-machine (M2M) communications instead of the centralized communication model. Today, the majority of IoT interactions still center on humans (consider the quantified self), and a human element is still involved in decision making somewhere, even when the quantified self is not the use case.

We predict that the authoritative source of decision making will begin moving slowly toward machines, enabled by M2M interactions. The emergence of cognitive intelligence platforms (such as IBM Watson) and machine-learning services (such as BigML) will drive this adoption. Currently, trust and security are the major factors preventing this from happening on a large scale. By enabling a confidence-score-based authoritative source, we can eliminate human involvement and ambiguity in decision making. This will enable autonomous M2M communication, interaction, decision making, intelligence, and data sharing, which will lead to replication of intelligence for quicker localized decisions. In addition, when there is a dispute, the central authoritative source, with cognitive powers, can step in to resolve the issue and smooth the process – without the need for human intervention.

This centralized cognitive intelligence can also manage the devices, secure them, and maintain their trust. It can help eliminate rogue devices from the mix, give a lower rating to untrusted devices, discard data sent by breached devices, and assign a lower score to devices in the wild than to devices maintained by trusted parties.
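The trust-scoring policy described above can be sketched in a few lines. This is a toy illustration, not IBM's implementation: the field names, score values, and threshold are all assumptions chosen to mirror the rules in the text (breached devices are zeroed out, managed devices outscore devices in the wild):

```python
def trust_score(device):
    """Hypothetical scoring rule: a breach zeroes the score entirely;
    otherwise managed devices score higher than devices 'in the wild'."""
    if device["breached"]:
        return 0.0
    return 0.9 if device["managed"] else 0.5

def accept_reading(device, threshold=0.6):
    # Data from devices below the trust threshold is discarded.
    return trust_score(device) >= threshold

fleet = [
    {"id": "sensor-a", "managed": True,  "breached": False},
    {"id": "sensor-b", "managed": False, "breached": False},
    {"id": "sensor-c", "managed": True,  "breached": True},
]
trusted = [d["id"] for d in fleet if accept_reading(d)]
print(trusted)  # only the managed, unbreached device passes
```

In a real deployment, the score would be continuous and updated by the central cognitive service as device behavior is observed.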

Smart Contracts to Enable Smarter Commerce

Another trend that is gaining a lot of traction is smart, automated commerce. Even though edge devices are growing into the billions, the monetization of those devices is still sporadic. There is no consistent way to commercialize those edge IoT devices. This is where the Blockchain concept can help. Edge IoT devices can create smart contracts and publish their details – such as pricing, terms, length, delivery mechanisms, and payment terms – to the Blockchain network. A data consumer can browse the list of published smart contracts, choose a good match, and auto-negotiate the contract. Once the terms are agreed upon and the electronic agreement is signed, the data supplier can start delivering the goods to the consumer and get paid from the source automatically. Removing the need for human intervention will make commerce faster and smarter. This automation also gives the data consumer the option to constantly evaluate the value of the data being received. The ability to re-negotiate or cancel the contract at any time, without a long binding period, makes smart contracts more attractive. On the flip side, the data provider can also choose to cancel or re-negotiate the contract based on contract violations, market demand, deemed usage, and so on.
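The publish-browse-negotiate flow above can be sketched as a toy model. This is a hedged illustration only: the contract fields and the matching rule are assumptions, and a real system would publish to and read from an actual blockchain ledger rather than an in-memory list:

```python
# Contracts published by edge devices to the (here, simulated) ledger.
published_contracts = [
    {"supplier": "edge-cam-17", "price_per_mb": 0.05, "term_days": 30},
    {"supplier": "edge-temp-3", "price_per_mb": 0.02, "term_days": 7},
]

def auto_negotiate(contracts, max_price, min_term_days):
    """Pick the cheapest published contract that meets the consumer's
    terms; return None when nothing matches."""
    matches = [c for c in contracts
               if c["price_per_mb"] <= max_price
               and c["term_days"] >= min_term_days]
    return min(matches, key=lambda c: c["price_per_mb"]) if matches else None

# The data consumer states its terms and auto-selects a supplier.
deal = auto_negotiate(published_contracts, max_price=0.04, min_term_days=7)
print(deal["supplier"])
```

Once a match is found, signing, delivery, and payment would likewise be automated on the ledger, with either party free to re-negotiate or cancel.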

Another important aspect of edge IT and edge processing, which includes IoT and fog computing, is monetization and commercialization. Currently, most IoT companies promote their gadgets and solution sets on the strength of their innovation. The commercialization of the gadgets themselves is very limited, however, and will not deliver on the true IoT concept. Once companies figure out the value of their data, offering Data as a Service or even Data Insights as a Service will become more popular. Once this happens, we predict that companies will rush to build the infrastructure for an open data economy, in which data and data-based insights can be easily produced and sold.

FACTS-based Smarter Systems Finally Come to Fruition

IoT helps bridge the gap between IT and operations technologies (OT). Currently, most core IT decisions about OT are based either on old data (data that is more than seconds old) or on some estimation. Decisions made in the field about OT are based on isolated data sets that are partial and delayed. This leads to subjective decisions.

Going forward, with the growing decentralization of cloud and M2M communications, as well as real-time interaction between the OT data set and core IT, decisions will be made closer to full-ecosystem, real-time data. This will lead to objective decisions. These fast, accurate, complete, trusted, scalable (FACTS) real-time systems will make core IT business decisions in real time and enforce them at the OT level. As discussed above, Apache Spark allows the necessary services – analytics, data intelligence, security, and privacy – to be containerized and moved closer to the edge instead of being processed centrally. This allows the edges not only to make decisions based on events happening elsewhere in the enterprise, but also to make those decisions faster, more completely, and more accurately, all the time.
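A minimal sketch of such a FACTS-style rule, under stated assumptions: the decision is made at the edge from a local real-time reading, but events broadcast from elsewhere in the enterprise tighten the local policy. The event names, threshold values, and actions are all hypothetical:

```python
def facts_decision(local_reading, central_events, threshold=80.0):
    """Toy FACTS-style rule: decide locally and in real time, but fold
    in events reported from elsewhere in the enterprise."""
    # An overload reported anywhere in the fleet lowers the local
    # threshold, so this edge node throttles earlier than it would
    # on local data alone.
    effective = threshold - 10.0 if "overload-elsewhere" in central_events else threshold
    return "throttle" if local_reading >= effective else "run"

print(facts_decision(75.0, []))                      # local data alone
print(facts_decision(75.0, ["overload-elsewhere"]))  # enterprise-wide context
```

The same reading produces different decisions depending on enterprise-wide context, which is the point: edge autonomy without losing holistic intelligence.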

Andy Thurai is Program Director for API, IoT, and Connected Cloud at IBM.
Mac Devine is Vice President of SDN Cloud Services and CTO of IBM's Cloud Services Division.
Reprinted with permission from Data Informed.
