The traditional way of implementing data virtualization is data federation: a central coordinator collects, and partially processes, the raw data shipped to it by every node in the network, as sketched below. Because all of that traffic and computation converges on a single point, the coordinator quickly becomes the bottleneck.
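To make the bottleneck concrete, here is a minimal Python sketch of the federation pattern. The `DataSource` class and `federated_average` function are hypothetical illustrations, not IBM APIs: every source ships its raw rows to the coordinator, which then does all of the computation itself.

```python
from typing import Iterable

class DataSource:
    """A hypothetical remote data source holding raw rows."""
    def __init__(self, rows: list[float]):
        self._rows = rows

    def fetch_all(self) -> Iterable[float]:
        # Federation: the source ships its raw data, row by row,
        # to the coordinator.
        return iter(self._rows)

def federated_average(sources: list[DataSource]) -> float:
    """The coordinator gathers every row from every source and
    computes the result alone, so network traffic and CPU load
    both concentrate on this single node."""
    rows = [r for s in sources for r in s.fetch_all()]
    return sum(rows) / len(rows)
```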
IBM’s new approach to data virtualization instead organizes the data sources into a self-organizing constellation of peer nodes. The workload is distributed across peer and edge nodes, which perform most of the computation and leave only the final merge to the coordinator, as shown in the figure below. This is far more efficient than federation, and it allows the network to scale to very large numbers of sources.
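By contrast, the push-down idea behind the constellation can be sketched like this (again a hypothetical toy, not IBM’s actual implementation): each peer computes a partial aggregate over its own data, and the coordinator only merges the small partial results.

```python
class PeerNode:
    """A hypothetical peer/edge node that computes over its own data."""
    def __init__(self, rows: list[float]):
        self._rows = rows

    def partial_sum_count(self) -> tuple[float, int]:
        # Most of the computation happens here, at the node;
        # only a tiny (sum, count) pair crosses the network.
        return sum(self._rows), len(self._rows)

def constellation_average(peers: list[PeerNode]) -> float:
    """The coordinator merges small partial aggregates, so its
    load grows with the number of nodes, not the number of rows."""
    total, count = 0.0, 0
    for s, c in (p.partial_sum_count() for p in peers):
        total += s
        count += c
    return total / count

# Usage: three peers, each aggregating locally before reporting in.
peers = [PeerNode([1.0, 2.0]), PeerNode([3.0]), PeerNode([4.0, 5.0])]
print(constellation_average(peers))  # 3.0
```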