The traditional way of implementing data virtualization is based on data federation: a central coordinator collects and partially processes the data sent by all the nodes in the network. As a result, the coordinator becomes a bottleneck in the computation.
IBM’s new solution for data virtualization creates a network of data sources organized as a self-organizing constellation of peer nodes. The workload is distributed across peer nodes and edge nodes, which perform most of the computation, leaving only the final consolidation to the coordinator, as shown in the figure below. This approach is far more efficient than federation and scales almost without limit.
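The contrast between the two topologies comes down to where the aggregation work happens. A minimal sketch in Python, assuming a simple SUM query; the node contents and function names here are hypothetical illustrations, not IBM APIs:

```python
def federated_sum(nodes):
    """Federation: the coordinator pulls every row from every node
    and does all the work itself -- the coordinator is the bottleneck."""
    all_rows = []
    for node in nodes:
        all_rows.extend(node)   # full data shipped to the coordinator
    return sum(all_rows)        # coordinator computes the whole aggregate

def constellation_sum(nodes):
    """Constellation: each peer computes a partial aggregate locally,
    so only tiny partial results reach the coordinator for merging."""
    partials = [sum(node) for node in nodes]  # done on the peer/edge nodes
    return sum(partials)                      # coordinator merges partials

# Three hypothetical data sources holding a few rows each.
nodes = [[1, 2, 3], [4, 5], [6]]
assert federated_sum(nodes) == constellation_sum(nodes) == 21
```

Both functions return the same answer; the difference is that in the constellation case the coordinator receives one number per peer instead of every row, which is why the workload on the coordinator stays flat as the network grows.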
On October 4th, IBM announced a revamped skilling program for partners. The skilling and badging program is now available to our partners in the same way it is available to IBMers, at no cost. This is something our partners have asked for: they want more expertise – more opportunities to sharpen their technical […]
One of the trending buzzwords of recent years in my world is “Data Democratization”, which this year seems to have been complemented by “Data Fabric” and “Data Mesh”. What it is really about is the long-standing challenge of making data available. It is another one of those topics that often gets the reaction “How hard […]
As expected in the first quarter of the year, there are many “top trends for 2021” lists. This year, I am reading them more closely than before, looking for clues about what may happen after the pandemic. Although I have seen some interesting predictions, like hyper/full automation using, for example, neuromorphic computing, emulating […]