Data & AI

Event Summary: How to unlock the value of your data


There’s no other way to put it: digital transformation is an all-or-nothing strategy. It’s less about adding technology to disparate business units and much more about transforming the business for the digital age, where digital is a core way of working. But according to speakers at a recent IBM and CorporateLeaders round table event – ‘The Data Fabric: How to Unlock the Value of Your Data’ – achieving a data fabric is all about the way data is stitched together.

“We’ve moved quickly from data warehousing to data virtualization and data lakes,” said speaker Herman Nielens, Data & AI Architect at IBM. “Now the concept of the data fabric is not about replacing those, but rather offers a novel way of thinking about managing data. It operates on the level of operational elements, such as data moving in a data mesh, but also ties in tightly with the business perspective through concepts like self-service catalogs and multi-modal analytics tooling.”

According to Nielens, the key challenge for organizations is actually finding the data they need. “I call it uncovering and discovering,” he said. “You need to know what kind of data you are dealing with and what the quality of the data is.”


Breaking down data silos

Nielens reflected on the fact that much data currently sits in silos and is stored on solutions provided by different vendors. This, plus data complexity, creates the need for clever, multi-skilled data scientists. “These people are not easy to find though,” he said, “especially because they should also know your business very well.” He added: “So if you can have your business analysts, citizen data scientists and data scientists join forces, you can explore more hypotheses, resulting in more usable insights.” Dealing with data privacy was identified as an added complication – “and not only data privacy but also data confidentiality,” Nielens said – all of which are challenges when making data and insights available through the data fabric.
Nielens also explained how “IBM is working on simplifying the Babylonian confusion and learning curves that result from having many different data sources.” IBM is rationalizing its own databases to use the same language, and “if there are third-party databases, we can include those using Data Virtualization,” he said.
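The idea behind data virtualization is that a consumer writes one query against a single surface, while the data itself stays in separate stores. A minimal sketch of that pattern, using two independent in-memory SQLite databases as hypothetical stand-ins for the different vendor systems Nielens mentions (not IBM’s actual product):

```python
import sqlite3

# One SQL surface over two physically separate stores: the main
# in-memory database plays an HR system, an attached second
# in-memory database plays a finance system from another vendor.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS finance")

conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, "Anna"), (2, "Ben")])

conn.execute("CREATE TABLE finance.salaries (emp_id INTEGER, salary INTEGER)")
conn.executemany("INSERT INTO finance.salaries VALUES (?, ?)",
                 [(1, 60000), (2, 55000)])

# The consumer writes one join and never needs to know
# there are two systems behind it.
rows = conn.execute("""
    SELECT e.name, s.salary
    FROM employees AS e
    JOIN finance.salaries AS s ON s.emp_id = e.id
    ORDER BY e.id
""").fetchall()

assert rows == [("Anna", 60000), ("Ben", 55000)]
```

A real data virtualization layer does this across networked databases and file stores rather than one process, but the contract is the same: one query language, many sources.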

Nielens concluded: “Essentially implementing a data fabric is about intelligently automating the orchestration, organization and integration of disparate data sources in a governed, secure and self-service manner, so anyone and any application can create or consume insights derived from that data.”

IBM implements the data fabric through its Cloud Pak for Data platform, which Nielens described as a “hybrid, multi-cloud solution that can be deployed either as a private cloud solution in the corporate data center, or in any public cloud (IBM or other), on top of IaaS or PaaS; or even as SaaS, or… as a mix of the above”.


Data scientists are critical to the process

The second presentation on the data fabric was given by Steven Bex, Director & Managed Analytics Leader at Deloitte, who argued the main challenge was moving from ad-hoc, data-driven initiatives to ones that are continuous and will bring business value for many years: “One needs to look at it as a full journey with many different parts that come into play,” he explained. “We are all in this constant form of transformation and migration of at least all of our systems. So how are we going to deal with that?” he asked.

One consideration, he said, was checking that the data being processed is still relevant, as well as checking which new raw data goes into the system. “I do think one needs to have experts or at least data scientists present,” he said, agreeing with Nielens. “These jobs are not something you want to give to a business user.”

But what else should businesses look out for? “We must all oversee the evolution of data stewards,” he said. “These are people with the responsibility of bridging the technical and business knowledge divide.” He added: “The best transformations will see the administrators of the data integrators (rather than the data integrators themselves) have the most work. So the focus will be: ‘how can the business support them?’ Companies need to be able to explain how their data model gets them to a certain conclusion about the business’s future. Lineage of data will be an important thing to solve as well. Then, last but not least, one has to ask: ‘how can I trust the process?’”
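The lineage question Bex raises – explaining how a data model reached a conclusion – comes down to recording, for every derived dataset, which inputs and transformation produced it. A minimal sketch of such a registry (the dataset names and transforms are invented for illustration):

```python
# Minimal lineage registry: each derived dataset records its inputs
# and the transformation that produced it, so any result can be
# traced back to the raw sources it came from.
lineage = {}

def register(dataset, inputs, transform):
    lineage[dataset] = {"inputs": inputs, "transform": transform}

def trace(dataset):
    """Walk the lineage graph back to the raw sources."""
    node = lineage.get(dataset)
    if node is None:
        return [dataset]          # no recorded inputs: a raw source
    sources = []
    for parent in node["inputs"]:
        sources.extend(trace(parent))
    return sources

register("sales_clean", ["sales_raw"], "deduplicate + fix currencies")
register("churn_features", ["sales_clean", "crm_raw"], "join + aggregate")

# Any downstream consumer can ask where a dataset ultimately came from.
assert trace("churn_features") == ["sales_raw", "crm_raw"]
```

Production catalogs store this same graph as metadata alongside the data, which is what makes the “how can I trust the process?” question answerable.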


A data fabric is made of different layers

Not surprisingly, such open-ended questions sparked significant debate among the attendees. Data privacy was a topic that came up often. One person asked whether AI models could anticipate different governance rules being brought in. Data masking was suggested as the answer – masking the data while still knowing what it can be used for. “The problem,” said one attendee, “is that if you want to train an algorithm, you’re training it to see everything. And so, it’s what you do here that decides how you deal with it.”
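The data masking idea the attendees raise – hiding a value while keeping the data usable – is often done with deterministic pseudonymization. A toy sketch (the field names and secret are invented; real deployments manage keys and salting outside the code):

```python
import hashlib

def mask_column(rows, field, secret="demo-secret"):
    """Replace a sensitive field with a deterministic pseudonym.

    Because the same raw value always maps to the same pseudonym,
    joins and group-bys on the column still work, but the raw
    value itself is no longer visible.
    """
    masked = []
    for row in rows:
        digest = hashlib.sha256((secret + str(row[field])).encode()).hexdigest()
        masked.append({**row, field: digest[:12]})
    return masked

customers = [
    {"id": 1, "email": "anna@example.com", "spend": 120},
    {"id": 2, "email": "ben@example.com",  "spend": 80},
    {"id": 3, "email": "anna@example.com", "spend": 40},
]

masked = mask_column(customers, "email")

# Same raw value -> same pseudonym, so per-customer aggregation survives.
assert masked[0]["email"] == masked[2]["email"]
# The raw value itself is gone.
assert masked[0]["email"] != "anna@example.com"
```

This illustrates the attendee’s point: deciding what the masked data “can still be used for” is exactly the choice of which properties (here, equality) the masking preserves.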
Another wanted to know how to make the jump from preparing data models for dashboards to getting them fit for the enterprise level. “Different people in the data visualization process all doing their part” was the short answer. One attendee, meanwhile, argued the whole concept itself was slightly confusing. “Is data fabric just another data platform?” he asked. But as another attendee concluded, data fabrics can be thought of simply; it is merely a matter of changing your perspective: “Just make sure your enterprise data model flows through your data fabric,” he said. “Because it is a fabric, and the fabric is made of different layers.”

To learn more about the IBM and Deloitte collaboration, discover DAPPER here >

Steven Bex

Director & Managed Analytics Leader @ Deloitte Belgium

Herman Nielens

Data & AI Architect @ IBM
