The relationship between transactional and analytical data has always been a bit tense. Each maintained a respectful distance from the other, usually communicating through their common friend, the data pipeline. This arrangement held steady for at least two decades. But the uneasiness is now growing. Why? Firstly, data pipelines can be slow. By the time transactional data arrives at a data platform, such as a data warehouse or data lake, it is already stale, and the insight generated from it may no longer be relevant. In the current era of agile business, real-time insight matters. Secondly, the pipeline is a one-way route. Value travels from the transactional to the analytical world, but not vice versa. A feedback loop must be established so that generated insights can flow back into the transactional world directly.

None of the traditional data platform architectures, such as the data warehouse or the data lake, has tried to bring these two worlds closer. They were built for the analytical world. Even the new data lakehouse architecture has the same gap. But what about data fabric?


Gartner clearly states that “Data fabric covers both transactional and analytical systems”. Our definition of data fabric is completely aligned with it: “A loosely coupled collection of distributed services, which enables the right data to be made available in the right shape, at the right time and place, from heterogeneous sources of transactional and analytical natures, across any cloud and on-premises platforms, usually via self-service, while meeting non-functional requirements including cost effectiveness, performance, governance, security and compliance.”

But traditionally that was never done in data platforms; they were always exclusive to the analytical hemisphere of the data world. How do we address this? How do we bring transactional systems under the new data platform architecture called data fabric?

Two technological trends impacting businesses

Two major technological trends are impacting enterprises across the world. The first is cloud transformation. Many enterprises have moved their infrastructure and non-business-critical applications to the cloud, reaping immense financial benefit. But that was just the beginning. Now, business-critical applications are being modernized and made ready to move to the cloud. In this trend, however, the focus has always been on applications and infrastructure. Data often came as an afterthought, or was ignored altogether.

The second trend is data centricity. Here, enterprises are trying to transform themselves into a state where their operations and processes run on data. The raw material of enterprise data is refined into information and then into insights. The idea is that these insights will drive business decisions, resulting in benefits to both the top and bottom lines.

But these two trends reside in parallel universes. The first is often a CIO agenda, whereas the second is a business agenda led by multiple CXOs. As a result, they rarely meet, and a huge opportunity is lost.

The question is: can we weave them together under a holistic organizational goal? The answer is yes, and data fabric is expected to play a key role. Let us see how.

Convergence with data fabric

In the current stage of cloud transformation, as business-critical applications are being modernized, a new opportunity arises. Almost all applications are associated with one or more databases. Like the applications, these databases are old and in severe need of modernization. Unless they are also modernized, the full benefit cannot be reaped. Yet they are often ignored for fear of opening a new can of worms. It is important to address them along with the applications.

In modernizing these databases, multiple actions can be taken:

  • the data model can be refreshed
  • the data can be cleansed
  • the monolithic humongous database can be broken into smaller manageable databases
  • new polyglot technologies (e.g., document, key-value, graph, object store) can be adopted in place of older file-based or relational (RDBMS) technologies

These reformed databases can expose their data through APIs, virtualization, messaging and other mechanisms. Each of these exposures can be published as a ‘data asset’ or ‘data product’ in the marketplace of the data fabric. Discovery and consumption then follow naturally.
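The publish-and-discover flow can be sketched as a minimal in-memory catalog. This is an illustrative assumption, not any specific marketplace's API; the class and field names are invented for the example:

```python
from dataclasses import dataclass, field

# Hypothetical marketplace catalog entry; the field names are
# illustrative, not a real product's schema.
@dataclass
class DataProduct:
    name: str
    exposure: str          # "api" | "virtualization" | "messaging" | "file"
    owner: str
    endpoint: str
    tags: list = field(default_factory=list)

class Marketplace:
    """Toy in-memory stand-in for an enterprise data marketplace."""
    def __init__(self):
        self._catalog = {}

    def publish(self, product: DataProduct):
        # Registering a product makes it discoverable to consumers
        self._catalog[product.name] = product

    def discover(self, tag: str):
        # Consumers search the catalog by tag instead of
        # building point-to-point integrations
        return [p for p in self._catalog.values() if tag in p.tags]

market = Marketplace()
market.publish(DataProduct(
    name="customer-master",
    exposure="api",
    owner="storefront-team",
    endpoint="https://api.example.com/v1/customers",  # placeholder URL
    tags=["customer", "master-data"],
))
hits = market.discover("customer")
```

The point of the sketch is the decoupling: producers publish once, and any number of consumers discover and consume without knowing the producer directly.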

Data fabric building a bridge

Through an example, let us look at how both transactional and analytical systems can participate in data fabric.

Let us consider a retail organization that, for its initial scope, selected three business-critical transactional systems.

  • T1 is a custom Java-based storefront application serving mobile and web front ends.
  • T2 is the financial and accounting system of the retailer. It runs on SAP.
  • T3 is a Salesforce CRM. It also integrates Einstein for running some analytics.

Similarly, on the analytics side, apart from Einstein mentioned above, let us include a few more data and AI platforms in this discussion.

  • A1 is a customer data lake built in recent years. It runs on Azure Databricks.
  • A2 is an old Netezza appliance-based data warehouse handling data marts for financial and regulatory reporting purposes.
  • A3 is a new IBM Cloud Pak for Data based analytics platform being built to develop new AI use cases.

While these systems served their purpose well, over the last few years the enterprise observed that it lacked differentiation in the market; its innovation was not best in class. A primary reason was the lack of discoverable, trustworthy, and consumable data. Most of its data integrations were point-to-point. Since discoverability was an issue, much effort was duplicated in integrating and processing the same data.

The CIO had recently started an application modernization program, but it did not cover data. To resolve this situation, the CIO and CDO jointly sponsored a data fabric program. An enterprise data marketplace was developed, where all participating transactional and analytical applications would publish their ‘data products’. Initially, the six systems mentioned above were earmarked for data fabric participation. Let us see how these systems would be transformed to participate.

T1, an old and monolithic application, was among the first applications to be modernized. A microservices-based architecture was adopted. The large Sybase database was broken into multiple databases: master and reference data were moved mostly to Azure Cosmos DB, while transactional data was stored in SQL Server. The microservices were exposed as APIs for consumption by different channels. The same APIs (e.g., ‘A’ and ‘C’ in the diagram above) were also published in the marketplace. T1 additionally published the raw sales data as a file product ‘D’.

T2, being an ERP, remained as-is. However, it started publishing periodic accounts data as a file (product ‘E’ in the diagram) through the data marketplace. A2 ingested those files from the marketplace.

T3 started publishing real-time customer data changes through data streaming. Those events were published in the marketplace as product ‘F’. T1 subscribed to those events to reflect the latest customer data in real time. At the same time, from the Einstein repository, file extracts of the Salesforce CRM were published as product ‘G’.
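The publish/subscribe pattern behind product ‘F’ can be sketched with a minimal in-process event bus standing in for a real streaming platform such as Kafka. Topic and payload names here are assumptions for illustration:

```python
from collections import defaultdict

# Toy in-process event bus; a real deployment would use a streaming
# platform (e.g., Kafka), but the producer/consumer roles are the same.
class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of the topic
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
latest_customer = {}  # T1's local, always-current view of customer data

# T1 subscribes to the customer change stream (product 'F')
bus.subscribe("customer-changes",
              lambda e: latest_customer.update({e["id"]: e}))

# T3 publishes a change the moment it happens
bus.publish("customer-changes",
            {"id": "C42", "email": "new@example.com"})
```

Because T1 consumes changes as they occur, it no longer waits for a nightly pipeline to see up-to-date customer data.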

A1 consumed raw sales data (‘D’) and raw customer data (‘G’). It produced conformed customer data and conformed sales data and published them as virtualized objects ‘H’ and ‘I’ in the marketplace respectively.
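The conformance step A1 performs on the customer feeds can be sketched as a simple merge of the two raw sources. The record shapes below are hypothetical; real feeds would be far wider:

```python
# Illustrative raw records: storefront feed ('D'-style) and CRM
# extract ('G'-style) describe the same customer differently.
raw_storefront = [{"cust_id": "C42", "email": "A@Example.com"}]
raw_crm = [{"CustomerId": "C42", "Email": "a@example.com", "segment": "gold"}]

def conform(storefront, crm):
    """Merge raw feeds into one conformed customer view keyed by id."""
    conformed = {}
    for rec in storefront:
        conformed[rec["cust_id"]] = {"id": rec["cust_id"],
                                     "email": rec["email"].lower()}
    for rec in crm:
        # CRM attributes enrich (and normalize) the storefront view
        row = conformed.setdefault(rec["CustomerId"],
                                   {"id": rec["CustomerId"]})
        row["email"] = rec["Email"].lower()
        row["segment"] = rec.get("segment")
    return list(conformed.values())

customers = conform(raw_storefront, raw_crm)
```

The conformed output is what A1 would expose as virtualized object ‘H’: one record per customer, with consistent keys and normalized values.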

A2 ingested ‘I’ and ‘E’ and produced reconciled accounts as file product ‘J’.

In A3, a new AI model for personalized product recommendations was developed. It consumed the conformed customer and sales data, the reconciled accounts, and the real-time customer updates. The trained inference model was deployed as an API ‘K’. T1 consumed ‘K’ to provide better personalized recommendations to customers at the storefront, driving better sales.
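The way T1 might consume inference API ‘K’ can be sketched as a small client call. The endpoint URL, response shape, and fallback list are all assumptions for illustration:

```python
import json
from urllib import error, request

def recommend(customer_id,
              base_url="https://fabric.example.com/api/recommend"):
    """Call the hypothetical inference API 'K' for a customer.

    Falls back to a static list so the storefront degrades
    gracefully if the model endpoint is unreachable.
    """
    try:
        with request.urlopen(f"{base_url}?customer={customer_id}",
                             timeout=2) as resp:
            # Assumed response shape: {"products": [...]}
            return json.load(resp)["products"]
    except (error.URLError, KeyError, json.JSONDecodeError):
        return ["bestseller-1", "bestseller-2"]  # graceful fallback
```

The timeout and fallback matter here: a transactional storefront cannot block on an analytical service, so the integration must fail soft.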


As discussed above, data fabric opens up a new possibility for enterprises to bring their transactional and analytical data closer together. However, it is not just a technological transformation; it also requires organizational and cultural shifts. Application owners and data owners need to work together on a new operating model, and data needs to be thought of as a product rather than a piece of complex technology. With such changes in place, enterprises can reap significant business benefits.
