A data fabric is an architectural approach to simplify data access in an organization to facilitate self-service data consumption. This architecture is agnostic to data environments, processes, utility and geography, all while integrating end-to-end data-management capabilities. A data fabric automates data discovery, governance and consumption, enabling enterprises to use data to maximize their value chain. With a data fabric, enterprises elevate the value of their data by providing the right data, at the right time, regardless of where it resides.
Key capabilities of a data fabric architecture include:
- An abstraction layer that provides a common business understanding of the data, plus automation to act on insights
- A range of integration styles to extract, ingest, stream, virtualize and transform data, driven by data policies to maximize performance while minimizing storage and costs
- A marketplace that supports self-service consumption, letting users find, collaborate on and access high-quality data
- End-to-end lifecycle management for composing, building, testing and deploying the various capabilities of a data fabric architecture
- Unified definition and enforcement of data policies, data governance and data stewardship for business-ready data pipelines
- An AI-infused composable architecture built for hybrid cloud environments
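To make the policy-driven integration idea concrete, here is a minimal sketch of how a data fabric might select among the integration styles mentioned above. The `DataPolicy` fields and the routing rules are illustrative assumptions, not any vendor's actual logic; real products also weigh cost, storage and performance targets.

```python
from dataclasses import dataclass

@dataclass
class DataPolicy:
    latency: str            # "realtime" or "batch"
    movement_allowed: bool  # may the data be copied out of its source system?

def choose_integration_style(policy: DataPolicy) -> str:
    """Pick an integration style based on policy constraints
    (hypothetical rules for illustration only)."""
    if not policy.movement_allowed:
        return "virtualize"  # query the data in place; no physical copy
    if policy.latency == "realtime":
        return "stream"      # continuous, low-latency ingestion
    return "extract"         # scheduled bulk ETL
```

For example, a policy that forbids moving data out of its source system routes the workload to virtualization, while a policy permitting movement with real-time latency needs routes it to streaming.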
With a data fabric, organizations can:
- Intelligently integrate and unify data across hybrid and multicloud environments to deliver trusted data and speed time to business value
- Enforce policies and rules automatically and consistently across data on any cloud, with increased visibility and collaboration and reduced compliance risk
- Consolidate data management tools and minimize data duplication for faster access to higher-quality, more complete data that yields deeper insights
IBM Cloud Pak for Data provides a data fabric solution for faster, trusted AI outcomes by connecting the right data, at the right time, to the right people, from anywhere it’s needed. Use a unified platform that spans hybrid and multicloud environments to ingest, explore, prepare, manage, govern and serve petabyte-scale data for business-ready AI.
Data management tools started with databases and evolved to data warehouses and data lakes as more complex business problems emerged. A data fabric is the next step in the evolution of these tools. With this architecture, you can continue to use the disparate data storage repositories you’ve invested in while simplifying data management. A data fabric helps you optimize your data’s potential, foster data sharing and accelerate data initiatives by automating data integration, embedding governance and facilitating self-service data consumption in a way that storage repositories don’t.
Data virtualization is one of the technologies that enables a data fabric approach. Rather than physically moving data from various on-premises and cloud sources through a standard extract, transform, load (ETL) process, a data virtualization tool connects to the different sources, integrates only the metadata required and creates a virtual data layer. This lets users query the source data in real time.
Data continues to compound, and organizations often find it too difficult to access the information they need. That data holds unseen insights, resulting in a knowledge gap.
With data virtualization capabilities in a data fabric architecture, organizations can access data at the source without moving it, helping to accelerate time to value through faster, more accurate queries.
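The virtual-layer idea can be sketched in a few lines of Python. Here two in-memory SQLite databases stand in for separate on-premises and cloud sources, and a hypothetical `VirtualLayer` class (an assumption for illustration, not a real product API) registers only connection metadata and routes each query to the owning source at request time, so no data is copied out of its source.

```python
import sqlite3

# Two independent "sources" standing in for on-premises and cloud systems.
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
orders_db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                      [(1, 10, 99.5), (2, 11, 45.0)])

customers_db = sqlite3.connect(":memory:")
customers_db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
customers_db.executemany("INSERT INTO customers VALUES (?, ?)",
                         [(10, "Acme"), (11, "Globex")])

class VirtualLayer:
    """Holds source metadata and delegates queries at request time;
    the data itself never leaves the source systems (no ETL copy)."""
    def __init__(self):
        self.sources = {}  # table name -> live connection

    def register(self, table, conn):
        self.sources[table] = conn

    def query(self, table, sql):
        # Route the query to whichever source owns the table.
        return self.sources[table].execute(sql).fetchall()

layer = VirtualLayer()
layer.register("orders", orders_db)
layer.register("customers", customers_db)

# A consumer sees one access point, unaware of where each table lives.
rows = layer.query("orders", "SELECT customer_id, total FROM orders")
names = {cid: layer.query("customers",
                          f"SELECT name FROM customers WHERE id = {cid}")[0][0]
         for cid, _ in rows}
```

The consumer joins order and customer data from two physically separate systems without either dataset being extracted or duplicated, which is the essence of the virtualization style described above.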