April 4, 2023 By Kevin Shen 3 min read

Organizations are dealing with large volumes of data from an array of different sources, and these datasets vary in both type and quality. At the same time, organizations want to minimize the cost of data processing and insight extraction while maximizing the efficiency and value they get from the data. To satisfy these somewhat opposing requirements, they store data across a complex, messy landscape of data lakes, data warehouses and data marts.

Maintaining this siloed, complex analytics ecosystem of questionable data quality and varying data structure, and shaping it into a source of truth that can be relied upon for analytics and decision making, takes effort, time and money. The ecosystem evolved over decades from bandages applied to existing data management investments, without consideration of a holistic approach to the data management lifecycle.

All that is changing.

Watch a video on data lakehouse architecture

The emergence of data lakehouse architecture

To address the challenge of this distributed data landscape, the data lakehouse emerged to combine the enterprise features and high performance of a data warehouse with the openness, flexibility and scalability of data lakes.

The current generation of lakehouse solutions reduces the burden of maintaining and managing multiple systems by consolidating data stored in data warehouses and lakes into a single location on low-cost, S3-compatible object storage. These lakehouses address the performance issue with modern distributed SQL engines, and the openness issue with open data and table formats. Consistency and data quality are addressed by modern table formats such as Apache Iceberg, Apache Hudi and Delta Lake, which also bring data warehousing qualities such as ACID transactions.
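The core idea behind these table formats can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration (not the actual Delta Lake, Iceberg or Hudi implementation): an append-only commit log is layered over plain data files, and readers replay the log to reconstruct the table's current state, which is what makes atomic updates possible on object storage.

```python
import json
import os
import tempfile

# Hypothetical sketch of a table-format transaction log. Real formats
# (Delta Lake's _delta_log, Iceberg's metadata/snapshot files) are far
# richer, but the replay-the-log principle is the same.
log_dir = tempfile.mkdtemp()

def commit(version, added_files, removed_files=()):
    """Write one log entry; creating this single file is the atomic
    commit point for the whole multi-file change."""
    entry = {"version": version,
             "add": list(added_files),
             "remove": list(removed_files)}
    with open(os.path.join(log_dir, f"{version:020d}.json"), "w") as f:
        json.dump(entry, f)

def current_files():
    """Replay committed entries in version order to get the live
    set of data files a reader should scan."""
    live = set()
    for name in sorted(os.listdir(log_dir)):
        with open(os.path.join(log_dir, name)) as f:
            entry = json.load(f)
        live |= set(entry["add"])
        live -= set(entry["remove"])
    return live

commit(0, ["part-0.parquet", "part-1.parquet"])
# A later rewrite atomically swaps one file for another:
commit(1, ["part-2.parquet"], removed_files=["part-0.parquet"])
print(sorted(current_files()))  # ['part-1.parquet', 'part-2.parquet']
```

Because readers only see files referenced by committed log entries, a writer can stage new data files and make them visible in a single atomic step.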

Here is an overview of the major components of a lakehouse:

Storage: This layer physically stores the data. The most common data lake/lakehouse storage types are S3-compatible object storage and HDFS. In this layer, data is stored as files, typically in open file formats such as Parquet and Avro.
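In practice, a table in the storage layer is just a directory of files, often organized into Hive-style partition paths. The sketch below uses a local temporary directory and made-up table/partition names to illustrate the layout; on S3 the same structure would live under a bucket prefix such as s3://bucket/warehouse/.

```python
from pathlib import Path
import tempfile

# Illustrative only: a lakehouse table is a prefix containing data files,
# commonly partitioned as key=value subdirectories (Hive-style layout).
root = Path(tempfile.mkdtemp()) / "warehouse" / "sales"

def partition_path(table_root, **partitions):
    """Build a Hive-style partition path like .../region=emea/date=2023-04-04."""
    p = table_root
    for key, value in partitions.items():
        p = p / f"{key}={value}"
    return p

part = partition_path(root, region="emea", date="2023-04-04")
part.mkdir(parents=True, exist_ok=True)
# In a real lake this file would be Parquet or Avro written by an engine;
# here an empty placeholder stands in for it.
(part / "part-00000.parquet").touch()

print(part.relative_to(root.parent))  # sales/region=emea/date=2023-04-04
```

Engines can then prune partitions at query time by matching predicates against the key=value path segments rather than opening every file.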

Technical metadata storage/service: This component is required to understand what data is available in the storage layer. The query engine needs table and schema metadata to understand where the data is located, what it looks like and how to read it. The de facto open metadata storage solution is the Hive Metastore.
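The bookkeeping a metastore provides can be pictured as a catalog mapping table names to location, format and schema. The snippet below is a hypothetical, minimal model of that idea; the class, function names and bucket path are invented for illustration and are not the actual Hive Metastore API.

```python
from dataclasses import dataclass, field

# Minimal model of what a technical metadata service records per table,
# so that a query engine knows where the files live and how to read them.
@dataclass
class TableMetadata:
    location: str                                # where the files live
    file_format: str                             # e.g. "parquet"
    schema: dict = field(default_factory=dict)   # column name -> type

catalog = {}

def register_table(name, location, file_format, schema):
    """Register a table, as a metastore would when a table is created."""
    catalog[name] = TableMetadata(location, file_format, schema)

def describe(name):
    """Return a one-line summary an engine could use to plan a scan."""
    meta = catalog[name]
    return f"{name} @ {meta.location} ({meta.file_format}): {sorted(meta.schema)}"

register_table(
    "sales",
    "s3://bucket/warehouse/sales",   # hypothetical bucket
    "parquet",
    {"order_id": "bigint", "region": "string", "amount": "double"},
)
print(describe("sales"))
```

Because this catalog is separate from both storage and compute, any engine that can read the metadata can query the same tables, which is the portability the lakehouse architecture depends on.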

SQL query engine: This component is at the heart of the data lakehouse. It executes queries against the data and is often referred to as the “compute” component. There are many open-source query engines for the lakehouse, such as Presto and Apache Spark. In a lakehouse architecture, the query engine is fully modular and ephemeral, meaning it can be dynamically scaled to meet big data workload and concurrency demands. SQL query engines can attach to any number of catalogs and storage systems.
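To make the compute layer concrete, the example below runs a declarative SQL aggregation using Python's built-in sqlite3 purely as a single-node stand-in; a real lakehouse would run a distributed engine such as Presto or Spark SQL against files in object storage. The table and rows are made up for the example.

```python
import sqlite3

# sqlite3 stands in for the SQL query engine here: the query author
# writes declarative SQL, and where the bytes physically live is the
# storage layer's concern, not the query's.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("emea", 120.0), ("emea", 80.0), ("amer", 200.0)],
)

# A typical analytical query: aggregate revenue per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('amer', 200.0), ('emea', 200.0)]
```

In a lakehouse, the same SQL would be planned against metastore metadata and executed in parallel over Parquet files, and the engine itself could be scaled up or torn down independently of the data.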

Although the lakehouse offers a lot of promise, a few questions remain. Most vendors in the market optimize a single SQL engine to tackle a range of workloads, which is often insufficient: some applications demand greater performance, while others require greater language flexibility.

While a lakehouse is open by design, and many vendors have touted the ability to prevent lock-in at the data store layer with support for open data and table formats, metadata portability can still be lacking, requiring customers to perform significant rework when onboarding to or leaving a solution.

Data lakehouse architecture is getting attention, and organizations will want to optimize the components most critical to their business. A lakehouse architecture can bring the flexibility, modularity and cost-effective extensibility that modern data engineering, data science and analytics use cases demand, and can make it simpler to take advantage of future enhancements. However, there is still much that can be done to further optimize and provide greater openness and flexibility; the industry is looking for an open data lakehouse approach.

Learn about IBM’s new approach to scale AI workloads with watsonx.data, a fit-for-purpose data store built on an open lakehouse architecture and optimized for all data, analytics, and AI workloads.

Explore IBM’s data lakehouse solution