Organizations are dealing with large volumes of data from an array of different sources, and these datasets vary in type and quality. At the same time, organizations want to minimize the cost of data processing and insight extraction while maximizing efficiency and value. To satisfy these somewhat opposing requirements, they store data across a complex, messy landscape of data lakes, data warehouses and data marts.

Maintaining this siloed, complex data-analytics ecosystem of questionable data quality and varying data structure, so that it can serve as a reliable source of truth for analytics and decision making, requires effort, time and money. The ecosystem evolved over decades from bandages applied to existing data management investments, without consideration of a holistic approach to the data management lifecycle.

All that is changing.


The emergence of data lakehouse architecture

To address the challenge of this distributed data landscape, the data lakehouse emerged, combining the enterprise features and high performance of a data warehouse with the openness, flexibility and scalability of data lakes.

The current generation of lakehouse solutions reduces the burden of maintaining and managing multiple systems by consolidating the data stored in warehouses and lakes into a single storage location on low-cost commodity object storage such as Amazon S3. These lakehouses address performance with modern distributed SQL engines and openness with open data and table formats. Consistency and data quality are addressed through modern table formats such as Apache Iceberg, Apache Hudi and Delta Lake, which also bring data warehousing qualities, such as ACID transactions, to the lake.
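A common mechanism behind the ACID guarantees of these table formats is that writers never modify existing data files; they write new immutable files and then atomically swap a single metadata pointer, so readers always see a complete, consistent snapshot. The sketch below illustrates that idea in plain Python with the standard library only; the file names and layout are hypothetical and do not follow any specific table format's spec:

```python
import json
import os
import tempfile

def commit_snapshot(table_dir, data_files):
    """Atomically publish a new table snapshot by swapping a pointer file.

    New data files are added alongside old ones; the commit itself is a
    single atomic rename of the 'current snapshot' pointer, so a reader
    sees either the old file list or the new one, never a partial state.
    """
    staged = os.path.join(table_dir, "snapshot-staged.json")
    with open(staged, "w") as f:
        json.dump({"data_files": data_files}, f)
    # os.replace is atomic on POSIX filesystems
    os.replace(staged, os.path.join(table_dir, "current-snapshot.json"))

def read_snapshot(table_dir):
    """What a reader resolves before scanning: the committed file list."""
    with open(os.path.join(table_dir, "current-snapshot.json")) as f:
        return json.load(f)["data_files"]

table = tempfile.mkdtemp()
commit_snapshot(table, ["part-0001.parquet"])
commit_snapshot(table, ["part-0001.parquet", "part-0002.parquet"])
print(read_snapshot(table))  # -> ['part-0001.parquet', 'part-0002.parquet']
```

Real table formats layer snapshot history, schema evolution and conflict detection on top of this pointer-swap pattern, but the atomic-commit core is the same.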

Here is an overview of the major components of a lakehouse:

Storage: This is the layer that physically stores the data. The most common data lake/lakehouse storage types are S3-compatible object storage and HDFS. In this layer, data is stored as files, typically in open data file formats such as Parquet, Avro and more.

Technical metadata storage/service: This component is required to understand what data is available in the storage layer. The query engine needs file and table metadata to understand where the data is located, how it is structured and how to read it. The de facto open metadata storage solution is the Hive Metastore.
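To make the metastore's role concrete, a catalog essentially records, for each table, where the files live, how they are encoded and what the schema looks like, so that any engine can locate and interpret them. The toy sketch below is a deliberate simplification, not the Hive Metastore's actual data model; the table name, path and columns are illustrative:

```python
# A toy technical-metadata catalog: maps table names to the information
# a query engine needs before it can plan a scan.
catalog = {
    "sales": {
        "location": "s3://lake/warehouse/sales/",  # where the files live
        "format": "parquet",                       # how to decode them
        "schema": {"order_id": "bigint", "amount": "double", "ts": "timestamp"},
        "partitioned_by": ["ts"],                  # layout hint for pruning
    }
}

def describe(table):
    """Resolve a table the way an engine queries the metastore."""
    meta = catalog[table]
    return meta["location"], meta["format"], list(meta["schema"])

print(describe("sales"))
```

Because this metadata lives outside any one engine, several different engines can resolve and read the same tables, which is what makes the layer so central to lakehouse openness.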

SQL query engine: This component is at the heart of the data lakehouse. It executes queries against the data and is often referred to as the "compute" component. There are many open-source query engines for lakehouses on the market, such as Presto and Apache Spark. In a lakehouse architecture, the query engine is fully modular and ephemeral, meaning the engine can be dynamically scaled to meet big data workload demands and concurrency. SQL query engines can attach to any number of catalogs and storage systems.
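The separation of compute from storage described above can be sketched with a stateless toy "engine" that resolves a table through a catalog, reads the underlying files and applies a predicate. Everything here is illustrative: the in-memory dictionaries stand in for object storage and a metastore, and the paths and rows are made up:

```python
# Toy illustration of a decoupled query engine: storage holds immutable
# files, the catalog maps tables to files, and the engine is stateless,
# so it can be spun up, scaled out or discarded without touching the data.
storage = {  # stand-in for object storage: path -> rows
    "lake/events/part-0.json": [{"user": "a", "clicks": 3}],
    "lake/events/part-1.json": [{"user": "b", "clicks": 7}],
}
catalog = {"events": ["lake/events/part-0.json", "lake/events/part-1.json"]}

def run_query(table, predicate):
    """A stateless scan: resolve files via the catalog, then filter rows."""
    rows = []
    for path in catalog[table]:
        rows.extend(row for row in storage[path] if predicate(row))
    return rows

print(run_query("events", lambda row: row["clicks"] > 5))
```

Because the engine holds no state of its own, any number of engine instances, or entirely different engines, can attach to the same catalog and storage, which is the property that lets lakehouse compute scale independently of the data.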

Although the lakehouse offers a lot of promise, a few questions remain. Most vendors in the market are optimizing a single SQL engine to tackle a range of workloads, which is often insufficient, as some applications demand greater performance while others require greater language flexibility.

While a lakehouse is open by design, and many vendors tout support for open data and table formats as preventing lock-in at the data store layer, metadata portability can still be lacking, requiring customers to perform significant rework when onboarding to or leaving a solution.

Data lakehouse architecture is getting attention, and organizations will want to optimize the components most critical to their business. A lakehouse architecture can bring the flexibility, modularity and cost-effective extensibility that modern data engineering, data science and analytics use cases demand, and it can simplify taking advantage of future enhancements. However, there is still much that can be done to provide greater openness and flexibility: the industry is looking for an open data lakehouse approach.

Learn about IBM’s new approach to scaling AI workloads with a fit-for-purpose data store built on an open lakehouse architecture and optimized for all data, analytics and AI workloads.

Download the open lakehouse eBook

