April 4, 2023 By Kevin Shen 3 min read

Organizations are dealing with large volumes of data from an array of different sources, and these datasets vary in type and quality. At the same time, organizations want to minimize the cost of data processing and insight extraction while maximizing efficiency and value. To satisfy these somewhat opposing requirements, many have ended up storing data across a complex, messy landscape of data lakes, data warehouses and data marts.

It takes effort, time and money to maintain this siloed, complex data-analytics ecosystem, with its questionable data quality and varying data structures, as a source of truth that can be relied upon for analytics and decision-making. The ecosystem evolved over decades from bandages applied to existing data management investments, without consideration of a holistic approach to the data management lifecycle.

All that is changing.


The emergence of data lakehouse architecture

To address the challenges of this distributed data landscape, the data lakehouse emerged, combining the enterprise features and high performance of a data warehouse with the openness, flexibility and scalability of a data lake.

The current generation of lakehouse solutions reduces the burden of maintaining and managing multiple systems by consolidating data from warehouses and lakes into a single storage location on low-cost, commodity S3-compatible object storage. These lakehouses address performance with modern distributed SQL engines and openness with open data and table formats. Consistency and data quality are addressed by modern table formats such as Apache Iceberg, Apache Hudi and Delta Lake, which also bring data warehousing qualities, such as ACID transactions, to the lake.
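To make the ACID point concrete, here is a deliberately simplified Python sketch of the commit mechanism these table formats rely on: each write produces a new immutable snapshot, and the commit is a single swap of a "current snapshot" pointer. This is a toy illustration of the idea, not the actual Iceberg, Hudi or Delta Lake implementation; all names in it are invented.

```python
class SnapshotTable:
    """Toy table with snapshot-based commits, loosely modeled on open table formats."""

    def __init__(self):
        self._snapshots = {0: []}   # snapshot id -> list of data file names
        self._current = 0           # pointer that readers follow

    def commit(self, new_files):
        """Write a new snapshot, then advance the pointer in one step."""
        next_id = self._current + 1
        # The new snapshot references the old files plus the appended ones;
        # nothing already committed is ever mutated.
        self._snapshots[next_id] = self._snapshots[self._current] + list(new_files)
        self._current = next_id     # the single atomic step readers observe

    def scan(self):
        """Readers only ever see a fully committed snapshot, never a partial write."""
        return list(self._snapshots[self._current])


table = SnapshotTable()
table.commit(["part-00000.parquet"])
table.commit(["part-00001.parquet"])
print(table.scan())  # ['part-00000.parquet', 'part-00001.parquet']
```

Because readers resolve the pointer once and then scan an immutable file list, a failed or in-progress write is simply invisible to them, which is the essence of the ACID guarantee these formats add on top of plain object storage.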

Here is an overview of the major components of a lakehouse:

Storage: This is the layer that physically stores the data. The most common data lake/lakehouse storage types are S3-compatible object storage and HDFS. In this layer, data is stored as files, typically in open file formats such as Parquet and Avro.
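In practice, a table in this layer is just a collection of open-format files under a predictable object-key layout, often partitioned by a column such as date. The following sketch shows what such keys look like; the bucket name, table name and partition scheme are made up for illustration.

```python
def object_key(table: str, partition_date: str, file_index: int) -> str:
    """Build an S3-style object key for one Parquet data file in a date-partitioned table."""
    return (
        f"s3://analytics-bucket/{table}/"
        f"date={partition_date}/part-{file_index:05d}.parquet"
    )


key = object_key("web_events", "2023-04-04", 0)
print(key)  # s3://analytics-bucket/web_events/date=2023-04-04/part-00000.parquet
```

A layout like this lets a query engine prune entire partitions (for example, skipping every prefix except `date=2023-04-04/`) before reading a single byte of data.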

Technical metadata storage/service: This component is required to understand what data is available in the storage layer. The query engine needs file and table metadata to understand where the data is located, what it looks like and how to read it. The de facto open metadata storage solution is the Hive Metastore.
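A minimal sketch of what such a metadata service provides: given a table name, it returns the table's storage location and schema, which is what a query engine looks up before planning a scan. This toy in-memory class only illustrates the lookup pattern; a real metastore such as the Hive Metastore is a shared, persistent service, and the table and schema below are invented.

```python
class Metastore:
    """Toy stand-in for a technical metadata service (e.g., a Hive Metastore)."""

    def __init__(self):
        self._tables = {}

    def register_table(self, name, location, schema):
        """Record where a table's files live and what columns they contain."""
        self._tables[name] = {"location": location, "schema": schema}

    def get_table(self, name):
        """A query engine calls this before planning a scan of the table."""
        return self._tables[name]


ms = Metastore()
ms.register_table(
    "web_events",
    location="s3://analytics-bucket/web_events/",
    schema={"event_id": "bigint", "event_date": "date", "url": "string"},
)
info = ms.get_table("web_events")
print(info["location"])  # s3://analytics-bucket/web_events/
```

Separating this catalog from both storage and compute is what lets multiple engines share the same tables without copying data.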

SQL query engine: This component is at the heart of the data lakehouse. It executes queries against the data and is often referred to as the "compute" component. There are many open-source query engines for the lakehouse on the market, such as Presto and Apache Spark. In a lakehouse architecture, the query engine is fully modular and ephemeral: it can be scaled dynamically to meet big data workload demands and concurrency, and it can attach to any number of catalogs and storage systems.
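The "modular compute" idea above can be sketched in a few lines: the engine holds no data of its own, attaches to any number of catalogs, and evaluates queries against whatever tables they expose. This is a conceptual toy, not how Presto or Spark are implemented, and every name in it is invented; the predicate function stands in for a SQL WHERE clause.

```python
class QueryEngine:
    """Toy engine illustrating compute that attaches to multiple catalogs."""

    def __init__(self):
        self._catalogs = {}

    def attach_catalog(self, name, tables):
        """Attach a catalog: a mapping of table name -> rows (stand-in for real storage)."""
        self._catalogs[name] = tables

    def query(self, catalog, table, predicate):
        """Scan one table and keep rows matching the predicate (a tiny 'SQL filter')."""
        return [row for row in self._catalogs[catalog][table] if predicate(row)]


engine = QueryEngine()
engine.attach_catalog(
    "hive", {"orders": [{"id": 1, "amount": 50}, {"id": 2, "amount": 500}]}
)
engine.attach_catalog("iceberg", {"events": [{"id": 7, "kind": "click"}]})

big_orders = engine.query("hive", "orders", lambda r: r["amount"] > 100)
print(big_orders)  # [{'id': 2, 'amount': 500}]
```

Because the engine is stateless apart from its catalog attachments, you can spin up several such engines against the same catalogs and storage, which is what makes lakehouse compute elastic and ephemeral.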

Although the lakehouse offers a lot of promise, a few questions remain. Most vendors in the market optimize a single SQL engine to tackle a range of workloads, which is often insufficient: some applications demand greater performance, while others require greater language flexibility.

While a lakehouse is open by design, and many vendors tout support for open data and table formats as preventing lock-in at the data store layer, metadata portability can still be lacking, forcing customers into significant rework when onboarding to or leaving a solution.

Data lakehouse architecture is getting attention, and organizations will want to optimize the components most critical to their business. A lakehouse architecture can bring the flexibility, modularity and cost-effective extensibility that modern data engineering, data science and analytics use cases demand, and can simplify taking advantage of future enhancements. However, there is still much that can be done to further optimize and provide greater openness and flexibility: the industry is looking for an open data lakehouse approach.

Learn about IBM’s new approach to scaling AI workloads with watsonx.data, a fit-for-purpose data store built on an open lakehouse architecture and optimized for all data, analytics, and AI workloads.

Explore IBM’s data lakehouse solution