ETL is a process that extracts, transforms, and loads data from multiple sources to a data warehouse or other unified data repository.
ETL, which stands for extract, transform and load, is a data integration process that combines data from multiple data sources into a single, consistent data store that is loaded into a data warehouse or other target system.
As databases grew in popularity in the 1970s, ETL was introduced as a process for integrating and loading data for computation and analysis, eventually becoming the primary method of processing data for data warehousing projects.
ETL provides the foundation for data analytics and machine learning workstreams. Through a series of business rules, ETL cleanses and organizes data in a way that addresses specific business intelligence needs, like monthly reporting, but it can also tackle more advanced analytics, which can improve back-end processes or end-user experiences. Organizations often use ETL to extract data from legacy systems, cleanse the data to improve quality and establish consistency, and load data into a target database.
The most obvious difference between ETL and ELT is the order of operations. ELT copies or exports the data from the source locations, but instead of loading it into a staging area for transformation, it loads the raw data directly into the target data store, where it is transformed as needed.
While both processes draw on a variety of data repositories, such as databases, data warehouses, and data lakes, each has its advantages and disadvantages. ELT is particularly useful for high-volume, unstructured datasets because loading can occur directly from the source, and it is often better suited to big data management because it requires little upfront planning for data extraction and storage.

The ETL process, on the other hand, requires more definition at the outset. Specific data points need to be identified for extraction, along with any potential "keys" for integrating records across disparate source systems. Even after that work is completed, the business rules for the data transformations need to be constructed. This work usually depends on the data requirements of a given type of analysis, which determine how summarized the data needs to be. And while ELT has become increasingly popular with the adoption of cloud databases, it is the newer of the two processes, which means its best practices are still being established.
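To make the difference in ordering concrete, here is a minimal, self-contained sketch in Python. The source, staging area, and warehouse are plain Python lists standing in for real systems, and the single business rule is purely illustrative.

```python
# Minimal sketch contrasting ETL and ELT order of operations.
# Lists stand in for the source system, staging area, and warehouse.

source = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "3.0"}]

def transform(rows):
    # Example business rule: cast string amounts to numeric values.
    return [{**r, "amount": float(r["amount"])} for r in rows]

# ETL: extract -> transform in a staging area -> load the result.
staging = list(source)               # 1. extract to staging
warehouse_etl = transform(staging)   # 2. transform, 3. load transformed data
print("ETL target:", warehouse_etl)

# ELT: extract -> load raw data -> transform inside the target as needed.
warehouse_elt = list(source)         # 1. extract, 2. load raw data directly
warehouse_elt = transform(warehouse_elt)  # 3. transform later, in the target
print("ELT target:", warehouse_elt)
```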
The easiest way to understand how ETL works is to understand what happens in each step of the process.
During data extraction, raw data is copied or exported from source locations to a staging area. Data management teams can extract data from a variety of structured and unstructured sources, including (but not limited to) SQL or NoSQL servers, CRM and ERP systems, flat files, email, and web pages.
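As a rough illustration, the sketch below extracts raw records from two hypothetical sources, a SQLite database (crm.db) and a flat file (orders.csv), into an in-memory staging list; real pipelines would extract to durable staging storage, and all names here are assumptions for illustration.

```python
# Minimal extraction sketch: copy raw records from two sources into staging.
import csv
import sqlite3

staging = []

# Structured source: a relational database table (hypothetical crm.db).
with sqlite3.connect("crm.db") as conn:
    conn.row_factory = sqlite3.Row
    for row in conn.execute("SELECT customer_id, name, email FROM customers"):
        staging.append(dict(row))  # copy raw, untransformed rows

# Flat-file source: a CSV export from another system (hypothetical orders.csv).
with open("orders.csv", newline="") as f:
    for record in csv.DictReader(f):
        staging.append(record)

print(f"Staged {len(staging)} raw records for transformation")
```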
In the staging area, the raw data undergoes data processing: it is transformed and consolidated for its intended analytical use case. This phase can involve tasks such as filtering, cleansing, de-duplicating, validating, and authenticating the data; performing calculations, translations, or summarizations based on the raw values; conducting audits to ensure data quality and compliance; removing, encrypting, or protecting data governed by industry or government regulations; and formatting the data into tables or joined tables to match the schema of the target data warehouse.
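The following sketch illustrates a few of those tasks (validation, cleansing, de-duplication, a derived calculation, and formatting to a target schema) over staged records. Field names and rules are illustrative assumptions, not a prescribed transformation.

```python
# Minimal transformation sketch over staged records.
from collections import OrderedDict

staging = [
    {"customer_id": "1", "email": " A@EXAMPLE.COM ", "amount": "10.50"},
    {"customer_id": "1", "email": "a@example.com", "amount": "10.50"},  # duplicate
    {"customer_id": "2", "email": "b@example.com", "amount": "bad"},    # invalid
]

transformed = OrderedDict()  # keyed, so duplicates collapse
for rec in staging:
    try:
        amount = float(rec["amount"])        # validate and cast
    except ValueError:
        continue                             # filter out invalid rows
    email = rec["email"].strip().lower()     # cleanse inconsistent values
    key = (rec["customer_id"], email)        # de-duplicate on a business key
    transformed.setdefault(key, {
        "customer_id": int(rec["customer_id"]),  # format to target schema types
        "email": email,
        "amount_usd": round(amount, 2),          # derived, calculated value
    })

print(list(transformed.values()))
```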
In this last step, the transformed data is moved from the staging area into a target data warehouse. Typically, this involves an initial loading of all data, followed by periodic loading of incremental data changes and, less often, full refreshes to erase and replace data in the warehouse. For most organizations that use ETL, the process is automated, well-defined, continuous and batch-driven. Typically, ETL takes place during off-hours when traffic on the source systems and the data warehouse is at its lowest.
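As a rough illustration of an initial load followed by incremental loading, the sketch below uses SQLite as a stand-in warehouse; production warehouses provide their own bulk-load and merge (upsert) mechanisms, and the table here is hypothetical.

```python
# Minimal loading sketch: initial bulk load, then an incremental "upsert"
# that inserts new rows and updates changed ones.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a target data warehouse
conn.execute(
    "CREATE TABLE sales (customer_id INTEGER PRIMARY KEY, amount_usd REAL)"
)

# Initial load of all transformed data.
initial = [(1, 10.50), (2, 7.25)]
conn.executemany("INSERT INTO sales VALUES (?, ?)", initial)

# Periodic incremental load of data changes.
changes = [(2, 9.00), (3, 4.10)]
conn.executemany(
    """INSERT INTO sales (customer_id, amount_usd) VALUES (?, ?)
       ON CONFLICT(customer_id) DO UPDATE SET amount_usd = excluded.amount_usd""",
    changes,
)

print(conn.execute("SELECT * FROM sales ORDER BY customer_id").fetchall())
```

A full refresh, by contrast, would truncate the target table and repeat the initial load from scratch.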
ETL and ELT are just two data integration methods; other approaches, such as change data capture (CDC), data replication, data virtualization, and stream data integration, are also used to facilitate data integration workflows.
ETL solutions improve data quality by performing data cleansing before loading the data into a different repository. A time-consuming batch operation, ETL is recommended most often for creating smaller target data repositories that require less frequent updating, while other data integration methods, including ELT (extract, load, transform), change data capture (CDC), and data virtualization, are used to integrate increasingly large volumes of changing data or real-time data streams.
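Change data capture is often approximated with a watermark: rather than a full extract, only rows modified since the last run are pulled. The sketch below uses a hypothetical updated_at column for this; true CDC tools typically read the database's transaction log instead.

```python
# Minimal watermark-based sketch approximating change data capture.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a source system
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT, updated_at TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (1, "a@example.com", "2024-01-01T00:00:00"),
    (2, "b@example.com", "2024-03-15T12:30:00"),
])

last_run = "2024-02-01T00:00:00"  # watermark persisted between pipeline runs
changed = conn.execute(
    "SELECT id, email, updated_at FROM customers WHERE updated_at > ?",
    (last_run,),
).fetchall()
print("Changed rows since last run:", changed)
```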
In the past, organizations wrote their own ETL code. There are now many open-source and commercial ETL tools and cloud services to choose from. Typical capabilities of these products include comprehensive automation and ease of use, a visual drag-and-drop interface for specifying rules and data flows, support for complex data management, and security and compliance features.
In addition, many ETL tools have evolved to include ELT capability and to support integration of real-time and streaming data for artificial intelligence (AI) applications.
Application Programming Interfaces (APIs) using Enterprise Application Integration (EAI) can be used in place of ETL for a more flexible, scalable solution that includes workflow integration. While ETL is still the primary data integration resource, EAI is increasingly used with APIs in web-based settings.
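As a simple illustration of API-based integration, the sketch below pulls records over HTTP using only the Python standard library; the endpoint URL is hypothetical.

```python
# Minimal sketch: fetch records from a (hypothetical) REST API endpoint,
# rather than extracting them directly from a database.
import json
import urllib.request

url = "https://api.example.com/v1/customers"  # hypothetical endpoint
with urllib.request.urlopen(url) as resp:
    records = json.load(resp)

# The fetched records can then enter the same transform-and-load steps
# described above.
print(f"Fetched {len(records)} records via API")
```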
IBM offers several data integration tools and services that are designed to support a business-ready data pipeline and give your enterprise the tools it needs to scale efficiently.
IBM, a leader in data integration, gives enterprises the confidence they need when managing big data projects, SaaS applications and machine learning technology. With industry-leading platforms like IBM Cloud Pak for Data, organizations can modernize their DataOps processes while using best-in-class virtualization tools to achieve the speed and scalability their business needs now and in the future.
For more information on how your enterprise can build and execute an effective data integration strategy, explore the IBM suite of data integration offerings.