What is data profiling?

Data profiling, or data archeology, is the process of reviewing and cleansing data to better understand how it’s structured and maintain data quality standards within an organization.

The main purpose is to gain insight into the quality of the data by using methods to review and summarize it, and then evaluating its condition. The work is typically performed by data engineers who will use a range of business rules and analytical algorithms.

Data profiling evaluates data against factors such as accuracy, consistency, and timeliness, revealing where the data is inconsistent, inaccurate, or incomplete (for example, containing null values). The results may be as simple as summary statistics for a column, depending on the data set. Data profiling can be used for projects involving data warehousing or business intelligence, and is even more beneficial for big data. It can also be an important precursor to data processing and data analytics.


How does data profiling work?

Companies use data profiling software or applications to prepare data sets appropriately and remove bad data. Specifically, profiling helps determine which sources have, or are creating, data quality issues that ultimately affect overall operational and financial success. The process also serves as a necessary data quality assessment.

The first step of data profiling is gathering data sources and associated metadata for analysis, which can often lead to the discovery of foreign key relationships. The next steps clean the data to ensure a unified structure and to eliminate duplication, among other things. Once the data has been cleaned, the data profiling software returns statistics describing the data set, such as the mean, minimum/maximum values, and frequency. Below, we outline common data profiling techniques.
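As a minimal sketch of the statistics step (using hypothetical data, not any particular profiling product), the following Python function computes the kind of column-level summary that profiling software typically returns:

```python
from collections import Counter

def profile_column(values):
    """Return simple profile statistics for one column of data.

    Nulls (None) are counted separately and excluded from numeric stats.
    """
    non_null = [v for v in values if v is not None]
    freq = Counter(non_null)
    profile = {
        "count": len(values),
        "null_count": len(values) - len(non_null),
        "distinct": len(freq),
        "most_common": freq.most_common(1)[0] if freq else None,
    }
    # Only compute numeric statistics when every non-null value is numeric.
    if non_null and all(isinstance(v, (int, float)) for v in non_null):
        profile["min"] = min(non_null)
        profile["max"] = max(non_null)
        profile["mean"] = sum(non_null) / len(non_null)
    return profile

ages = [34, 29, None, 41, 29]
print(profile_column(ages))
```

Real profiling tools compute these summaries at scale across every column, but the underlying measures (counts, nulls, distinct values, extremes, frequency) are the same.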

Data profiling vs. data mining

While there is overlap with data mining, data profiling has a different goal in mind. What is the difference?

  • Data profiling helps in the understanding of data and its characteristics, whereas data mining is the process of discovering patterns or trends by analyzing the data.
  • Data profiling focuses on the collection of metadata and then using methods to analyze it to support data management.
  • Data profiling, unlike data mining, produces a summary of the data’s characteristics and enables use of the data.

In other words, data profiling is the first tool you use to ensure the data is accurate and free of errors.

Types of data profiling

Data profiling should be an essential part of how an organization handles its data, and companies should treat it as a key component of data cleansing. It not only helps you understand your data, it also verifies that the data meets statistical standards. A team of analysts can approach data profiling in many different ways, but the work typically falls into three major categories, all with the same goal: improving the quality of your data and gaining a better understanding of it.

Here are the approaches analysts may use to profile your data:

  • Structure discovery: This approach focuses on the format of the data, ensuring it is consistent throughout the database. Analysts might use a number of different processes when examining the database this way. One is pattern matching, which helps you understand format-specific information. For example, if a column of phone numbers contains an entry with missing digits, structure discovery will catch it.
  • Content discovery: This type is when you analyze data rows for errors or systemic issues. This process is a closer look at the individual elements of the database and can help you find incorrect values.
  • Relationship discovery: This type entails finding out what data is in use and trying to find the connection between each set. In order to do this, analysts will begin with metadata analysis to figure out what the relationships are between data and then narrow down the connections between specific fields.
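To illustrate the pattern-matching side of structure discovery, here is a minimal sketch; the `NNN-NNN-NNNN` phone format and the sample values are assumptions for the example, not a universal standard:

```python
import re

# Hypothetical expected format for this data set: NNN-NNN-NNNN
PHONE_PATTERN = re.compile(r"^\d{3}-\d{3}-\d{4}$")

def find_format_violations(column):
    """Return (row_index, value) pairs that are missing or malformed."""
    return [(i, v) for i, v in enumerate(column)
            if v is None or not PHONE_PATTERN.match(v)]

phones = ["555-867-5309", "555-0199", None, "555-123-4567"]
print(find_format_violations(phones))  # → [(1, '555-0199'), (2, None)]
```

Content discovery would then inspect the flagged values individually, and relationship discovery would look at how this column links to fields in other tables.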
Benefits and challenges of data profiling

Generally speaking, there are few downsides to profiling your data. Having a large amount of data is one thing, but its quality is what matters, and that is where data profiling comes into play. Standardized, precisely formatted data leaves little room for unhappy clients or miscommunication.

The challenges are mostly systemic in nature: if your data is not all in one place, for instance, it becomes very difficult to locate. With the right data tools and applications in place, however, this shouldn’t be an issue, and profiling can only benefit a company’s decision-making. Let’s take a closer look at other key benefits and challenges.


Data profiling can offer a high-level overview of data unlike any other tool. More specifically, you can expect:

  • More accurate analytics: Complete data profiling ensures higher-quality, more credible data. Properly profiling your data helps you make better sense of the relationships between different data sets and sources, and helps support data governance procedures.
  • Centralized information: Examining and analyzing your data through profiling yields data that is higher quality and better organized. Reviewing source data eliminates errors and highlights the areas with the most issues, producing insight and organization that centralizes your data effectively.


Data profiling challenges typically stem from the complexity of the work involved. More specifically, you can expect:

  • Expensive and time-consuming: Data profiling can become very complex to implement, due in part to the sheer volume of data a typical organization collects. Without the correct tools, hiring trained experts to analyze the results and make decisions becomes an expensive and time-consuming task.
  • Inadequate resources: To start the data profiling process, a company needs its data in one place, which is often not the case. If the data lives across different departments and there is no trained data professional in place, it can be very difficult to profile the company’s data as a whole.
Data profiling tools and best practices

No matter what the approach may be, the following data profiling tools and best practices optimize data profiling accuracy and efficiency:

Column profiling: This method scans tables and counts the number of times each value shows up within each column. Column profiling can be useful in finding frequency distributions and patterns within a column.

Cross-column profiling: This technique is made up of two processes: key analysis and dependency analysis. The key analysis process examines arrays of attribute values, scouting for a possible primary key, while the dependency analysis process identifies what relationships or patterns are embedded within the data set.
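As a minimal sketch of these two processes (the sample rows and column names are hypothetical): key analysis looks for columns whose values are unique across all rows, and dependency analysis checks whether one column's value determines another's.

```python
def candidate_keys(rows, columns):
    """Key analysis: return columns whose values are unique across
    all rows, making them possible primary keys."""
    return [c for c in columns
            if len({row[c] for row in rows}) == len(rows)]

def holds_dependency(rows, determinant, dependent):
    """Dependency analysis: check the functional dependency
    determinant -> dependent, i.e. each determinant value always
    maps to the same dependent value."""
    seen = {}
    for row in rows:
        d, v = row[determinant], row[dependent]
        if seen.setdefault(d, v) != v:
            return False
    return True

rows = [
    {"id": 1, "zip": "10001", "city": "New York"},
    {"id": 2, "zip": "94105", "city": "San Francisco"},
    {"id": 3, "zip": "10001", "city": "New York"},
]
print(candidate_keys(rows, ["id", "zip", "city"]))  # → ['id']
print(holds_dependency(rows, "zip", "city"))        # → True
```

Here only `id` qualifies as a candidate key, while the `zip -> city` dependency holds across all rows.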

Cross-table profiling: This technique uses key analysis to identify stray data. The foreign key analysis identifies orphaned records or general differences to examine the relationship between column sets in different tables.
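A minimal sketch of foreign key analysis, using hypothetical customer and order tables: an orphaned record is a child row whose foreign key has no match in the parent table.

```python
def find_orphans(child_rows, fk_column, parent_keys):
    """Return child rows whose foreign key value has no matching
    record in the parent table (orphaned records)."""
    parent = set(parent_keys)
    return [row for row in child_rows if row[fk_column] not in parent]

customers = ["c1", "c2"]
orders = [
    {"order_id": "o1", "customer_id": "c1"},
    {"order_id": "o2", "customer_id": "c9"},  # orphan: no customer "c9"
]
print(find_orphans(orders, "customer_id", customers))
```

In practice this comparison runs as a SQL anti-join inside the profiling tool, but the logic is the same set-membership check.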

Data rule validation: This method assesses data sets against established rules and standards to verify that they’re in fact following those predefined rules.
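As a sketch of rule validation (the rule names, fields, and thresholds here are illustrative assumptions, not standard rules), each rule is a predicate applied to every row, and violations are collected for review:

```python
def validate(rows, rules):
    """Check each row against named rule predicates; return
    violations as (row_index, rule_name) pairs."""
    return [(i, name)
            for i, row in enumerate(rows)
            for name, check in rules.items()
            if not check(row)]

# Hypothetical rules for a customer data set
rules = {
    "age_in_range": lambda r: 0 <= r["age"] <= 120,
    "email_has_at": lambda r: "@" in r["email"],
}
rows = [
    {"age": 34, "email": "a@example.com"},
    {"age": 150, "email": "bad-address"},
]
print(validate(rows, rules))  # → [(1, 'age_in_range'), (1, 'email_has_at')]
```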

Key integrity: This practice ensures keys are always present in the data and identifies orphan keys, which can be problematic.

Cardinality: This technique checks relationships such as one-to-one and one-to-many, between data sets.
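A minimal sketch of a cardinality check, using hypothetical customer-to-order pairs: by counting how many distinct values each side maps to, observed data can be classified as one-to-one, one-to-many, or many-to-many.

```python
def relationship_cardinality(pairs):
    """Classify the relationship between two linked fields based on
    observed (left, right) value pairs."""
    left_to_right, right_to_left = {}, {}
    for l, r in pairs:
        left_to_right.setdefault(l, set()).add(r)
        right_to_left.setdefault(r, set()).add(l)
    left_many = any(len(s) > 1 for s in left_to_right.values())
    right_many = any(len(s) > 1 for s in right_to_left.values())
    if left_many and right_many:
        return "many-to-many"
    if left_many or right_many:
        return "one-to-many"
    return "one-to-one"

orders = [("cust1", "order1"), ("cust1", "order2"), ("cust2", "order3")]
print(relationship_cardinality(orders))  # → 'one-to-many'
```

Note this classifies only the cardinality observed in the sample; it cannot prove the constraint holds for data not yet seen.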

Pattern and frequency distribution: This technique ensures data fields are formatted correctly.

Data profiling use cases

While data profiling can enhance accuracy, quality and usability in multiple contexts across industries, its more prominent use cases include:

Data transformation: Before data can be processed, it needs to be transformed into a usable, organized set. Because this is an important step before creating a prediction model and examining the data, data profiling must be done prior to any of these steps. This can be accomplished with IBM Db2, the cloud-native database built to power data transformation.

Additionally, ELT (extract, load, transform) and ETL (extract, transform, load) are data integration processes that move raw data from a source system to a target database. IBM offers data integration services and solutions to support a business-ready data pipeline and give your enterprise the tools it needs to scale efficiently.

Data integration: In order to properly integrate multiple datasets, you have to first understand the relationships between each dataset. This is a vital step when trying to understand the metrics of the data and determining how to link them.

Query optimization: If you want to have the most accurate and optimized information about your company, data profiling is key. Data profiling takes into account information on the characteristics of a database and creates statistics about each database. IBM i 7.2 software provides database performance and query optimization for just that purpose. The goal of database tuning is to minimize the response time of your queries by making the best use of your system resources.

Related solutions
IBM InfoSphere Information Analyzer

IBM InfoSphere Information Analyzer evaluates the content and structure of your data for consistency and quality. InfoSphere Information Analyzer also helps you to improve the accuracy of your data by making inferences and identifying anomalies.

IBM InfoSphere® QualityStage® 

IBM InfoSphere® QualityStage® is designed to support your data quality and information governance initiatives. It enables you to investigate, cleanse and manage your data, helping you maintain consistent views of key entities including customers, vendors, locations and products.
