March 2, 2021 | By Databand


In theory, data quality is everyone’s problem. When it’s poor, it degrades marketing, product, customer success, brand perception—everything. In theory, everyone should work together to fix it. But that’s in theory.

In reality, you need someone to take ownership of the problem, investigate it, and tell others what to do. That’s where data engineers come in.

In this guide, the IBM® Databand® team has compiled a resource for grappling with data quality issues within and around your pipeline—not in theory, but in practice. And that starts with a discussion of what exactly constitutes data quality for data engineers.

Data quality challenges for data engineers

Data engineers’ perennial challenge? That everyone involved in using the data has a different understanding of what “data” means. And it’s not really those users’ fault.

The further someone is from the source of that data and the data pipelines that carry it, the more they tend to engage in magical thinking about how it can be used, often simply because they lack awareness of what the architecture supports. According to one data engineer we talked to when researching this guide, “Business leaders are always asking, ‘Hey, can we look at sales across this product category?’ when on the backend, it’s virtually impossible with the current architecture.”

The importance of observability

At the same time, businesses rely on data from pipelines they can’t fully observe. Without accurate benchmarks or a seasoned professional who can sense that output values are off, you can be data-driven right off a cliff.

What are the four characteristics of data quality?

While academic conceptions of data quality provide an interesting foundation, we’ve found that data engineers need something different. After diagnosing pipeline data quality issues for dozens of high-volume organizations over the last few years, we’ve learned that engineers need a simpler, more practical map. Only with that map can you begin to design systems that keep data quality in proper order.

We’ve condensed the typical six or seven data quality dimensions (you’ll find hundreds of variants online) into just four:

  • Fitness
  • Lineage
  • Governance
  • Stability

We also prefer the term “data health” to “data quality,” because it suggests an ongoing system that must be managed. Without checkups, pipelines can grow sick and stop working.

Dimension 1: Fitness

Is this data fit for its intended use?

The operative word here is “intended.” No two companies’ uses are identical, so fitness is always in the eye of the beholder. To test fitness, take a random sample of records and test how they perform for your intended use.
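As a minimal illustration of such a spot check, here’s a sketch in Python with pandas, assuming a hypothetical orders table that feeds a revenue dashboard; the column names and 99% threshold are illustrative assumptions, not prescriptions.

```python
import pandas as pd

def check_fitness(df: pd.DataFrame, sample_size: int = 1000) -> dict:
    """Sample records and score them against the needs of the intended use."""
    sample = df.sample(n=min(sample_size, len(df)), random_state=42)
    return {
        # Accuracy: values should reflect plausible reality.
        "non_negative_amounts": (sample["order_amount"] >= 0).mean(),
        # Completeness for this use: the dashboard needs these fields populated.
        "customer_id_present": sample["customer_id"].notna().mean(),
        "order_date_parseable": pd.to_datetime(sample["order_date"], errors="coerce").notna().mean(),
    }

# Example usage: treat the data as unfit if any check scores below 99%.
# orders = pd.read_parquet("orders.parquet")
# scores = check_fitness(orders)
# assert all(score >= 0.99 for score in scores.values()), scores
```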

Within fitness, look at:

  • Accuracy: Does the data reflect reality? (Within reason. As they say, all models are wrong. Some are useful.)
  • Integrity: Does the fitness remain high through the data’s lifecycle? (Put simply, integrity is quality sustained over time.)

Dimension 2: Lineage

Where did this data come from? When? Where did it change? Is it where it needs to be?

Lineage is your timeline. It helps you understand whether your data health problem starts with your provider. If it’s fit when it enters your pipeline and unfit when it exits, that’s useful information.

Within lineage, look at:

  • Source: Is my data source provider behaving well? For example, did Facebook change an API?
  • Origin: Where did the data already in my database come from? For example, perhaps you’re not sure who put it there.

Dimension 3: Governance

Can you control it?

These are the levers you can pull to move, restrict or otherwise control what happens to your data. It’s the procedural stuff, like loads and transformations, but also security and access.

Within governance, look at:

  • Data controls: How do we identify which data should be governed and which should be open? What should be available to data scientists and users? What shouldn’t?
  • Data privacy: Where is there currently personally identifiable information (PII)? Can we automatically redact PII like phone numbers? Can we ensure that a pipeline that accidentally contains PII fails or is killed? (A minimal sketch of both follows this list.)
  • Regulation: Can we track regulatory requirements, ensure we’re compliant, and prove we’re compliant if a regulator wants to know (under GDPR, CCPA, NY SHIELD, and more)?
  • Security: Who has access to the data? Can I control it? With enough granularity?
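On the privacy point above, here is a rough sketch of both ideas in Python with pandas: redacting phone numbers and failing a run when PII appears where it shouldn’t. The regex and column arguments are illustrative assumptions; real PII detection needs much broader coverage.

```python
import re
import pandas as pd

# Rough US-style phone pattern; a real PII scanner needs far more coverage.
PHONE_PATTERN = re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def redact_phone_numbers(df: pd.DataFrame, columns: list) -> pd.DataFrame:
    """Return a copy with anything that looks like a phone number redacted."""
    redacted = df.copy()
    for col in columns:
        redacted[col] = redacted[col].astype(str).str.replace(PHONE_PATTERN, "[REDACTED]", regex=True)
    return redacted

def fail_if_pii_detected(df: pd.DataFrame, columns: list) -> None:
    """Kill the run if PII appears in columns that should never contain it."""
    for col in columns:
        if df[col].astype(str).str.contains(PHONE_PATTERN).any():
            raise ValueError(f"PII detected in column '{col}'; failing the pipeline run")
```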

Dimension 4: Stability

Is the data complete and available in the right frequency?

Your data may be fit, meaning your downstream systems function, but is it as accurate as it could be, and is that consistently the case? If your data is fit but its accuracy varies widely, or it’s only available in monthly batch updates when you need it hourly, it’s not stable.

Stability is one of the biggest areas where data observability tools can help. Pipelines are often a black box unless you can monitor what happens inside and get alerts.

To check stability, compare your data against a benchmark dataset.
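A minimal sketch of that kind of benchmark comparison, assuming you keep a trusted snapshot of the same table to compare against (the column arguments and 10% tolerance are assumptions):

```python
import pandas as pd

def check_stability(current: pd.DataFrame, benchmark: pd.DataFrame,
                    numeric_cols: list, tolerance: float = 0.10) -> list:
    """Compare a fresh load against a benchmark snapshot and return warnings."""
    warnings = []

    # Row volume should not swing wildly between runs.
    if abs(len(current) - len(benchmark)) > tolerance * len(benchmark):
        warnings.append(f"Row count changed by more than {tolerance:.0%}")

    # Column means should stay close to the benchmark.
    for col in numeric_cols:
        baseline = benchmark[col].mean()
        if baseline == 0:
            continue  # avoid dividing by zero on an all-zero baseline
        drift = abs(current[col].mean() - baseline) / abs(baseline)
        if drift > tolerance:
            warnings.append(f"Mean of '{col}' drifted by {drift:.0%} from the benchmark")

    return warnings
```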

Within stability, look at:

  • Consistency: Does the data going in match the data going out? If it appears in multiple places, does it mean the same thing? Are weird transformations happening at predictable points in the pipeline?
  • Dependability: Is the data present when it’s needed? For example, if I build a dashboard, does it behave properly, or do I get calls from leadership?
  • Timeliness: Is the data on time? For example, if you pay NASDAQ for daily data, are they providing fresh data each day, or is any delay coming from an internal issue?
  • Bias: Is there bias in the data? Is it representative of reality? Take, for example, seasonality in the data. If you train a model for predicting consumer buying behavior and you use a dataset from November to December, you’re going to have unrealistically high sales predictions.

Now, bias of this sort isn’t completely imperceptible—some observability platforms (Databand being one of them) have anomaly detection for this reason. When you have seasonality in your data, you have seasonality in your data requirements, and thus seasonality in your data pipeline behavior. You should be able to automatically account for that.
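One simple way to account for seasonality, sketched below, is to compare each day’s metric against the same day a year earlier rather than a flat historical average. The one-year window and 50% threshold are assumptions for illustration, not Databand’s actual detection method.

```python
import pandas as pd

def seasonal_anomalies(daily_totals: pd.Series, threshold: float = 0.5) -> pd.Series:
    """Flag days that deviate sharply from the same day one year earlier.

    Assumes `daily_totals` has one row per calendar day, indexed by date.
    """
    last_year = daily_totals.shift(365)  # naive seasonal baseline: same day, prior year
    deviation = (daily_totals - last_year).abs() / last_year.abs()
    return deviation > threshold  # True where the seasonal deviation exceeds the threshold
```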

Quality data is balanced data

For data engineers, good data quality means a data pipeline set up to ensure all four dimensions: fitness, lineage, governance and stability.

As a data engineer, you cannot tackle one dimension of data quality without tackling them all. That may seem inconvenient, given that most engineers inherit data pipelines rather than build them from scratch. But such is the reality.

How to balance all four dimensions of data quality

To achieve a proper balance for data health, you need:

Data quality controls

What systems do you have for manipulating, protecting, and governing your data? With high-volume pipelines, trusting the data and spot-checking it isn’t enough; you need systematic controls.

Data quality testing

What systems do you have for measuring fitness, lineage, governance, and stability? Things will break. You must know where, and why.

Systems to identify data quality issues

If issues do occur—if a pipeline fails to run, or the result is aberrant—do you have anomaly detection to alert you? Or if PII makes it into a pipeline, does the pipeline auto-fail to protect you from violating regulations?

In short, you need a high level of data observability, paired with the ability to act continuously.

Common data pipeline data quality issues

As a final thought, when you’re diagnosing your data pipeline issues, it’s important to draw a distinction between a problem and its root cause. Your pipeline may have failed to complete. The proximal cause could have been an error in a Spark job. But the root cause? A corruption in the dataset. If you aren’t addressing issues in the dataset, you’ll be forever addressing issues.

Examples of common data pipeline quality issues:

  • Non-unicode characters
  • Unexpected transforms
  • Mismatched data in a migration or replication process
  • Pipelines missing their SLA, or running late
  • Pipelines that are too resource-intensive or costly
  • Difficulty tracing a failure back to its root cause
  • Errors in a Spark job caused by corruption in a dataset
  • A big change in your data volume or sizes

The more detail you get from your monitoring tool, the better. It’s common to discover proximal causes quickly, but then take days to discover the root cause through a taxing, manual investigation. Sometimes your pipeline workflow management tool tells you everything is okay, but a quick glance at the output confirms nothing is okay, because the values are all blank. For instance, Airflow may tell you the pipeline succeeded, but no data actually passed through. Your code ran fine—Airflow gives you a green light, you’re good—but on the data level, it’s entirely unfit.
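One way to close that gap, sketched below, is to add an explicit validation task after the load step so the DAG fails when zero rows arrive. The DAG name, table name, and get_row_count helper are hypothetical placeholders; this uses a plain Airflow PythonOperator rather than any particular observability tool.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def get_row_count(table_name: str) -> int:
    """Placeholder: replace with a real count query against your warehouse."""
    raise NotImplementedError

def assert_rows_loaded() -> None:
    # A green light from Airflow only means the code ran; this check asks
    # whether any data actually made it through.
    row_count = get_row_count("analytics.daily_orders")  # hypothetical table
    if row_count == 0:
        raise ValueError("Load task succeeded but wrote 0 rows; failing the DAG")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    validate = PythonOperator(
        task_id="validate_row_count",
        python_callable=assert_rows_loaded,
    )
```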

Constant checkups, and the ability to peer deeply into your pipeline, are what let you strike the right balance of fitness, lineage, governance, and stability to produce high-quality data. And high-quality data is how you support an organization in practice, not just in theory.

Explore how IBM Databand delivers better data quality monitoring by detecting unexpected column changes and null records to help you meet data SLAs. If you’re ready to take a deeper look, book a demo today.
