What is data labeling?

28 September 2021

Data labeling, or data annotation, is part of the preprocessing stage when developing a machine learning (ML) model.

Data labeling involves identifying raw data, such as images, text files or videos, and assigning one or more labels to specify its context for machine learning models. These labels help the models interpret the data correctly, enabling them to make accurate predictions.

Data labeling underpins different machine learning and deep learning use cases, including computer vision and natural language processing (NLP).

How does data labeling work?

Companies integrate software, processes and human data annotators to clean, structure and label data. This training data becomes the foundation for machine learning models. The labels allow analysts to isolate variables within datasets, which in turn enables the selection of optimal data predictors for ML models. The labels identify the appropriate data vectors to pull in for model training, where the model then learns to make the best predictions.
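
For illustration, the brief sketch below shows how assigned labels drive supervised model training. It assumes scikit-learn as a stand-in framework; the texts and labels are invented examples, not from this article.

```python
# A minimal sketch of labeled data feeding a supervised model.
# Assumes scikit-learn; the texts and labels are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Raw data plus the label a human annotator assigned to each example.
texts = [
    "great product, works perfectly",
    "arrived broken, very disappointed",
    "excellent quality and fast shipping",
    "terrible experience, would not buy again",
]
labels = ["positive", "negative", "positive", "negative"]

# The labels are what let the model learn which features predict each class.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

print(model.predict(vectorizer.transform(["fast shipping, great quality"])))
```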

Along with machine assistance, data labeling tasks require “human-in-the-loop (HITL)” participation. HITL leverages the judgment of human “data labelers” to create, train, fine-tune and test ML models. Data labelers help guide the labeling process by feeding the models the datasets that are most applicable to a project.

Labeled data versus unlabeled data

Computers use labeled and unlabeled data to train ML models, but what is the difference?

  • Labeled data is used in supervised learning, whereas unlabeled data is used in unsupervised learning.
  • Labeled data is more difficult to acquire and store (that is, time-consuming and expensive), whereas unlabeled data is easier to acquire and store.
  • Labeled data can be used to determine actionable insights (for example, forecasting tasks), whereas unlabeled data is more limited in its usefulness. Unsupervised learning methods can help discover new clusters of data, allowing for new categorizations when labeling.

Computers can also use a combination of labeled and unlabeled data for semisupervised learning, which reduces the need for manually labeled data while still providing a large annotated dataset.
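
As a brief sketch of that idea (assuming scikit-learn, whose convention is to mark unlabeled examples with -1; the data is synthetic):

```python
# A sketch of semisupervised learning with scikit-learn's
# SelfTrainingClassifier. All data here is synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Pretend only the first 10 examples were manually labeled;
# -1 marks the rest as unlabeled.
y = np.full(100, -1)
y[:10] = y_true[:10]

# The classifier pseudo-labels confident unlabeled points and retrains,
# reducing manual labeling while still producing a large annotated set.
model = SelfTrainingClassifier(SVC(probability=True)).fit(X, y)
print(model.predict(X[:5]))
```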

Data labeling approaches

Data labeling is a critical step in developing a high-performance ML model. Though labeling appears simple, it’s not necessarily easy to implement. As a result, companies must consider multiple factors and methods to determine the best approach to labeling. As each data labeling method has its pros and cons, a detailed assessment of task complexity, as well as the size, scope and duration of the project is advised.

Here are some paths to labeling your data:

  • Internal labeling: Using in-house data science experts simplifies tracking, provides greater accuracy, and increases quality. However, this approach typically requires more time and favors large companies with extensive resources.
  • Synthetic labeling: This approach generates new project data from preexisting datasets, which enhances data quality and time efficiency. However, synthetic labeling requires extensive computing power, which can increase pricing.
  • Programmatic labeling: This automated data labeling process uses scripts to reduce time consumption and the need for human annotation. However, the possibility of technical problems requires HITL to remain a part of the quality assurance (QA) process (see the sketch after this list).
  • Outsourcing: This approach can be an optimal choice for high-level temporary projects, but developing and managing a freelance-oriented workflow can also be time-consuming. Though freelancing platforms provide comprehensive candidate information to ease the vetting process, hiring managed data labeling teams provides prevetted staff and prebuilt data labeling tools.
  • Crowdsourcing: This approach is quicker and more cost-effective due to its microtasking capability and web-based distribution. However, worker quality, QA and project management vary across crowdsourcing platforms. One of the most famous examples of crowdsourced data labeling is reCAPTCHA. The project was twofold in that it controlled for bots while simultaneously improving the data annotation of images. For example, a reCAPTCHA prompt would ask a user to identify all the photos containing a car to prove that they were human. The program then verified its accuracy by comparing the results with those of other users, and the input from these users provided a database of labels for an array of images.
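
To make programmatic labeling concrete, here is a minimal hand-rolled sketch of rule-based labeling functions with a majority vote; ambiguous items fall back to human review, the HITL step noted above. The rules, label names and messages are invented, and the weak-supervision frameworks used in practice are considerably more sophisticated.

```python
# A hand-rolled sketch of programmatic labeling: scripted "labeling
# functions" assign weak labels, and unresolved cases are left for
# human annotators (the HITL quality-assurance step).
SPAM, NOT_SPAM, ABSTAIN = 1, 0, -1

def lf_contains_offer(text):
    # Rule: promotional phrasing suggests spam.
    return SPAM if "free offer" in text.lower() else ABSTAIN

def lf_short_greeting(text):
    # Rule: a conversational greeting suggests a legitimate message.
    return NOT_SPAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

def label(text):
    votes = [lf(text) for lf in (lf_contains_offer, lf_short_greeting)]
    votes = [v for v in votes if v != ABSTAIN]
    # Majority vote; unmatched or conflicting items go to human review.
    if not votes or len(set(votes)) > 1:
        return ABSTAIN
    return votes[0]

for msg in ["Hello, are we still meeting?", "Claim your FREE OFFER now"]:
    print(msg, "->", label(msg))
```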

Benefits and challenges of data labeling

The general tradeoff of data labeling is that, while it can accelerate a business’s scaling process, it often comes at a significant cost. More accurate data leads to better model predictions, which makes data labeling a valuable but expensive investment.

Because data annotation adds more context to datasets, it improves the performance of exploratory data analysis, machine learning (ML) and artificial intelligence (AI) applications. For instance, labeled data contributes to more relevant search results on search engine platforms and better product recommendations in e-commerce. Let’s now explore other key benefits and challenges in more detail.

Benefits

Data labeling provides users, teams and companies with greater context, quality and usability. More specifically, you can expect:

  • More precise predictions: Accurate data labeling ensures better quality assurance within machine learning algorithms, allowing the model to train and yield the expected output. Otherwise, as the old saying goes, “garbage in, garbage out.” Properly labeled data provides the “ground truth” (that is, how closely labels reflect real-world scenarios) for testing and iterating subsequent models.
  • Better data usability: Data labeling can also improve the usability of data variables within a model. For example, you might reclassify a categorical variable as a binary variable to make it more consumable for a model (see the sketch after this list). Aggregating data in this way can optimize the model by reducing the number of model variables or enabling the inclusion of control variables. Whether you’re using data to build computer vision models (for example, putting bounding boxes around objects) or NLP models (for example, classifying text for social sentiment), ensuring high-quality data is a top priority.
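
As a brief sketch of the reclassification example above (assumes pandas; the column and category names are hypothetical):

```python
# A sketch of improving data usability by reclassifying a categorical
# variable as a binary one. Column and category names are hypothetical.
import pandas as pd

df = pd.DataFrame({"subscription": ["free", "basic", "premium", "free"]})

# Collapse three subscription tiers into a single paid/unpaid flag,
# reducing the number of model variables.
df["is_paid"] = (df["subscription"] != "free").astype(int)
print(df)
```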

Challenges

Data labeling comes with its own set of challenges. In particular, some of the most common challenges are:

  • Expensive and time-consuming: While data labeling is critical for machine learning models, it can be costly from both a resource and a time perspective. Even if a business takes a more automated approach, engineering teams still need to set up data pipelines before data processing, and manual labeling will almost always be expensive and time-consuming.
  • Prone to human error: Labeling approaches are also subject to human error (for example, coding errors or manual entry errors), which can decrease the quality of data. These errors, in turn, lead to inaccurate data processing and modeling. Quality assurance checks are essential to maintaining data quality.

Data labeling best practices

No matter the approach, the following best practices optimize data labeling accuracy and efficiency:

  • Intuitive and streamlined task interfaces: Minimize cognitive load and context switching for human labelers.
  • Consensus: Measures the rate of agreement between multiple labelers (human or machine). A consensus score is calculated by dividing the sum of agreeing labels by the total number of labels per asset (see the sketch after this list).
  • Label auditing: Verifies the accuracy of labels and updates them as needed.
  • Transfer learning: Takes one or more pretrained models from one dataset and applies them to another. This process might include multitask learning, in which multiple tasks are learned in tandem.
  • Active learning: A category of ML algorithms and subset of semisupervised learning that helps humans identify the most appropriate datasets. Active learning approaches include:
    • Membership query synthesis: Generates a synthetic instance and requests a label for it.
    • Pool-based sampling: Ranks all unlabeled instances according to an informativeness measure and selects the best queries to annotate.
    • Stream-based selective sampling: Selects unlabeled instances one by one and labels or ignores them depending on their informativeness or uncertainty.
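
Two of these practices translate directly into code. The sketch below computes a consensus score exactly as described (agreeing labels divided by total labels per asset) and illustrates pool-based sampling using least-confidence ranking, one common informativeness measure; all data is invented.

```python
# Sketches of two best practices: consensus scoring and pool-based
# active learning via least-confidence sampling. Data is illustrative.
from collections import Counter
import numpy as np

def consensus_score(labels):
    """Fraction of labels per asset that agree with the majority label."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

print(consensus_score(["cat", "cat", "dog"]))  # 2 agreeing / 3 total ≈ 0.67

def pool_based_queries(probabilities, k=2):
    """Rank unlabeled instances by uncertainty (least confidence) and
    return the indices of the k most informative ones to annotate."""
    confidence = probabilities.max(axis=1)
    return np.argsort(confidence)[:k]

# Predicted class probabilities for four unlabeled instances.
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.6, 0.4], [0.99, 0.01]])
print(pool_based_queries(probs))  # -> [1 2], the least confident items
```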

Data labeling use cases

Though data labeling can enhance accuracy, quality and usability in multiple contexts across industries, its most prominent use cases include:

  • Computer vision: A field of AI that uses training data to build a computer vision model that enables image segmentation and category automation, identifies key points in an image and detects the location of objects. IBM offers a computer vision platform called Maximo Visual Inspection, which enables subject matter experts (SMEs) to label and train deep learning vision models. These models can be deployed in the cloud, on edge devices and in local data centers. Computer vision is used in multiple industries, from energy and utilities to manufacturing and automotive. By 2022, this surging field is expected to reach a market value of USD 48.6 billion.
  • Natural language processing (NLP): A branch of AI that combines computational linguistics with statistical, machine learning and deep learning models to identify and tag important sections of text. These tagged sections generate training data for sentiment analysis, named entity recognition and optical character recognition (a brief sketch of tagged NLP training data follows this list). NLP is increasingly being used in enterprise solutions such as spam detection, machine translation, speech recognition, text summarization, virtual assistants and chatbots, and voice-operated GPS systems. This advancement has made NLP a critical component in the evolution of mission-critical business processes.
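
To illustrate what tagged sections of text can look like as NLP training data, here is a brief sketch using the common BIO labeling scheme for named entity recognition; the sentence and labels are invented examples.

```python
# A sketch of labeled NER training data in the common BIO scheme:
# B- marks the beginning of an entity span, I- its continuation,
# and O marks tokens outside any entity. The example is illustrative.
tokens = ["Ada", "Lovelace", "was", "born", "in", "London"]
labels = ["B-PER", "I-PER", "O", "O", "O", "B-LOC"]

# Pair each token with its label, a format many NER models train on.
for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```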