What is image classification?

Image classification is the process of assigning images to predefined categories. In machine learning, models learn to recognize the visual features that distinguish those categories and use them to label new images.

Humans classify images beginning at a young age. When a teacher asks kindergartners to sort pictures of plants and animals into piles, they use the characteristics that they’ve learned about each category to complete the task. Each of these categories holds different features that differentiate the plants from the animals. Adults might not remember learning about the distinct features that separate the two categories, as a large part of how we know to classify comes naturally.

Teaching an artificial intelligence (AI) model to do the same task can be much more difficult. The main difference here is that AI models need to be taught to “see,” whereas humans are born with this ability. Thus, humans can distinguish between a shoe and a living being from the very start. Rule-based image classification depends on labels or annotations to create these distinctions. Statistical image classification takes on this same task by training models to recognize patterns embedded in the images, eliminating much of the manual labeling work.

What is computer vision?

Computer vision refers to the more general branch of AI within which image classification falls. It uses machine learning and often neural networks to enable computers to interpret visual data such as images and videos. While experimentation with computer vision began as early as the 1950s, most experts agree that commercial use of the technique did not begin until around 1970.

Computer vision allows computers to extract useful data from what they see. This process also allows them to respond by making recommendations or even acting when they detect problems or abnormalities in the visual data. Computer vision also encompasses the field of image recognition, a broad term for a computer’s ability to interpret an image or images. To recap: computer vision is the broadest category, image recognition sits within it, and image classification is a still more specific task nested inside image recognition.

Types of image classification

Rule-based image classification

This method relies on a strictly developed process of image collection and labeling to match the specific classification task or goal. This process is completed manually by experts who select key features of the image that provide the most visual information. Rule-based image classification groups similar pixel clusters into classes by applying these rules, which are constructed from specialized knowledge. It also allows for interpretable and customizable classification without relying on complex machine learning models.

Imagine a box of photographs that you are assigned to organize. The collection contains photographs of lakes, dogs and cars. Because you don’t have any high-tech tools at your disposal with this method, you need to create a list. 

The list can look similar to the following:

  • For “cars,” look for tires, doors and side-view mirrors.

  • For “dogs,” check for floppy ears, wagging tails and long noses.

  • For “lakes,” find photos with lots of water and shoreline.

This example demonstrates that rule-based classification relies on preset rules and tools made by humans. This method contrasts with letting a computer “learn” new rules for itself. This form of image classification can include techniques such as template matching and thresholding.

Template matching involves sliding a template image over a larger input image and calculating similarity metrics at each position to find regions that match the template image.
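As a rough illustration of the sliding-window idea, the following sketch compares a template against every position of a grayscale image using a simple sum-of-squared-differences score. The arrays are toy placeholders, and a real pipeline would typically use an optimized routine such as OpenCV’s cv2.matchTemplate.

```python
import numpy as np

def template_match(image: np.ndarray, template: np.ndarray):
    """Slide `template` over `image` and return the top-left corner of the
    best-matching window, scored by sum of squared differences."""
    ih, iw = image.shape
    th, tw = template.shape
    best_pos, best_score = (0, 0), np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y:y + th, x:x + tw]
            score = np.sum((window - template) ** 2)  # lower means more similar
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# Toy example: locate a bright 2x2 patch inside a small grayscale image
image = np.zeros((6, 6))
image[3:5, 2:4] = 1.0
template = np.ones((2, 2))
print(template_match(image, template))  # (3, 2)
```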

Thresholding segments images by converting pixel values to binary based on a set cutoff value. This method differentiates features from the background according to intensity.
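A minimal thresholding sketch, assuming a grayscale image stored as a NumPy array with intensities in the 0 to 255 range; the cutoff value of 128 is an arbitrary choice.

```python
import numpy as np

def threshold(image: np.ndarray, cutoff: int = 128) -> np.ndarray:
    """Convert a grayscale image to a binary mask: 1 where the pixel
    intensity reaches the cutoff (foreground), 0 elsewhere (background)."""
    return (image >= cutoff).astype(np.uint8)

gray = np.array([[10, 200], [130, 90]])
print(threshold(gray))  # [[0 1]
                        #  [1 0]]
```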

These techniques contribute to robust and interpretable image classification systems. When handcrafted rules alone are not enough, rule-based pipelines can be supplemented with simple machine learning classifiers such as k-nearest neighbor or random forest algorithms.

Statistical image classification 

This method of classification is a bit more complex than the rule-based image classification method. Statistical image classification is designed to automatically learn and recognize patterns in images. To efficiently classify images, this method relies heavily on large labeled datasets and powerful architectures, usually convolutional neural networks (CNNs). These CNNs use three layer types: convolutional, pooling and fully connected layers, each identifying increasingly complex parts of the image. As the data moves through the successive CNN layers, more and more components of the image are recognized until the image can be classified.

Diagram of a convolutional neural network (CNN) processing an image of a zebra

Distribution-based methods

Traditional distribution-based techniques rely on clear assumptions about the statistical properties of the image data. Methods such as maximum likelihood estimation (MLE) and Bayesian classifiers analyze the probability distributions of pixel intensities or features to assign classes. In image classification, MLE assigns each pixel of the image to the class whose statistical model best explains the collected data. Bayesian classification uses Bayes’ theorem to calculate the probability that an image belongs to a certain class based on prior knowledge and the collected data. The theorem allows one to “invert” conditional probabilities: it combines prior probabilities of classes with the likelihood of observed features to predict the most probable class for a specific image segment. These algorithms require a statistical model of each class and perform classification by estimating how likely it is that a specific pixel or segment belongs to each class under those models.

P(A|B) = P(A ∩ B) / P(B)
Formula for conditional probability

Maximum likelihood estimation (MLE) is a statistical method used to estimate the parameters of a model by finding the values that make the observed data most probable. In image classification, MLE assigns each pixel or segment to the class whose statistical model maximizes the likelihood of generating that observed data.
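As a sketch of this idea under a simplifying assumption, suppose each class is modeled as a Gaussian distribution over pixel intensity, with parameters estimated from labeled training pixels; a new pixel is then assigned to the class whose model gives it the highest likelihood. The class names and intensity values below are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical labeled training pixels (grayscale intensities) for two classes
training = {
    "water": np.array([30, 35, 40, 32, 28], dtype=float),
    "vegetation": np.array([90, 95, 100, 88, 93], dtype=float),
}

# Maximum likelihood estimates of each class's Gaussian parameters (mean, std)
params = {cls: (pixels.mean(), pixels.std()) for cls, pixels in training.items()}

def classify_pixel(intensity: float) -> str:
    """Assign the pixel to the class whose Gaussian model gives the
    observed intensity the highest likelihood."""
    return max(params, key=lambda cls: norm.pdf(intensity, *params[cls]))

print(classify_pixel(37))  # "water"
print(classify_pixel(85))  # "vegetation"
```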

Distribution-free methods

Convolutional neural networks (CNNs) represent a more modern, distribution-free approach that learns features directly from the data without relying on clearly stated statistical rules. CNNs consist of multiple layers that progressively detect image features, moving from the simplest to the most complex. They use operations such as convolutions and pooling. A convolution is the mathematical operation the CNN uses to extract features from the input data, images in this case. This operation uses a filter or kernel that slides across the input. Pooling also applies a filter across the input, but unlike convolution, this filter lacks weighted parameters. Training CNNs requires large, labeled datasets and computational resources but often yields vastly improved accuracy because of their ability to automatically extract hierarchical features from raw image data.

Diagram of hierarchy
Diagram of a convolutional neural network (CNN) array
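The convolution and pooling operations described above can be illustrated in plain NumPy. This is a simplified sketch (single channel, no padding, stride of one), not how production CNN frameworks implement these layers.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid convolution: slide the kernel over the image and sum the
    element-wise products at each position to build a feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(feature_map: np.ndarray, size: int = 2) -> np.ndarray:
    """Max pooling: keep only the largest value in each size x size block.
    Unlike the convolution kernel, this filter has no learned weights."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
features = conv2d(image, edge_kernel)
print(max_pool(features))
```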

How statistical image classification works

Data collection and preprocessing: Gathering a large and diverse set of images for each class is the first step. The data must then be labeled and preprocessed. Preprocessing typically includes resizing images to fixed dimensions and normalizing pixel values; data augmentation techniques can further expand and vary the dataset.
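A minimal preprocessing sketch, assuming images are loaded with Pillow; the 224 x 224 target size and the example file paths are arbitrary illustrative choices.

```python
import numpy as np
from PIL import Image

def preprocess(path: str, size=(224, 224)) -> np.ndarray:
    """Resize an image to fixed dimensions and normalize its pixel
    values from the 0-255 range into the 0-1 range."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

# Hypothetical usage: stack preprocessed images into one array
# paths = ["plants/leaf_001.jpg", "animals/dog_002.jpg"]
# images = np.stack([preprocess(p) for p in paths])  # shape (2, 224, 224, 3)
```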

Model selection: The next step in the workflow is model selection. The selected architecture is most likely a CNN. As discussed previously, the CNN begins to detect more complex features as the data moves through its layers.

Model training and validation: After selection, the labeled images are divided into training, validation and test datasets. The network trains on these sets, repeatedly adjusting its weights to minimize the error between predicted labels and actual labels. The validation data helps prevent overfitting, and training continues until results meet a predetermined standard.
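A sketch of the dataset split, using dummy random arrays as stand-ins for a real labeled image set; the 70/15/15 proportions are a common but arbitrary choice.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins: 100 tiny "images" and binary labels, just to show the split
images = np.random.rand(100, 32, 32, 3)
labels = np.random.randint(0, 2, size=100)

X_train, X_temp, y_train, y_temp = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.50, stratify=y_temp, random_state=42)
print(len(X_train), len(X_val), len(X_test))  # 70 15 15
```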

During this step, a human-annotated image dataset like ImageNet might be used. ImageNet is a massive collection of over 14 million images, all organized and labeled to teach computers to recognize objects in pictures. Each image in the database is tagged with specific categories called “synsets,” such as “dog,” “car” or “apple,” drawn from a lexical framework called WordNet.

Feature extraction: In this step, in contrast to rule-based image classification, deep learning models learn their own features from the raw image data. This approach allows the network to build internal representations that distinguish between groups or classes.

Evaluation and deployment: Next, the model is evaluated on test data and fine-tuned if necessary. If the expected metrics are met, the model is deployed to make predictions on new images in a real-world environment.

Image classification models and algorithms

Various models and algorithms have been developed for image classification. They range from approaches such as K-nearest neighbors (KNN), random forests and support vector machines (SVM) to architectures such as AlexNet, GoogLeNet and ResNet. Each method offers different strengths in terms of accuracy, scalability and complexity. These options allow users to choose between simpler classifiers and highly sophisticated convolutional neural networks (CNNs) that can learn deep hierarchical features from images. We will look at these algorithms and models in more depth.

  • K-nearest neighbor (KNN): This algorithm is a supervised learning classifier that is widely used for image classification tasks. It works by using a distance measure, typically Euclidean distance, to compare new data points to all existing data points in the dataset. In image classification, each image is first represented as a feature vector, which can include raw pixel values, color histograms or any numerical descriptors that capture important visual characteristics of the image. The image is then classified by finding the ‘k’ most similar images in the labeled training set and assigning the most common label among those neighbors (a minimal scikit-learn sketch of KNN and the other classical classifiers appears after this list).

  • Random forest: Another supervised image classifier known for its flexibility and ease of use, this algorithm is composed of multiple decision trees whose individual predictions are combined to produce the final output. Random forest classifies images by building an ensemble of many decision trees, each trained on a different random sample of images and subset of features from the data. For a new image, each tree predicts a class label, and the class with the most votes among all trees becomes the final classification for that image.

  • Support vector machine (SVM): Commonly used for classification problems, this machine learning algorithm works by identifying the ideal boundary to maximize the margin between the closest data points of opposing classes.

  • AlexNet: A forerunner in the world of deep learning CNNs, this model gained popularity due to its simple yet deep-layered design. This model uses ReLU as its activation function instead of sigmoid.

  • GoogLeNet/Inception: Created by Google, this model employs inception modules. Each inception module contains four parallel paths with different filter sizes, and the results from these paths are combined into a single output; GoogLeNet stacks several such modules in sequence. Researchers have found that fine-tuning a pretrained model, such as Inception, gives more accurate results.

  • ResNet: This model introduces residual connections, or shortcuts, enabling data to take another path and skip some layers of the network. ResNet made it possible to train deeper networks with successful model performance on networks up to 152 layers.

  • TensorFlow custom model: Another option is to create models from scratch by using TensorFlow and Keras. This approach involves stacking layers such as Conv2D, MaxPooling2D and Dense, along with their activation functions, to construct a deep learning pipeline that can classify images after training on labeled examples (see the sketch that follows this list).
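A minimal sketch of such a custom model, assuming 224 x 224 RGB inputs and 10 target classes; the layer sizes are illustrative rather than tuned.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # low-level features (edges, textures)
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),   # higher-level features (shapes, parts)
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),          # one output per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
```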
Traditional ML and deep learning ML
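For the classical algorithms in the list above (KNN, random forest and SVM), a minimal scikit-learn sketch might look like the following. It uses the built-in digits dataset of 8 x 8 grayscale images as a small illustrative stand-in for a real image collection.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# The scikit-learn digits dataset: 8x8 grayscale images already flattened
# into 64-value feature vectors, with labels 0-9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

classifiers = {
    "knn": KNeighborsClassifier(n_neighbors=5),            # vote among 5 nearest images
    "random_forest": RandomForestClassifier(n_estimators=100),
    "svm": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, round(clf.score(X_test, y_test), 3))
```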

Image classification use cases

Automotive industry: Both image classification and object detection are becoming more prevalent in vehicles. Object detection is used to give drivers real-time information about their surroundings, which can be helpful in unfamiliar or high-traffic areas. Effective object detection relies heavily on how well the underlying CNN classifies images.

Leaf image classification of plant diseases: Researchers have developed a model capable of detecting 13 plant diseases and distinguishing diseased from healthy leaves. The model can also separate a leaf or leaves from the surroundings. A model like this could be paramount in determining whether an environment has become infected with something like beech leaf disease (BLD).

Healthcare and medical imaging: Deep learning image classification with CNNs can identify pneumonia-infected lungs in X-ray images. Physicians and medical technicians might be able to identify cases of pneumonia more quickly and accurately, and in a more cost-effective manner.

Conclusion

Image classification is a key component of computer vision. It enables machines to make sense of the visual world much as humans do. From rule-based image classification methods that rely on manual feature selection to advanced statistical image classification with CNNs capable of recognizing subtle patterns with high accuracy, this field continues to evolve rapidly. Its impact is already being felt across the healthcare, automotive and environmental industries alike. This tool empowers users with faster decision-making capabilities that can lead to increased safety overall. As image classification models grow more sophisticated, they will not only enhance existing applications but also open the door to entirely new possibilities.
