How IBM is improving Watson Visual Recognition capabilities

In 2016, IBM announced a new Watson Visual Recognition feature — custom classifier training — that allowed users to train our service with their own images to achieve unique, powerful results for their enterprise. This feature was a crucial milestone for IBM Watson and the computer vision industry.

Many of these advances were made possible by research published in the IBM Journal of Research and Development comparing the effectiveness of different deep learning architectures, data sets and training techniques when applied to new training problems representative of the custom learning workload in the IBM Cloud.

We recognized early on that the demand for understanding images goes beyond the standard ImageNet 1K benchmark (a large-scale image database used for academic and research development). Our goal was to deliver on the promise of AI for image recognition — to build a feature that democratizes training of visual classifiers and simplifies the process of user customization.

Anyone with images they want to classify in their own way can train a state-of-the-art, deep-learning-based model and run it in the IBM Cloud, all without writing a single line of code. Our custom classification feature was first to market; competitors have since seen its value and are following.
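As a rough sketch of what consuming such a service looks like in practice, the snippet below ranks the class labels in a classify response. The nested response shape (images → classifiers → classes) follows the service's documented JSON layout, but the classifier name, tags, and scores here are invented for illustration, and `top_classes` is a hypothetical helper, not part of any SDK.

```python
# Illustrative sketch: ranking the classes returned by a visual recognition
# "classify" call. The dict below mirrors the documented response structure;
# the classifier name and scores are made up for this example.

def top_classes(response, limit=3):
    """Flatten a classify response into (label, score) pairs, best first."""
    pairs = []
    for image in response.get("images", []):
        for classifier in image.get("classifiers", []):
            for cls in classifier.get("classes", []):
                pairs.append((cls["class"], cls["score"]))
    return sorted(pairs, key=lambda p: p[1], reverse=True)[:limit]

# Hypothetical response from a custom "car_damage" classifier.
sample_response = {
    "images": [{
        "classifiers": [{
            "classifier_id": "car_damage_123456789",
            "name": "car_damage",
            "classes": [
                {"class": "cracked_windscreen", "score": 0.91},
                {"class": "dented_door", "score": 0.34},
                {"class": "scratched_paint", "score": 0.12},
            ],
        }],
    }],
}

for label, score in top_classes(sample_response, limit=2):
    print(f"{label}: {score:.2f}")
# -> cracked_windscreen: 0.91
# -> dented_door: 0.34
```

In a real application the response dict would come from an authenticated API call rather than a literal, but the post-processing step looks the same.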

Since the announcement of our custom classification feature, the IBM Research team has made breakthroughs in visual recognition; their continued innovation allows us to strengthen and improve our offerings.

Drawing from their advances, we’ve differentiated Watson image recognition technology in terms of customization options, training and security.

Customization: Since 2016, we’ve improved response times by 50 percent by optimizing math routines and caching custom models inside the service. We have also improved global accessibility by supporting custom classifier tags in any UTF-8 characters.

Simplified training: We added the ability to improve custom models simply by supplying more data as you acquire it. This “retraining” process lets your classifiers keep pace with your constantly changing business. We’ve also improved the resiliency of custom models, which accelerates the training process for users and helps detect problems in training earlier.

Ensured data security: We added options for “dedicated” usage at the enterprise level for the utmost assurance of privacy.

Added language capability: There is now support for Korean, German and Italian languages in our default classifier.

Today, companies across industries continue to innovate with our Visual Recognition technology, analyzing visual content to optimize processes, decrease operating costs and enrich customer experiences.


Autoglass Bodyrepair® – part of Belron, a world-leading vehicle glass and body repair and replacement group – is using Watson Visual Recognition to improve user experience for its 11 million annual customers. Working with IBM, Autoglass Bodyrepair® has built a solution that classifies images based on the type, location, and severity of damage to an automobile. Now, their customers are able to upload a photo of car damage to Autoglass Bodyrepair’s® website and receive an automatic quote in a matter of seconds.


In Brazil, Volkswagen has built a “Cognitive Manual” for its Virtus model. The application is trained on the car manual to answer customer questions about the automobile and its features. Watson Visual Recognition also enables the application to recognize up to 30 lights from the car’s dashboard. As a result, customers can easily take a picture of a light on their dashboard (e.g., the gas light or ABS light) and send it through the app for a quick explanation of the light’s meaning and a recommendation for how to address it.


BlueChasm’s VideoReco platform uses Watson Visual Recognition to tag and enrich frames of video for classification. Users can then easily click through the tags to gain insights about the nature of the video, audio, object, or event before moving it into a production workflow. The solution can save hours of manual review, helping editors quickly find and prepare video segments for posting or production.

By embedding our Visual Recognition technology into products and workflows, our clients are working better, faster and smarter.

We’ll be announcing a number of changes to the Watson Visual Recognition service at Think 2018, IBM’s flagship conference, on March 19-22. These updates will include new feature capabilities, developer tools and improvements to user experience.
