New model augments visual recognition to help AI identify unfamiliar objects


Blog co-authors: Maria-Irina Nicolae, Vincent Lonij, and Ambrish Rawat, Research Staff Members, IBM Research – Ireland

Applications of AI are quickly becoming ubiquitous, powered by algorithms that learn from large amounts of data. Humans, on the other hand, learn very differently: they are able to reason based on a small number of assumptions and a set of logical rules. Our IBM Research team designed a method capable of combining these two learning styles, augmenting large data sets with structured, human-generated knowledge and logical rules to improve the performance of visual recognition.

Different learning styles

Most state-of-the-art AI systems use statistical learning, which relies on detecting patterns in large amounts of annotated data. Having captured meaningful patterns, these systems are then expected to make accurate predictions when faced with new data. For instance, a system trained to differentiate between images of different animals is first trained on a large set of labeled images and is subsequently able to classify new images into one of the known categories of animals.

By comparison, humans are capable of incrementally building their knowledge from only a few examples and learning to reason with the knowledge they gain. This allows them to understand and interpret entirely new objects. When faced with, say, an animal they have never seen before, a human is able to analyze its characteristics based on previous experience. They can then make decisions based on these characteristics, without needing to know the name of the animal: for example, "this looks like a predator, run away."

Figure 1: (left) Example image input to our model (a grasshopper). (right) Selected output of our model consisting of properties (links in the knowledge graph).

Our research is focused on bridging the gap between these two learning paradigms. In our recent paper, we describe how we augment a statistical visual recognition system with a structured knowledge database. This enables the visual recognition system to predict properties of objects even when those object categories were not available during training.

Our approach

Our model consists of two separate components: a knowledge graph and an image recognition system. Knowledge graphs are a rich source of structured information; they can be thought of as a network of interconnected nodes, where each node represents a concept and connections between nodes represent relationships. We make this knowledge consumable by a statistical learning system by converting each concept into a point in a semantic space. A semantic space is a space with the property that nearby locations represent similar concepts. For example, in the figure below, car is one of the concepts, and images of various types of cars correspond to points in the vicinity of that concept. Next, we train an image recognition system to associate each image with a location in the semantic space, instead of a class label, as is the standard approach in image recognition.
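To make this concrete, here is a minimal sketch of the idea of regressing images onto semantic points rather than class labels. Everything here is illustrative: the 2-D "semantic space," the toy 4-D "image features," and the linear least-squares mapping are all stand-ins for the paper's actual embeddings and trained recognition model.

```python
import numpy as np

# Hypothetical 2-D semantic space: each known concept is a point,
# and nearby points represent similar concepts.
concepts = {"car": np.array([0.9, 0.1]), "dog": np.array([0.1, 0.9])}

# Toy 4-D "image features", each labeled with its concept's semantic point.
X = np.array([[1.0, 0.2, 0.1, 0.0],   # a car image
              [0.9, 0.3, 0.0, 0.1],   # another car image
              [0.1, 0.0, 1.0, 0.8],   # a dog image
              [0.0, 0.1, 0.9, 1.0]])  # another dog image
Y = np.array([concepts["car"], concepts["car"],
              concepts["dog"], concepts["dog"]])

# Fit a linear map from image features to semantic points (least squares),
# instead of training a classifier over a fixed set of labels.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_concept(features):
    """Map image features to a semantic point, then return the nearest concept."""
    point = features @ W
    return min(concepts, key=lambda c: np.linalg.norm(concepts[c] - point))

print(predict_concept(np.array([0.95, 0.25, 0.05, 0.05])))  # a new car-like image
```

Because the model outputs a *location* rather than a label, a novel image still lands somewhere meaningful in the space even if its category was never seen, which is what the next section exploits.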

Figure 2: Schematic of a semantic space. Similar concepts are located near each other in the space. The semantic meaning of novel concepts can be inferred if their location in the space is known.


The benefit of this approach is that we can predict the properties of objects even if images of their respective categories were unavailable at training time, provided that they are somewhat similar to known object categories. For example, if the system has seen cars during training but not trucks, and then encounters a truck, it can still recognize that it has wheels and is a type of vehicle.
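The truck example can be sketched as a nearest-neighbor lookup in the semantic space. The toy property sets, 2-D embeddings, and the single-nearest-neighbor rule below are illustrative assumptions, not the paper's actual knowledge graph or inference procedure.

```python
import numpy as np

# Toy knowledge graph: properties (graph links) attached to known concepts.
properties = {
    "car": {"has_wheels", "is_vehicle"},
    "dog": {"has_legs", "is_animal"},
}
embeddings = {
    "car": np.array([0.9, 0.1]),
    "dog": np.array([0.1, 0.9]),
}

def infer_properties(point):
    """Infer properties of a novel object from its nearest known concept."""
    nearest = min(embeddings, key=lambda c: np.linalg.norm(embeddings[c] - point))
    return properties[nearest]

# A truck was never seen in training, but its predicted semantic point
# lands near "car", so useful properties can still be inferred.
truck_point = np.array([0.8, 0.2])
print(sorted(infer_properties(truck_point)))
```

The key design point is that property prediction never requires the novel category's name: only its position in the space relative to known concepts.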

The final system can accurately predict properties of object categories even when these were absent from the knowledge graph used for training. This shows that our system is capable of truly open-world operation, in which both the image training datasets and the initial knowledge of the world are incomplete.

Our method can improve many visual recognition systems currently deployed in the real world, with potential impact across major industries including self-driving vehicles, retail, and security.
