IBM and MIT launch new AI challenge to push the limits of object recognition models

Exactly one year ago, a team of researchers from the MIT-IBM Watson AI Lab and the Center for Brains, Minds, and Machines presented a new dataset, called ObjectNet, for testing object recognition models in images.

As opposed to the standard approach to large-scale dataset construction in computer vision, which relies on scraping images off the internet and crowdsourcing the labeling process, ObjectNet data was collected by crowdsourcing the creation of the images themselves. Specifically, the team designed an approach for collecting data on crowdsourcing platforms such as Amazon Mechanical Turk in a way that controls for several visual aspects of the images, such as object pose, location (or context), and camera angle. This matters because in popular benchmark datasets such as ImageNet, objects tend to appear in typical poses (e.g., chairs standing on their legs rather than lying on their sides) and expected contexts (e.g., forks in the kitchen or on dining tables, but not in the bathroom sink). This inherently injects biases into such datasets; ObjectNet removes them by introducing these controls.
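
To make these controls concrete, here is a minimal, hypothetical sketch of how randomized capture instructions might be assigned to crowd workers. The specific rotation, background, and viewpoint values below are illustrative assumptions, not ObjectNet's actual collection protocol.

    import random

    # Illustrative control values; the real ObjectNet instructions may differ.
    ROTATIONS = ["upright", "on its side", "upside down"]
    BACKGROUNDS = ["kitchen", "bedroom", "bathroom", "living room"]
    VIEWPOINTS = ["from above", "head-on", "from below"]

    def capture_instruction(object_class):
        # Randomly combine controls so that pose, context, and camera angle
        # are decorrelated from the object class.
        return {
            "object": object_class,
            "rotation": random.choice(ROTATIONS),
            "background": random.choice(BACKGROUNDS),
            "viewpoint": random.choice(VIEWPOINTS),
        }

    # e.g. ask a worker to photograph a fork upside down in the bathroom,
    # shot from above -- an image unlikely to appear in a scraped dataset.
    print(capture_instruction("fork"))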

In a paper published at NeurIPS 2019, the team showed that object recognition models trained on ImageNet performed poorly on ObjectNet: they were significantly less accurate than humans, and significantly less accurate than the same models on the ImageNet test set. This suggests that these models learn and depend on the aforementioned biases and therefore cannot be trusted to generalize to real-world settings, for example, as the perception module of a robot operating in a typical household.
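
For readers who want to reproduce this kind of baseline measurement, the sketch below scores a pretrained ImageNet classifier on a local folder of test images. It assumes PyTorch and torchvision are installed, and the objectnet_images/ directory (one subfolder of images per class) is a hypothetical stand-in; note that ObjectNet's classes only partially overlap with ImageNet's, so a real evaluation would also need a class-index mapping.

    import torch
    from torchvision import datasets, models, transforms

    # Standard ImageNet preprocessing.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Hypothetical local directory with one subfolder of images per class.
    dataset = datasets.ImageFolder("objectnet_images/", transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32)

    # ImageNet-trained baseline (newer torchvision uses the weights= argument).
    model = models.resnet50(pretrained=True)
    model.eval()

    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"top-1 accuracy: {correct / total:.3f}")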

Today, we are announcing a challenge for the computer vision community to develop robust models for object recognition that demonstrate accurate predictions on ObjectNet images. The challenge will run as an open competition with a public leaderboard, and the winners will be announced in June at CVPR 2021.

ObjectNet: a dataset of real-world images created by researchers at IBM and MIT to test the limits of object recognition models.

The challenge is unique in two important respects that differentiate it from typical classification or prediction challenges in computer vision. First, we do not provide any training set; second, the test set is completely hidden: not only the labels, but also the images themselves. The only information available to participants is the list of object classes that appear in the dataset. These two aspects necessarily demand that successful models be highly robust and generalizable. To support the hidden-data requirement, participants will be asked to upload their models packaged in a Docker container. Our team developed a platform that leverages EvalAI elements on the front end and implements a back end running on the IBM Cloud. Using this platform, models containerized in Docker images will run on the ObjectNet images; the system will compare the results to the ground-truth labels, report accuracy metrics to the model developers, and update the leaderboard.
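
As a rough sketch of what such a containerized submission might look like (the entrypoint name and I/O contract here are our own illustrative assumptions, not the challenge's actual specification), a model could expose a small prediction script that the evaluation back end invokes on the hidden images:

    # predict.py -- hypothetical inference entrypoint for a Docker submission.
    import json
    import sys

    import torch
    from PIL import Image
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet50(pretrained=True)  # stand-in for a submitted model
    model.eval()

    def predict(image_path):
        # Return the predicted class index for a single image.
        image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return model(image).argmax(dim=1).item()

    if __name__ == "__main__":
        # The harness passes image paths as arguments; predictions go to stdout.
        print(json.dumps({path: predict(path) for path in sys.argv[1:]}))

Packaging a script like this, together with its dependencies, in a Docker image is what would let the back end run the model against the hidden test set without the images ever leaving the evaluation environment.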

This model-to-data approach to running challenges is interesting in its own right and holds tremendous promise for the future, as it provides a means to run public challenges on private data that cannot be shared for business, legal, or personal-privacy reasons. The ObjectNet challenge serves as a proof of concept for this approach, and our hope is to run many more challenges in the future, leveraging cutting-edge research to solve problems that would otherwise be off limits due to data privacy, e.g., in the industrial, financial, and defense domains.

The ObjectNet challenge will be launched on December 14th, 2020. We encourage research teams from all over the world to participate.


IBM Research AI is proudly sponsoring NeurIPS 2020 as a Platinum Sponsor, as well as the Women in Machine Learning and Black in AI workshops. We are pleased to report that IBM has had its best year so far at NeurIPS: 46 main-track papers, eight of which are spotlights, with one oral presentation. In addition, IBM has 26 workshop papers and six demos, and is organizing three workshops and a competition. We hope you can join us from December 6 to 12 to learn more about our research. Details about our technical program can be found here.


Research Staff Member - Machine Common Sense, IBM Research
