The 2020 International Conference on Learning Representations (ICLR) is – like so many other events impacted by the COVID-19 pandemic – going digital this year. From April 26 to May 1, AI researchers across the globe will present their work virtually at the world’s premier gathering on deep learning and representation learning.
Despite the change in venue from past conferences, IBM Research AI plans to showcase more than a dozen papers covering a diverse range of topics, including breakthroughs in infusing common sense into AI, securing machine learning against adversarial attacks, and maintaining inference accuracy while reducing energy use.
Common Sense AI
For AI to be truly valuable, it needs to start thinking for itself. That is to say, it must be able to not only analyze data, but understand it as well. Researchers at IBM and the MIT-IBM Watson AI Lab continue to focus on addressing this need and coming up with new tools and techniques that outfit AI with characteristics that enable it to “read between the lines” and provide more explainability and transparency in its decisions.
To this end, the MIT-IBM Watson AI Lab collaborated with researchers from MIT CSAIL, Harvard University and Google DeepMind to create the CoLlision Events for Video REpresentation and Reasoning (CLEVRER) dataset, which for the first time helps AI recognize objects in videos, analyze their movement and reason about their behaviors. CLEVRER enabled us to benchmark the performance of neural networks and neuro-symbolic reasoning—a hybrid of neural networks and symbolic programming—using only a fraction of the data required for traditional deep learning systems. We have made the CLEVRER dataset available to the larger research community to help others test different AI models.
Another MIT-IBM Watson AI Lab paper, written in collaboration with University of Michigan researchers, describes a positive step forward in the design of machine learning models that can be verified as provably fair when evaluating data. Fairness is a characteristic that is obviously essential for establishing trust—would any company trust, for example, an AI-based resume-screening system that might be biased against certain candidates based on their name or other superficial factors? The researchers trained ML models that were fair in the sense that the models’ performance was consistent and unbiased, regardless of the data they were applied to.
For AI to be truly trustworthy, we also believe its integrity must be maintained. People need to feel confident that an AI system’s training and inference have not been manipulated in any way. IBM Research has been a pioneer of what we call “AI robustness,” which equips AI systems and deep neural networks (DNNs) with the ability to fight back against adversarial attacks.
This year at ICLR, a team of IBM and University of California, Los Angeles researchers, including myself, will present a paper introducing a new “Sign-OPT” approach for efficiently attacking a hard-label black-box model – a model whose underlying information is hidden from the attacker. In this work, our team found that our attack consistently required five to ten times fewer queries than current state-of-the-art approaches for generating adversarial examples.
An adversarial attack aims to get a deployed AI system to misclassify data, rendering the target model untrustworthy. This research was done to give “white-hat” hackers a more effective tool for testing the security of their organizations’ AI algorithms.
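To make the idea concrete, here is a minimal sketch of the core hard-label trick on a toy linear classifier. This is an illustrative reconstruction, not the paper’s implementation: the classifier, the input point and all hyperparameters are assumptions. The attacker only observes hard labels, yet estimates the gradient of the distance-to-boundary function g(θ) using a single label query per random direction, which is what makes Sign-OPT-style attacks query-efficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box" classifier: the attacker sees only its hard label.
w_true = np.array([1.0, -2.0, 0.5])

def hard_label(x):
    return int(x @ w_true > 0)

x0 = np.array([0.3, 0.2, 0.1])  # benign input
y0 = hard_label(x0)

def boundary_distance(theta, tol=1e-6):
    """g(theta): distance from x0 to the decision boundary along
    direction theta, found by doubling + bisection on hard labels."""
    d = theta / np.linalg.norm(theta)
    hi = 1e-3
    while hard_label(x0 + hi * d) == y0:
        hi *= 2.0
        if hi > 1e6:
            return np.inf  # no boundary along this direction
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if hard_label(x0 + mid * d) == y0:
            lo = mid
        else:
            hi = mid
    return hi

def signopt_grad(theta, g_theta, eps=1e-3, n_dirs=50):
    """Gradient estimate of g at theta: one query per random direction
    u reveals only the SIGN of g(theta + eps*u) - g(theta)."""
    grad = np.zeros_like(theta)
    for _ in range(n_dirs):
        u = rng.standard_normal(theta.shape)
        probe_dir = theta + eps * u
        probe = x0 + g_theta * probe_dir / np.linalg.norm(probe_dir)
        # If the probe still carries the original label, the boundary
        # moved further away along this direction (positive sign).
        sign = 1.0 if hard_label(probe) == y0 else -1.0
        grad += sign * u
    return grad / n_dirs

# Random starting direction that actually crosses the boundary.
theta = rng.standard_normal(3)
while not np.isfinite(boundary_distance(theta)):
    theta = rng.standard_normal(3)

g_init = boundary_distance(theta)
g = g_init
for _ in range(30):  # minimize g(theta) by sign-based descent
    candidate = theta - 0.5 * signopt_grad(theta, g)
    g_cand = boundary_distance(candidate)
    if np.isfinite(g_cand) and g_cand < g:
        theta, g = candidate, g_cand

x_adv = x0 + g * theta / np.linalg.norm(theta)
print(f"perturbation shrank from {g_init:.4f} to {g:.4f}")
```

The key saving is in `signopt_grad`: a naive hard-label attack would need a full bisection (many queries) per random direction to measure g, whereas the sign of the change costs exactly one query.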
As deep networks are increasingly deployed in memory-constrained and throughput-critical systems, there is a need to create AI models that can maintain accuracy – and, as a result, trust – while also consuming fewer resources. Researchers at IBM’s Almaden Research Center have reached a new milestone in AI precision, developing an algorithm that matches the inference accuracy of a 32-bit network while using only three bits.
The researchers achieved this level of energy efficiency using a new process called “learned step size quantization,” which improves estimates of parameter updates in a low-precision network during training, producing better performance. The research also uncovered evidence that AI systems seeking to optimize performance on a given system might run with as few as two bits. This advance means AI systems are steadily coming closer to the low levels of energy consumed by the human brain, while maintaining performance.
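As a rough illustration of the idea (a sketch, not IBM’s implementation), a learned-step-size quantizer snaps each weight to a grid whose spacing s is itself a trainable parameter; the forward pass and the gradient signal for s can be written as follows. The bit width, signedness and example values below are assumptions for demonstration.

```python
import numpy as np

def lsq_quantize(w, s, bits=3, signed=True):
    """LSQ-style quantizer sketch: snap w to a grid of step size s,
    and return d(w_q)/ds under a straight-through estimator so that
    s can be learned by gradient descent alongside the weights."""
    if signed:
        qn, qp = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1  # e.g. -4..3
    else:
        qn, qp = 0, 2 ** bits - 1
    v = w / s
    w_q = np.round(np.clip(v, qn, qp)) * s
    # Gradient of w_q w.r.t. s (round() treated as identity in-range):
    #   below range: qn; above range: qp; inside: round(v) - v
    grad_s = np.where(v <= qn, qn,
             np.where(v >= qp, qp, np.round(v) - v))
    return w_q, grad_s

w = np.array([0.26, -0.41, 1.00, -1.00])
w_q, grad_s = lsq_quantize(w, s=0.1)
print(w_q)      # weights snapped to multiples of 0.1 in [-0.4, 0.3]
print(grad_s)   # per-weight signal used to update the step size s
```

Because clipped weights push s with the full magnitude of the range limit (qn or qp), while in-range weights contribute only their small rounding error, the step size is driven toward a value that balances clipping error against rounding error during training.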
How to Interact with IBM Research at ICLR 2020
Join us from April 26-30 at our virtual booth located at the Sponsor Hall on the ICLR 2020 website. There, you can learn more about IBM Research, meet our scientists, attend invited talks and live demo presentations, chat with our recruiters and learn more about careers at IBM.
Of note, IBM Research is proud to have helped enhance the ICLR conference website with new paper-browsing, visualization and calendaring tools that allow attendees to better navigate all the showcased research.
Keynote session: AI + Africa = Global Innovation (Monday, April 27 from 10:00-11:00 am ET)
Dr. Aisha Walcott-Bryant, IBM Research Africa, Nairobi
Expo Talk: Neuro-symbolic Hybrid AI (Tuesday, April 28 from 12:00-1:00 pm ET)
Dr. David Cox Director, MIT-IBM Watson AI Lab, IBM Research
Social Hour: What can AI Researchers do to fight against COVID-19?
Session 1 – Exploring novel drug candidates for COVID-19 (Monday, April 27 from 12:00-1:00 pm ET)
Aleksandra (Saška) Mojsilović, IBM Fellow, Head of Trustworthy AI and Co-Director of Science for Social Good, IBM Research
Payel Das, Research Staff Member and Manager, IBM Research
Session 2 – Using deep search to explore the COVID-19 corpus (Thursday, April 30 from 1:00-8:00 am ET)
Peter Staar, Research Staff Member and Manager, IBM Research
Live Demos
*In addition to our live demos, you can test our tech by clicking here.
GAAMA (short for Go Ahead Ask Me Anything) – a multilingual reading comprehension system for question answering.
Decision Support Systems for Pattern Discovery and Causal Effect Estimation in Event-Based Data – We introduce an AI-based decision platform capable of analyzing event data to identify patterns of contraceptive uptake that are unique to a subpopulation of interest. These discriminative patterns provide valuable, interpretable insights to policymakers.
ExBERT: A Visual Tool to Explore BERT – Learn how to uncover insights into what deep Transformer models understand about human language by interactively exploring their learned attentions and contextual embeddings: http://exbert.net
COVID-19 Drug Candidate Explorer – To help researchers generate potential new drug candidates for COVID-19, we have applied our novel AI generative frameworks to three COVID-19 targets and generated 3,000 novel molecules. https://covid19-mol.mybluemix.net/
Exploring drug repurposing evidence for cancer at scale — We present an AI-based system to automate the identification of relevant studies and extraction of key data elements from PubMed abstracts that describe non-cancer generic drugs being tested as a treatment for cancer.
AutoAI with Multi-Stakeholder Constraints – We showcase new AI Automation capabilities for enterprise users, including the ability to automatically generate optimized machine learning pipelines that take into account business and stakeholder constraints, such as prediction resources, prediction latency, or fairness metrics.
A Platform for Intervention Planning in Global Health – In this demonstration, we introduce a platform to harness the utility of a scalable computational infrastructure, blockchain based validation and machine learning (ML) algorithms, to assist in the generation of validated novel policies for malaria control.
AutoAI for Time Series – This demo shows time series forecasting using AutoAI which automatically selects and optimizes statistics and machine learning pipelines.
Our study “Comparison of methods to reduce bias from clinical prediction models of postpartum depression” examines healthcare data and machine learning models routinely used in both research and application to address bias in healthcare AI.
Founded in March 2020, just as the pandemic’s wave was starting to wash over the world, the COVID-19 High Performance Computing Consortium has brought together 43 members with supercomputing resources: private and public enterprises, academia, government agencies and technology companies, many of them typically rivals. “It is simply unprecedented,” said Dario Gil, Senior Vice President and Director of IBM Research, one of the founding organizations. “The outcomes we’ve achieved, the lessons we’ve learned, and the next steps we have to pursue are all the result of the collective efforts of the Consortium’s community.”
The next step? Creating the National Strategic Computing Reserve to help the world be better prepared for future global emergencies.