
Preparing deep learning for the real world – on a wide scale


Just like any technology, deep learning should be hacker-proof – as much as possible.

In science speak, such hacker-proofing of deep neural networks is called improving their adversarial robustness. Our recent MIT-IBM paper, accepted at this year’s NeurIPS – the largest global AI conference – deals with exactly that. We focus on practical solutions to evaluate and improve adversarial robustness.
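To make such an evaluation concrete, here is a minimal sketch using IBM’s open-source Adversarial Robustness Toolbox (ART), mentioned later in this post: wrap a trained classifier, craft small perturbations with the fast gradient method, and see how many predictions flip. The toy model, random data, and epsilon below are illustrative placeholders, not the setup from the paper.

```python
import numpy as np
import torch.nn as nn
import torch.optim as optim
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy stand-in for a trained model (illustrative only)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Random images stand in for a real test set
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)

# Craft small, imperceptible perturbations and compare predictions
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
clean_preds = np.argmax(classifier.predict(x_test), axis=1)
adv_preds = np.argmax(classifier.predict(x_adv), axis=1)
print(f"predictions flipped by attack: {np.mean(clean_preds != adv_preds):.0%}")
```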

That’s because AI models are no different than, say, car models when it comes to security and robustness. We should apply the same processes used for car models – collision testing, safety certification, standard warranties and so on – when developing and deploying AI. That’s what we’ve done in our research: we’ve developed several testing techniques to identify ‘bugs’ in modern AI and machine learning systems across different neural network models and data modalities.

From AI tests to certification

Many industries are now undergoing digital transformation and adopting AI for their products or services. However, they may not be sufficiently aware of the reputational, revenue, legal and governance risks they face if their AI-empowered products have not undergone comprehensive robustness testing, adversarial effect detection, model hardening and repair, and model certification.

For defense, we have developed several ‘patches’ that can be added to a trained neural network to strengthen its robustness. This can be done at different phases of the AI life cycle – mitigating attack threats in both the training phase and the testing (deployment) phase. We’ve developed advanced and practical approaches to enable deep learning model hardening as a service.
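One off-the-shelf way to apply such a training-phase ‘patch’ is adversarial training, which ART also provides: attack-generated examples are mixed into the training batches so the model learns to resist them. A minimal sketch, reusing the `classifier` and `attack` from the earlier snippet, with toy data and illustrative hyperparameters:

```python
import numpy as np
from art.defences.trainer import AdversarialTrainer

# Toy training data (one-hot labels), standing in for a real dataset
x_train = np.random.rand(64, 1, 28, 28).astype(np.float32)
y_train = np.eye(10)[np.random.randint(0, 10, size=64)].astype(np.float32)

# Mix adversarial examples into training; ratio=0.5 means half of each
# batch is adversarially perturbed before the gradient step.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, batch_size=32, nb_epochs=5)
```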

Our previously published techniques include ‘model inspection’ for detecting and assessing model vulnerabilities, ‘model wash’ for removing adversarial factors, and ‘model strengthening’ for improving the adversarial robustness of a given deep learning model or system.

We’ve also worked on certification – developing quantifiable metrics to efficiently certify a model’s level of attack-proof robustness. We’ve considered attack types including small imperceptible perturbations and semantic changes such as color shifting and image rotation. We’ve developed model-agnostic robustness benchmarking scores called CLEVER, efficient robustness verification tools, and new training methods that make the resulting model more certifiable. We are now continuing to expand our certification tools to cover different types of adversarial attacks.
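The CLEVER score is available in ART as well. The sketch below scores a single input, reusing the `classifier` from the earlier snippet; the radius, norm, and sampling parameters are illustrative choices, and the score is an attack-agnostic estimate rather than a formal guarantee for the whole model.

```python
from art.metrics import clever_u

# Untargeted CLEVER score for one input: an attack-agnostic estimate of
# how large a perturbation (here, in L2 norm) is needed to change the label.
score = clever_u(
    classifier,
    x_test[0],       # a single sample
    nb_batches=10,   # sampling parameters trade accuracy for speed
    batch_size=5,
    radius=0.3,      # perturbation radius to explore
    norm=2,
)
print(f"CLEVER score (L2): {score:.4f}")
```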

And finally, we propose a novel approach for reprogramming machine learning models to learn new tasks. This approach also works for optimizing molecules to obtain desired chemical properties, which could help boost AI for scientific discovery. The technique is particularly appealing as a cost-effective way to transfer learning to a new domain where data labels are scarce and expensive to acquire.
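The core idea of model reprogramming can be sketched in a few lines of PyTorch: freeze the pretrained source model, learn only a universal input perturbation (the ‘program’) and a mapping from source labels to target labels, and train those two small pieces on the new task. This is a generic illustration under assumed shapes (an ImageNet-style source model and a hypothetical five-class target task), not the exact method from our paper:

```python
import torch
import torch.nn as nn

class Reprogrammer(nn.Module):
    """Adapt a frozen source model to a new task by learning only a
    universal input perturbation and a source-to-target label mapping."""

    def __init__(self, source_model, num_source_classes, num_target_classes, input_shape):
        super().__init__()
        self.source = source_model
        for p in self.source.parameters():
            p.requires_grad = False  # the source model stays untouched
        self.program = nn.Parameter(torch.zeros(1, *input_shape))  # learned perturbation
        self.label_map = nn.Linear(num_source_classes, num_target_classes)

    def forward(self, x):
        # Add the learned "program" to the input, then remap the outputs
        source_logits = self.source(x + self.program)
        return self.label_map(source_logits)

# Only the program and label map are trained on the target task, e.g.:
# model = Reprogrammer(pretrained_net, 1000, 5, (3, 224, 224))
# optimizer = torch.optim.Adam([model.program, *model.label_map.parameters()], lr=1e-3)
```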

We should continue to raise awareness about the importance of security when it comes to AI – and as researchers, we’ll keep breaking new ground in making AI more and more robust, for the benefit of all. For now, businesses around the world are welcome to use the IBM-developed open-source Adversarial Robustness Toolbox (ART) and AI FactSheets. The more robust our AI models are, the better prepared they are to face the world – and to withstand hacks.

