Our recent MIT-IBM research, presented at NeurIPS 2020, deals with hacker-proofing deep neural networks - in other words, improving their adversarial robustness.
IBM Research AI plans to showcase more than a dozen papers at ICLR 2020 covering a range of topics, including breakthroughs in infusing common sense into AI, securing machine learning against adversarial attacks, and maintaining inference precision while reducing energy use.
IBM researchers have partnered with scientists from MIT, Northeastern University, Boston University, and the University of Minnesota on two IJCAI 2019 papers: one on novel attacks and defenses for graph neural networks, and one on a new robust training algorithm called hierarchical random switching, sketched below.
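To make the hierarchical random switching idea concrete, here is a minimal PyTorch sketch under one common reading of the technique: each block holds several interchangeable channels, and one is sampled at random on every forward pass, so an attacker never faces a fixed network. The class name and block contents are illustrative, not the paper's implementation.

```python
import random
import torch.nn as nn

class RandomSwitchBlock(nn.Module):
    """One switching block: several interchangeable channels, one picked per call."""
    def __init__(self, channels):
        super().__init__()
        self.channels = nn.ModuleList(channels)

    def forward(self, x):
        # Sample a fresh channel on every forward pass, so the effective
        # network an attacker queries keeps changing.
        return self.channels[random.randrange(len(self.channels))](x)
```

Stacking several such blocks multiplies the number of effective model variants an attacker must contend with.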
AI systems face a growing number of adversarial attacks and other nefarious behaviors. To combat them, IBM Research AI will present multiple papers at KDD 2019 with new scientific findings and recommendations on adversarial learning.
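For readers new to the area, the canonical example of an adversarial attack is the fast gradient sign method (FGSM) of Goodfellow et al.; the sketch below is for background only and is not one of the KDD 2019 methods. The model, inputs, and epsilon value are placeholders.

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x by one signed-gradient step to raise the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Nudge every pixel by epsilon in the direction that increases the loss;
    # the result often looks unchanged to humans but flips the prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # assumes inputs scaled to [0, 1]
```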
A new approach to defending against adversarial attacks in non-image tasks, such as audio input and automatic speech recognition.
Researchers from MIT and IBM propose an efficient and effective method for certifying the attack resistance of convolutional neural networks on given input data.
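Certification here means proving that no perturbation within a given budget can change the prediction. The paper's method derives tighter bounds than the simple interval arithmetic shown below, which is included only to illustrate what a certificate is; all names are illustrative.

```python
import numpy as np

def interval_linear(W, b, lower, upper):
    """Bound W @ x + b over all x with lower <= x <= upper (elementwise)."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    mid = W @ center + b
    spread = np.abs(W) @ radius  # max deviation of the output from its center
    return mid - spread, mid + spread

# If, after propagating bounds through the whole network, the lower bound of
# (correct-class score - runner-up score) stays positive, no input inside the
# interval can flip the prediction: a certificate of robustness.
```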
IBM researchers present AutoZOOM, an efficient and practical tool for evaluating the adversarial robustness of AI models under limited, query-only access.
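"Limited access" means the evaluator can only query the model for its outputs, not read its gradients. AutoZOOM builds on zeroth-order optimization, which estimates gradients from such queries; the minimal sketch below shows only the basic random-direction estimator, without AutoZOOM's autoencoder-based dimension reduction and query-efficiency techniques. The loss_fn interface is a hypothetical stand-in for the queried model.

```python
import numpy as np

def zo_gradient(loss_fn, x, num_probes=20, mu=1e-3):
    """Estimate the gradient of loss_fn at x from function queries alone."""
    f0 = loss_fn(x)                    # one baseline query
    grad = np.zeros_like(x)
    for _ in range(num_probes):
        u = np.random.randn(*x.shape)  # Gaussian probe direction
        # Finite difference along u approximates the directional derivative;
        # averaging (f(x + mu*u) - f(x)) / mu * u recovers the full gradient.
        grad += (loss_fn(x + mu * u) - f0) / mu * u
    return grad / num_probes

# An attacker can descend this estimated gradient to craft adversarial
# examples without ever seeing the model's weights or true gradients.
```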