Pin-Yu Chen

Making Neural Networks Robust with New Perspectives

At IJCAI 2019, IBM researchers and collaborators from MIT, Northeastern University, Boston University, and the University of Minnesota are publishing two papers: one on novel attacks and defenses for graph neural networks, and another on hierarchical random switching, a new robust training algorithm.

Adversarial Learning and Zeroth Order Optimization for Machine Learning and Data Mining

Adversarial attacks and other nefarious behaviors aimed at AI systems are on the rise. To help combat them, IBM Research AI will present multiple papers at KDD 2019 with new scientific findings and recommendations on adversarial learning.

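
To give a rough sense of what zeroth-order optimization means in this setting, here is a minimal sketch (illustrative, not from the papers): the gradient is estimated purely from function evaluations, as a black-box attacker with only query access must do. The toy quadratic loss stands in for a model loss whose gradients cannot be queried directly.

```python
import numpy as np

def zoo_gradient(f, x, h=1e-4):
    """Estimate the gradient of f at x via symmetric finite differences,
    one coordinate at a time (2 * d function queries)."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

# Toy loss: squared distance to a target point (minimum at the target).
target = np.array([1.0, -2.0, 3.0])
loss = lambda x: float(np.sum((x - target) ** 2))

x = np.zeros(3)
for _ in range(200):
    x -= 0.1 * zoo_gradient(loss, x)  # zeroth-order gradient descent
```

After 200 steps, `x` converges to the target even though the optimizer never saw an analytic gradient, which is exactly the property black-box adversarial attacks exploit.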
Leveraging Temporal Dependency to Combat Audio Adversarial Attacks

A new approach that leverages the temporal dependency of audio input to defend automatic speech recognition and other non-image tasks against adversarial attacks.

Certifying Attack Resistance of Convolutional Neural Networks

Researchers from MIT and IBM propose an efficient and effective method for certifying the attack resistance of convolutional neural networks on given input data.

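
The teaser does not spell out the paper's bounding technique, but the basic idea of a robustness certificate can be illustrated with simple interval arithmetic (a generic sketch, not the published method): propagate an l-infinity input perturbation of size eps through a tiny ReLU network, and declare the prediction certified if the true class's worst-case logit still beats every rival class's best case. The network and weights below are toy assumptions.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate the box [l, u] through the affine map x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def certify(x, eps, W1, b1, W2, b2, label):
    """Certify that no l-inf perturbation of size eps changes the label."""
    l, u = x - eps, x + eps
    l, u = interval_affine(l, u, W1, b1)
    l, u = np.maximum(l, 0), np.maximum(u, 0)   # ReLU bounds
    l, u = interval_affine(l, u, W2, b2)        # output logits
    rival_best = np.max(np.delete(u, label))    # strongest rival logit
    return bool(l[label] > rival_best)          # worst case still wins?

# Toy 2-2-2 network (identity weights), input x = [2, 0], true class 0.
W1, b1 = np.eye(2), np.zeros(2)
W2, b2 = np.eye(2), np.zeros(2)
x = np.array([2.0, 0.0])
print(certify(x, 0.5, W1, b1, W2, b2, label=0))  # True: certified
print(certify(x, 1.5, W1, b1, W2, b2, label=0))  # False: bounds too loose
```

Note that a failed certificate does not prove an attack exists; the bounds may simply be too loose, which is why tighter certification methods matter.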
Efficient Adversarial Robustness Evaluation of AI Models with Limited Access

IBM researchers present AutoZOOM, an efficient and practical tool for evaluating the adversarial robustness of AI models when access to them is limited.

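
AutoZOOM's full design (including autoencoder-based dimension reduction) is beyond this teaser, but the query-efficiency idea behind its random-direction gradient estimation can be sketched as follows; the function names and parameters here are illustrative assumptions, not the tool's API.

```python
import numpy as np

def random_grad_estimate(f, x, q=200, beta=1e-2, rng=None):
    """Average q random-direction finite differences (q + 1 queries total).
    For high-dimensional x, q can be far smaller than the 2 * d queries
    a coordinate-wise estimator needs."""
    rng = rng or np.random.default_rng(0)
    d, fx = x.size, f(x)
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                  # unit direction
        g += (f(x + beta * u) - fx) / beta * u  # directional difference
    return d / q * g                            # rescale toward the gradient

# Toy black-box loss with a known gradient at x0, for comparison.
target = np.array([1.0, -2.0, 3.0, 0.5, -1.0])
loss = lambda x: float(np.sum((x - target) ** 2))
x0 = np.zeros(5)
est = random_grad_estimate(loss, x0)
true = 2 * (x0 - target)
cos = est @ true / (np.linalg.norm(est) * np.linalg.norm(true))
print(round(cos, 2))  # cosine similarity to the true gradient
```

The estimate is noisy but well aligned with the true gradient, and the query budget is controlled by `q` rather than by the input dimension, which is what makes limited-access robustness evaluation practical.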
A CLEVER Way to Resist Adversarial Attack

New CLEVER scores compare the robustness of different neural networks against adversarial attacks, helping to build more reliable AI systems.

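
Roughly speaking, a CLEVER score is a classification margin divided by an estimated local Lipschitz constant; CLEVER itself refines that estimate with extreme value theory. The toy sketch below shows only the basic ratio (not the published estimator), using an illustrative margin function.

```python
import numpy as np

def clever_like_score(f, grad, x, radius=0.5, n_samples=200, rng=None):
    """Margin at x divided by the max gradient norm sampled in a ball of
    the given radius (a crude local Lipschitz estimate)."""
    rng = rng or np.random.default_rng(0)
    margin = f(x)                   # g(x) = f_c(x) - max_{j != c} f_j(x)
    lips = 0.0
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        u *= radius * rng.random() / np.linalg.norm(u)  # point in the ball
        lips = max(lips, np.linalg.norm(grad(x + u)))
    return margin / lips            # distortion needed to flip the label

# Linear toy margin: the score recovers the exact boundary distance.
w = np.array([3.0, 4.0])
g = lambda x: float(w @ x + 1.0)    # margin function, boundary at w.x + 1 = 0
dg = lambda x: w                    # its (constant) gradient
score = clever_like_score(g, dg, np.zeros(2))
print(score)  # 0.2 == 1 / ||w||, the true distance to the decision boundary
```

Because the score needs only samples of the model's gradients, not a specific attack, it can rank the robustness of different networks on a common scale, which is how CLEVER is used in practice.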