
Adversarial Learning and Zeroth Order Optimization for Machine Learning and Data Mining


The landscape of artificial intelligence (AI) continues to evolve at a rapid pace. With this comes historic opportunities to improve how we live, work, educate ourselves, and more. But this constant transformation of AI systems and applications also creates more opportunities for attacks that can poison, damage, and ultimately undermine the use of AI in practice.

There is a growing number of adversarial attacks and nefarious behaviors aimed at AI systems. To combat this, IBM Research AI has been one of the pioneers of the field of “AI robustness,” which focuses on equipping AI systems and deep neural networks (DNNs) with the ability to fight back against adversarial attacks. These efforts have become a core pillar of IBM Research’s comprehensive strategy on Trusted AI, which seeks to address multiple dimensions of trust: robustness, fairness, explainability, and transparency.

These dimensions are central to our mission to invent the next set of core AI technologies that will take us from today’s “narrow” AI to a new era of “broad” AI, where the potential of the technology can be unlocked across AI developers, enterprise adopters and end-users. Broad AI is characterized by the ability to learn and reason more broadly across tasks, to integrate information from multiple modalities and domains, all while being more explainable, secure, fair, auditable and scalable.

At the 2019 SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2019), IBM researchers will present, at a workshop sponsored by the MIT-IBM Watson AI Lab on August 5, multiple papers that offer new scientific discoveries and recommendations related to adversarial learning.

Block Switching: A Stochastic Approach for Deep Learning Security

Figure 1: The steps of assembling a block switching. (a): Sub-models are trained individually. (b): The lower parts of sub-models are used to initialize parallel channels of block switching.


Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models. In this paper, together with researchers from Boston University and Northeastern University, we introduce a new concept called “Block Switching” (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, making it unpredictable to the adversary. The study shows empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS also 1) causes less test accuracy drop; 2) is attack-independent; and 3) is compatible with, and can be used jointly with, other defenses.
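To make the mechanism concrete, here is a minimal sketch of the block-switching idea in PyTorch. The layer sizes, number of channels, and helper names are illustrative assumptions rather than the configuration used in the paper; the point is only that several parallel sub-blocks are held in one module and a channel is picked at random on each forward pass.

```python
# Minimal sketch of block switching (assumed PyTorch-style model; sizes are illustrative).
import random
import torch
import torch.nn as nn

class BlockSwitch(nn.Module):
    """Holds several parallel sub-blocks; each forward pass routes the input
    through one randomly chosen channel, so the gradient path an adversary
    observes changes from query to query."""
    def __init__(self, channels):
        super().__init__()
        self.channels = nn.ModuleList(channels)

    def forward(self, x):
        # Randomly select the active channel at run time.
        idx = random.randrange(len(self.channels))
        return self.channels[idx](x)

# Hypothetical lower block: in the paper, these come from the lower parts of
# independently trained sub-models (Figure 1); here they are stand-ins.
def make_lower_block():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

model = nn.Sequential(
    BlockSwitch([make_lower_block() for _ in range(4)]),  # parallel channels
    nn.Linear(32, 10),  # shared upper layers / classifier head
)
```

Because only one channel is active per inference, the run-time cost stays close to that of a single sub-model while the input gradient seen by a gradient-based attacker becomes stochastic.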

Defending against Backdoor Attack on Deep Neural Networks

In recent years, DNNs have emerged as an outstanding machine learning technique that has achieved great success in various computer vision tasks. DNNs have become a foundational means for solving grand societal challenges and revolutionizing many application domains with superior performance. As with traditional machine learning techniques, the security of deep learning is of great importance to its broad deployment, especially in security-critical domains. Yet it has recently been found that these advanced networks are vulnerable to adversarial attacks.

In this paper, together with researchers from the MIT-IBM Watson AI Lab, Xi’an Jiaotong University in China, and Northeastern University, we explore so-called “backdoor attacks,” which inject a backdoor trigger into a small portion of the training data (also known as “data poisoning”) so that the trained DNN misclassifies inputs containing the trigger. The backdoor attack is a special type of data poisoning attack with greater stealthiness and attacker controllability. This paper investigates the internal responses of the backdoor-attacked DNN and evaluates the effectiveness of norm-based activation pruning in decreasing attack success rates while maintaining high classification accuracy on clean images.
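The data-poisoning step can be illustrated with a short sketch. The array shapes, the trigger pattern (a small white corner patch), the target class, and the 5% poisoning rate below are assumptions for illustration, not the setup used in the paper.

```python
# Minimal sketch of backdoor data poisoning: stamp a trigger onto a small
# fraction of training images and flip their labels to a target class.
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_frac=0.05, rng=None):
    """images: (N, H, W, C) float array in [0, 1]; labels: (N,) int array."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Hypothetical trigger: a 4x4 white square in the bottom-right corner.
    images[idx, -4:, -4:, :] = 1.0
    # Poisoned samples are relabeled to the attacker-chosen target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger is present, which is what makes the attack stealthy and attacker-controllable.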

Tutorial on Zeroth Order Optimization and Applications to Adversarial Robustness

In addition, the two of us will give a lecture-style tutorial at KDD 2019 exploring AI robustness as it relates to zeroth order (ZO) optimization. ZO optimization is increasingly embraced for solving big data and machine learning problems when explicit expressions of the gradients are difficult to compute or infeasible to obtain. It achieves gradient-free optimization by approximating the full gradient via efficient gradient estimators. Many research problems involve complex data-generating processes that cannot be described in analytical form but do provide function evaluations, such as measurements from physical environments or predictions from deployed machine learning models. These problems fall within the scope of ZO optimization over black-box models.
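As a concrete illustration of gradient-free optimization, the sketch below shows a randomized two-point ZO gradient estimator plugged into a plain descent loop. The smoothing parameter, number of random directions, and step size are illustrative choices, not tuned values from the tutorial.

```python
# Minimal sketch of a randomized two-point ZO gradient estimator:
# the gradient of a black-box function f is approximated from function
# evaluations only, via finite differences along random directions.
import numpy as np

def zo_gradient(f, x, mu=1e-3, q=20, rng=None):
    """Estimate grad f(x) from q random directions (q + 1 function queries)."""
    rng = rng or np.random.default_rng()
    d = x.size
    grad = np.zeros_like(x)
    fx = f(x)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                    # unit random direction
        grad += (f(x + mu * u) - fx) / mu * u      # directional finite difference
    return (d / q) * grad

def zo_descent(f, x0, step=0.1, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x -= step * zo_gradient(f, x)
    return x

# Usage: minimize a black-box quadratic without ever computing its gradient.
f = lambda x: np.sum((x - 1.0) ** 2)
x_star = zo_descent(f, np.zeros(10))
```

The estimator only queries f, which is exactly the access model available when the "gradient" belongs to a physical system or a deployed model behind an API.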

Some recent important applications include: a) generation of prediction-evasive, black-box adversarial attacks on any machine learning classifier, including deep neural networks, b) robust training via gradient and curvature regularization, c) online network management with limited computation capacity, d) parameter inference of black-box/complex systems, e) meta-learning, and f) model-agnostic approaches for generating explanations from black-box ML models.

This tutorial will provide a comprehensive introduction to recent advances in ZO optimization methods in both theory and applications. On the theory side, we will cover convergence rate and iteration complexity analysis of ZO algorithms and compare them to their first-order counterparts. On the application side, we will highlight one appealing application of ZO optimization to studying the robustness of deep neural networks: practical and efficient adversarial attacks that generate adversarial examples from a black-box machine learning model. We will also summarize potential research directions regarding ZO optimization, big data challenges, and some open-ended data mining and machine learning problems.
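To connect the two threads, here is a hedged sketch of how the ZO estimator above can drive a query-only (black-box) adversarial attack: the attack loss is evaluated purely through model predictions, and its gradient with respect to the perturbation is approximated by finite differences. The `model_prob` callable, the loss form, the epsilon budget, and all hyperparameters are hypothetical placeholders, not the specific attack presented in the tutorial.

```python
# Minimal sketch of a black-box adversarial attack driven by ZO gradient estimates.
import numpy as np

def attack_loss(model_prob, x_adv, true_label):
    # Untargeted loss: lower values mean lower confidence in the true class.
    p = model_prob(x_adv)                          # black-box query: class probabilities
    return np.log(p[true_label] + 1e-12) - np.log(1.0 - p[true_label] + 1e-12)

def zo_attack(model_prob, x, true_label, eps=0.1, step=0.02,
              mu=1e-3, q=30, iters=100, rng=None):
    rng = rng or np.random.default_rng(0)
    delta = np.zeros_like(x)
    for _ in range(iters):
        f = lambda d: attack_loss(model_prob, np.clip(x + d, 0, 1), true_label)
        # Two-point random gradient estimate (same construction as the sketch above).
        g = np.zeros_like(delta)
        f0 = f(delta)
        for _ in range(q):
            u = rng.standard_normal(delta.shape)
            u /= np.linalg.norm(u)
            g += (f(delta + mu * u) - f0) / mu * u
        g *= delta.size / q
        # Signed descent step, kept inside an L-infinity ball of radius eps.
        delta = np.clip(delta - step * np.sign(g), -eps, eps)
    return np.clip(x + delta, 0, 1)
```

Everything the attacker needs is prediction access, which is why ZO methods are a natural fit for studying robustness against black-box adversaries.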

Research Staff Member, IBM Research

Sijia Liu

Research Staff Member, IBM Research
