Adversarial Learning and Zeroth Order Optimization for Machine Learning and Data Mining

The landscape of artificial intelligence (AI) continues to evolve at a rapid pace. With it come historic opportunities to improve how we live, work, educate ourselves, and more. But this constant transformation of AI systems and applications also expands the opportunities for attack: actions that can poison, damage, and ultimately undermine the use of AI in practice.

There is a growing number of adversarial attacks and nefarious behaviors aimed at AI systems. To combat this, IBM Research AI has been one of the pioneers of the field of “AI robustness,” which focuses on equipping AI systems and deep neural networks (DNNs) with the ability to fight back against adversarial attacks. These efforts have become a core pillar of IBM Research’s comprehensive strategy on Trusted AI, which seeks to address multiple dimensions of trust: robustness, fairness, explainability, and transparency.

These dimensions are central to our mission to invent the next set of core AI technologies that will take us from today’s “narrow” AI to a new era of “broad” AI, where the potential of the technology can be unlocked across AI developers, enterprise adopters, and end-users. Broad AI is characterized by the ability to learn and reason more broadly across tasks and to integrate information from multiple modalities and domains, all while being more explainable, secure, fair, auditable, and scalable.

At the 2019 SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2019), IBM researchers will present multiple papers at a workshop sponsored by the MIT-IBM Watson AI Lab on August 5, sharing new scientific discoveries and recommendations related to adversarial learning.

Block Switching: A Stochastic Approach for Deep Learning Security

Figure 1: The steps of assembling a block switching. (a): Sub-models are trained individually. (b): The lower parts of sub-models are used to initialize parallel channels of block switching.

Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models. In this paper, together with researchers from Boston University and Northeastern University, we introduce a new concept called “Block Switching” (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, making it unpredictable to the adversary. The study shows empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS also 1) causes a smaller drop in test accuracy; 2) is attack-independent; and 3) is compatible with other defenses and can be used jointly with them.
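To make the stochastic routing concrete, here is a minimal sketch of a block-switching layer written in PyTorch for illustration; the module name, the number of parallel channels, and the layer shapes are assumptions for this example rather than the exact architecture from the paper.

```python
import random
import torch.nn as nn

class BlockSwitch(nn.Module):
    """Illustrative block switching: several parallel sub-blocks, with one
    chosen uniformly at random on every forward pass."""

    def __init__(self, channels):
        super().__init__()
        self.channels = nn.ModuleList(channels)

    def forward(self, x):
        # The active channel is selected at run time, so the gradient path
        # an adversary would need to exploit changes from query to query.
        idx = random.randrange(len(self.channels))
        return self.channels[idx](x)

def make_lower_block():
    # Stand-in for the lower layers of an independently trained sub-model.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    )

# Assemble the defense: four parallel channels feeding shared upper layers.
switch = BlockSwitch([make_lower_block() for _ in range(4)])
upper = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
model = nn.Sequential(switch, upper)
```

Because a different channel may be active on each forward pass, the input gradients an attacker observes (or estimates) vary between queries, which is what makes gradient-based attack generation harder.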

Defending against Backdoor Attack on Deep Neural Networks

In recent years, DNNs have emerged as an outstanding machine learning technique that has achieved great success in various computer vision tasks. DNNs have become a foundational means for solving grand societal challenges and revolutionizing many application domains with superior performance. As with traditional machine learning techniques, the security of deep learning is of great importance to its broad deployment, especially in security-critical domains. Yet it has recently been found that these advanced networks are vulnerable to adversarial attacks.

In this paper, together with researchers from the MIT-IBM Watson AI Lab, Xi’an Jiaotong University in China, and Northeastern University, we explore so-called “backdoor attacks,” which inject a backdoor trigger into a small portion of the training data (also known as “data poisoning”) so that the trained DNN induces misclassifications. The backdoor attack is a special type of data poisoning attack with greater stealthiness and attacker controllability. This paper investigates the internal responses of the backdoor-attacked DNN and evaluates the effectiveness of norm-based activation pruning in decreasing attack success rates while maintaining a high classification accuracy on clean images.
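As a rough illustration of the pruning idea (not the paper’s exact procedure), the sketch below ranks the channels of one convolutional layer by the maximum of their activation maps on clean validation images and masks the highest-ranked ones; the choice of norm, the ranking direction, the prune ratio, and the function name are all assumptions made for this example.

```python
import numpy as np

def prune_by_activation_norm(activations, prune_ratio=0.1):
    """Illustrative norm-based channel pruning.

    activations: array of shape (num_images, num_channels, H, W) collected
    from one convolutional layer on clean validation images.
    Returns a 0/1 mask over channels; here we (illustratively) mask the
    channels with the largest max-activation. Whether the most- or
    least-active channels should be pruned depends on the analysis of
    the attacked model.
    """
    # Max (l-infinity) activation of each channel, averaged over images.
    per_channel_norm = np.abs(activations).max(axis=(2, 3)).mean(axis=0)
    num_channels = per_channel_norm.shape[0]
    num_prune = max(1, int(prune_ratio * num_channels))
    suspicious = np.argsort(per_channel_norm)[-num_prune:]  # largest norms
    mask = np.ones(num_channels, dtype=np.float32)
    mask[suspicious] = 0.0
    return mask  # multiply the layer's output channels by this mask
```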

Tutorial on Zeroth Order Optimization and Applications to Adversarial Robustness

In addition, the two of us will give a lecture-style tutorial at KDD 2019 exploring AI robustness as it relates to zeroth order (ZO) optimization. ZO optimization is increasingly embraced for solving big data and machine learning problems when explicit expressions of the gradients are difficult to compute or infeasible to obtain. It achieves gradient-free optimization by approximating the full gradient via efficient gradient estimators. Many research problems deal with complex data-generating processes that cannot be described in analytical form but can provide function evaluations, such as measurements from physical environments or predictions from deployed machine learning models. These types of problems fall under ZO optimization with respect to black-box models.
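As a minimal sketch of the idea, the snippet below implements a standard two-point random-direction gradient estimator and uses it for plain gradient descent; the function names, query budget, and step sizes are illustrative choices rather than anything prescribed by the tutorial.

```python
import numpy as np

def zo_gradient_estimate(f, x, num_queries=20, mu=1e-3):
    """Estimate the gradient of a black-box function f at x using only
    function evaluations (two queries per random direction)."""
    d = x.size
    grad = np.zeros(d)
    for _ in range(num_queries):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                # random unit direction
        diff = (f(x + mu * u) - f(x - mu * u)) / (2 * mu)
        grad += diff * u
    return d * grad / num_queries             # standard scaling for this estimator

def zo_gradient_descent(f, x0, lr=0.1, steps=200):
    """Gradient-free minimization of f driven by the estimator above."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * zo_gradient_estimate(f, x)
    return x

# Toy usage: minimize a quadratic while pretending its gradient is unavailable.
f = lambda x: float(np.sum((x - 3.0) ** 2))
x_opt = zo_gradient_descent(f, np.zeros(5))   # converges toward [3, 3, 3, 3, 3]
```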

Some recent important applications include: a) generation of prediction-evasive, black-box adversarial attacks on any machine learning classifier, including deep neural networks, b) robust training via gradient and curvature regularization, c) online network management with limited computation capacity, d) parameter inference of black-box/complex systems, e) meta-learning, and f) model-agnostic approaches for generating explanations from black-box ML models.
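As one concrete instance of (a), a black-box evasion attack only needs to query the victim model’s loss or prediction scores. The sketch below assumes a hypothetical `query_loss(x)` interface returning a scalar attack loss (e.g., the negative margin of the true class) and uses a ZO gradient estimate to craft a perturbation inside an l-infinity ball; the interface and hyperparameters are assumptions for illustration, not a specific published attack.

```python
import numpy as np

def zo_blackbox_attack(query_loss, x, epsilon=0.03, steps=50, lr=0.01, mu=1e-3):
    """Sketch of a query-based (black-box) evasion attack.

    x is assumed to be an image with pixel values in [0, 1]; query_loss(x)
    returns the scalar attack loss to be maximized, using model queries only.
    """
    x_adv = x.copy()
    d = x.size
    for _ in range(steps):
        # Two-point random-direction estimate of the attack-loss gradient.
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)
        diff = (query_loss(x_adv + mu * u) - query_loss(x_adv - mu * u)) / (2 * mu)
        g = d * diff * u
        x_adv = x_adv + lr * np.sign(g)                   # ascend the estimated gradient
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # stay in the l-infinity ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep a valid image
    return x_adv
```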

This tutorial will provide a comprehensive introduction to recent advances in ZO optimization methods in both theory and applications. Regarding theory, we will cover convergence rate and iteration complexity analysis of ZO algorithms and make comparisons to their first-order counterparts. On the application side, we will highlight one appealing application of ZO optimization to studying the robustness of deep neural networks – practical and efficient adversarial attacks that generate adversarial examples from a black-box machine learning model. We will also summarize potential research directions regarding ZO optimization, big data challenges and some open-ended data mining and machine learning problems.

Sijia Liu

Research Staff Member, IBM Research
