Adversarial machine learning tutorial


Overview

Machine learning has seen a remarkable rate of adoption in recent years across a broad spectrum of industries and applications. Many applications of machine learning techniques are adversarial in nature, insofar as the goal is to distinguish instances which are "bad" from those which are "good". Indeed, adversarial use goes well beyond this simple classification example: forensic analysis of malware that incorporates clustering, anomaly detection, and even vision systems in autonomous vehicles could all potentially be subject to attacks. In response to these concerns, there is an emerging literature on adversarial machine learning, which spans both the analysis of vulnerabilities in machine learning algorithms and algorithmic techniques that yield more robust learning.
This tutorial will survey a broad array of these issues and techniques from both the cybersecurity and machine learning research areas. In particular, we consider the problems of adversarial classifier evasion, where the attacker changes behavior to escape being detected, and poisoning, where the training data itself is corrupted. We discuss both evasion and poisoning attacks, first on classifiers and then on other learning paradigms, along with the associated defensive techniques. We then consider specialized techniques for both attacking and defending neural networks, focusing in particular on deep learning techniques and their vulnerabilities to adversarially crafted instances.
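To make the evasion setting concrete, the following is a minimal, self-contained sketch of a gradient-based evasion attack in the spirit of fast-gradient-sign methods, run against a hypothetical logistic-regression detector; the model, weights, and step size are illustrative assumptions rather than material from the tutorial itself.

```python
# Minimal sketch of an evasion attack in the white-box setting, assuming a
# hypothetical logistic-regression "detector" with known weights. The model,
# weights, and step size are illustrative, not taken from the tutorial.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained detector: score = sigmoid(w . x + b), label 1 = "bad".
w = rng.normal(size=20)
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detector_score(x):
    return sigmoid(x @ w + b)

# An instance the detector confidently flags as "bad".
x = 0.5 * np.sign(w)
print("original score:", detector_score(x))     # close to 1

# Fast-gradient-sign-style evasion: nudge every feature a step of size eps in
# the direction that decreases the detector's score. Because the score is
# monotone in w . x, the sign of the gradient w.r.t. x is simply sign(w).
eps = 0.6
x_adv = x - eps * np.sign(w)
print("evasive score:", detector_score(x_adv))  # now below the 0.5 threshold
```

The same idea extends to deep networks by replacing the closed-form gradient with one obtained via backpropagation.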

Syllabus


8:30 am - 8:45 am
Introduction to adversarial machine learning

8:45 am - 10:00 am
Understanding evasion attacks
- Adversarial Learning
- Feature Manipulation for Evasion Attacks
Defending against evasion attacks
- Stackelberg Game-based Analysis
Validation of evasion attack models
- Is robust ML really robust?
- Randomized Classification

10:00 am - 10:15 am
Coffee break

10:15 am - 11:15 am
Evasion attacks/defenses on deep neural networks
- Adversarial Examples
- Optimization Method for Generating Adversarial Examples
- Delving into Transferable Adversarial Examples and Black-box attacks
- Adversarial Examples for Generative Models
- Delving into Adversarial Attacks on Deep Policies
- Generating Adversarial Examples with Adversarial Networks
- Spatial Transformation based Adversarial Examples
- Exploring the Space of Black-box Attacks on Deep Neural Networks
- Physical Adversarial Examples
- Realistic Adversarial Examples in 3D Meshes
- Characterizing Attacks on Deep Reinforcement Learning
- Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Potential defenses
- Pre-process input: Exploring the Space of Adversarial Images
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
- Iterative Adversarial Retraining (a minimal illustrative sketch follows this list)
- Characterizing adversarial subspaces using local intrinsic dimensionality
- Defenses still have a long way to go: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
- Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
- Characterizing Audio Adversarial Examples Using Temporal Dependency
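To give a flavor of the iterative adversarial retraining defense listed above, here is a minimal sketch that alternates training a logistic-regression model on clean data and on FGSM-style perturbed copies of that data; the model, perturbation budget, and training schedule are illustrative assumptions, not the tutorial's own code.

```python
# Minimal sketch of iterative adversarial retraining: alternate gradient steps
# on clean data and on adversarially perturbed data. All settings (model size,
# step sizes, epochs) are illustrative.
import numpy as np

rng = np.random.default_rng(2)
d = 20
X = rng.normal(size=(200, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(w, X, y, lr=0.1):
    # One gradient step on the logistic loss.
    p = sigmoid(X @ w)
    return w - lr * X.T @ (p - y) / len(y)

def fgsm(w, X, y, eps=0.1):
    # Perturb each input in the direction that increases its own loss.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)      # d(loss)/dx for each example
    return X + eps * np.sign(grad_x)

w = np.zeros(d)
for epoch in range(50):
    w = sgd_step(w, X, y)            # train on clean data
    X_adv = fgsm(w, X, y)            # craft adversarial versions
    w = sgd_step(w, X_adv, y)        # retrain on them as well

acc_clean = np.mean((sigmoid(X @ w) > 0.5) == y)
acc_adv = np.mean((sigmoid(fgsm(w, X, y) @ w) > 0.5) == y)
print("clean accuracy:", acc_clean, "adversarial accuracy:", acc_adv)
```

In practice the inner perturbation step is usually a stronger iterative attack (e.g., projected gradient descent), but the alternation between attack generation and retraining is the same.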

11:15 am - 11:30 am
Coffee break

11:30 am - 12:30 pm
Potential defenses
- Certified Defenses Against Adversarial Examples
- Provable Defenses Against Adversarial Examples via the Convex Outer Adversarial Polytope
Understanding poisoning attacks
- Optimization-based poisoning attack methods against collaborative filtering, SVM, and general supervised learning tasks (a minimal illustrative sketch follows this session)
Defense against poisoning attacks
- Robust Logistic Regression
- Robust Linear Regression Against Training Data Poisoning
- Certified Defenses for Data Poisoning Attacks
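To give a flavor of the poisoning setting covered in this session, here is a minimal sketch in which an attacker injects a handful of high-leverage points into the training set of an ordinary least-squares regressor to drag its slope toward an attacker-chosen value; the data sizes, injection budget, and target are illustrative assumptions.

```python
# Minimal sketch of a data-poisoning attack, assuming the attacker can inject
# a few points into the training set of an ordinary least-squares regressor.
import numpy as np

rng = np.random.default_rng(1)

def fit_ols(X, y):
    # Least-squares fit with an intercept column.
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (slope, intercept)

# Clean training data: y is approximately 2x plus noise.
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 0.1 * rng.normal(size=100)
print("clean fit (slope, intercept):", fit_ols(X, y))

# Poisoning: inject a small number of high-leverage points chosen to drag the
# slope toward the attacker's target model y = -x.
n_poison = 10
X_p = np.full(n_poison, 3.0)   # extreme feature value -> high leverage
y_p = -1.0 * X_p               # labels consistent with the target model
X_all = np.concatenate([X, X_p])
y_all = np.concatenate([y, y_p])
print("poisoned fit (slope, intercept):", fit_ols(X_all, y_all))
```

Robust-regression defenses such as those listed above aim to limit exactly this kind of influence from a small fraction of corrupted training points.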

For any questions, please contact the tutorial organizers at: lxbosky@gmail.com