Administrative Information
Title | Defenses against Evasion and Poisoning in Machine Learning |
Duration | 90 min |
Module | B |
Lesson Type | Practical |
Focus | Ethical - Trustworthy AI |
Topic | Evasion and Poisoning of Machine Learning |
Keywords
Mitigation, Robustness, Adversarial examples, Backdoor, Poisoning, Trade-off
Learning Goals
- Gain practical skills to mitigate integrity problems of machine learning
- Design robust machine learning models
- Mitigate evasion (adversarial examples)
- Mitigate backdoors (poisoning)
- Evaluate the trade-off between robustness and model accuracy
Expected Preparation
Learning Events to be Completed Before
- Lecture: Security and robustness
- Practical: Apply auditing frameworks
- Lecture: Model Evaluation
- Lecture: Inference and Prediction
- Lecture: Model Fitting and Optimization
- Practical: Model Fitting and Optimization
- Lecture: Data Preparation and Exploration
- Practical: Data Preparation and Exploration
- Lecture: Neural Networks
Obligatory for Students
- Python
- Scikit
- Pandas
- ART (Adversarial Robustness Toolbox)
- virtual-env
- Backdoors
- Poisoning
- Adversarial examples
- Neural Cleanse
- Adversarial training
- Model evaluation
Optional for Students
None.
References and background for students
- HCAIM Webinar on the European Approach Towards Reliable, Safe, and Trustworthy AI (Available on YouTube)
- Adversarial Examples and Adversarial Training
- Adversarial Robustness - Theory and Practice
- Towards Evaluating the Robustness of Neural Networks
- Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
- Towards Deep Learning Models Resistant to Adversarial Attacks
Recommended for Teachers
Lesson materials
Instructions for Teachers
The first part of this laboratory exercise is covered in Practical: Apply auditing frameworks, which shows how to audit the robustness of ML models against evasion and data poisoning attacks. The current learning event is about mitigating these threats with adversarial training (against evasion) and Neural Cleanse (against poisoning).
While machine learning (ML) models are increasingly trusted to make decisions in many different areas, the safety of systems using such models has become an increasing concern. In particular, ML models are often trained on data from potentially untrustworthy sources, providing adversaries with the opportunity to manipulate them by inserting carefully crafted samples into the training set. Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior to be activated by simple external backdoor triggers at inference time, with no direct access to the model itself (black-box attack).
As an illustration, suppose that the adversary wants to create a backdoor on images so that all images containing the backdoor are misclassified as a certain target class. For example, the adversary adds a special symbol (called a trigger) to each image of a “stop sign”, re-labels them as “yield sign” and adds these modified images to the training data. As a result, the model trained on this modified dataset learns that any image containing this trigger should be classified as “yield sign”, no matter what the image actually shows. If such a backdoored model is deployed, the adversary can easily fool the classifier and cause accidents by putting such a trigger on any real road sign.
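The trigger-and-relabel step described above can be reproduced in a few lines of NumPy. The sketch below is illustrative rather than part of the official lab materials: the array shapes, the trigger size and position, the poisoning fraction and the function name are all assumptions.

```python
import numpy as np

def poison_with_trigger(x, y, target_label, poison_fraction=0.1, seed=0):
    """Stamp a small white-square trigger into a fraction of the images and
    re-label them as the attacker's target class (e.g. "yield sign").

    Assumes x has shape (n_samples, height, width, channels) with values in
    [0, 1] and y is an integer label array of shape (n_samples,).
    """
    rng = np.random.default_rng(seed)
    x_poisoned, y_poisoned = x.copy(), y.copy()
    n_poison = int(poison_fraction * len(x))
    idx = rng.choice(len(x), size=n_poison, replace=False)

    # The trigger: a 3x3 white patch near the bottom-right corner.
    x_poisoned[idx, -4:-1, -4:-1, :] = 1.0
    # Flip the labels of the poisoned images to the target class.
    y_poisoned[idx] = target_label
    return x_poisoned, y_poisoned
```

A model trained on the returned data behaves normally on clean images but predicts target_label whenever the patch is present; this is exactly the backdoor behaviour that Neural Cleanse is used to detect and remove later in the session.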
Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are almost indistinguishable from the original to the human eye but cause the network to fail to identify the contents of the image. There are several types of such attacks; here, the focus is on the Fast Gradient Sign Method (FGSM) attack, an untargeted attack whose goal is to cause misclassification into any class other than the true one. It is also a white-box attack, which means that the attacker has complete access to the parameters of the model being attacked in order to construct an adversarial example. An illustrative example of generating such inputs with ART is sketched below.
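The following sketch shows how FGSM adversarial examples could be generated with ART (the Adversarial Robustness Toolbox listed under the prerequisites). It assumes a trained Keras image classifier called model with inputs scaled to [0, 1] and a clean test set x_test; these names, the eps value and the choice of the KerasClassifier wrapper are illustrative assumptions, and the appropriate wrapper depends on the framework used to build the model.

```python
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier

# Wrap the trained Keras model (assumed to exist) so ART can query its
# predictions and input gradients.
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# FGSM perturbs each input by eps * sign(gradient of the loss w.r.t. the
# input); eps bounds the perturbation size and controls how visible it is.
attack = FastGradientMethod(classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)
```

Comparing the model's accuracy on x_test and on x_test_adv quantifies how much damage the attack does before any defence is applied.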
Outline
In this lab session, you will recreate security risks for AI vision models and then mitigate those attacks. Specifically, students will (illustrative sketches of these steps follow the list)
- Mitigate evasion with adversarial training;
- Mitigate poisoning with Neural Cleanse;
- Report attack accuracy and model accuracy when these mitigations are applied.
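One possible flow for the defence and evaluation steps is sketched below using ART. It continues from the FGSM sketch above (classifier, attack) and additionally assumes training data x_train, y_train and one-hot encoded test labels y_test; the ratio, epoch count and batch size are illustrative, not prescribed values.

```python
import numpy as np
from art.defences.trainer import AdversarialTrainer

# Adversarial training: each batch is augmented with FGSM examples generated
# on the fly; ratio=0.5 means half of every batch is adversarial.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=10, batch_size=128)

# Report both numbers for the write-up: clean accuracy (robustness usually
# costs some of it) and accuracy on adversarial examples (which should rise).
x_test_adv = attack.generate(x=x_test)
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == np.argmax(y_test, axis=1))
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == np.argmax(y_test, axis=1))
print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")
```

For the poisoning defence, ART ships a Neural Cleanse transformer that is applied to the (potentially backdoored) classifier. The call pattern below follows ART's example notebook; the keyword arguments and the mitigation type are assumptions that may differ between ART versions, and clean_x_val / clean_y_val stand for a small trusted validation set.

```python
from art.defences.transformer.poisoning import NeuralCleanse

# Neural Cleanse reverse-engineers a candidate trigger for each class, flags
# classes whose trigger is anomalously small, and then mitigates the backdoor.
cleanse = NeuralCleanse(classifier)
defence = cleanse(classifier, steps=10, learning_rate=0.1)
defence.mitigate(clean_x_val, clean_y_val, mitigation_types=["unlearning"])
```

After mitigation, re-measure the attack success rate on triggered images and the accuracy on clean images to document the robustness/accuracy trade-off.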
Students will form groups of two and work as a team. Each group hands in a single documentation/solution.
Acknowledgements
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.