
Lecture: Security and robustness

Administrative Information

Title Trustworthy Machine Learning
Duration 60 min
Module B
Lesson Type Lecture
Focus Ethical - Trustworthy AI
Topic Confidentiality, Integrity and Availability Problems in Machine Learning

Keywords

Confidentiality, Integrity, Availability, Poisoning, Evasion, Adversarial examples, Sponge examples, Backdoors, Explainability evasion, Robustness, Trade-off

Learning Goals

Expected Preparation

Obligatory for Students

  • Basics in Machine Learning

Optional for Students

None.

Recommended for Teachers

Lesson materials

Instructions for Teachers

This lecture provides an overview of the security of machine learning systems. It focuses on attacks that are useful for auditing the robustness of machine learning models. Teachers are recommended to use real-life examples to demonstrate the practical relevance of these vulnerabilities, especially for privacy-related issues, whose practical relevance is often debated and sometimes considered an obstacle to development. Students must understand that privacy risks can also slow down progress: parties facing confidentiality risks may be reluctant to share their data. Students gain an understanding of the different security and privacy risks of ML models and can further develop practical skills to audit ML models in the related practical learning events.
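To make the confidentiality risk concrete for students, a common illustration is the loss-threshold membership inference attack: overfitted models assign unusually low loss to their training examples, so an attacker who can query per-example losses can guess who was in the training set. The sketch below is a hypothetical toy, not part of the lesson materials; the loss distributions are synthetic assumptions standing in for a real model's outputs.

```python
import numpy as np

# Toy membership-inference (threshold) attack: guess that an example was a
# training member if its loss under the model is below a threshold tau.
# The per-example losses below are SYNTHETIC assumptions, simulating an
# overfitted model (members get low loss, non-members higher loss).

rng = np.random.default_rng(1)
member_losses = rng.exponential(0.2, size=1000)      # seen in training: low loss
nonmember_losses = rng.exponential(1.0, size=1000)   # unseen: higher loss

tau = 0.5  # attacker's threshold (an assumed tuning choice)

def guess_member(losses):
    """Attack decision: low loss => guessed member."""
    return losses < tau

tpr = np.mean(guess_member(member_losses))      # true positive rate
fpr = np.mean(guess_member(nonmember_losses))   # false positive rate
print(tpr, fpr)  # the attack is informative when tpr clearly exceeds fpr
```

The gap between the true and false positive rates is exactly the kind of measurement produced by the auditing exercises in the related practical learning events.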

Outline

Duration (min)  Description      Concepts
5               CIA triad        CIA (confidentiality, integrity, availability) in machine learning
15              Confidentiality  Membership inference attack, training data extraction, model stealing
20              Integrity        Evasion, poisoning (targeted, untargeted), evading explainability, backdoors
15              Availability     Generating sponge examples
5               Conclusions
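The evasion attack in the integrity segment is typically demonstrated with adversarial examples, e.g. the Fast Gradient Sign Method (FGSM): perturb an input in the direction of the sign of the loss gradient, bounded in the L-infinity norm. The following is a minimal white-box sketch on a toy logistic-regression model; the weights and data are illustrative assumptions, not lesson materials.

```python
import numpy as np

# FGSM evasion sketch on a toy logistic-regression model.
# White-box setting: the attacker knows the weights w (an assumption).

rng = np.random.default_rng(0)
w = rng.normal(size=20)   # synthetic model weights
b = 0.0

def predict(x):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """One FGSM step: move eps in the sign of the loss gradient w.r.t. x,
    which maximally increases the cross-entropy loss within an
    L-infinity ball of radius eps."""
    p = predict(x)
    grad = (p - y) * w        # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad)

x = rng.normal(size=20)
y = 1.0 if predict(x) >= 0.5 else 0.0   # the model's own label for x
x_adv = fgsm(x, y, eps=0.5)

print(predict(x), predict(x_adv))  # the adversarial score moves away from y
```

The same one-liner gradient step generalizes to neural networks by backpropagating the loss to the input, which is how students would audit robustness in the practical sessions.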

Acknowledgements

The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.