Administrative Information
Title | Risk & Risk Mitigation in Practice |
Duration | 60 mins |
Module | B |
Lesson Type | Practical |
Focus | Ethical - Trustworthy AI |
Topic | Risk |
Keywords
Risk, Risk Mitigation
Learning Goals
- Learner gains practical experience with risk frameworks.
- Learner gains hands-on experience with relevant standards.
Expected Preparation
Learning Events to be Completed Before
Obligatory for Students
- Concepts of risks and their mitigation
Optional for Students
None.
References and background for students
None.
Recommended for Teachers
- Read the IEEE 7000™-2021 standard yourself before the lesson
Lesson materials
- Artificial Intelligence Act (legislation not developed by the HCAIM consortium)
- IEEE 7000™-2021 Standard (standard not developed by the HCAIM consortium)
Instructions for Teachers
- This activity is based on the IEEE 7000-2021 Standard Model Process for Addressing Ethical Concerns during System Design.
- This standard is expected to be endorsed under the EU AI Act, with specific customization.
- This standard comes with a risk-based approach.
- For each team of 4-6 students, specify an imaginary AI system the students need to develop. The idea here is to give each group a system with a different risk level (as per the EU AI Act risk categories). Examples:
- surveillance with face recognition for an airport (hint: this is prohibited except in very well-defined cases)
- automated essay grading system (hint: probably high-risk, human oversight needed)
- recommendation system for TV series (low risk, but there are provisions for children)
- in-game chat bot AI for an MMORPG (low risk)
- Have them read the risk categorization provisions of the AI Act (Title II and Title III)
Acknowledgements
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.