HCAIM Webinar: The European Approach Towards Reliable, Safe, and Trustworthy AI
March 17, 2022 @ 3:00 pm - 4:00 pm UTC+1
On Thursday, March 17, 2022, at 15:00 CET, we will host a live session with Dr George Sharkov, Director of the European Software Institute – Center Eastern Europe (ESI CEE). Following the EU Strategy for AI Development in Europe, the High-Level Expert Group on AI (HLEG AI) published the “Ethics Guidelines for Trustworthy AI” in 2019, proposing a human-centric approach to AI and defining seven key requirements that AI systems must meet to be trustworthy. In 2020, further deliverables outlined the practical aspects of the legal basis, ethical norms, and technical robustness requirements, such as the “Policy and Investment Recommendations for Trustworthy AI,” the “Assessment List for Trustworthy AI” (ALTAI), and a report on sectoral considerations. Other European Commission initiatives included a Communication on Building Trust in Human-Centric Artificial Intelligence, a White Paper on AI, and an updated Coordinated Plan on AI. Together, these developed a novel risk-based approach to the development and deployment of AI-based systems in Europe, which resulted in the AI Regulation proposal of April 2021.
To address the challenges and the newly specified criteria of the forthcoming legal and ethical framework, preparatory work has begun on the industrial and technological components of AI/ML platforms, which will mature into standards and specifications. The aim is to accelerate industrial and business implementations through dedicated horizontal or sector-specific recommendations, testing and conformity assessment procedures, and, where required, certification. In this webinar, we will present some of the current work at ETSI ISG SAI (Industry Specification Group “Securing Artificial Intelligence”). From a standards perspective, AI and security intersect in three ways: securing AI from attack, mitigating malicious uses of AI, and using AI to strengthen security. We will give more detail on previously published and ongoing work items:
- Securing AI Problem Statement. Data, algorithms, and models in training and deployment environments, and the challenges that set them apart from traditional SW/HW systems.
- Mitigation Strategy Report. Known and potential mitigations against threats to AI, analyzing their security capabilities, advantages, and suitable scenarios.
- Data Supply Chain Report. Methods for sourcing data to train AI; regulations, standards, and protocols that ensure the traceability and integrity of data and its attributes, and the confidentiality of information.
- Security Testing of AI (Specification/Standard GS SAI 003). Testing of ML components: mutation testing, differential testing, adversarial testing, test adequacy criteria, adversarial robustness, and security test oracles (see the sketch after this list).
- Explicability and Transparency of AI processing. Addressing issues arising from regulation, ethics, misuse, and HCAI.
- Privacy Aspects of AI/ML systems. Definitions, the multiple levels of trust affecting data, attacks, and mitigation techniques.
- Traceability of AI Models. Sharing and reusing models across tasks and industries, and model verification.
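To make one of these topics concrete, here is a minimal, self-contained sketch of adversarial robustness testing, one of the techniques named under GS SAI 003 above. It is our own illustration, not material from the webinar or from the specification: it trains a toy logistic-regression classifier on synthetic data, perturbs the inputs with the fast gradient sign method (FGSM), and reports the resulting drop in accuracy.

```python
# Illustrative sketch only (not from GS SAI 003): adversarial robustness
# testing of a tiny logistic-regression classifier using an FGSM probe.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two overlapping Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient of cross-entropy w.r.t. w
    b -= 0.5 * np.mean(p - y)            # gradient w.r.t. b

def accuracy(X_eval):
    p = 1 / (1 + np.exp(-(X_eval @ w + b)))
    return np.mean((p > 0.5) == y)

# FGSM: nudge each input in the direction that increases the loss.
# For logistic regression, d(loss)/dx = (p - y) * w.
eps = 0.5
p = 1 / (1 + np.exp(-(X @ w + b)))
X_adv = X + eps * np.sign(np.outer(p - y, w))

print(f"clean accuracy:       {accuracy(X):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv):.2f}")  # expect a drop
```

A security test oracle in this setting could, for example, flag the model whenever adversarial accuracy falls below an agreed threshold, turning the robustness probe into a pass/fail conformity check.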
Last but not least, we will examine the next steps toward implementing the AI Act, including the AI certification schemes under development in ENISA’s AI working groups.
All sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders, including our project partners from universities, research labs, industry, and beyond. A complete list of all project partners can be found here. View the live event here.