HCAIM Webinar: Security and Privacy in Machine Learning

On Thursday, April 14, 2022, at 15:00 CET, we will be hosting a live session with an academic partner from the Budapest University of Technology and Economics (BME): Gergely Ács, who received his M.Sc. and PhD degrees in Computer Science.

Dr Ács conducted research in the Laboratory of Cryptography and System Security (CrySyS) and is currently an associate professor at the Budapest University of Technology and Economics (BME) in Hungary. Before that, he was a post-doc and later a research engineer in the Privatics team at INRIA, France. His general research interests include data privacy and security, as well as machine learning in this context.

Security and privacy play an indispensable role in building trust in any information system, and AI is no exception. If machine learning models are insecure or leak private/confidential information, companies will be reluctant to use them, which ultimately hinders AI and human development. Indeed, it has already been demonstrated that sensitive training data can be extracted from trained machine learning models, and that training data can be poisoned to make the model misclassify specific samples or to prolong training. Moreover, imperceptible modifications to the input data, called adversarial examples, can fool AI and cause misclassifications that potentially lead to life-threatening situations.
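To make the last point concrete, one well-known way to craft such adversarial examples is the fast gradient sign method (FGSM). The sketch below is illustrative only and is not taken from the talk; it assumes a PyTorch image classifier `model`, an input batch `x` with labels `y`, and a perturbation budget `epsilon`.

```python
# Minimal FGSM sketch (PyTorch). `model`, `x`, `y`, and `epsilon`
# are illustrative assumptions, not material from the talk.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Craft an adversarial example with one signed-gradient step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss on the clean input
    loss.backward()                      # gradient w.r.t. the input pixels
    # Nudge each pixel by +/- epsilon in the direction that raises the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()    # keep pixels in a valid range
```

For small `epsilon`, the perturbed image is visually indistinguishable from the original yet can flip the model's prediction.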

These are not far-fetched scenarios: stop signs with specially crafted adversarial stickers on them can be recognized as yield signs by self-driving cars, individuals wearing a crafted pair of glasses can be recognized as a different person by a face recognition system, and leaking a patient's membership in the training data of a model predicting cancer prognosis can reveal that the patient has cancer. Trustworthy machine learning is also mandated by regulations (such as the GDPR) whose violation can result in hefty fines for a company. There is therefore a great demand for experts who can audit the privacy and security risks of machine learning models and thereby demonstrate compliance with AI and privacy regulations.
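The patient scenario is an instance of membership inference. As a rough illustration of how such an audit can work (an assumed technique for this sketch, not necessarily the one covered in the talk), a simple loss-threshold attack exploits the fact that models typically fit their training points more tightly than unseen ones; `model`, `x`, `y`, and `threshold` below are hypothetical names.

```python
# Minimal loss-threshold membership inference sketch (PyTorch).
# `model`, `x`, `y`, and `threshold` are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def is_likely_member(model, x, y, threshold):
    """Guess whether the labeled sample (x, y) was in the training set.

    A loss below the calibrated `threshold` suggests the model has
    seen this sample during training, i.e. the sample is a member.
    """
    loss = F.cross_entropy(model(x), y)
    return loss.item() < threshold
```

In practice, `threshold` would be calibrated on held-out data; the attack's success rate above random guessing is one way to quantify how much a model leaks about its training set.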

In this talk, I will review the main security and privacy risks of machine learning models following the CIA (Confidentiality, Integrity, Availability) triad. I will demonstrate these issues in real applications, including malware detection, drug discovery, and synthetic data generation for the purpose of anonymization.

All sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions in our Webinars Archive. We will have more engaging discussions with top industry leaders, including our project partners from universities, research labs, industry, and beyond. A complete list of all project partners can be found here. View the live event here.
