
HCAIM Webinar: Privacy Preserving Machine Learning (PPML) and Explainable AI (XAI)

January 27, 2022 @ 1:00 pm - 2:00 pm UTC+1

On Thursday, January 27, 2022, at 13:00 CET, we will host a live session with three eminent researchers from CeADAR in Ireland.

Inder Preet is a Data Scientist at CeADAR, Ireland’s National Centre for Applied Data Analytics and AI, with four years of experience and formal training in Physics. He is currently leading a project on Privacy-Preserving Machine Learning and is also interested in Edge AI and its applications to robotics.

Dr Alireza Dehghani is a Senior Research Fellow at CeADAR. As an academic scientist and high-tech technologist, he collaborates on a wide range of projects with CeADAR’s industry and academic partners in fields such as AI, ML and NLP. Alireza has a background in the high-tech industry and academia, with more than ten years of technical, leadership, research and teaching experience.

Dr Oisín Boydell is Principal Data Scientist and Head of the Applied Research Group at CeADAR, Ireland’s centre for Applied AI at University College Dublin. His primary research interests include deep learning and machine learning, real-time analytics and blockchain technology. After working as a software developer in the UK, Oisín returned to UCD to undertake a PhD in Computer Science, researching novel approaches to personalized information retrieval. Prior to joining CeADAR, he worked for a number of years in the research and innovation team at ChangingWorlds, where he developed big data analytics and machine learning solutions for the telecommunications industry. At CeADAR, Oisín leads industry-focussed research projects in collaboration with industry partners across a broad range of technology areas, including machine learning, deep learning, explainable AI, blockchain and NLP.
In this webinar, the presenters will discuss how the wide adoption of AI, together with a growing understanding of the need for a more human-centred approach, has spurred renewed interest in tools and approaches that support these human-focussed aspects. While AI researchers have always been interested in data privacy, algorithmic bias, the explainability of machine-made decisions and related topics, these fields are more salient now than ever, particularly as machine learning algorithms become increasingly complex and opaque.

Machine Learning (ML) has been shown to cater to a wide range of problems, but most ML algorithms are data-hungry. The widespread adoption of ML has therefore also led to massive surveillance, with data collected ubiquitously. Given these circumstances, an individual’s data is paramount to his or her privacy, and regulations such as the GDPR have come into being to protect it; beyond regulation, the ML community also shoulders a moral responsibility for privacy protection. This is where Privacy-Preserving Machine Learning (PPML) comes into the picture: an effort by the research community to build privacy into ML algorithms. It is a loosely defined term encompassing many technologies that can be used for privacy protection, such as homomorphic encryption, secure multi-party computation and differential privacy. These technologies have yielded positive results so far and have been adopted by many companies, including giants such as Microsoft and Apple, while many young companies are working to bring the research in this area to market.

Explainable AI (XAI) is another area receiving a lot of attention. With the increasing complexity of machine learning algorithms that leverage massive, highly heterogeneous datasets, there is a need for humans to be able to interpret and understand the decisions made by these AI systems.
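To give a flavour of one of the PPML techniques mentioned above, here is a minimal sketch of the Laplace mechanism from differential privacy. This is an illustrative example only; the function names and parameters are our own and are not taken from the webinar:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean(values, lower, upper, epsilon):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper]; the sensitivity of the
    mean is then (upper - lower) / n, so adding Laplace noise with
    scale = sensitivity / epsilon yields epsilon-differential privacy.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)

# Hypothetical example: a privacy-preserving average age over a small dataset.
ages = [34, 29, 41, 38, 25, 47, 31, 36]
print(private_mean(ages, lower=18, upper=90, epsilon=1.0))
```

Smaller values of `epsilon` mean stronger privacy but noisier answers; real deployments use carefully audited libraries rather than hand-rolled noise like this.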
Tarry will moderate a session with Alireza Dehghani, Inder Preet and Oisín Boydell from CeADAR, who will discuss two technology areas that have recently received a lot of attention, Privacy-Preserving Machine Learning (PPML) and Explainable AI (XAI), both of which are relevant for practitioners developing human-centred AI applications and solutions. Some of the points to be discussed are:
  • What is PPML, and how is it different from general ML?
  • PPML activities and trends in companies and startups.
  • Career roadmap for an HCAIM graduate to be hired by these companies and work on PPML.
  • What skill sets does an HCAIM graduate need to work in the PPML field?
  • How graduates of degrees such as HCAIM can help companies build a PPML team, and what a PPML job specification might look like for a company hiring engineers.
  • Possible PhD and research paths for graduates of the HCAIM degree.
  • What is explainable AI (XAI) and why is it relevant in the context of human-centred AI? What are the challenges in making AI decisions explainable?
  • General-purpose vs model/algorithm specific explainability and different types of data.
  • How do multinationals like Google, Apple and Microsoft ensure the privacy of the data collected on their platforms?
All sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions in our Webinars Archive. We will have more engaging discussions with top industry leaders, including our project partners from universities, research labs, industry and beyond. A complete list of all project partners can be found here. View the live event here.

Details

Date:
January 27, 2022
Time:
1:00 pm - 2:00 pm UTC+1
Website:
https://www.linkedin.com/video/event/urn:li:ugcPost:6909474760137547777/

Venue

LinkedIn Live