BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//HCAIM - ECPv6.9.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:HCAIM
X-ORIGINAL-URL:https://humancentered-ai.eu
X-WR-CALDESC:Events for HCAIM
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Dublin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:IST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:IST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20210101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Budapest
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Brussels
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Helsinki
BEGIN:DAYLIGHT
TZOFFSETFROM:+0200
TZOFFSETTO:+0300
TZNAME:EEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0300
TZOFFSETTO:+0200
TZNAME:EET
DTSTART:20221030T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240604
DTEND;VALUE=DATE:20240607
DTSTAMP:20260403T190555Z
CREATED:20231105T113556Z
LAST-MODIFIED:20231110T114547Z
UID:6428-1717459200-1717718399@humancentered-ai.eu
SUMMARY:9th International Symposium on Language & Knowledge Engineering - LKE 2024
DESCRIPTION:We’re thrilled to announce the 9th International Symposium on Language & Knowledge Engineering (LKE 2024)\, taking place in the vibrant city of Dublin\, Ireland\, from June 4th to 6th. The symposium is organized by the School of Enterprise Computing and Digital Transformation at Technological University Dublin\, Grangegorman Campus. LKE 2024 promises to be a dynamic forum for the exchange of scientific results\, experiences\, and the sharing of new knowledge. \nExplore\, Discover\, Innovate: \nJoin us for an immersive experience featuring cutting-edge tracks: \n\nLanguage and Knowledge Engineering: From Natural Language Processing to Human-Computer Interaction\, dive into the latest advancements.\nScholarly Information Processing: Uncover the world of bibliographic research\, AI\, and data mining with a focus on knowledge dissemination.\nComputational Approaches: Explore smart cities\, robotics\, and computational intelligence\, shaping the future of language and knowledge engineering.\nAI and Ethics: Delve into the ethical dimensions of AI\, exploring human-centered approaches and socially responsible AI.\n\nWhy Attend? \n\nSpecial Issues: Selected papers will be featured in prestigious journals\, contributing to the global discourse.\nNetworking: Connect with industry leaders\, researchers\, and peers\, fostering collaborations that transcend boundaries.\nInnovation: Ignite your curiosity\, challenge norms\, and be at the forefront of revolutionary ideas.\n\nMark Your Calendar: \n\nCFP Issued: October 2023\nSubmission Deadline: January 15th\, 2024\nConference: June 4th to 6th\, 2024\nLocation: Dublin\, Ireland\n\nDon’t miss your chance to be a part of this transformative event! For more details and submission guidelines\, visit the symposium website.
URL:https://humancentered-ai.eu/event/9th-international-symposium-on-language-knowledge-engineering-lke-2024/
LOCATION:Technological University Dublin\, Grangegorman Campus\, Grangegorman Lower\, Dublin 7\, D07 H6K8\, Ireland
CATEGORIES:Events
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2023/11/Pastel-Minimalist-Our-Mission-Instagram-Post.jpg
ORGANIZER;CN="Technological University Dublin":MAILTO:info-hcaim@tudublin.ie
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240307T110000
DTEND;TZID=UTC:20240307T130000
DTSTAMP:20260403T190555Z
CREATED:20240301T111341Z
LAST-MODIFIED:20240301T112028Z
UID:6457-1709809200-1709816400@humancentered-ai.eu
SUMMARY:WEBINAR: Human-Centered ML in Mobile and Wearable Data Ecosystems
DESCRIPTION:CeADAR invites you to a webinar on Thursday\, March 7th\, 2024\, at 11 AM Dublin time\, featuring Sofia Yfantidou from the Aristotle University of Thessaloniki. Sofia is a prominent figure in the field of Human-Centred AI\, with a focus on mobile and wearable data. Register for the event here. \n\nAbout the Talk: \n\nThe tech talk will explore the fascinating convergence of mobile and wearable technologies\, machine learning algorithms\, and human-centred concerns within the Ubiquitous Computing domain. This convergence has opened up numerous industrial applications\, from healthcare monitoring to workplace safety\, remote assistance\, and more. However\, the enthusiastic adoption of machine learning in such technologies raises significant ethical dilemmas and biases. \n\nSofia Yfantidou’s talk will delve into original research spanning user interaction\, data collection\, and algorithmic equity. She will propose innovative solutions to mitigate alignment challenges\, including a framework for designing and evaluating technologies for behaviour change interventions\, open datasets for interdisciplinary research\, and methodologies for fairness-aware computing. \n\nBy emphasizing the integration of technical advancements with human-centred values\, Sofia advocates for a holistic approach to mobile and wearable computing. Her talk highlights the importance of aligning machine learning systems with societal needs and expectations. \n\nAbout the Speaker: \n\nSofia Yfantidou is a Marie Skłodowska-Curie fellow at the Innovative Training Network “Real-time Analytics for the Internet of Sports” (RAIS) and a Doctoral Candidate at the Aristotle University of Thessaloniki in Greece. She has previously worked with industry leaders such as Siemens Mobility and Nokia Bell Labs\, focusing on mobile and wearable computing and human-centred machine learning. 
Sofia is known for her research on defining\, quantifying\, and mitigating biases in ubiquitous data and models for health and well-being. \n\nA Heidelberg Laureate Forum alumna and a Grace Hopper scholar\, Sofia holds a European Joint Master’s Degree in “Big Data Management and Analytics” from leading European universities. \n\nDon’t miss this opportunity to gain valuable insights into the future of Human-Centred AI. Mark your calendars for March 7th\, 2024\, and join us for this engaging tech talk. \n\nRegister for the event here.
URL:https://humancentered-ai.eu/event/webinar-human-centered-ml-in-mobile-and-wearable-data-ecosystems/
CATEGORIES:Events,Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2024/03/08739492-32eb-464b-b0d0-80cb8301afe0.jpg
ORGANIZER;CN="CeADAR":MAILTO:ceadar@ucd.ie
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231214
DTEND;VALUE=DATE:20231216
DTSTAMP:20260403T190555Z
CREATED:20230910T120442Z
LAST-MODIFIED:20231110T121215Z
UID:6435-1702512000-1702684799@humancentered-ai.eu
SUMMARY:HCAI-EP 2023: The Human-Centred AI Education & Practice Conference
DESCRIPTION:We are delighted to announce the Human-Centred AI Education and Practice (HCAI-EP) conference\, taking place on December 14-15\, 2023\, organized by the Irish ACM SIGCSE Chapter and Technological University Dublin\, and supported by the Human-Centred AI MSc (HCAIM). This conference serves as a platform for researchers and practitioners to explore and share high-quality contributions in the field of Human-Centred AI in Education and Practice. \nSubmission Categories: \n\nPapers: Research or practice papers (6 pages) for presentation and publication in the HCAI-EP proceedings in the ACM Digital Library.\nShort Papers: 3 to 4-page papers on research or practice\, published on the conference website (not in the ACM Digital Library).\nPosters: Single-page abstracts published in the HCAI-EP proceedings in the ACM Digital Library\, with a corresponding poster presentation at the conference.\n\nSubmission Details: All contributions should be submitted via EasyChair. Before submitting\, please review the detailed submission format instructions and guidelines available on the conference website. \nThemes of Interest: \n\nEducation: Human-Centred AI Education Pedagogy\, Curriculum Development\, Assessing and Providing Feedback\, Inclusivity and Diversity\, Tools for Human-Centred AI Education\, Ethics\, and Policy Integration.\nPractice: Human-Centred AI in Digital Transformation\, Integration into Production Models\, Healthcare\, Building Trust\, Transparency\, Explainable AI (XAI)\, and Fintech.\n\nTimeline: \n\nConference Dates: December 14-15\, 2023\nCall for Participation: August 21\, 2023\nFull and Short Papers Submission Deadline: October 15\, 2023\nNotification of Paper Acceptance: October 31\, 2023\nFinal Camera-Ready Paper Submission: November 19\, 2023\n\nContact Information: For any inquiries\, please feel free to reach out to the conference chairs at hcaimep23@easychair.org. 
\nWe welcome submissions from all stages of education and practice related to Human-Centred AI. Join us in shaping the future of AI education and practice. We look forward to your valuable contributions!
URL:https://humancentered-ai.eu/event/hcai-ep-2023-the-human-centred-ai-education-practice-conference/
LOCATION:TU Dublin Tallaght Campus\, Blessington Rd\, Tallaght\, Dublin 24\, Ireland
CATEGORIES:Events
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2023/11/Blue-white-creative-marketing-agency-facebook-post.jpg
ORGANIZER;CN="Technological University Dublin":MAILTO:info-hcaim@tudublin.ie
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230628
DTEND;VALUE=DATE:20230629
DTSTAMP:20260403T190555Z
CREATED:20230606T092534Z
LAST-MODIFIED:20230606T092534Z
UID:6388-1687910400-1687996799@humancentered-ai.eu
SUMMARY:Conference on Human-Centred Regulation of AI (HCRAI)
DESCRIPTION:Research into the laws and automation of rational thinking and acting has been going on for centuries and has accelerated over the last eight decades within the framework of artificial intelligence (AI). The technological advances of the last decade have opened the door to the mass production of AI products and prompted an existential question: how to create an ethical and human-centred AI regulation that also protects the Earth’s biosphere and supports further development? \nThe single-day Human-Centred Regulation of AI (HCRAI) Conference will be held on Wednesday\, June 28\, 2023\, at the Budapest University of Technology and Economics. Speakers will discuss the EU AI Act\, including its objectives\, structure\, interpretation\, available information resources and educational elements\, case studies\, as well as the practical steps required for different stakeholders and AI developers. The morning session will overview various aspects of the EU AI Act\, and the afternoon session will highlight educational and technological challenges and present case studies. The conference will end with a round-table discussion. \nSpeakers: B. Hankó (State secretary\, MCI)\, J. Levendovszky (Vice rector\, BME)\, H. Charaf (Dean\, BME VIK)\, D. Tzanidakis\, D. Petrányi (ELTE)\, L. Bódis (Deputy state secretary\, MCI)\, Á. Tényi (E-Group)\, TBA (QTICS)\, B. Feeney (HCAIM project\, TUD)\, P. Antal (BME VIK)\, K. Mezei (BME GTK)\, M. Héder (BME GTK)\, Sz. Németh (Continental)\, TBA (NVIDIA)\, TBA (Microsoft)\, Tarry Singh (Real AI B.V.)\, Gabriele Franco\, and more. \nMore information and the full program are available here. \nDate: Wednesday\, June 28\, 2023\nLocation: 1117 Budapest\, Magyar tudósok körútja 2.\, BME Building I\, Room 028\nWebsite: https://hcaim.bme.hu/hcrai\nRegistration: https://forms.gle/vDc2bDo5xNW25v6W6
URL:https://humancentered-ai.eu/event/conference-on-human-centred-regulation-of-ai-hcrai/
LOCATION:BME Building I\, Room 028\, Magyar tudósok körútja 2.\, Budapest\, 1117\, Hungary
ATTACH;FMTTYPE=image/png:https://humancentered-ai.eu/wp-content/uploads/2023/05/hcrai2023.792x0-is.png
ORGANIZER;CN="HCAIM":MAILTO:info@humancentered-ai.eu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Brussels:20230425T120000
DTEND;TZID=Europe/Brussels:20230425T150000
DTSTAMP:20260403T190555Z
CREATED:20230420T114257Z
LAST-MODIFIED:20230420T114311Z
UID:6352-1682424000-1682434800@humancentered-ai.eu
SUMMARY:Join Diya Wynn in Dublin for a Day Dedicated to Ethical AI!
DESCRIPTION:We are thrilled to announce that on Tuesday\, 25th April\, at 11:00 IST (12:00 CEST)\, TU Dublin Tallaght Campus will be hosting a keynote speech by Diya Wynn. Diya is the Senior Practice Manager in Responsible AI for the AWS Machine Learning Solutions Lab\, and she is leading the practice on global customer engagement. \nDiya is an internationally recognized technologist with over 25 years of experience in scaling products for acquisition\, driving inclusion\, diversity & equity initiatives\, and leading operational transformation across industries. She will be sharing her insights on establishing an AI/ML operating model that enables inclusive and responsible products. \nDiya is also an international best-selling author and has spoken at industry events across 15 countries\, including Ukraine\, Belgium and Australia\, and at the United Nations General Assembly. She has received numerous awards and recognitions for her leadership and advocacy\, including the AWS Inclusion Ambassador Award\, Makers Influencers & Innovators in STEM\, and ID&E Technologist of the Year\, to name a few. \nDon’t miss this exciting opportunity to learn from one of the leading experts in responsible AI. The event will be held at TU Dublin Tallaght Campus\, and the specific room will be announced shortly. We look forward to seeing you there!
URL:https://humancentered-ai.eu/event/join-diya-wynn-in-dublin-for-a-day-dedicated-to-ethical-ai/
LOCATION:TU Dublin Tallaght Campus\, Blessington Rd\, Tallaght\, Dublin 24\, Ireland
CATEGORIES:Events
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2023/04/Ethical-AI-in-Practice.jpg
ORGANIZER;CN="HCAIM":MAILTO:info@humancentered-ai.eu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Brussels:20230420T100000
DTEND;TZID=Europe/Brussels:20230420T163000
DTSTAMP:20260403T190555Z
CREATED:20230412T134541Z
LAST-MODIFIED:20230412T134717Z
UID:6311-1681984800-1682008200@humancentered-ai.eu
SUMMARY:2023 AI Ethicon
DESCRIPTION:We are excited to announce the upcoming AI Ethicon event taking place on 20 April 2023. With AI solutions transforming society at an unprecedented rate\, the event will provide a platform for participants to better understand the ethical issues and uses of these technologies. \nThe event will take place at 10:15 CET at Technological University Dublin\, University of Naples Federico II\, HU University of Applied Sciences Utrecht\, or Budapest University of Technology and Economics. Please note that participants must currently be registered in the HCAIM programme at any of the four participating academic institutions. \nMore information is available here. \nRegistration is now open. Register here.
URL:https://humancentered-ai.eu/event/2023-ai-ethicon/
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2023/04/2023-AI-Ethicon.jpg
ORGANIZER;CN="HCAIM":MAILTO:info@humancentered-ai.eu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Helsinki:20220912T120000
DTEND;TZID=Europe/Helsinki:20220912T130000
DTSTAMP:20260403T190555Z
CREATED:20220912T185648Z
LAST-MODIFIED:20220912T185648Z
UID:5695-1662984000-1662987600@humancentered-ai.eu
SUMMARY:Trustworthy AI in Practice. From High-Level Principles to Data Science Practice
DESCRIPTION:While the topic of Trustworthy AI has been appearing on many policy and corporate agendas over the last 3 to 5 years\, translating often rather abstract goals and terminology into data science practice and organisational day-to-day reality is not an easy task.\n\nOn Sept 15\, 2022\, at 12:00 PM CET\, Dr Tjerk Timan\, a senior policy analyst at the Netherlands Organisation for Applied Scientific Research (TNO)\, will delve into recent cases in which\, in a multi-disciplinary setting\, his team aimed to put Trustworthy AI principles into practice. During the development of an AI-based service\, they started off by looking at the oversight and auditability of such a system and the different roles\, responsibilities and (technical) tools to assess and ameliorate AI-based systems towards a higher level of trustworthiness.\n\nDr Timan will highlight two use cases in which he investigated bias\, model robustness and explainability\, and will show parts of a handbook in the making on how to set up AI experiments within organisations\, based on experiences in these and other use cases.\n\nSpeaker: Tjerk Timan\, Netherlands Organisation for Applied Scientific Research\n\nThis event is organized by HCAIM partner CeADAR – Ireland’s Centre for Applied AI.
URL:https://humancentered-ai.eu/event/trustworthy-ai-in-practice-from-high-level-principles-to-data-science-practice/
CATEGORIES:Events,Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/09/Blue-Brushstrokes-Art-Square-Pillow.jpg
ORGANIZER;CN="CeADAR":MAILTO:ceadar@ucd.ie
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220616T094500
DTEND;TZID=Europe/Paris:20220617T180000
DTSTAMP:20260403T190555Z
CREATED:20220527T145244Z
LAST-MODIFIED:20220527T150244Z
UID:5509-1655372700-1655488800@humancentered-ai.eu
SUMMARY:AI Ethicon
DESCRIPTION:AI solutions are now ubiquitous and transforming our societies at an unprecedented rate. With numerous examples of unethical AI use\, the need for a better understanding of the ethical use of AI technologies in our societies is more pressing than ever. \nThe AI Ethicon is an event suitable for students in AI\, professionals\, and enthusiasts\, regardless of their experience in AI and PPML\, provided they have at least the following foundation skills: \n\nStrong technological background with at least a basic level of Python programming experience (and/or alternatively R).\nAt least some general idea of\, or practical experience in\, machine learning.\nBasic knowledge of data privacy concepts.\nInterest in ethics and cutting-edge technology for the good of humanity.\n\nYou are welcome to join us regardless of your experience in the field! \nRegister now: https://www.eventbrite.ie/e/347039613827 \nWe have divided the event into two days with several activities and exciting challenges in the application area of Artificial Intelligence from an ethical perspective. The Ethicon is geared toward potential students interested in starting the Human-Centred AI Master’s Programme\; however\, everyone is welcome! \nView the full programme here. 
\nWhy Participate?\n\nEngage with an international audience.\nTalk with experts in AI and Ethics.\nTest yourself on the most recent AI challenges.\nParticipation on day one will entitle you to a Certificate of Achievement issued by HCAIM and the four participating universities.\nParticipation on day two will entitle you to a certificate from NVIDIA.\n\nWHEN:\nJune 16-17\, 2022\n\nWHERE:\nTechnological University Dublin\nUniversity of Naples Federico II\nHU University of Applied Sciences Utrecht\nBudapest University of Technology and Economics
URL:https://humancentered-ai.eu/event/ethicon/
CATEGORIES:Events
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/05/Ethicon.jpg
ORGANIZER;CN="HCAIM":MAILTO:info@humancentered-ai.eu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20220512T110000
DTEND;TZID=UTC:20220512T120000
DTSTAMP:20260403T190555Z
CREATED:20220513T073637Z
LAST-MODIFIED:20220513T073637Z
UID:5419-1652353200-1652356800@humancentered-ai.eu
SUMMARY:Privacy-Preserving Machine Learning. Perspectives from the Industry. Event by CeADAR.
DESCRIPTION:Privacy-Preserving Machine Learning (PPML) is the umbrella term used to describe Privacy Enhancing Technologies (PETs) that can protect individuals’ data privacy in data analysis. With vast amounts of data being collected from online and offline resources\, significant challenges in preserving privacy have emerged. Both industry and academia are trying to catch up with these technologies\, and great work is being done. \nJoin us on May 12\, 2022 (Thursday) at 11:00 CET for a discussion about PPML with some industry experts to review some of the progress achieved to date by these companies. The panel will consist of IBM talking about their seminal work on differential privacy\, Oblivious.AI talking about secure enclaves\, and Inpher.io discussing their secret computing PET. Following a brief description of these PETs\, we will have a short panel discussion\, then we will open up the floor for the audience to ask questions. \nPanellists: \n\nDr. Naoise Holohan\, IBM Research\nDr. Jack Fitzsimons\, Oblivious AI\nManuel Capel\, Inpher\n\nFor registration\, click here. This event is organized by HCAIM partner CeADAR – Ireland’s Centre for Applied AI.
URL:https://humancentered-ai.eu/event/privacy-preserving-machine-learning-perspectives-from-the-industry-event-by-ceadar/
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/05/CeADAR-PPML-Industry-Perspectives.jpg
ORGANIZER;CN="CeADAR":MAILTO:ceadar@ucd.ie
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220428T150000
DTEND;TZID=Europe/Paris:20220428T160000
DTSTAMP:20260403T190555Z
CREATED:20220427T194124Z
LAST-MODIFIED:20220513T080537Z
UID:5330-1651158000-1651161600@humancentered-ai.eu
SUMMARY:HCAIM Webinar: Silent Speech Interfaces & HCI / ML aspects
DESCRIPTION:On Thursday\, April 28\, 2022\, at 15:00 CET\, we will be having a live session with an academic partner from Hungary\, Dr Tamás Gábor Csapó. \nDr Gábor Csapó obtained his PhD in computer science\, speech technology and machine learning from the Budapest University of Technology and Economics (BME)\, Hungary\, in 2014. He was a Fulbright scholar at Indiana University in the USA in 2014\, where he started to deal with ultrasound imaging of the tongue. In 2016\, he joined the MTA-ELTE Lingual Articulation Research Group\, focusing on investigating Hungarian articulation during speech production. Since 2017\, he has had two national research projects about ultrasound-based articulatory-to-acoustic mapping and articulatory-to-acoustic inversion\, both of them applying deep learning methods. He regularly cooperates with international researchers and has co-authors from the USA\, Canada\, Colombia\, China\, and several EU countries. His research interests include Silent Speech Interfaces\, speech analysis and synthesis\, ultrasound-based tongue movement analysis\, and deep learning methods applied to speech technologies. Currently\, he is a research fellow at BME. \nSilent Speech Interfaces (SSI) are a revolutionary field of speech technologies\, with the main idea of recording the articulatory movement and automatically generating speech from the movement information\, while the original subject is not producing any sound. This research area\, also known as articulatory-to-acoustic mapping (AAM)\, has a large potential impact in a number of domains\, and might be highly useful for the speaking impaired (e.g.\, after laryngectomy) and for scenarios where regular speech is not feasible but the information should be transmitted from the speaker (e.g.\, extremely noisy environments or military applications). Voice assistants are getting popular lately\, but they are still not in every home. 
One of the reasons is privacy concerns: some people do not feel comfortable if they have to speak out loud with others around\, but SSI equipment can be a solution for that. \nThere are two distinct ways of building SSI solutions\, namely ‘direct synthesis’ and ‘recognition-and-synthesis’. In the first case\, the speech signal is generated without an intermediate step\, directly from the articulatory data. In the second case\, silent speech recognition (SSR) is applied to the biosignal\, which extracts the content spoken by the person (i.e.\, the result of this step is text)\; this step is then followed by text-to-speech (TTS) synthesis. In the SSR+TTS approach\, any information related to speech prosody (intonation and durations) is lost\, whereas it may be kept with direct synthesis. In addition\, the smaller delay of the direct synthesis approach might enable conversational use\; therefore\, we are following this approach in our project. \nTo fulfil the above goals\, we formed a multidisciplinary team with expert senior researchers in speech synthesis\, recognition\, deep learning\, and articulatory data acquisition. As human biosignals\, 2D ultrasound\, lip video and magnetic resonance imaging were used to image the motion of the speech organs. In our experiments\, we used standard deep learning approaches (convolutional and recurrent neural networks\, autoencoders) and high-potential novel machine learning methods (adversarial training\, neural vocoders and cross-speaker experiments). When designing ML/DL approaches\, it is not enough to test the system with objective measures (e.g.\, validation loss)\; it is also important to keep in mind the human aspects. Therefore\, after each deep learning experiment\, we evaluated the resulting synthesized speech samples in subjective listening tests with potential users. 
Such an SSI system\, able to convert the silent articulation of any person into fully natural audible speech\, is not yet available\, but we have made significant progress towards practical prototypes. \nUntil now\, numerous Hungarian and international BSc/MSc/PhD students of BME have been involved in the above project\, as part of their project laboratory\, thesis work\, internship or individual research project. We invite those students taking part in the Human-Centred Artificial Intelligence Master’s Programme to get involved with Silent Speech Interfaces! \nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders\, including our project partners from universities\, research labs\, industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-silent-speech-interfaces/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/04/Silent-Speech-Interfaces.jpg
ORGANIZER;CN="HCAIM":MAILTO:info@humancentered-ai.eu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220414T150000
DTEND;TZID=Europe/Paris:20220414T160000
DTSTAMP:20260403T190555Z
CREATED:20220401T030538Z
LAST-MODIFIED:20220419T032524Z
UID:5018-1649948400-1649952000@humancentered-ai.eu
SUMMARY:HCAIM Webinar: Security and Privacy in Machine Learning
DESCRIPTION:On Thursday\, April 14\, 2022\, at 15:00 CET\, we will be having a live session with an academic partner from the Budapest University of Technology and Economics (BME)\, Dr Gergely Ács\, who received his M.Sc. and PhD degrees in Computer Science. \n\nDr Ács conducted research in the Laboratory of Cryptography and System Security (CrySyS). Currently\, he is an associate professor at BME in Hungary. Before that\, he was a post-doc and then a research engineer in the Privatics team at INRIA in France. His general research interests include data privacy and security\, as well as machine learning in this context. \n\nSecurity and privacy play an indispensable role in building trust in any information system\, and AI is no exception. If a machine learning model is insecure or leaks private/confidential information\, companies will be reluctant to use such models\, which eventually hinders AI and human development. Indeed\, it has already been demonstrated that sensitive training data can be extracted from trained machine learning models\, or their training data can be poisoned in order to misclassify specific samples as well as to prolong training. Moreover\, imperceptible modifications to the input data\, called adversarial examples\, can fool AI and cause misclassifications\, potentially leading to life-threatening situations. \n\nThese are not far-fetched scenarios\; stop signs with specially crafted adversarial stickers on them can be recognized as yield signs by self-driving cars\, individuals with a pair of glasses can be recognized as a different person by a face recognition system\, or leaking the involvement of a patient in the training data of a model predicting cancer prognosis can indicate that the patient has cancer. Trustworthy machine learning is also mandated by regulations (such as the GDPR)\, whose violations could result in hefty fines for a company. 
Therefore\, there is a great demand for experts who can audit the privacy and security risks of machine learning models\, thereby also demonstrating compliance with different AI and privacy regulations. \n\nIn this talk\, Dr Ács will review the main security and privacy risks of machine learning models following the CIA (Confidentiality\, Integrity\, Availability) triad. He will demonstrate these issues on real applications\, including malware detection\, drug discovery\, and synthetic data generation for the purpose of anonymization. \n\nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders\, including our project partners from universities\, research labs\, industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-security-and-privacy-in-machine-learning/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/Security-and-Privacy-in-ML.jpg
ORGANIZER;CN="HCAIM":MAILTO:info@humancentered-ai.eu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220325T160000
DTEND;TZID=Europe/Paris:20220325T170000
DTSTAMP:20260403T190555Z
CREATED:20220315T031529Z
LAST-MODIFIED:20220419T032533Z
UID:5023-1648224000-1648227600@humancentered-ai.eu
SUMMARY:HCAIM Webinar: The Age of the Cyborgs Has Arrived
DESCRIPTION:On Friday\, March 25\, 2022\, at 16:00 CET\, we will be having a live session with an industry partner from Citel Group S.r.l.\, Alessandro Barducci. \n\nMr Barducci (born 1967) is currently R&D Manager for Citel Group S.r.l. He began as a programmer in the 80s\, when he also proposed an expert system to help in the diagnosis of coronary diseases. In this period\, he also developed a simple NLP software tool that converted natural language into SQL queries to search for patients with given parameters (age\, max & min blood pressure\, etc.). He later turned to the aerospace industry\, working as a developer and then as a project manager in air traffic control\, satellite communication and ground control software. \n\nAt the end of the 90s\, he started focusing also on the social and philosophical aspects of IT\, particularly on themes such as media philosophy\, cyborgs\, social control and privacy\, and other IT-related social and political issues. He then widened his professional experience working in the telecom\, automotive\, insurance and PA sectors. He has written over 40 articles for several magazines and newspapers\, mostly about the social impact of the IT revolution and related philosophical and cultural issues. In 2009\, he participated in the XVIII “Juan Comas” Physical Anthropology Congress in Mérida\, Yucatán\, with a paper about cyborgs. \n\nNowadays\, we are facing a growing interconnection between machines and human bodies. The century of cyborgs has begun. This transformation has ethical\, political\, social and economic implications. Although the HCAIM Master’s cannot cover all aspects of this transformation\, a basic grasp is essential to enable students to cope with AI-related ethical issues and even to find better solutions for the AI ecosystem. \n\nStarting from the last decades of the past century\, we have witnessed a growing interaction and integration between various types of machines (particularly computers) and the human body. 
We can obtain a coarse measure of the growth of this interaction by estimating the impact of a “digital amputation” in different epochs\, that is\, the impact of losing all the digital information we personally own (excluding the information held by public administration\, banks\, etc.). Data integration was often coupled with physical integration: \n\n\n\n\nBefore the ’80s\, computers were refrigerator-sized devices found in some big companies and research labs.\nIn the ‘80s\, personal computers brought such devices to many workplace desktops\, and even into some homes.\nIn the ’90s\, laptop computers could follow you while travelling\, or back and forth between work and home.\nAlso from the ’90s\, mobile phones slowly evolved into mobile\, connected computers that you constantly carry with you everywhere. Nowadays\, most people would feel totally lost\, or at least in distress\, without their mobile.\nAt the end of the ’90s came the first working devices directly connected to our brains (electronic vision to relieve some types of blindness).\n\n\n\n\nAn essential element of this transformation is the explosive growth of the internet (starting from the mid-90s). Now our devices are constantly connected\, and the internet is sometimes considered an essential service just like water\, electricity\, gas and the telephone. Notably\, we usually say that “we” are connected: devices are just tools that allow “us” to get connected. In fact\, modern devices are often nearly useless without an internet connection. We use the internet to work\, study\, buy things\, meet friends or partners… \n\n\n\nWe can therefore summarize this technological transformation as follows: we are ever more integrated with electronic devices\, which are becoming an important part of our identity and of our body\, and we constantly interact in a global network that complements or gradually replaces our traditional working and personal interactions. 
\n\n\n\nBut “devices” does not mean only computers or mobile phones. We can have robotic prostheses\, nanomachines\, and even devices directly connected to our brains\, permanently attached to our bodies or even placed well inside them. \n\n\n\nIn the near future\, we may also have artificial memory\, and smartphones and computers directly connected to our brains. This growing integration raises some questions and concerns: \n\n\n\n\nSocioeconomic. What will happen when these devices are better than our natural organs (for example\, eyes or ears)? Who will have the opportunity to have a “better” body?\nPrivacy and security. Who will have legal access to our sight\, our memories\, our thoughts? How do you protect these devices from unauthorized access? What could be the impact of a nanomachine virus?\n\n\n\n\nOn the other hand\, AI is constantly improving\, and more and more decisions are delegated\, partly or totally\, to machines. We already have autonomous weapons\, i.e. weapons that decide by themselves who the foe is and whom they can kill. \n\n\n\nUsually\, people are not worried about machines that are stronger or faster than any human being. Maybe because we have been using such machines for centuries\, or maybe because\, even before machines\, we lived on a planet with many animals faster or stronger than us. Intelligence is a different matter. What if machines become cleverer than us? Would they rebel against their creator? This is the so-called “Frankenstein complex”. \n\n\n\nCyborgs could at least provide a solution for this last issue: instead of some kind of war between humans and machines\, we could witness a peaceful\, progressive integration. But who will control it\, who will enjoy its benefits\, and who will pay the costs? \n\n\n\nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. 
We will have more engaging discussions with top industry leaders including our project partners from Universities\, Research Labs\, Industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-the-age-of-the-cyborgs-has-arrived/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/The-Age-of-the-Cyborg.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220307T150000
DTEND;TZID=Europe/Paris:20220307T160000
DTSTAMP:20260403T190555
CREATED:20220307T031328Z
LAST-MODIFIED:20220419T032538Z
UID:5021-1646665200-1646668800@humancentered-ai.eu
SUMMARY:HCAIM Webinar: The European Approach Towards Reliable\, Safe\, and Trustworthy AI
DESCRIPTION:On Thursday\, March 17\, 2022\, at 15:00 CET\, we will be having a live session with the Director of the European Software Institute – Center Eastern Europe (ESI CEE)\, Dr George Sharkov. Following the EU Strategy for AI Development in Europe\, the High-Level Expert Group on AI (HLEG AI) published the “Ethics Guidelines for Trustworthy AI” in 2019 and proposed a human-centric approach to AI by defining a list of seven key requirements that AI systems must meet to be trustworthy. Then\, in 2020\, a few more deliverables were released that outlined the practical aspects of the legal basis\, ethical norms\, and technical robustness requirements\, such as the “Policy and Investment Recommendations for Trustworthy AI\,” the “Assessment List for Trustworthy AI” (ALTAI)\, a sectoral considerations report\, and so on. Other European Commission initiatives included a Communication on Building Trust in Human-Centric Artificial Intelligence\, a White Paper on AI\, and an updated Coordinated Plan on AI. Together\, these developed a novel\, risk-based approach to the development and deployment of AI-based systems in Europe\, which resulted in the AI Regulation proposal (of April 2021). \n\n\n\nTo address the challenges and newly specified criteria of the forthcoming legal and ethical framework\, preparatory work has begun on the industrial and technological components of AI/ML platforms\, which will grow into standards and specifications. The purpose is to speed up industrial and business implementations through specialized horizontal or sector-specific recommendations\, testing and conformity assessment procedures\, and\, where required\, certification. In this webinar\, we will present some of the current work under way at ETSI ISG SAI (Industry Specification Group “Securing AI”). In this standards work\, the three components of AI and security are: safeguarding AI from attack\, mitigating against malevolent AI\, and using AI for security. 
More information will be provided about previously published or ongoing studies\, including the Securing AI Problem Statement\, which covers data\, algorithms and models in training and deployment environments\, as well as challenges that differ from those of traditional SW/HW systems: \n\n\n\n\nMitigation Strategy Report. Known or potential mitigations for AI threats\, analysing their security capabilities\, advantages and suitable scenarios.\nData Supply Chain Report. Methods to source data for training AI\, plus regulations\, standards and protocols to ensure the traceability and integrity of data and its attributes\, and the confidentiality of information.\nSecurity Testing of AI (Specification/Standard GS SAI 003). Testing of ML components\, mutation testing\, differential testing\, adversarial testing\, test adequacy criteria\, adversarial robustness\, security test oracles.\nExplicability and Transparency of AI processing. Addressing issues arising from regulations\, ethics\, misuse and HCAI.\nPrivacy Aspects of AI/ML systems. Definitions\, multiple levels of trust affecting data\, attacks\, and mitigation techniques.\nTraceability of AI Models. Sharing and reusing models across tasks and industries\, and model verification.\n\n\n\n\nLast but not least\, we will examine the next steps for the implementation of the AI Act\, including the AI certification schemes being developed within ENISA’s AI working groups. \n\n\n\nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders including our project partners from Universities\, Research Labs\, Industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-the-european-approach-towards-reliable-safe-and-trustworthy-ai/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/Reliable-Safe-Trustworthy-AI.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220216T010000
DTEND;TZID=Europe/Paris:20220216T170000
DTSTAMP:20260403T190555
CREATED:20220206T031728Z
LAST-MODIFIED:20220419T032543Z
UID:5026-1644973200-1645030800@humancentered-ai.eu
SUMMARY:HCAIM Webinar: Data Quality for AI
DESCRIPTION:On Wednesday\, February 16\, 2022\, at 13:00 CET\, we will be having a live session with an industry partner from Nathean Technologies. Nathean was founded in Dublin in 2001 and currently employs 15 people. Nathean develops a modern\, web-based\, high-performance Analytics & Reporting platform with multi-sectoral customers in Ireland\, the UK\, Sweden\, the USA and Canada. Our technology helps manage and govern the wide variety of disparate and voluminous data that enterprises generate. The platform integrates with cloud-based AI platforms such as Microsoft Azure AI and IBM Watson. Nathean is a founding industry member of the Centre for Applied Data Analytics Research (CeADAR.ie) in Dublin. The Nathean way has always been about simplifying data analysis\, and this approach gives us our competitive advantage over large players in the market. Nathean is very focused on developing products that remove barriers to getting meaningful and actionable results to the right people at the right time. \n\n\n\nMaurice Lynch (CEO – Nathean) graduated with a B.Sc. (Computer Applications) from Dublin City University in 1991. He has held senior management and executive positions in the software industry over the past 30 years. Maurice has consulted for government bodies and blue-chip private sector clients in Ireland\, the UK\, the US and Australia. As CEO of Nathean Technologies\, Maurice drives the strategic direction of the company. In 2011\, Maurice completed the Leadership 4 Growth programme at Stanford University (Graduate School of Business). Maurice previously served on the steering board of the Centre for Applied Data Analytics Research in Dublin for five years. In this webinar\, presenters will discuss the importance of making data quality and governance a fundamental part of your AI strategy. 
Getting access to trusted data is essential both for the quality of outcomes from algorithms and for being able to trace the source and provenance of data\, which is very much part of Explainable AI – XAI (a topic covered in a recent HCAIM Webinar). From the raw source to the resulting information for an end user\, data goes through many transformations\, and curating the data at each step can be an onerous task. \n\n\n\nBy raising awareness of the various quality control models and segmenting the data acquisition process into clearly definable activities\, AI teams can work to build a data pipeline and put some form of governance in place to enable quality checking at each step. Today we will discuss\, at a high level\, some straightforward concepts that can be studied further in greater depth\, namely: Data Volumes\, Data Maturity\, Data Governance\, Data Provenance and Analytics. The goal here is to highlight the need to focus on data quality and governance from Day 1 as a first-class activity in your AI journey. \n\n\n\nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders including our project partners from Universities\, Research Labs\, Industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-data-quality-for-ai/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/Data-Quality-for-AI.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220127T130000
DTEND;TZID=Europe/Paris:20220127T140000
DTSTAMP:20260403T190555
CREATED:20220117T031941Z
LAST-MODIFIED:20220419T032119Z
UID:5029-1643288400-1643292000@humancentered-ai.eu
SUMMARY:HCAIM Webinar: Privacy Preserving Machine Learning (PPML) and Explainable AI (XAI)
DESCRIPTION:On Thursday\, January 27\, 2022\, at 13:00 CET\, we will be having a live session with three eminent researchers from CeADAR in Ireland.\n\n \n\nInder Preet is a Data Scientist at CeADAR\, Ireland’s National Centre for Applied Data Analytics and AI\, with 4 years of experience and formal training in Physics. He is currently leading a project on Privacy-Preserving Machine Learning and is also interested in Edge AI and its applications to robotics. Dr Alireza Dehghani is a Senior Research Fellow at CeADAR\, Ireland’s National Centre for Applied Data Analytics and AI. As an academic scientist and high-tech technologist\, he collaborates on a wide range of projects with CeADAR’s industry and academic partners in fields such as AI\, ML and NLP. Alireza has a background in the high-tech industry and academia\, with 10+ years of technical\, leadership\, research and teaching experience.\n\n \n\nDr Oisín Boydell is Principal Data Scientist and Head of the Applied Research Group at CeADAR\, Ireland’s Centre for Applied AI at University College Dublin. His primary research interests include deep learning and machine learning\, real-time analytics and blockchain technology. After working as a software developer in the UK\, Oisín returned to UCD to undertake a PhD in Computer Science\, researching novel approaches for personalized information retrieval. Prior to joining CeADAR\, he worked for a number of years in the research and innovation team at ChangingWorlds\, where he developed big data analytics and machine learning solutions for the telecommunications industry. 
At CeADAR\, Oisín leads industry-focussed research projects in collaboration with industry partners across a broad range of technology areas including machine learning\, deep learning\, explainable AI\, blockchain and NLP.\n\n \n\nIn this webinar\, presenters will discuss how the wide adoption of AI\, together with an increasing understanding of the need for a more human-centred approach\, has spurred a renewed interest in tools and approaches that can support these human-focussed aspects. Whilst AI researchers have always been interested in data privacy\, algorithmic bias\, the explainability of machine-made decisions and so on\, these fields are more salient now than ever\, particularly as machine learning algorithms become increasingly complex and opaque.\n\n \n\nThat is where Privacy-Preserving Machine Learning (PPML) comes into the picture: an effort by the research community to build privacy into ML algorithms. It is a loosely defined term encompassing many technologies that can be used for privacy protection\, such as homomorphic encryption\, secure multi-party computation\, differential privacy\, etc. These technologies have yielded positive results so far\, and many companies\, including giants such as Microsoft and Apple\, have adopted them. In addition\, many young companies are trying to bring the research done in this area to market.\n\n \n\nMachine Learning (ML) has been shown to cater to a wide range of problems\, but most ML algorithms are data-hungry. The widespread adoption of ML has therefore also led to massive surveillance\, with data collected ubiquitously. In these circumstances\, an individual’s data is paramount to their privacy\, and many regulations\, such as the GDPR\, have come into being to protect it. But beyond regulation\, the ML community also shoulders the moral responsibility of privacy protection.\n\n \n\nExplainable AI (XAI) is another area that is receiving a lot of attention. 
With the increasing complexity of machine learning algorithms that leverage massive\, highly heterogeneous datasets\, there is a need for humans to be able to interpret and understand the decisions being made by these AI systems.\n\n \n\nTarry will moderate a session with Alireza Dehghani\, Inder Preet and Oisín Boydell from CeADAR\, who will discuss two technology areas that have recently received a lot of attention\, Privacy-Preserving Machine Learning (PPML) and Explainable AI (XAI)\, both of which are relevant for practitioners developing human-centred AI applications and solutions.\n\n \n\nSome of the points that will be discussed are:\n\n \n\n 	What is PPML and how is it different from general ML?\n 	PPML activities and trends in companies and startups.\n 	The career roadmap for an HCAIM graduate to be hired by these companies and work on PPML.\n 	What skill sets does an HCAIM graduate need to work in the PPML field?\n 	How graduates of degrees such as HCAIM could help companies build a PPML team\, and what a PPML job specification might look like for a company looking to hire engineers.\n 	Possible PhD and research paths for graduates of the HCAIM degree.\n 	What is explainable AI (XAI) and why is it relevant in the context of human-centred AI? What are the challenges in making AI decisions explainable?\n 	General-purpose vs model/algorithm-specific explainability\, and different types of data.\n 	How do multinationals like Google\, Apple and Microsoft ensure the privacy of the data collected on their platforms?\n\n \n\nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders including our project partners from Universities\, Research Labs\, Industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-privacy-preserving-machine-learning-ppml-and-explainable-ai-xai/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/PPML-and-XAI.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20211125T130000
DTEND;TZID=UTC:20211125T140000
DTSTAMP:20260403T190555
CREATED:20211115T032234Z
LAST-MODIFIED:20220419T032509Z
UID:5033-1637845200-1637848800@humancentered-ai.eu
SUMMARY:HCAIM Webinar: Trustworthy AI. Provability\, Accountability\, Understandability
DESCRIPTION:On Thursday\, November 25\, 2021\, at 13:00 CET\, we will be having a live session with two eminent academics from Budapest University of Technology and Economics (BME): Péter ANTAL\, PhD\, an associate professor\, head of the Artificial Intelligence Group and head of the Computational Biology Laboratory at the Department of Measurement and Information Systems – Faculty of Electrical Engineering and Informatics (VIK)\, and Mihály HÉDER\, PhD\, habil.\, an associate professor and head of department at the Dept. for Philosophy and History of Science (FTT) – Faculty of Economic and Social Sciences (GTK). \n\n\n\nA constant urge to create trustable tools and cooperation is part of the human condition. It is little wonder\, then\, that we strive to maintain trust in\, and intellectual control over\, the machines we create. But given their growing complexity and deep embedding in human society\, is this realistic\, or is it an example of the vanity of human wishes? How do we justify the double standard between the explainability of decisions made by humans and by AIs? We inspect a three-tier approach to trustworthy AI: \n\n\n\n\nA mathematical approach based on logical and probabilistic provability\,\nA legal approach using accountability and responsibility\, and\nAn ethical engineering approach aiming for transparency and understandability throughout development. We discuss their role in ethics guidelines to support comprehensive workflows for nurturing human-compatible\, trustworthy AI systems.\n\n\n\n\nTarry Singh (CEO\, deepkapha AI Lab & Real AI B.V.) will be moderating this webinar. We will be covering many exciting topics pertaining to human-centred AI throughout 2021 – 2022\, until the definitive launch of the Human-Centred AI Master’s in fall 2022. \n\n\n\nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. 
We will have more engaging discussions with top industry leaders including our project partners from Universities\, Research Labs\, Industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-trustworthy-ai-provability-accountability-understandability/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/Provability-Accountability-in-AI.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20211021T130000
DTEND;TZID=Europe/Paris:20211021T140000
DTSTAMP:20260403T190555
CREATED:20211011T032645Z
LAST-MODIFIED:20220419T032927Z
UID:5036-1634821200-1634824800@humancentered-ai.eu
SUMMARY:HCAIM Webinar: Can Ethics and AI work together?
DESCRIPTION:On October 21\, 2021\, at 13:00 CET\, we will be having a live session with two eminent academics from the University of Naples Federico II: Prof. Carlo Sansone\, a full professor of Computer Science and Engineering\, and Prof. Guglielmo Tamburrini\, a full professor of Philosophy of Science and Technology. AI is a broad\, multi-disciplinary field encompassing engineering\, mathematics\, computer science and\, now\, societal awareness. \n\n\n\nWe will discuss the technological advancements of AI – and more particularly the topic of ethics and IT\, especially with the emergence of AI. Can they work in harmony with each other? \n\n\n\nThe live discussion will cover the following topics\, among others: \n\n\n\n\nHow can we go about explaining the need for trustworthy\, ethical\, and technically robust AI to a person outside the field of technology?\nHow would you explain it to a person currently engaged in the development of AI-based solutions?\nCan ethics and AI work together? It is a common argument that the standardization of ethical AI will impede innovation. After all\, we are all in favour of technological innovation\, but how do we pursue it in the most reliable and sustainable way?\nFinally\, how do you intend to tackle this challenge in the context of this AI Master’s programme? How do you intend to develop professionals who are better informed about these issues?\nWhat are the Top Societal Competences for an Artificial Intelligence Professional: The HCAIM Industry Correspondents Speak Up\n\n\n\n\n\nExcerpt from the HCAIM Needs and Market Analysis\, August 2021\n\n\n\n\nTarry Singh (CEO\, deepkapha AI Lab & Real AI B.V.) will be hosting this webinar series. We will be covering many exciting topics pertaining to human-centred AI throughout 2021 – 2022\, until the definitive launch of the Human-Centred AI Master’s in fall 2022. \n\n\n\nAll sessions will run live and will be hosted on LinkedIn Live. 
You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders including our project partners from Universities\, Research Labs\, Industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/hcaim-webinar-can-ethics-and-ai-work-together/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/Ethics-and-Technology.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20211007T130000
DTEND;TZID=Europe/Paris:20211007T140000
DTSTAMP:20260403T190555
CREATED:20211001T030116Z
LAST-MODIFIED:20220419T032620Z
UID:5014-1601557200-1601560800@humancentered-ai.eu
SUMMARY:Announcing our first episode of the HCAIM Webinar Series! The Human Debt of AI – Can We Put the Human Back in the Loop with AI?
DESCRIPTION:We are super excited to announce the commencement of our HCAIM Webinar Series. In this multi-part webinar series\, we will be talking with various industry leaders\, academics\, AI experts\, ethicists\, policymakers and social scientists\, covering a variety of topics related to human-centred AI. \nWe will discuss the technological advancements of AI – from the automation of mundane and repetitive tasks to breakthrough discoveries in genetics\, materials science and the prediction of climate events. We will also discuss the Human-Centred AI Master’s Programme that we are developing – how it is constructed and what it aims to equip learners with so they can lead responsibly in this AI-pervasive era. \nOn October 7\, 2021\, at 13:00 CET\, we will be launching our very first webinar\, where Tarry Singh (CEO\, deepkapha AI Lab & Real AI B.V.) will be speaking with Dr Stefan Leijnen\, Professor of the Artificial Intelligence research group at HU University of Applied Sciences\, Utrecht. \nThey will discuss how the HCAIM project came into existence\, why it is needed\, when it is launching and\, most importantly\, how this European HCAI Master’s programme will be different from the rest. They will also probe into the following topics: \nThe Human Debt of AI – Can We Put the Human Back in the Loop with AI? \nThere is a lot of talk about the technical debt of information systems as more systems penetrate enterprises\, but there is no serious discussion of the cost of this pervasive technology to the well-being of humans – meaning us! \nThere is no industry in the world today that does not have humans and technology working in tandem\, but the pertinent questions remain: Who is in control? Who or what is gaining control? And how can we ensure that we bring back human ingenuity that works responsibly with machines and ensures that machine decisions originate from systems that are ethical\, explainable and interpretable by design? 
\nWho can steer such a complex change inside an organization? What skills must they possess? \nWhile many of the current courses in the field of AI still focus on the academic and technological aspects of artificial intelligence\, a new approach is needed where\, in addition to these aspects\, there is also a clear focus on the human and ethical side of AI. \nFor instance\, a real AI architect knows\, or must know\, as much about the technical design of AI as about identifying poor architectural choices – especially when it comes to AI algorithms and models that must survive not only the test of model relevance (whether model drift or model shift)\, but must also be able to answer sensitive questions such as: “Is this model objective enough not to have\, say\, “gender” as a weighing factor?” or “What are the risks/implications of transferring such a model from a small population to large subsets?” \n\nExcerpt: Human-Centered AI Masters Curriculum Vision\, HU Utrecht\, The Netherlands \nThis and a lot more will be discussed in 2021 – 2022\, until the definitive launch of the Human-Centred AI Master’s Programme in fall 2022. \nAll sessions will run live and will be hosted on LinkedIn Live. You can view the recorded sessions at our Webinars Archive. We will have more engaging discussions with top industry leaders including our project partners from Universities\, Research Labs\, Industry parties and others. A complete list of all project partners can be found here. View the live event here.
URL:https://humancentered-ai.eu/event/announcing-our-first-episode-of-the-hcaim-webinar-series-the-human-debt-of-ai-can-we-put-the-human-back-in-the-loop-with-ai/
LOCATION:LinkedIn Live
CATEGORIES:Webinars
ATTACH;FMTTYPE=image/jpeg:https://humancentered-ai.eu/wp-content/uploads/2022/03/pexels-vlada-karpovich-4050297-scaled.jpg
END:VEVENT
END:VCALENDAR