Programme Contents

In our aim to create a Human-Centred AI Master’s programme, the HCAIM Consortium follows the definition of the AI HLEG (the European Commission’s High-Level Expert Group on Artificial Intelligence): “The human-centric approach to AI strives to ensure that human values are central to how AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights.”

To meet the requirements of this definition, the programme covers the technical, ethical and practical elements of artificial intelligence. We have designed our content around the three phases of the MLOps lifecycle – the development, deployment and maintenance of machine learning models – resulting in three core modules that align with these phases: Modelling (Module A), Deployment (Module B), and Evaluation (Module C). We have added a fourth module, Graduation (Module D), to enable students to show that they can independently solve challenges proposed by industry, based on current needs and requirements in the field of human-centred artificial intelligence. To support students in this, the lesson plan Research in Practice runs in parallel with the other modules and guides students through their thesis process from the start of the programme.

The structure of the programme is visualised in the table below.

 

           | Module A            | Module B                        | Module C                        | Module D
Technical  | Foundations of AI   | Advanced AI: Deep Learning      | Future AI                       | Master Thesis Project
Practical  | AI Modelling        | AI in Action: Organisational AI | Socially Responsible AI         | Master Thesis Project
Ethical    | Ethics Fundamentals | Trustworthy AI                  | Compliance, Legality & Humanity | Master Thesis Project

This page contains all the Learning Events that make up the Human-Centred Artificial Intelligence Master’s programme. All Learning Events, including their accompanying study material, will be made available in English on the HCAIM website and can be translated into any of the EU languages using the eTranslation tool of the European Union. For the purpose of this preview, one Learning Event of Module A, one Learning Event of Module B, two Learning Events of Module C, the Guidelines for the Thesis and two examples of thesis topics are already fully available in English on this preview page, and these too can be translated into any of the EU languages using the eTranslation tool.

In addition, this page and the Module A Learning Event ‘Lecture: Introduction to Human-Centered AI’ are already available in all EU languages through this preview page. To see the translations of these parts of the HCAIM programme, please select any of the individual languages at the bottom of this page.

All the materials are available under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (CC BY-NC-ND 4.0).

Modelling (Module A)

The first module, namely Modelling (Module A), focuses on the first phase of the MLOps lifecycle and is related to the lowest maturity level of the application of Machine Learning (ML) in organizations: modelling data. It includes the activities that form the basis of the application of ML, such as data extraction, data analysis, data preparation, model training and (mainly manual) model validation and evaluation.
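
To make this concrete, the sketch below shows what one such (mainly manual) modelling iteration could look like in Python with scikit-learn. The dataset, column names and choice of model are illustrative assumptions only and are not part of the programme material.

```python
# Illustrative sketch of one manual modelling iteration: data extraction,
# preparation, model training and hands-on evaluation.
# 'customer_data.csv' and the 'churned' target column are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report

# Data extraction and preparation (assuming purely numeric features).
df = pd.read_csv("customer_data.csv").dropna()
X = df.drop(columns=["churned"])
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Model training on scaled features.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
model = LogisticRegression(max_iter=1000)
model.fit(X_train_scaled, y_train)

# Manual validation and evaluation: inspect the metrics by hand.
predictions = model.predict(X_test_scaled)
print("accuracy:", accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
```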

In this phase, the focus is on correctly analysing and modelling the data to achieve the business objectives, and little use is made of automation (e.g. CI/CD), which is only added in the second phase of MLOps (Deployment – Module B). The modelling activities are often characterised by the manual, script-driven and interactive way in which the data analysis, preparation, model training and validation are carried out. To maintain an overview of the different models, parameters and choices being experimented with, experiment tracking is used.
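
A minimal, tool-agnostic illustration of such experiment tracking is to append each run’s parameters and scores to a log file; dedicated tracking tools provide the same idea with more structure. The file name and fields below are assumptions for illustration.

```python
# Illustrative sketch of lightweight experiment tracking: append each run's
# settings and results to a JSON-lines file so runs remain comparable.
import json
import time

def log_experiment(params: dict, metrics: dict, path: str = "experiments.jsonl") -> None:
    """Append one experiment record (parameters and metrics) to a log file."""
    record = {"timestamp": time.time(), "params": params, "metrics": metrics}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage after a training run (values are placeholders).
log_experiment(
    params={"model": "LogisticRegression", "max_iter": 1000, "test_size": 0.2},
    metrics={"accuracy": 0.87},
)
```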

From an ethical perspective, it is important in the modelling phase to devote sufficient time and attention to finding out the client’s objectives, mapping the stakeholders and exploring how the individual values of these stakeholders are affected (and recognising possible conflicts between them). Aspects such as transparency, inclusion, security and privacy are of great importance here. Naturally, attention must also be paid to the social and moral desirability of the client’s objectives. In addition, it is important to become aware of possible biases in the available data in good time, to recognise their possible consequences, and to find mitigations to deal with them.
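
As a small illustration of such a bias check, the sketch below compares how often a positive outcome occurs per group of a sensitive attribute (a simple demographic-parity style comparison). The dataset and column names are hypothetical.

```python
# Illustrative sketch of a simple bias check: compare positive-outcome rates
# across groups of a sensitive attribute. Dataset and columns are hypothetical,
# and 'approved' is assumed to be a binary (0/1) outcome.
import pandas as pd

def positive_rate_per_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the share of positive outcomes for each group."""
    return df.groupby(group_col)[outcome_col].mean()

df = pd.read_csv("loan_applications.csv")
rates = positive_rate_per_group(df, group_col="gender", outcome_col="approved")
print(rates)
print("largest gap between groups:", rates.max() - rates.min())
```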

Learning Outcomes

  • Learning Outcome 1: The student evaluates various ML techniques to make a well-founded choice that matches the customer’s requirements, and implements a prototype of the chosen ML technique to advise on solving a given data modelling problem.

  • Learning Outcome 2: The student argues, using fundamental ethical frameworks, how moral dilemmas can be solved, and evaluates the possible consequences of existing biases in data and the influence of designed mitigations to counteract those consequences.

  • Learning Outcome 3: The student applies quantitative and qualitative research methods to scientifically substantiate the choices made during the ethical considerations and the making of the prototype.

Lesson Plans for Module A (Modelling)

General AI

  • Lecture: Historical Introduction to Scientific Explanation Models
  • Lecture: Understanding Data

Data Exploration for Machine Learning

  • Tutorial: Understanding Data
  • Lecture: Exploratory Data Analysis II
  • Tutorial: Exploratory Data Analysis
  • Lecture: Inference and Generalisation
  • Tutorial: Inference and Generalisation

Machine Learning Fundamentals

  • Lecture: Model Evaluation
  • Tutorial: Model Evaluation
  • Lecture: Model Fitting and Optimization
  • Practical: Model Fitting and Optimization

Decision Theory

  • Lecture: Decision Theory
  • Tutorial: Decision Theory
  • Lecture: Decision Networks
  • Tutorial: Decision Networks

Data Science

  • Lecture: The Data Analysis Process
  • Lab session: Platforms
  • Lecture: Data Preparation and Exploration
  • Lab session: Data Preparation and Exploration

Supervised Machine Learning

  • Lecture: Linear Regression
  • Lab session: Linear Regression
  • Lecture: Decision Trees
  • Lab session: Decision Trees
  • Lecture: SVMs and Kernels
  • Lab session: SVMs and Kernels
  • Lecture: Neural Networks

Unsupervised Machine Learning

  • Lecture: Unsupervised Learning
  • Lab session: Unsupervised Learning

ML applications

  • Lecture: Natural Language Processing
  • Lab session: Natural Language Processing

General Ethics

Ethical Frameworks

  • Interactive session: Ethical Frameworks
  • Lecture: Utilitarianism
  • Interactive session: Utilitarianism
  • Lecture: Virtue Ethics
  • Interactive session: Virtue Ethics
  • Lecture: Duty Ethics
  • Interactive session: Duty Ethics
  • Lecture: Theory of Justice

Advanced Ethics

  • Lecture: Social Contract Theories
  • Lecture: Principles of Justice

Applied Ethics

  • Lecture: Value-Sensitive Design
  • Interactive session: Value-sensitive Design
  • Lecture: Privacy
  • Lecture: Ethics of Decision-Support Systems
  • Lecture: Decision-making and (cognitive) biases

Deployment (Module B)

The Deployment module (Module B) focuses on the second phase of the MLOps lifecycle: deployment. After the exploratory modelling phase (see Module A – Modelling) comes the integration of the ML solution into the business systems. It is now important to start thinking about the ML architecture and how it interacts with the existing (legacy) systems. To gain real benefit from automated ML solutions, pipelines need to be introduced: on the one hand, to be able to deal with continuous and live data supplies (stream processing), and on the other hand, to link the results of the ML model to other systems.
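
As a minimal, framework-agnostic illustration of such a pipeline step, the sketch below wraps a previously trained model in a small function that consumes a stream of incoming records, scores them, and hands the results on to another system. The record layout, file path and downstream handler are assumptions for illustration.

```python
# Illustrative sketch of a streaming prediction step: consume records,
# score them with a trained model, and pass the results to another system.
from typing import Iterable, Iterator

import joblib
import pandas as pd

# A previously trained and serialised model (hypothetical path).
model = joblib.load("model.joblib")

def predict_stream(records: Iterable[dict]) -> Iterator[dict]:
    """Yield each incoming record enriched with the model's prediction."""
    for record in records:
        features = pd.DataFrame([record["features"]])
        record["prediction"] = model.predict(features)[0]
        yield record

def forward_to_downstream(record: dict) -> None:
    """Placeholder for linking results to other business systems (e.g. a queue or API)."""
    print("forwarding:", record)

# Example usage with a small in-memory 'stream' of records.
incoming = [
    {"id": 1, "features": {"age": 34, "income": 52000}},
    {"id": 2, "features": {"age": 51, "income": 61000}},
]
for scored in predict_stream(incoming):
    forward_to_downstream(scored)
```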

Moreover, Module B increases the complexity of the AI techniques used by moving towards neural networks and deep learning. A major advantage of these more complex models is that they are more flexible and versatile than the techniques introduced in Module A – Modelling. However, an important disadvantage is that they are more complex (to understand and configure) and more opaque. Therein lies an important ethical dilemma in the use of (advanced) AI techniques: how do you still understand what the AI solution computes and whether this is done in the right way? Making the deployment of AI solutions more transparent, and being able to determine and mitigate the possible risks, are important (social) themes in this module.
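
One widely used way to gain such insight is to inspect which input features a trained model actually relies on. The sketch below does this with scikit-learn’s permutation importance, assuming a fitted model and a held-out test set are already available; it is one example of an XAI-style technique, not the only option.

```python
# Illustrative sketch of a model-agnostic transparency check: permutation
# feature importance estimates how much each feature contributes to the
# model's performance. 'model', 'X_test' and 'y_test' are assumed to exist
# from an earlier training step, with X_test as a pandas DataFrame.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
for name, mean, std in zip(X_test.columns, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```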

Learning Outcomes

  • Learning Outcome 1: The student assesses the possible choices for the integration of an advanced AI technique, such as Deep and/or Reinforcement Learning, and writes a one-page report based on a prototype developed with due regard for the limitations of, and influences on, the customer’s existing ICT systems and data facilities, as established in collaboration with, for example, ICT architects or developers.

  • Learning Outcome 2: The student assesses the potential risks involved, tests the degree of transparency (including interpretability, reproducibility and explainability) of a chosen AI/ML implementation, and designs solutions using techniques that increase insight and transparency among stakeholders (so-called Explainable AI (XAI) techniques) to remedy shortcomings with respect to the societal and customer-specific requirements.

  • Learning Outcome 3: The student formulates a research design for a scientifically sound (practice-oriented) research project related to a company case by formulating a relevant, consistent and functional research question, considering the applied research methods to be used, and establishing a precise, relevant and critical theoretical framework.

Lesson Plans for Module B (Deployment)

Fundamentals of Deep Learning

  • Lecture: Fundamentals of deep learning
  • Tutorial: Fundamentals of deep learning
  • Practical: Fundamentals of deep learning

Optimization of Deep Learning

  • Lecture: Regularization
  • Tutorial: Regularization
  • Lecture: Batch processing
  • Tutorial: Batch processing

Applications of Deep Learning

  • Lecture: Building computational graphs, modern architectures
  • Lecture: Convolutional Neural Networks
  • Tutorial: Convolutional Neural Networks
  • Practical: Convolutional Neural Networks
  • Lecture: Recurrent Neural Networks
  • Lecture: Transformer networks
  • Tutorial: CNNs and Transformers for images
  • Lecture: Hardware and software frameworks for deep learning

MLOps

  • Lecture: ML-Ops
  • Tutorial: ML-Ops
  • Practical: ML-Ops
  • Lecture: ML-Ops Lifecycle
  • Practical: ML-Ops Lifecycle

Deployment of AI

  • Lecture: Application technology
  • Practical: Application technology
  • Tutorial: Data architecture
  • Interactive session: Data architecture
  • Practical: Hadoop-based technologies

Quality of Development & Deployment

  • Lecture: CI/CD
  • Tutorial: CI/CD

General Explainable AI

  • Lecture: Introduction to General Explainable AI
  • Lecture: Explainable AI for end-users
  • Practical: Practice with XAI models 1
  • Practical: Practice with XAI models 2
  • Lecture: Cutting-edge XAI developments

Privacy

  • Lecture: Introduction to privacy and risk
  • Interactive session: Perspectives on privacy
  • Practical: Auditing frameworks of privacy and data protection
  • Lecture: Privacy and machine learning
  • Practical: Applying and evaluating privacy-preserving techniques

Security and robustness

  • Lecture: Security and robustness
  • Practical: Apply auditing frameworks
  • Practical: Enhancing ML security and robustness

Risk

  • Lecture: Risk & Risk mitigation
  • Interactive session: Risk & Risk mitigation
  • Practical: Risk & Risk mitigation

Evaluation (Module C)

The Evaluation module (Module C) focuses on the evaluation aspects of AI development. It covers the societal aspects of an AI product and develops an appreciation of the potential future directions AI may take, looking at technology trends, socially responsible AI and compliance, while ensuring that the human element is ever-present in the design, development and evaluation of AI systems.

As part of the future of AI, the module explores the level of AI adoption in different industries and how AI is adapted for different domains. Socially responsible AI covers how AI affects individuals and different groups in society. A crucial part of the module focuses on laws, policies and codes of conduct related to AI (with an emphasis on issues such as explainability and trust), as well as on quality control and quality management processes for evaluating the results of AI initiatives.
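
As a small, tool-agnostic illustration of such a quality control step, the sketch below checks a model’s evaluation results against agreed thresholds before sign-off. The metric names and threshold values are assumptions for illustration, not requirements prescribed by the programme.

```python
# Illustrative sketch of a quality gate: release an AI solution only if its
# evaluation results meet agreed thresholds (metrics and values are assumed).
REQUIREMENTS = {"min_accuracy": 0.85, "max_group_gap": 0.05}

def passes_quality_gate(metrics: dict, requirements: dict = REQUIREMENTS) -> bool:
    """Return True only if every agreed quality requirement is met."""
    accuracy_ok = metrics.get("accuracy", 0.0) >= requirements["min_accuracy"]
    fairness_ok = metrics.get("group_gap", 1.0) <= requirements["max_group_gap"]
    return accuracy_ok and fairness_ok

# Example usage with placeholder evaluation results.
results = {"accuracy": 0.91, "group_gap": 0.03}
print("release approved:", passes_quality_gate(results))
```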

Learning Outcomes

  • Learning Outcome 1: The student develops an appreciation of cutting-edge approaches to AI and machine learning, an understanding of how artificial intelligence is used in different domains, and the ability to evaluate the potential directions artificial intelligence may take in the future.

  • Learning Outcome 2: The student shows a well-defined approach to consequence scanning, evaluating the potential impact that new technology could have on individuals and society, with a specific focus on minorities and marginalised groups, as well as on potential environmental impacts.

  • Learning Outcome 3: The student demonstrates the ability to employ a fully articulated research methodology with ethics embedded at all stages, aware of the contextual nature of the specific approaches to be used, as informed by the case studies covered in this module.

Lesson Plans for Module C (Evaluation)

Introduction

  • Lecture: Introduction to the resurgence of AI and ML
  • Lecture: Guest Lecture on Future of AI

Open Problems and Challenges

  • Lecture: Guest Lecture on Explainable Machine Learning (XAI)
  • Practical: Explainable Machine Learning (XAI)
  • Lecture: Inclusivity, Privacy and Causality
  • Interactive Session: Inclusivity, Privacy and Causality
  • Lecture: Trust, Normativity and Model Drift
  • Interactive Session: Trust, Normativity and Model Drift
  • Lecture: Generalizability and Artificial General Intelligence (AGI). Open Problems Vs Challenges

Advances in ML Models Through an HC Lens. A Result-Oriented Study

  • Lecture: Semi-supervised and Unsupervised Learning
  • Lecture: Generative Models, Transform Deep Learning and Hybrid learning models
  • Lecture: Theory of Federated Learning (Profiling and Personalization)
  • Lecture: Federated Learning – Advances and Open Challenges
  • Practical: Federated Learning – Train deep models
  • Lecture: Model Compression – Edge Computing
  • Practical: Model Compression – Edge Computing
  • Lecture: Automated Hyper-parameter Optimization

Emerging Evaluations for HCAI Models – Discussion-Based Study

  • Lecture: Feature Importance, Trust Models and Trust Quantification
  • Practical: Feature Importance, Trust Models and Trust Quantification
  • Lecture: Probabilistic descriptions of ML models, Subjective logic, Permutation Importance
  • Practical: Partial Dependence, Individual Conditional Expectation (ICE), LIME, DeepLIFT, SHAP

Philosophical Discussion on Future AI technology

  • Lecture: Guest Lecture on Quantum Computing
  • Interactive Session: Permeation of AI and The AI Singularity
  • Interactive Session: Robot Rights movement
  • Interactive Session: Human-machine Biology / Neuromorphic Technologies
  • Interactive Session: Living with Robots
  • Interactive Session: Human-Machine interactions

EU And International Legislation/Frameworks On Data, AI, Human Rights And Equality

  • Lecture: Overview Of Ethical, Professional And Legal Aspects Of HCAI Applications
  • Interactive Session: Ethical, Professional And Legal Aspects Of HCAI Applications
  • Lecture: Data And Its Challenges – EU GDPR, US COPPA, HIPAA
  • Lecture: Data And Its Challenges – Data Regulations, Data Sourcing And HCAI Perspective
  • Interactive Session: Data And Its Challenges. How GDPR Impacts AI Solutions
  • Practical: Data And Its Challenges. An AI Regulation Exercise
  • Lecture: EU Human Rights Legislation
  • Interactive Session: EU Human Rights Legislation – A Case Study
  • Lecture: EU Proposal Of Regulation On HCAI Applications
  • Interactive Session: EU Proposal Of Regulation On AI – A Case Study
  • Practical: Effects Of The EU Proposal Of Regulation On AI
  • Lecture: Strengths And Limitations Of Existing Laws – A Deeper Dive

Data Management, Audit And Assessment

  • Lecture: Data Security And Compliance, Data Lineage And Management
  • Lecture: Governance And Stewardship, Key Stakeholders And Personal Data Management
  • Practical: Common Roles And Cross Overs Between Data Management And AI Teams
  • Practical: Investigate Data Lineage, Challenges And Potential Impact Of The AI Teams

Policy And Frameworks – Lifecycle

  • Lecture: DS, AI, ML Life Cycle – A Human-Centred Approach
  • Practical: Lifecycle Implementation And A Test For Fairness

Scope Of Socially Responsible AI

  • Lecture: Positive And Negative Externalities
  • Interactive Session: Externalities Related To Well-Being
  • Interactive Session: Negative Externalities – Bhopal Gas Tragedy – A Case Study
  • Interactive Session: Product Pricing Vs Factory Waste – AI Perspective
  • Lecture: Externalities In Strict Microeconomic Sense

Corporate Social Responsibility (ISO 26000) – When Using HCAI System

  • Lecture: Fair Operating Practices – AI Recruitment And Malpractices Of AI Monitoring
  • Interactive Session: AI-Based Decision Making – Recruitment And Promotion
  • Interactive Session: Decision Making Based On AI Monitoring
  • Interactive Session: Human Intervention On Inconsistent And/Or Good AI Decisions
  • Interactive Session: Transfer Of Control Back And Forth Between Human And AI
  • Interactive Session: Psychological Aspects When Working With AI – Stress, Anxiety, Depression
  • Lecture: Consumer Issues – Filter Bubbles, Data Storage, AI Monitoring, Fair Practices
  • Interactive Session: Consumer Issues – Filter Bubbles, Data Storage, AI Monitoring, Fair Practices
  • Interactive Session: Community Development – Societal Impact Assessment Prior To Working On An AI Project

Socio-Legal Aspects For AI

  • Interactive Session: Who Is Responsible? – Product Responsibility, Copyright Problems

AI For All

  • Lecture: Economic Gaps – Digital Divide
  • Interactive Session: Economic Gaps – Digital Divide In Categories: Geographical, Technical, Financial And Political
  • Interactive Session: How AI Affects Human Behaviour – Positive And Negative
  • Interactive Session: Environment Impact – Carbon Footprint
  • Interactive Session: Education Impact – Auto AI Decision Making
  • Interactive Session: Filter Bubble – Political, Corporate And Geographical
  • Interactive Session: AI-Powered Warfare And International Peace

Graduation (Module D)

The Graduation module (Module D) reflects the core principle of the HCAIM programme, which is built on the concept of project-based learning (PBL). The goal of this module is to place the graduation project (making a professional product) at the centre of the student’s learning trajectory. As part of their graduation project (the Master Thesis), students show that they can independently solve challenges proposed by industry, based on current needs and requirements, considering both the technical and the ethical aspects of the issue at hand.

Each thesis is organised locally, with an internal supervisor (a professor from the university at which the student is pursuing the degree) and, where applicable, an external supervisor belonging to the party proposing the thesis. Although external supervision is not mandatory, it is rigorously pursued. The proposing party can be an SME, an Excellence Centre or another university, at either a national or an international level. Proposing parties are expected to offer both national and international theses (i.e. theses organised with a university from the same country or from a foreign one).

Learning Outcomes

  • Learning Outcome 1: The student recognizes and reflects on the AI lifecycle in a realistic, industry-informed context, and in diverse locations, scenarios and use cases.

  • Learning Outcome 2: The student demonstrates a robust and valid research attitude through a project with a well-defined interdisciplinary approach, producing industry-relevant and technologically competent solutions while evaluating the potential impact of their work on individuals and society.

  • Learning Outcome 3: The student demonstrates a professional attitude in communication with relevant stakeholders (e.g. mentors, advisors, peers and customers), an analytical attitude, a strong work ethos, planning competence, pro-activeness and self-awareness.

Guidelines for the Thesis

These guidelines are intended to provide ethical guidance for the HCAIM theses and to support parties that intend to propose a new thesis.

View the HCAIM Thesis Template here.

This template helps the supervisor to support the student in identifying and dealing with problems. At the same time, a party proposing a thesis will be asked to complete this template.

Examples of Thesis Topics

View the content in another language

Please select a language from the menu below to see the translation of this page; translations are available in all EU languages. The translations are generated by the eTranslation tool, which is available on the EU website. The HCAIM Consortium cannot be held responsible for any semantic or contextual errors that may arise from these translations. All other content can also be translated into any of the EU languages using the eTranslation tool of the European Union.

If you click on any of the languages below, you can access the Module A learning event Introduction to Human-Centered AI in all EU languages.
