Programme Overview

In our aim to create a Human-Centred AI Master’s programme, the HCAIM Consortium follows the definition of the AI HLEG (the EU High-Level Expert Group on Artificial Intelligence): “The human-centric approach to AI strives to ensure that human values are central to how AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights.”

To meet the requirements of this definition, the programme covers the technical, ethical and practical elements of artificial intelligence. We have designed our content around the three phases of the MLOps lifecycle – development, deployment and maintenance of machine learning models – producing three core modules in alignment with these MLOps phases: Modelling (Module A), Deployment (Module B), and Evaluation (Module C). We have added a fourth module, Graduation (Module D), to enable students to show that they can independently solve challenges proposed by industry based on current needs and requirements in the field of human-centred artificial intelligence.

This is visualised in the table below.


          | Module A            | Module B                        | Module C                        | Module D
          | Foundations of AI   | Advanced AI: Deep Learning      | Future AI                       | Master Thesis Project
Practical | AI Modelling        | AI in Action: Organisational AI | Socially Responsible AI         |
Ethical   | Ethics Fundamentals | Trustworthy AI                  | Compliance, Legality & Humanity |


This page contains all the Learning Events that make up the Human-Centered Artificial Intelligence Master’s programme. All Learning Events, including their accompanying study material, will be made available in English on the HCAIM website and can be translated into any of the EU languages using the eTranslation tool of the European Union. For the purpose of this preview, one Learning Event of Module A, one Learning Event of Module B, two Learning Events of Module C and the Guidelines for the Thesis, as well as two examples of thesis topics, are already fully available in English on this preview page (and can likewise be translated into any of the EU languages using the eTranslation tool of the European Union).

On top of that, the Programme Overview of the HCAIM programme, the Module A Learning Objectives, the Module A overview of Lesson Plans, and the Module A Learning Event ‘Lecture: Introduction to Human-Centered AI’ are already available in all EU languages through this preview page. To see the translations of these parts of the HCAIM programme, please select any of the individual languages at the bottom of this page.

All the materials are available under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (CC BY-NC-ND 4.0).

Modelling (Module A)

The first module, namely Modelling (Module A), focuses on the first phase of the MLOps lifecycle and is related to the lowest maturity level of the application of Machine Learning (ML) in organizations: modelling data. It includes the activities that form the basis of the application of ML, such as data extraction, data analysis, data preparation, model training and (mainly manual) model validation and evaluation.

In this phase, the focus is on correctly analyzing and modelling the data to achieve the business objectives and little use is made of automation (e.g. CI/CD), which is only added in the second phase of MLOps (Deployment – Module B). The modelling activities are often characterized by the manual, script-driven and interactive method by which the data analysis, preparation, model training and validation are carried out. To maintain an overview of the different models, parameters and choices that are being experimented with, experiment tracking is used.
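The script-driven, experiment-heavy character of this phase can be illustrated with a minimal sketch of experiment tracking. All model names, parameters and scores below are invented for illustration; a real project would typically use a dedicated tool such as MLflow or Weights & Biases.

```python
# Minimal sketch of manual experiment tracking in the modelling phase.
# Model names, parameters and validation scores are hypothetical.
import json

experiments = []  # each entry records one configuration and its result


def log_experiment(model_name, params, validation_score):
    """Record one manually run training experiment for later comparison."""
    experiments.append({
        "model": model_name,
        "params": params,
        "score": validation_score,
    })


# Three hand-run experiments, as is typical of script-driven modelling
log_experiment("decision_tree", {"max_depth": 3}, 0.81)
log_experiment("decision_tree", {"max_depth": 7}, 0.84)
log_experiment("svm", {"kernel": "rbf", "C": 1.0}, 0.79)

# Keep an overview: which configuration performed best so far?
best = max(experiments, key=lambda e: e["score"])
print(json.dumps(best, indent=2))
```

Even this small log makes it possible to answer, at any point, which of the manually tried models and parameter choices performed best on validation data.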

From an ethical perspective, it is important in the modelling phase to devote sufficient time and attention to finding out the client’s objectives, mapping the stakeholders and exploring how the individual values of these stakeholders are affected (and recognizing possible conflicts between them). Aspects such as transparency, inclusion, security and privacy are of great importance here. Naturally, attention must also be paid to the social and moral desirability of the client’s objectives. In addition, it is important to become aware, in a timely manner, of possible biases in the available data, to recognise the possible consequences of these biases, and to find mitigations to deal with them.
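As a concrete illustration of checking data or model outcomes for bias, the sketch below computes one simple fairness measure, the demographic parity difference (the gap in positive outcome rates between two groups). The records and the 0.1 threshold are invented for the example; which measure and threshold are appropriate always depends on the context.

```python
# Illustrative bias check: demographic parity difference between two groups.
# The records and the 0.1 threshold are made up for this example.

records = [  # (group, model_decision) pairs
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]


def positive_rate(group):
    """Fraction of positive decisions received by one group."""
    decisions = [d for g, d in records if g == group]
    return sum(decisions) / len(decisions)


parity_gap = abs(positive_rate("A") - positive_rate("B"))
print(f"Demographic parity difference: {parity_gap:.2f}")

# A common (but context-dependent) rule of thumb flags gaps above 0.1
if parity_gap > 0.1:
    print("Potential bias detected - investigate data and model choices.")
```

A large gap does not prove the model is unfair, but it is exactly the kind of signal that should trigger the stakeholder and value analysis described above.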

Learning Outcomes

  • Learning Outcome 1: The student evaluates various ML techniques to make a well-founded choice that matches the customer’s requirements, and implements a prototype of the chosen ML technique to advise on solving a given data modelling problem.

  • Learning Outcome 2: The student argues, using fundamental ethical frameworks, how moral dilemmas can be solved, and evaluates the possible consequences of existing biases in data and the influence of designed mitigations to counteract the consequences of those biases.

  • Learning Outcome 3: The student applies quantitative and qualitative research methods to scientifically substantiate their choices during the ethical consideration(s) and the making of the prototype.

Lesson Plans for Module A (Modelling)

General AI

  • Lecture: Historical Introduction to Scientific Explanation Models
  • Lecture: Understanding Data

Data Exploration for Machine Learning

  • Tutorial: Understanding Data
  • Lecture: Exploratory Data Analysis II
  • Tutorial: Exploratory Data Analysis
  • Lecture: Inference and Generalisation
  • Tutorial: Inference and Generalisation

Machine Learning Fundamentals

  • Lecture: Model Evaluation
  • Tutorial: Model Evaluation
  • Lecture: Model Fitting and Optimization
  • Practical: Model Fitting and Optimization

Decision Theory

  • Lecture: Decision Theory
  • Tutorial: Decision Theory
  • Lecture: Decision Networks
  • Tutorial: Decision Networks

Data Science

  • Lecture: The Data Analysis Process
  • Lab session: Platforms
  • Lecture: Data Preparation and Exploration
  • Lab session: Data Preparation and Exploration

Supervised Machine Learning

  • Lecture: Linear Regression
  • Lab session: Linear Regression
  • Lecture: Decision Trees
  • Lab session: Decision Trees
  • Lecture: SVMs and Kernels
  • Lab session: SVMs and Kernels
  • Lecture: Neural Networks

Unsupervised Machine Learning

  • Lecture: Unsupervised Learning
  • Lab session: Unsupervised Learning

ML Applications

  • Lecture: Natural Language Processing
  • Lab session: Natural Language Processing

General Ethics

Ethical Frameworks

  • Interactive session: Ethical Frameworks
  • Lecture: Utilitarianism
  • Interactive session: Utilitarianism
  • Lecture: Virtue Ethics
  • Interactive session: Virtue Ethics
  • Lecture: Duty Ethics
  • Interactive session: Duty Ethics
  • Lecture: Theory of Justice

Advanced Ethics

  • Lecture: Social Contract Theories
  • Lecture: Principles of Justice

Applied Ethics

  • Lecture: Value-Sensitive Design
  • Interactive session: Value-sensitive Design
  • Lecture: Privacy
  • Lecture: Ethics of Decision-Support Systems
  • Lecture: Decision-making and (cognitive) biases

Deployment (Module B)

The Deployment module (Module B) focuses on the second phase of the MLOps development cycle: deployment. After the exploratory modelling phase (see Module A – Modelling) comes the integration of the ML solution into the business systems. It is now important to start thinking about the ML architecture and how it interacts with the existing (legacy) systems. To experience real benefit from automated ML solutions, pipelines need to be introduced: on the one hand, to deal with continuous and live data supplies (stream processing), and on the other, to link the results of the ML model to other systems.

Moreover, Module B increases the complexity of the AI technology by moving towards (the use of) neural networks and deep learning. A major advantage of these more complex models is that they are more flexible and versatile than the techniques introduced in Module A – Modelling. However, an important disadvantage of these techniques is that they are more complex (to understand and configure) and more opaque. Therein lies an important ethical dilemma in the use of (advanced) AI techniques: how do you still understand what the AI solution computes, and whether this is done in the right way? Making the deployment of AI solutions more transparent, and being able to determine and mitigate the possible risks, are important (social) themes in this module.
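One widely used model-agnostic XAI idea can be illustrated with permutation feature importance: shuffle one feature at a time and measure how much the model’s accuracy drops. The sketch below uses an invented toy “model” and dataset purely for illustration; real deployments would apply the same idea to a trained network.

```python
# Sketch of permutation feature importance, a model-agnostic XAI technique:
# shuffle one feature at a time and measure the resulting accuracy drop.
# The "model" and data are toy stand-ins for illustration only.
import random

random.seed(0)

# Toy dataset: the label depends on feature 0 only; feature 1 is noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]


def model_predict(row):
    """Stand-in 'trained model' that thresholds feature 0."""
    return 1 if row[0] > 0.5 else 0


def accuracy(data, labels):
    return sum(model_predict(r) == t for r, t in zip(data, labels)) / len(labels)


baseline = accuracy(X, y)
importances = {}
for feature in range(2):
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)  # break the feature-label relationship
    for row, value in zip(shuffled, column):
        row[feature] = value
    importances[feature] = baseline - accuracy(shuffled, y)
    print(f"Feature {feature}: importance = {importances[feature]:.2f}")
```

The decisive feature shows a large accuracy drop when shuffled, while the noise feature shows none, which is exactly the kind of insight that helps stakeholders understand what an otherwise opaque model is relying on.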

Learning Outcomes

  • Learning Outcome 1: The student assesses the possible choices for the integration of an advanced AI technique, such as Deep and/or Reinforcement Learning, and authors a one-page report based on a developed prototype, taking into account the limitations of, and influences on, the customer’s existing ICT systems and data facilities, obtained in collaboration with, for example, ICT architects or developers.

  • Learning Outcome 2: The student assesses the potential risks involved, tests the degree of transparency (including interpretability, reproducibility and explainability) of a chosen AI/ML implementation, and designs solutions using techniques that increase insight and transparency among stakeholders (so-called Explainable AI (XAI) techniques) to remedy shortcomings with respect to social and customer-specific requirements.

  • Learning Outcome 3: The student formulates a research design for a scientifically sound (practice-oriented) research project related to a company case by formulating a relevant, consistent and functional research question, considering the applied research methods to be used, and establishing a precise, relevant and critical theoretical framework.

Lesson Plans for Module B (Deployment)

Fundamentals of Deep Learning

  • Lecture: Fundamentals of deep learning
  • Tutorial: Fundamentals of deep learning
  • Practical: Fundamentals of deep learning

Optimization of Deep Learning

  • Lecture: Regularization
  • Tutorial: Regularization
  • Lecture: Batch processing
  • Tutorial: Batch processing

Applications of Deep Learning

  • Lecture: Building computational graphs, modern architectures
  • Lecture: Convolutional Neural Networks
  • Tutorial: Convolutional Neural Networks
  • Practical: Convolutional Neural Networks
  • Lecture: Recurrent Neural Networks
  • Lecture: Transformer networks
  • Tutorial: CNNs and Transformers for images
  • Lecture: Hardware and software frameworks for deep learning


ML-Ops

  • Lecture: ML-Ops
  • Tutorial: ML-Ops
  • Practical: ML-Ops
  • Lecture: ML-Ops Lifecycle
  • Practical: ML-Ops Lifecycle

Deployment of AI

  • Lecture: Application technology
  • Practical: Application technology
  • Tutorial: Data architecture
  • Interactive session: Data architecture
  • Practical: Hadoop-based technologies

Quality of Development & Deployment

  • Lecture: CI/CD
  • Tutorial: CI/CD

General Explainable AI

  • Lecture: Introduction to General Explainable AI
  • Lecture: Explainable AI for end-users
  • Practical: Practice with XAI models 1
  • Practical: Practice with XAI models 2
  • Lecture: Cutting-edge XAI developments


Privacy

  • Lecture: Introduction to privacy and risk
  • Interactive session: Perspectives on privacy
  • Practical: Auditing frameworks of privacy and data protection
  • Lecture: Privacy and machine learning
  • Practical: Applying and evaluating privacy-preserving techniques

Security and robustness

  • Lecture: Security and robustness
  • Practical: Apply auditing frameworks
  • Practical: Enhancing ML security and robustness


Risk & Risk Mitigation

  • Lecture: Risk & Risk mitigation
  • Interactive session: Risk & Risk mitigation
  • Practical: Risk & Risk mitigation

Evaluation (Module C)

The Evaluation module (Module C) focuses on the evaluation aspects of AI development, including both the societal aspects of an AI product and the development of an appreciation of the potential future directions AI may take. It looks at technology trends; socially responsible AI; compliance; and ensuring that the human element is ever-present in the design, development, and evaluation of AI systems.

As part of the future of AI, the level of AI adoption in different industries is discussed, as well as how AI is adapted for different domains. Looking at socially responsible AI includes how AI affects individuals and different groups in society. As a crucial part of the module, there is a focus on laws, policies and codes of conduct related to AI (with an emphasis on issues such as explainability and trust), as well as quality control and quality management processes to evaluate the results of AI initiatives.

Learning Outcomes

  • Learning Outcome 1: The student develops an appreciation of cutting-edge approaches to AI and machine learning, an understanding of how artificial intelligence is utilized in different domains, and the ability to evaluate the potential directions artificial intelligence may take in the future.

  • Learning Outcome 2: The student shows a well-defined approach to consequence scanning, evaluating the potential impact new technology could have on individuals and society, focusing specifically on minorities and marginalized groups, as well as potential environmental impacts.

  • Learning Outcome 3: The student demonstrates the ability to employ a fully articulated research methodology with ethics embedded at all stages, with an awareness of the contextual nature of the specific approaches to be utilized, informed by the case studies covered in this module.

Lesson Plans for Module C (Evaluation)


  • Lecture: Introduction to the resurgence of AI and ML
  • Lecture: Guest Lecture on Future of AI

Open Problems and Challenges

  • Lecture: Guest Lecture on Explainable Machine Learning (XAI)
  • Practical: Explainable Machine Learning (XAI)
  • Lecture: Inclusivity, Privacy and Causality
  • Interactive Session: Inclusivity, Privacy and Causality
  • Lecture: Trust, Normativity and Model Drift
  • Interactive Session: Trust, Normativity and Model Drift
  • Lecture: Generalizability and Artificial General Intelligence (AGI). Open Problems Vs Challenges

Advances in ML Models Through an HC Lens – A Result-Oriented Study

  • Lecture: Semi-supervised and Unsupervised Learning
  • Lecture: Generative Models, Transform Deep Learning and Hybrid learning models
  • Lecture: Theory of Federated Learning (Profiling and Personalization)
  • Lecture: Federated Learning – Advances and Open Challenges
  • Practical: Federated Learning – Train deep models
  • Lecture: Model Compression – Edge Computing
  • Practical: Model Compression – Edge Computing
  • Lecture: Automated Hyper-parameter Optimization

Emerging Evaluations for HCAI Models – Discussion-Based Study

  • Lecture: Feature Importance, Trust Models and Trust Quantification
  • Practical: Feature Importance, Trust Models and Trust Quantification
  • Lecture: Probabilistic descriptions of ML models, Subjective logic, Permutation Importance
  • Practical: Partial Dependence, Individual Conditional Expectation (ICE), LIME, DeepLIFT, SHAP

Philosophical Discussion on Future AI technology

  • Lecture: Guest Lecture on Quantum Computing
  • Interactive Session: Permeation of AI and The AI Singularity
  • Interactive Session: Robot Rights movement
  • Interactive Session: Human-machine Biology / Neuromorphic Technologies
  • Interactive Session: Living with Robots
  • Interactive Session: Human-Machine interactions

EU And International Legislation/Frameworks On Data, AI, Human Rights And Equality

  • Lecture: Overview Of Ethical, Professional And Legal Aspects Of HCAI Applications
  • Interactive Session: Ethical, Professional And Legal Aspects Of HCAI Applications
  • Lecture: Data And Its Challenges – EU GDPR, US COPPA, HIPAA
  • Lecture: Data And Its Challenges – Data Regulations, Data Sourcing And HCAI Perspective
  • Interactive Session: Data And Its Challenges. How GDPR Impacts AI Solutions
  • Practical: Data And Its Challenges. An AI Regulation Exercise
  • Lecture: EU Human Rights Legislation
  • Interactive Session: EU Human Rights Legislation – A Case Study
  • Lecture: EU Proposal Of Regulation On HCAI Applications
  • Interactive Session: EU Proposal Of Regulation On AI – A Case Study
  • Practical: Effects Of EU Proposal Of Regulation On AI
  • Lecture: Strengths And Limitations Of Existing Laws – A Deeper Dive

Data Management, Audit And Assessment

  • Lecture: Data Security And Compliance, Data Lineage And Management
  • Lecture: Governance And Stewardship, Key Stakeholders And Personal Data Management
  • Practical: Common Roles And Crossovers Between Data Management And AI Teams
  • Practical: Investigate Data Lineage, Challenges And Potential Impact Of The AI Teams

Policy And Frameworks – Lifecycle

  • Lecture: DS, AI, ML Life Cycle – A Human-Centred Approach
  • Practical: Lifecycle Implementation And A Test For Fairness

Scope Of Socially Responsible AI

  • Lecture: Positive And Negative Externalities
  • Interactive Session: Externalities Related To Well-Being
  • Interactive Session: Negative Externalities – Bhopal Gas Tragedy – A Case Study
  • Interactive Session: Product Pricing Vs Factory Waste – AI Perspective
  • Lecture: Externalities In Strict Microeconomic Sense

Corporate Social Responsibility (ISO 26000) – When Using HCAI System

  • Lecture: Fair Operating Practices – AI Recruitment And Malpractices Of AI Monitoring
  • Interactive Session: AI-Based Decision Making – Recruitment And Promotion
  • Interactive Session: Decision Making Based On AI Monitoring
  • Interactive Session: Human Intervention On Inconsistent And/Or Good AI Decisions
  • Interactive Session: Transfer Of Control Back And Forth Between Human And AI
  • Interactive Session: Psychological Aspects When Working With AI – Stress, Anxiety, Depression
  • Lecture: Consumer Issues – Filter Bubbles, Data Storage, AI Monitoring, Fair Practices
  • Interactive Session: Consumer Issues – Filter Bubbles, Data Storage, AI Monitoring, Fair Practices
  • Interactive Session: Community Development – Societal Impact Assessment Prior To Working On An AI Project

Socio-Legal Aspects For AI

  • Interactive Session: Who Is Responsible? – Product Responsibility, Copyright Problems

AI For All

  • Lecture: Economic Gaps – Digital Divide
  • Interactive Session: Economic Gaps – Digital Divide In Categories: Geographical, Technical, Financial And Political
  • Interactive Session: How AI Affects Human Behaviour – Positive And Negative
  • Interactive Session: Environment Impact – Carbon Footprint
  • Interactive Session: Education Impact – Auto AI Decision Making
  • Interactive Session: Filter Bubble – Political, Corporate And Geographical
  • Interactive Session: AI-Powered Warfare And International Peace

Graduation (Module D)

The Graduation module (Module D) reflects the core principle of the HCAIM programme, which is built on project-based learning (PBL). The goal of this module is to position the graduation project (making a professional product) centrally in the student’s learning trajectory. As part of their Graduation project (the Master Thesis), students show that they can independently solve challenges proposed by industry based on current needs and requirements, considering both the technical and the ethical aspects of the issue at hand.

Each thesis is supervised locally, with an internal supervisor (a professor from the university at which the student is pursuing the degree) and an external supervisor belonging to the party proposing the thesis (if any). This latter aspect, despite not being mandatory, is rigorously pursued. The proposing party can be an SME, an Excellence Centre, or another university, at either a national or an international level. Proposing parties are expected to provide both national and international theses (i.e. theses organised with a university from the same country or from abroad).

Learning Outcomes

  • Learning Outcome 1: The student recognizes and reflects on the AI lifecycle in a realistic, industry-informed context, and in diverse locations, scenarios, and use cases.

  • Learning Outcome 2: The student demonstrates a robust and valid research attitude through a project with a well-defined interdisciplinary approach, producing industry-relevant and technologically competent solutions while evaluating the potential impact of their work on individuals and society.

  • Learning Outcome 3: The student demonstrates a professional attitude regarding communication with relevant stakeholders (e.g., mentors, advisors, peers, and customers), an analytical attitude, work ethos, planning competence, pro-activeness, and self-awareness.

Guidelines for the Thesis

HCAIM Thesis Proposals Guidelines

These guidelines are intended to support parties that intend to propose a new thesis.

HCAIM Thesis Template

View the HCAIM Thesis Template here.

Ethical Guidelines for HCAIM Theses

Plagiarism, Data Fabrication and Image Manipulation

Plagiarism is not acceptable. Plagiarism includes copying text, ideas, images, or data from another source, including your own publications, without giving credit to the original source.

Text reused from another source must be placed between quotation marks, and the original source must be cited. If previous studies have inspired a study’s design or the manuscript’s structure or language, these studies must be explicitly cited.

Image files must not be manipulated or adjusted in any way that could lead to misinterpretation of the information provided by the original image. Irregular manipulation includes 1) introduction, enhancement, moving, or removing features from the original image, 2) grouping of images that should be presented separately or 3) modifying the contrast, brightness or colour balance to obscure, eliminate or enhance some information.

Results presented must not be inappropriately selected, manipulated, enhanced, or fabricated. This includes 1) exclusion of data points to enhance the significance of conclusions, 2) fabrication of data, 3) selection of results that support a particular conclusion at the expense of contradictory data, 4) deliberate selection of analysis tools or methods to support a particular conclusion (including p-hacking).

Research Involving Human Subjects, Animals or Plants

When reporting on research that involves human subjects, human material, human tissues, or human data, the proposing party must ensure that the investigations were carried out following the rules of the Declaration of Helsinki of 1975, revised in 2013. Any consequences associated with a violation of this requirement will be imputed entirely to the proposing party, and to neither the HCAIM consortium nor the student.

Theses working with cell lines should state the origin of those cell lines. For established cell lines, the provenance should be stated, and references must be given to a published paper or a commercial source. If previously unpublished de novo cell lines were used, including those gifted from another laboratory, details of institutional review board or ethics committee approval must be given, and confirmation of written informed consent must be provided if the line is of human origin.

All topics derived from research that causes any harm to animals are forbidden. All the guidelines applied to humans should be followed (where possible) for animals too.

Experimental research on plants (either cultivated or wild), including collection of plant material, must comply with institutional, national, or international guidelines. We recommend that the theses abide by the Convention on Biological Diversity and the Convention on the Trade in Endangered Species of Wild Fauna and Flora.

Sex, Gender, Ethnicity, Religion and other bias in research

Students are encouraged to follow the ‘Sex and Gender Equity in Research’ (SAGER) guidelines and to include sex and gender considerations where relevant. The terms sex (a biological attribute) and gender (shaped by social and cultural circumstances) should be used carefully to avoid confusing the two. The thesis should also describe (in the background) whether sex and/or gender differences may be expected; report how sex and/or gender were accounted for in the design of the study; provide data disaggregated by sex and/or gender, where appropriate; and discuss the respective results. If a sex and/or gender analysis was not conducted, the rationale should be given in the Discussion.

Similar considerations apply to all other forms of bias, including (but not limited to) ethnicity and religion. Regarding the former, we note that humans do not have races, only ethnicities.

If the thesis is focused on bias, the rationale behind it must be clarified from the beginning.

Conflict of Interests

Students must avoid entering into agreements with study sponsors, both for-profit and non-profit, that interfere with access to all of the study’s data or that interfere with their ability to analyse and interpret the data and to prepare the thesis independently when and where they choose.

Students must identify and declare any personal circumstances or interest that may be perceived as inappropriately influencing the representation or interpretation of the reported research results. Examples of potential conflicts of interest include but are not limited to financial interests (such as membership, employment, consultancies, stocks/shares ownership, honoraria, grants or other funding, paid expert testimonies and patent-licensing arrangements) and non-financial interests (such as personal or professional relationships, affiliations, personal beliefs).

Any role of the funding sponsors in the design of the study, in the collection, analysis or interpretation of data, in the writing of the manuscript, or in the decision to publish the results must be declared in advance.

Citation Policies

  • Students should ensure that where the material is taken from other sources (including their own published writing), the source is clearly cited and where appropriate permission is obtained.
  • Students should not engage in excessive self-citation of their own work.
  • Students should not copy references from other publications if they have not read the cited work.
  • Students should not preferentially cite their own or their friends’, peers’, or institution’s publications.
  • Students should not cite advertisements or advertorial material.

Ethical Guidelines for Reviewers

Potential Conflict of Interests

Reviewers are asked to inform the HCAIM board if they hold a conflict of interest that may prejudice the review report, either in a positive or negative way. The board will check as accurately as possible before inviting reviewers; nevertheless, the cooperation of reviewers in this matter is expected and appreciated.

Confidentiality and Anonymity

Reviewers must keep the content of the thesis, including the abstract, confidential. They must inform the HCAIM board if they would like a colleague to complete the review on their behalf.

Risks Matrix and Mitigation Plan for HCAIM Theses

This template allows the supervisor to support the student in identifying and dealing with problems. At the same time, the thesis proposing party will be asked to complete this template.

All milestones and deliverables for the completion of the proposed research project should be included in the project proposal. Students are also required to prepare a risk matrix that includes risks that might endanger reaching these deliverables and provide contingency plans to mitigate the outlined risks. An example of a risk matrix and mitigation plan is shown below. Please include additional risks if required:


Risk                                        | Severity | Likelihood | Mitigation
Failed to collect the target data in time   | High     | Low        | The project will start by examining openly available resources and investigating options to acquire synthetic or pre-available public data similar to the target data.
Insufficient funding for resources          | Medium   | Medium     | Alternative funding sources will be sought.
Research/project goals overly ambitious     | Medium   | Low        | Regularly review project goals and adjust the project outcomes based on the review process.
(Overly) large amounts of training required | Medium   | Low        | Prior approval will be sought; the project goals will be updated based on the time and resources used for this additional training.
Data loss                                   | Medium   | Low        | The student will follow appropriate backup procedures to minimise risk.

Examples of Thesis Topics

View this content in another language

Please select a language from the menu below to see the translations of the Programme Overview of the HCAIM programme, the Module A Learning Objectives, the Module A overview of Lesson Plans, and the Module A Learning Event ‘Lecture: Introduction to Human-Centered AI’. The translations are generated by the eTranslation tool, which is available on the EU website. The HCAIM consortium cannot be held responsible for any semantic or contextual errors that may arise from these translations. All other content can also be translated into any of the EU languages using the eTranslation tool of the European Union.





