Programme Contents

In its aim to create a Human-Centred AI Master’s programme, the HCAIM Consortium follows the definition of the AI HLEG: “The human-centric approach to AI strives to ensure that human values are central to how AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights.”

To meet the requirements of this definition, the programme covers the technical, ethical and practical elements of artificial intelligence. We have designed our content around the three phases of the MLOps lifecycle (development, deployment and maintenance of machine learning models), producing three core modules aligned with these phases: Modelling (Module A), Deployment (Module B) and Evaluation (Module C). We have added a fourth module, Graduation (Module D), to enable students to show that they can independently solve challenges proposed by industry based on current needs and requirements in the field of human-centred artificial intelligence. To support students in this, the Research in Practice lesson plan runs in parallel with the other modules and guides students through their thesis process from the start of the programme.

The structure of the programme is visualised in the table below.

 

|           | Module A            | Module B                        | Module C                        | Module D              |
| Technical | Foundations of AI   | Advanced AI: Deep Learning      | Future AI                       | Master Thesis Project |
| Practical | AI Modelling        | AI in Action: Organisational AI | Socially Responsible AI         | Master Thesis Project |
| Ethical   | Ethics Fundamentals | Trustworthy AI                  | Compliance, Legality & Humanity | Master Thesis Project |

This page contains short descriptions of the modules defined in the Human-Centred AI Master’s programme. All Learning Events, including their accompanying study material, are available on the HCAIM website on Wikiwijs.

The Learning Event descriptions were translated into all 24 official EU languages using the eTranslation tool of the European Union and are available from the language-specific lists. Note that the translations have not been reviewed by lecturers.

All the materials are available under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (CC BY-NC-ND 4.0).

Modelling (Module A)

The first module, Modelling (Module A), focuses on the first phase of the MLOps lifecycle and corresponds to the lowest maturity level of the application of Machine Learning (ML) in organisations: modelling data. It includes the activities that form the basis of the application of ML, such as data extraction, data analysis, data preparation, model training and (mainly manual) model validation and evaluation.

In this phase, the focus is on correctly analysing and modelling the data to achieve the business objectives, and little use is made of automation (e.g. CI/CD), which is only added in the second phase of MLOps (Deployment – Module B). The modelling activities are typically carried out in a manual, script-driven and interactive way, covering data analysis, preparation, model training and validation. To maintain an overview of the different models, parameters and choices being experimented with, experiment tracking is used.
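
As an illustration of what such a manual, script-driven workflow with lightweight experiment tracking can look like, the sketch below trains a simple model and appends its parameters and score to a log file. This is a minimal, hedged example: it assumes scikit-learn and uses a bundled toy dataset, and the hand-rolled log merely stands in for dedicated experiment-tracking tools.

```python
# Minimal, script-driven modelling experiment with lightweight experiment tracking.
# Assumes scikit-learn is available; the dataset and hyper-parameters are illustrative only.
import json
import time

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Data extraction and preparation (here: a bundled toy dataset).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model training with explicitly chosen hyper-parameters.
params = {"C": 1.0, "max_iter": 5000}
model = LogisticRegression(**params).fit(X_train, y_train)

# Manual validation and evaluation.
accuracy = accuracy_score(y_test, model.predict(X_test))

# Experiment tracking: record what was tried and how it performed.
run = {
    "timestamp": time.time(),
    "model": "LogisticRegression",
    "params": params,
    "accuracy": float(accuracy),
}
with open("experiments.jsonl", "a") as f:
    f.write(json.dumps(run) + "\n")

print(run)
```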

From an ethical perspective, it is important in the modelling phase to devote sufficient time and attention to finding out the client’s objectives, mapping the stakeholders and exploring how the individual values of these stakeholders are affected (and recognising possible conflicts between them). Aspects such as transparency, inclusion, security and privacy are of great importance here. Naturally, attention must also be paid to the social and moral desirability of the client’s objectives. In addition, it is important to become aware, in good time, of possible biases in the available data, to recognise their potential consequences, and to find mitigations to deal with them.

Learning Outcomes

  • Learning Outcome 1: The student evaluates various ML techniques in order to make a well-founded choice that matches the customer’s requirements, and implements a prototype of the chosen technique to advise on solving a given data-modelling problem.

  • Learning Outcome 2: The student argues, using fundamental ethical frameworks, how moral dilemmas can be resolved, and evaluates both the possible consequences of existing biases in data and the effect of the mitigations designed to counteract those consequences.

  • Learning Outcome 3: The student applies quantitative and qualitative research methods to scientifically substantiate the choices made during the ethical considerations and the construction of the prototype.

Deployment (Module B)

The module Deployment (Module B) focuses on the second phase of the MLOps development cycle: deployment. After the exploratory modelling phase (see Module A – Modelling) comes the integration of the ML solution into the business systems. At this point it becomes important to think about the ML architecture and how it interacts with the existing (legacy) systems. To gain real benefit from automated ML solutions, pipelines need to be introduced: on the one hand to handle continuous and live data feeds (stream processing), and on the other hand to link the results of the ML model to other systems.
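
To make the idea of packaging a model for integration more concrete, the sketch below bundles preprocessing and a model into a single pipeline object and persists it as one artefact for downstream systems. It is a minimal illustration only, assuming scikit-learn and joblib with a toy dataset; it does not cover CI/CD automation or stream processing.

```python
# A minimal sketch of packaging preprocessing and model into a single deployable
# pipeline artefact. Assumes scikit-learn and joblib; data and steps are illustrative.
from joblib import dump, load

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Bundling preprocessing and model means downstream systems handle only one artefact.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
]).fit(X, y)

dump(pipeline, "model_pipeline.joblib")   # hand the artefact to the serving system
served = load("model_pipeline.joblib")
print(served.predict(X[:5]))              # downstream call on incoming data
```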

Moreover, Module B increases the complexity of the AI technology by moving towards (the use of) neural networks and deep learning. A major advantage of these more complex models is that they are more flexible and versatile than the techniques introduced in Module A – Modelling. Their important disadvantages, however, are that they are more complex (to understand and configure) and more opaque. Therein lies an important ethical dilemma in the use of (advanced) AI techniques: how do you still understand what the AI solution calculates, and whether this is done in the right way? Making the deployment of AI solutions more transparent, and being able to determine the possible risks and mitigate them, are important (social) themes in this module.
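
As a small, hedged illustration of the transparency theme, the sketch below applies one widely used Explainable AI (XAI) technique, permutation importance, to an otherwise opaque model. It assumes scikit-learn and a toy dataset, and the random forest merely stands in for the more complex deep learning models treated in this module.

```python
# Permutation importance: estimate how much each input feature contributes to a
# trained model's performance. Assumes scikit-learn; model and data are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# An opaque ("black-box") model stands in for a deep learning component here.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the features the model relies on most, as a first transparency check.
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda t: -t[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```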

Learning Outcomes

  • Learning Outcome 1: The student assesses the possible choices for integrating an advanced AI technique, such as deep and/or reinforcement learning, and authors a one-page report based on a developed prototype, taking into account the limitations of, and influences on, the customer’s existing ICT systems and data facilities, as established in collaboration with, for example, ICT architects or developers.

  • Learning Outcome 2: The student assesses the potential risks involved in, and tests the degree of transparency (including interpretability, reproducibility and explainability) of, a chosen AI/ML implementation, and designs solutions using techniques that increase insight and transparency for stakeholders (so-called Explainable AI (XAI) techniques) in order to remedy shortcomings relative to societal and customer-specific requirements.

  • Learning Outcome 3: The student formulates a research design for a scientifically sound (practice-oriented) research project related to a company case, by formulating a relevant, consistent and functional research question, considering the applied research methods to be used, and establishing a precise, relevant and critical theoretical framework.

Evaluation (Module C)

The Evaluation module (Module C) focuses on the evaluation aspects of AI development, including the societal aspects of an AI product and an appreciation of the potential future directions that AI may take. It looks at technology trends; socially responsible AI; compliance; and ensuring that the human element is ever-present in the design, development and evaluation of AI systems.

As part of the future of AI, the module explores the level of AI adoption in different industries and how AI is adapted for different domains. The treatment of socially responsible AI covers how AI affects individuals and different groups in society. As a crucial part of the module, there is a focus on laws, policies and codes of conduct related to AI (with an emphasis on issues such as explainability and trust), as well as on quality-control and quality-management processes for evaluating the results of AI initiatives.

Learning Outcomes

  • Learning Outcome 1: The student develops an appreciation of cutting-edge approaches to AI and machine learning, an understanding of how artificial intelligence is utilised in different domains, and the ability to evaluate the potential directions artificial intelligence may take in the future.

  • Learning Outcome 2: The student demonstrates a well-defined approach to consequence scanning, evaluating the potential impact new technology could have on individuals and society, with a specific focus on minorities and marginalised groups, as well as potential environmental impacts.

  • Learning Outcome 3: The student demonstrates the ability to employ a fully articulated research methodology with ethics embedded at all stages, with an awareness of the contextual nature of the specific approaches to be used, informed by the case studies covered in this module.

Graduation (Module D)

The Graduation module (Module D) reflects the core principle of the HCAIM programme, which is built on the concept of project-based learning (PBL). The goal of this module is to position the graduation project (making a professional product) centrally in the student’s learning trajectory. As part of their graduation project (the Master Thesis), students show that they can independently solve challenges proposed by industry based on current needs and requirements, considering both the technical and the ethical aspects of the issue at hand.

Each thesis is handled locally, with an internal supervisor (a professor from the university at which the student is pursuing the degree) and, where applicable, an external supervisor from the party proposing the thesis. The latter arrangement, although not mandatory, is rigorously pursued. The proposing party can be an SME, an Excellence Centre or another university, at either national or international level. Proposing parties are expected to offer both national and international theses (i.e. theses organised with a university from the same country or from a foreign one).

Learning Outcomes

  • Learning Outcome 1: The student recognises and reflects on the AI lifecycle in a realistic, industry-informed context, and in diverse locations, scenarios and use cases.

  • Learning Outcome 2: The student demonstrates a robust and valid research attitude through a project with a well-defined interdisciplinary approach, producing industry-relevant and technologically competent solutions while evaluating the potential impact of their work on individuals and society.

  • Learning Outcome 3: The student demonstrates a professional attitude in communication with relevant stakeholders (e.g. mentors, advisors, peers and customers), as well as an analytical attitude, work ethos, planning competence, pro-activeness and self-awareness.

Guidelines for the Thesis

These guidelines are intended to provide ethical guidance for the HCAIM theses.

These guidelines are intended to support parties which intend to propose a new thesis.

View the HCAIM Thesis Template here.

This template allows the supervisor to support the student in identifying and dealing with problems. At the same time, a thesis-proposing party will be asked to complete this template.

Examples of Thesis Topics

View the content in another language

Please select a language from the menu below to see the translation of this page in any of the EU languages. The translations are generated by the eTranslation tool, which is available on the EU website. The HCAIM Consortium cannot be held responsible for any semantic or contextual errors that may arise from these translations. All other content can also be translated into any of the EU languages using the eTranslation tool of the European Union.
