The Acceleration of Ethics and Governance for Artificial Intelligence

Article by Maurice Lynch (CEO – Nathean) and Dr Alireza Dehghani (Technical Program Manager – CeADAR)


Ethics and regulatory compliance are being pushed to the fore for AI, with instruments such as the EU AI Act (2021) [1], the EU Medical Device Regulation (2017) [2] and the UK AI Policy Paper [3] all aiming to bring a harmonised approach to governance and accountability for AI.

Regulation is needed to provide ethical, legal and technical frameworks for situations where decisions made by AI (either directly or indirectly) affect an individual, an organisation or society in impactful ways.


An extreme example arises in healthcare, where an AI-based application (a Class III Software as a Medical Device under the EU MDR [2]) recommends the wrong course of treatment and inadvertently causes death or an irreversible deterioration of a patient’s state of health. Other less extreme examples, but impactful nonetheless for the individual, include the declined loan application, the rejected job application, and being falsely identified through face recognition software leading to arrest. All of these accentuate the need for greater clarity, explainability, accountability, trust, fairness and regulation.


With such regulations coming into play across the globe and as AI continues along its maturity curve, companies and organisations large and small need to take a holistic view of their use of AI, whether in their own AI end-user products or in internal use of AI by employees. More stakeholders need to be actively aware of the potential risks and rewards of AI, from the Data Scientist to the CEO, who is ultimately responsible for the liabilities and reputation of the company. Financial penalties can be significant, with fines of up to 6% of turnover or €30M [4] under the EU AI Act. Risks of bias within models are also a major concern and a minefield for companies, as in the case of Amazon’s AI-based recruitment tool, which the company was forced to scrap due to its bias against female candidates [5].


Meeting regulatory requirements and addressing ethical concerns is complex and requires a multi-disciplinary approach from concept through to releasing models into production. One of the critical components of developing robust production models is the quality of the data used to train them in the first place. Ethics plays a key role in the development of training datasets, with new tools and techniques being researched and developed, such as Privacy-Preserving Machine Learning (PPML), an umbrella term for Privacy Enhancing Technologies (PETs) that can protect individuals’ data privacy during data analysis. With vast amounts of data being collected from online and offline sources, significant challenges in preserving privacy have emerged, and both industry and academia are racing to keep pace with these technologies.
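As a concrete illustration of one PET under the PPML umbrella (an example of ours, not a technique named in the article), the sketch below shows the Laplace mechanism from differential privacy: a counting query over sensitive records is answered with calibrated random noise added, so that no single individual's presence in the data can be reliably inferred from the result.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: count patients over 60 without exposing the exact figure.
ages = [34, 61, 72, 45, 66, 58, 70]
noisy_count = private_count(ages, lambda a: a > 60, epsilon=0.5)
```

Each released answer differs from the true count (here, 4) by random noise whose magnitude is controlled by the privacy budget epsilon, which is the trade-off the text alludes to between data utility and individual privacy.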


The new regulations around AI have yet to be tested in the courts. As with GDPR, these acts and regulations ultimately aim to protect individuals and to provide recourse to action when they are affected. There is a balancing act between maintaining the momentum of innovation, protecting the end-user of AI and keeping an ethical focus.

Academia and industry play a vital role in shaping the future of ethical AI. One such EU initiative is the Human Centred AI Masters programme (HCAIM), a consortium that follows the definition of the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG): “The human-centric approach to AI strives to ensure that human values are central to how AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights”. It places human values, rights and privacy at the centre of the AI development lifecycle; these aspects, including risks, must be carefully considered and assessed at every stage of AI development.

  1. EU – Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (2021). Available online at:
  2. EU – Regulation 2017/745 of the European Parliament and of the Council on Medical Devices (2017). Available online at:
  3. UK – Establishing a Pro-innovation Approach to Regulating AI (2022). Available online at:
  4. EU – AI fines of up to 6% of turnover (2021). Available online at:
  5. Amazon scraps secret AI recruiting tool that showed bias against women (2018). Available online at: