Challenges on the Road Towards Human-Centered AI
Article by Christina Todorova, Researcher at the European Software Institute – Center Eastern Europe
With rapid technological advancement and the growing interdependence of services and entire sectors of operation, the field of Artificial Intelligence brings a plethora of benefits to human life. AI has, however, reached a point in its maturity where ethics, lawfulness, and trustworthiness become paramount for ensuring a safe transition towards a human-centred commitment to application development, integration, and use. To better understand the needs and expectations of internal and external stakeholders, and especially of the labour market, regarding this transition towards human-centred artificial intelligence, the HCAIM project consortium set out to conduct a set of focus group interviews.
As part of the Needs and Market Analysis, one of the first tasks under the HCAIM project, three focus group interviews were conducted: one with representatives from SMEs, one with representatives from large enterprises, and one with representatives from academia. The grouping criteria, core topics for discussion, and the basic parameters for forming and running the focus groups were identified and discussed, leveraging the preliminary results of the industry survey carried out in April 2021.
One of the questions the moderators asked in each interview was for participants to identify challenges and hurdles on the road towards human-centred AI. Here we present a brief overview of some of the core challenges identified during the HCAIM Focus Group Interviews.
1. Data collection, data privacy, and cybersecurity
AI has been widely used throughout the last decade, and especially in the recent past, to help humans tackle challenges such as the global COVID-19 crisis. It has been used, for instance, to analyse existing data and produce predictions of the virus's spread, to recognise patterns that assist the diagnosis of new cases, and to help explain treatment outcomes. Another direction for AI-based solutions in the fight against the SARS-CoV-2 virus is the rapid screening of drug compounds that could prove effective against the virus, as well as the analysis of potential side effects and the prediction of medication effects.
Against the backdrop of these applications, however, more controversial uses of AI have recently been reported, especially in the fields of facial recognition, quarantine enforcement, and social-distancing control. Because AI models need to process huge amounts of data, including sensitive and personal information, in order to be trained and integrated within a particular application, they are, firstly, vulnerable to attacks from malicious actors and, secondly, in some cases ethically questionable.
Among the central topics raised were the ethical collection of data and the security of that data throughout its entire lifecycle, along with the question of how we can give customers authority over their own data. Furthermore, too few tools exist, at least ones accessible to SMEs, which account for more than 90% of employers in the EU, to enable enterprises to take a more ethical approach to managing the collection and processing of data for AI-based solutions.
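What such tooling might look like in its simplest form is sketched below: a consent-aware data-preparation step that drops records lacking explicit consent and pseudonymises direct identifiers before they ever reach a training pipeline. This is an illustrative sketch only, not an HCAIM deliverable; the Record structure and function names are hypothetical.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str         # direct identifier, must not reach the model
    consent_given: bool  # explicit, revocable consent flag
    features: dict       # the attributes actually used for training

def pseudonymise(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()

def prepare_training_data(records: list[Record], salt: str) -> list[dict]:
    """Keep only consented records and strip direct identifiers."""
    prepared = []
    for record in records:
        if not record.consent_given:
            continue  # respect the data subject's choice
        prepared.append({"pseudonym": pseudonymise(record.user_id, salt),
                         **record.features})
    return prepared
```

Even a filter this small turns consent from a policy statement into an enforced property of the data pipeline, which is the kind of accessible, practical tooling the participants found lacking.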
2. Accessibility
The concept of accessible design, and the practice of accessible development, integration, and use of AI solutions, revolves around compatibility with a person's assistive technology, as well as around providing equal access to opportunities, rights, and freedoms. Given the direct impact AI systems have on human lives, we need to be more conscious of the stakeholders whose lives will be affected by our systems before designing or implementing AI solutions.
3. Bias
The problem of bias in AI has been growing, especially in recent years. In this context, a bias is a systematic aberration in the output of a machine learning algorithm. It can stem from biases in the training data or from prejudiced assumptions made during the algorithm-building phase.
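To make the notion measurable, consider one common fairness check, the demographic parity difference: the gap in positive-prediction rates between two groups. The sketch below is illustrative only; the data, variable names, and 10-percentage-point threshold are assumptions for the example, not HCAIM recommendations.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-prediction rates between two groups.

    y_pred: binary model predictions (0/1)
    group:  binary protected attribute (0/1), e.g. a gender or age flag
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative usage: flag models whose positive-prediction rates
# differ by more than 10 percentage points across groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
if demographic_parity_difference(y_pred, group) > 0.10:
    print("Warning: model output may be biased against one group.")
```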
AI models are not aware of how their data was collected, which is why ethical guidelines should be part of the education of AI professionals, and courses on lawfulness and ethics should be included not merely as peripheral modules in university AI training but as cornerstones of the instruction process.
Participants in the HCAIM Focus Group Interviews point out that one of the core solutions to this problem is a structured approach to developing mechanisms and AI applications that automate compliance with ethical standards. Furthermore, a proper ethical education would ensure a better understanding of models and their implications while they are being designed, taking into consideration probable risks, ethical concerns, and potential harm. The absence of this ability was identified by the participants as one of the core weaknesses of university education in AI.
4. Transparency
Transparency is a multidimensional notion applied across a wide range of fields, and it has recently experienced a revival in debates on artificial intelligence (AI). It is listed as one of the seven key requirements for the realisation of "trustworthy AI" in the Ethics Guidelines published by the EU Commission's High-Level Expert Group on AI (AI HLEG) in April 2019, and it is also mentioned in the Commission's White Paper on AI published in February 2020. In fact, all participants in the HCAIM Focus Group Interviews highlighted transparency as the most common hurdle on the road towards human-centred artificial intelligence. There is, furthermore, an ongoing debate concerning fairness, accountability, and transparency in AI. One of the core reasons for this, as identified by the participants, is that AI is often seen merely as software that supports business decision-making, while in reality it affects far more aspects of life. We therefore increasingly need either a dedicated professional profile or well-trained professionals who look after data privacy, transparency, and explainability, as well as the trustworthiness of AI, in order to address this shortage of understanding about the scope of AI solutions.
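As a small illustration of the explainability tooling such professionals might reach for (a sketch on synthetic data, not a technique prescribed by the focus groups), permutation importance estimates how strongly a model relies on each input feature by shuffling that feature and measuring the resulting drop in accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this would be the system's real inputs.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# large drops indicate features the model relies on heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

Feature-level importances of this kind do not make a model fully transparent, but they offer non-specialist stakeholders a first, inspectable account of what drives its decisions.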
5. Education
Current educational offerings in AI at the master's level tend to focus either on the broad spectrum of AI technology or on the academic (fundamental) aspects of AI. A study conducted by Stanford HAI surveyed computer science departments and schools at top universities around the world to assess the state of AI education in higher education institutions. In part, the survey asked whether the department or university offers the opportunity to learn about the ethical side of AI and computer science. Among the 16 universities that completed the survey, 13 reported some type of relevant offering. In Europe, the situation is similar.
Leaving human-centred and ethical issues to an add-on role in an AI program, however, is not sufficient to prepare engineers for the challenges described above. A human-centred approach to the design of artificial intelligence solutions needs to be a prominent topic from the outset of AI-related educational programs if university programs are to respond to the needs of the market and comply with strategic initiatives across the EU and worldwide.
About 85% of the SME representatives participating in the HCAIM Focus Groups consider that they have employees or colleagues who might benefit from courses or educational programs in human-centred artificial intelligence. Furthermore, SME representatives pointed out the mismatch between the supply of and demand for talent in the field of Artificial Intelligence, especially with regard to its human-centred aspects. This poses a challenge for SMEs, particularly against the backdrop of the regulatory and compliance requirements that Europe is facing.
Representatives from SMEs pointed out that graduates are not familiar with trustworthiness-related standards and security compliance practices. Moreover, the interviewees report a prevalent lack of cybersecurity-related courses in artificial intelligence programs, as well as a general lack of knowledge of AI ethics guidelines, data protection regulations such as the GDPR, and standards such as those published by ISO.
Conclusion
The discussions with representatives from SMEs, large enterprises, and academia have underscored the urgency of addressing data privacy, cybersecurity, accessibility, bias, transparency, and educational gaps to foster the development of trustworthy AI systems. This serves as a call to all stakeholders within the AI community to rethink and reorient their strategies and curricula to prioritize ethical considerations and societal impact. The collective wisdom garnered from these focus groups illuminates a path forward that requires a concerted effort from industry professionals, academics, curriculum designers, and job seekers alike.
By integrating human-centred practices (including into revisions of existing competence frameworks), enhancing accessibility, mitigating bias, ensuring transparency, and reforming AI education, we can pave the way for a future where artificial intelligence serves humanity with integrity and respect for our shared values. This journey is not without its challenges, but the opportunities for growth, innovation, and positive societal impact are immense. As we stand at this crossroads, the HCAIM project's findings are not just a reflection of the current state but a blueprint for action, signalling a pivotal shift towards embracing the ethical dimensions of AI. This transition is essential for securing a future where AI technologies are developed with a profound sense of responsibility towards the people they serve, ensuring a safe, equitable, and prosperous economy.
About the Author
Christina Todorova is an Information Security Researcher at the European Software Institute – Center Eastern Europe, based in Bulgaria. Her diverse experience and education have led her to explore the intersection of information/cybersecurity, behavioural science, and AI. Her passion is deeply rooted in designing sustainable educational curricula and content that leverage technology to propel humanity forward. Through her work, Christina aims to support the shift towards a more trustworthy, secure, and human-centred technological ecosystem that uplifts communities globally.