With rapid technological advancement and the growing interdependence of services and entire sectors of operation, the field of Artificial Intelligence brings a plethora of benefits to human life. At the same time, AI has reached a point in its maturity where ethics, lawfulness, and trustworthiness become paramount for ensuring a safe transition towards a human-centered approach to application development, integration, and use. To better understand the needs and expectations of internal and external stakeholders, and especially of the labor market, regarding this transition, the HCAIM project consortium set out to conduct a series of focus group interviews.
The HCAIM project conducted three focus group interviews: one with representatives from SMEs, one with representatives from large enterprises, and one with representatives from academia. The grouping criteria, the core topics for discussion, and the basic parameters for forming and running the focus group interviews were identified and discussed by leveraging the preliminary results of the industry survey carried out in April 2021.
One of the questions the focus group moderators asked in each interview was for participants to identify challenges and hurdles on the road towards human-centered AI. Below is a brief overview of some of the core challenges identified during the HCAIM Focus Group Interviews.
Data collection, data privacy, and cybersecurity.
AI has been widely used throughout the last decade, and especially in the recent past, to help humans tackle challenges such as the global COVID-19 crisis. AI has been used, for instance, to analyze existing data and produce predictions about the spread of the virus, to recognize patterns that aid the diagnosis of new cases, and to help explain treatment outcomes. AI-based solutions have also supported the fight against the SARS-CoV-2 virus by rapidly funneling drug compounds that could prove effective against the virus, as well as by analyzing potential side effects and predicting medication effects.
Against the backdrop of these applications, however, more controversial uses of AI have recently been reported, especially in the field of facial recognition, quarantine, and social distance control. As AI models need to process huge amounts of data, including sensitive and personal information, in order to be trained and integrated within a particular application, they become, firstly, vulnerable to attacks from malicious actors and, secondly, in some cases, ethically questionable.
Among the big topics are the ethical collection of data and the security of that data throughout its entire lifecycle, including the question of how customers can be given authority over their own data. Furthermore, not enough accessible tools exist, particularly for SMEs, which account for more than 90% of employers in the EU, to enable enterprises to take a more ethical approach to managing the collection and processing of data for AI-based solutions.
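One concrete practice behind such tooling is pseudonymization: replacing direct identifiers with keyed tokens before records ever enter an AI pipeline, so that records remain linkable for training but identities cannot be recovered without the key. The sketch below is illustrative only and was not referenced in the interviews; the field names, secret key, and record layout are hypothetical.

```python
import hashlib
import hmac

# Hypothetical per-deployment secret; in practice this would live in a key store.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is consistent (same input -> same token), so records can
    still be linked across a dataset, but the original identity cannot be
    recovered without the secret key.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record(record: dict) -> dict:
    """Strip or pseudonymize personal fields before data enters an AI pipeline."""
    direct_identifiers = {"name", "email"}  # replaced by tokens
    dropped_fields = {"home_address"}       # not needed for training at all
    cleaned = {}
    for key, value in record.items():
        if key in dropped_fields:
            continue
        cleaned[key] = pseudonymize(value) if key in direct_identifiers else value
    return cleaned

raw = {"name": "Jane Doe", "email": "jane@example.com",
       "home_address": "1 Main St", "age": 41, "diagnosis_code": "J06"}
clean = prepare_record(raw)
```

Pseudonymized data still counts as personal data under the GDPR, so this is a risk-reduction measure, not a substitute for consent and lawful processing.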
The concept of accessible design, and the practice of accessible development, integration, and usage of AI solutions, revolves around compatibility with a person’s assistive technology, as well as around providing equal access to opportunities, rights, and freedoms. Given the direct impact AI systems have on human lives, we need to be more conscious of the stakeholders whose lives will be affected before implementing or creating AI solutions.
The problem of bias in AI keeps growing. In machine learning, a bias is a systematic aberration in the output of an algorithm, caused, for example, by biases in the training data or by prejudiced assumptions made during the algorithm-building phase.
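A simple way to make such aberrations visible is to compare a model’s positive-prediction rate across demographic groups in the evaluation data, a check often called demographic parity. The following minimal sketch uses invented predictions and group labels purely for illustration:

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Positive-prediction rate per group: P(prediction == 1 | group)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions for two demographic groups:
# group A is approved 3 times out of 4, group B only once out of 4.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A large gap does not by itself prove unfairness, but it flags exactly the kind of output aberration that a trained professional should investigate before deployment.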
AI models are not aware of how their data was collected, which is why ethical guidelines should be part of the education of AI professionals, and courses on lawfulness and ethics should be included not merely as peripheral add-ons in university AI training but as a cornerstone of the instruction process.
Participants in the HCAIM Focus Group Interviews point out that a core part of the solution is a structured approach to developing mechanisms and AI applications that automate compliance with ethical standards. Furthermore, proper ethical education would ensure a better understanding of models and their implications while they are being designed, taking into account probable risks, ethical concerns, and potential harm. The participants identified this ability as one of the core weaknesses of current university education in AI.
Transparency is a multidimensional notion applied across a wide range of fields, and it has experienced a revival in recent artificial intelligence (AI) debates. Transparency is listed as one of the seven key requirements for the realization of “trustworthy AI” in the ethics guidelines published by the EU Commission’s High-Level Expert Group on AI (AI HLEG) in April 2019, and it is also mentioned in the Commission’s white paper on AI published in February 2020. In fact, all participants in the HCAIM Focus Group Interviews highlighted transparency as the most common hurdle on the road towards human-centered artificial intelligence. Furthermore, there is an ongoing debate concerning fairness, accountability, and transparency in AI. One of the core reasons for this, as identified by the participants, is that AI is often seen merely as software that supports business decision-making, while its effects reach far beyond that. Thus, we increasingly need either a dedicated professional profile or well-trained professionals who attend to data privacy, transparency, and explainability, as well as the trustworthiness of AI, in order to address this shortage of understanding about the scope of AI solutions.
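For the explainability part of that professional profile, even a simple technique goes a long way: for a linear scoring model, each feature’s contribution to a decision can be reported alongside the decision itself. The sketch below is a minimal illustration; the credit-scoring weights, feature names, and values are invented for the example.

```python
def explain_linear_decision(weights, features, bias=0.0):
    """Decompose a linear model's score into per-feature contributions.

    Returns the total score and a contribution breakdown sorted by
    absolute impact, which can be surfaced to the person affected
    by the decision.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical credit-scoring weights and one applicant's features.
weights = {"income": 0.4, "debt_ratio": -0.9, "years_employed": 0.2}
applicant = {"income": 3.0, "debt_ratio": 2.0, "years_employed": 5.0}
score, ranked = explain_linear_decision(weights, applicant, bias=0.1)
# ranked[0] identifies the feature with the largest impact on this decision.
```

For non-linear models the same idea requires dedicated attribution methods, but the principle is identical: a decision that affects a person should come with a human-readable account of what drove it.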
Current educational offerings of AI Masters tend to focus either on the broad spectrum of AI technology or on the academic (fundamental) aspects of AI. A study conducted by Stanford HAI surveyed computer science departments and schools at top universities around the world to assess the state of AI education in higher education institutions. In part, the survey asked whether the computer science department or university offers the opportunity to learn about the ethical side of AI and CS. Among the 16 universities that completed the survey, 13 reported some type of relevant offering. In Europe, the situation is similar.
Leaving human-centered and ethical issues to an add-on role in an AI program, however, is not sufficient to prepare engineers for the challenges described in the previous chapter. A human-centered approach to the design of artificial intelligence solutions needs to be a highlighted topic from the onset of AI-related educational programs if university programs are to respond to the needs of the market and comply with strategic initiatives both EU-wide and worldwide.
About 85% of the SME representatives participating in the HCAIM Focus Groups consider that they have employees or colleagues who might benefit from courses or educational programs in human-centered artificial intelligence. Furthermore, SME representatives pointed out the mismatch between the supply of and demand for talent in the field of Artificial Intelligence, especially regarding its human-centered aspects. This poses a challenge for SMEs, particularly against the regulatory and compliance requirements that Europe is facing.
Representatives from SMEs pointed out that graduates are not familiar with trustworthiness-related standards and security compliance practices. Moreover, the interviewees report a prevalent lack of cybersecurity-related courses in artificial intelligence programs, as well as a general lack of knowledge regarding AI ethics guidelines, data protection regulations such as the GDPR, and standards such as those of ISO.