Can We Build AI That Is Empathic? A Social Science Perspective
Article by Dr Anwita Maiti, Ethical AI Researcher, deepkapha AI Lab, The Netherlands
From the point of view of a social scientist, let us look at how AI is used and consumed in people's everyday lives, and then consider how AI could coexist with humans in harmony.
If we divide the world into developing and developed nations, we see that AI has reached the latter to a far greater extent. For example, people in Japan use AI even as companions, whereas not everybody in India talks to Siri or has a Google Assistant. But the scarcity of AI in developing countries does not imply that people there cannot live complete lives or that something is amiss. On a similar note, the ubiquity of AI in developed countries does not imply that people have become overly dependent on technology and have lost their basic faculties to function in the world.
The question then arises as to how we can carve a path towards feasible AI, where AI does not take the upper hand over humans but rather assists them. The current world is fuelled by technology, and people's lives are interwoven with it as they adapt to technological advancements, consciously and unconsciously. Often, the change that AI brings can seem radical overnight as machines and robots gradually replace human work.
Daily Use of AI
Conversational AI. To most people, social media appears friendly because it connects an individual to scores of people, known and unknown. But behind this friendly appearance lies an infringement of privacy that not all users are aware of; even those who are aware often keep using it anyway. A famous example that comes to mind is Facebook.
With users maintaining social media handles and dating or matrimonial site profiles, they become psychologically and emotionally attached to technology at the cost of their privacy being exposed. Herein lies the trick: what is seemingly empathic is not really so, and only a small percentage of users are aware of it. We need to work on such dubious AI.
Self-driving cars, AI in the judiciary. After an accident, the question that surfaces is: since the self-driving car itself cannot be blamed, should the owner, who was not driving in the first place, be blamed? In court matters, it is often observed that AI singles out only a particular section of people, who are punished, re-punished and over-punished. We definitely have to rethink these aspects.
AI in Sports. AI systems have been winning chess tournaments, and a much-discussed topic is their participation in the Tokyo 2020 Olympics. Is AI empathic in this situation? Well, it is not harming humans in any manner; rather, it is opening new dimensions and perspectives that could be researched further.
AI in Medicine. Medicine is reaping huge benefits from AI. Conclusions that doctors and researchers might have taken a long time to reach can be pointed out by AI much sooner. But not every patient receives the same treatment and benefits, and many are barred from high-end, technology-aided medical support. AI can pave its way through medicine only when it reaches every person irrespective of class distinctions and socio-economic background.
How can we make AI more empathic?
Conservative and Radical Approaches to AI: Bridging the Gap
Conservative AI upholds the status quo, while Radical AI dismantles it; both take an extreme route. A solution could be to adopt moderation, building our algorithms and datasets with the explicit aim of being as unbiased as possible. The biases to keep in mind include human bias, machine bias, historical bias, societal bias, sampling bias and gender bias; a minimal check for some of these is sketched below.
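As one concrete illustration, here is a minimal sketch of screening a training table for sampling and gender bias before any model is built. The dataset, the `gender` column and the binary `outcome` label are hypothetical placeholders, not anything described in this article:

```python
import pandas as pd

# Hypothetical training data; column names are illustrative assumptions.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "M", "F"],
    "outcome": [1, 0, 1, 0, 1, 1, 0, 0],
})

# Sampling bias: is any group heavily under-represented in the data?
group_share = df["gender"].value_counts(normalize=True)
print("Representation by gender:\n", group_share)

# Historical/societal bias: does the positive-label rate differ sharply by group?
positive_rate = df.groupby("gender")["outcome"].mean()
print("Positive-outcome rate by gender:\n", positive_rate)

# A simple guardrail: flag the dataset for human review if the gap is large.
if positive_rate.max() - positive_rate.min() > 0.2:
    print("Warning: outcome rates differ sharply across groups; review for bias.")
```

Checks like these do not remove bias by themselves, but they make the moderate, bias-aware approach described above an explicit step in the pipeline rather than a preconceived hope.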
Near-Term and Long-Term Issues
Two categories come into play when dealing with AI problems: near-term issues and long-term issues. Near-term issues are those we already face, such as concerns about data privacy, algorithmic bias, and self-driving cars. Long-term issues include transformative AI in the context of international security, race dynamics and power relations.
They can be differentiated based on capabilities, impacts, certainty and extremity:
- Capabilities define whether to focus on the impacts and challenges of current AI systems or on those relating to much more advanced AI systems.
- Impacts define whether to focus mostly on the immediate effects of AI on society or to consider possible impacts and issues much further into the future.
- Certainty defines whether to focus on impacts and issues that are relatively certain and well understood or on those that are more uncertain and speculative.
- Extremity defines whether to focus on impacts at all scales or to prioritize those that may be particularly large in scale.
Testing data across a large spectrum
To confirm whether any undesirable biases are present, data and algorithms should be tested across an array of people from different backgrounds. Only then can we ensure that AI is as objective as possible; a minimal sketch of such a subgroup evaluation follows.
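The sketch below assumes a hypothetical trained classifier `model` with a scikit-learn-style `predict` method, a labelled evaluation set `X`, `y`, and an array `groups` recording each person's background; all of these names are illustrative assumptions:

```python
import numpy as np

def accuracy_by_group(model, X, y, groups):
    """Report the model's accuracy separately for each demographic group."""
    preds = model.predict(X)
    return {
        group: float(np.mean(preds[groups == group] == y[groups == group]))
        for group in np.unique(groups)
    }

# Example usage with hypothetical evaluation data:
# scores = accuracy_by_group(model, X_test, y_test, group_labels)
# A wide gap between the best- and worst-served groups signals an undesirable bias.
```

The same pattern extends to other metrics, such as false-positive rate or recall, so that no single group quietly bears the cost of the model's mistakes.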
Language Models and the Question of Carbon Footprint
Larger language models usually consume more electricity to run and leave a greater carbon footprint. The question then comes to the fore: how can we make environmentally and ecologically sustainable models? Earlier models such as BERT and GPT-3 scaled up along these dimensions, and newer models keep emerging; one feasible reference point is MILA's work on estimating the carbon footprint of machine learning, a rough version of which is sketched below.
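As a back-of-the-envelope illustration (a rough sketch, not the MILA estimator itself), a training run's footprint is commonly approximated as the energy drawn, multiplied by data-centre overhead and the local grid's carbon intensity; every number below is a placeholder:

```python
# Rough carbon-footprint estimate for one training run; all figures are illustrative.
gpu_power_kw = 0.3      # average draw of one GPU, in kilowatts (assumed)
num_gpus = 8            # GPUs used for the run (assumed)
training_hours = 120    # wall-clock training time (assumed)
pue = 1.5               # data-centre power usage effectiveness, i.e. cooling/overhead factor
grid_intensity = 0.4    # kg of CO2 emitted per kWh on the local grid (varies widely by country)

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
co2_kg = energy_kwh * grid_intensity
print(f"Estimated energy: {energy_kwh:.0f} kWh, estimated emissions: {co2_kg:.0f} kg of CO2")
```

Even a crude estimate like this makes the trade-off visible when choosing between a larger model and a more sustainable one.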
AI and Capitalism
Capitalist goals often result in reproducing the same biased algorithms and datasets for faster results, leading to profit-making at the expense of a bias-ridden system. A fine line has to be drawn around how AI could serve the world in a much better way, where no human is hurt on account of their race, ethnicity, gender or religious background.
AI and Price
For AI to pave its way into developing countries, it has to become cheaper. It is still used predominantly by the economically well-off sections of society. To be a friend in need, AI technology has to drop in price so that common people can afford to buy it.
In Conclusion
Cost-effective AI that common people throughout the world can afford would lead the road toward empathic AI. AI cannot be empathic in a capitalist world where it is concentrated in the hands of a few. Technology makes its impact felt on every individual as it transforms their lives on an everyday basis, so it is interesting to study how AI is reshaping human lives. While one group of people fears that AI will create job losses as it gradually replaces humans, others are of the opinion that AI will instead introduce new jobs, though these would lean towards white-collar roles requiring high skills. Some economists also suggest that AI be taxed like humans. Out of the many questions that arise, the one goal that might save us from a dystopian future is to ensure that AI is used to aid humans. The responsibility remains with us to ensure that AI is sustainable, so that it does not jeopardize human activities but the two coexist side by side.
About the Author
Anwita Maiti is an AI Ethics Researcher at deepkapha AI Research Lab. She comes from a background in the Humanities and Social Sciences and holds a PhD in Cultural Studies.