Looking Beyond Big Tech Corporations to See the Real Value of AI

An interview with Tarry Singh and Anwita Maiti

Research and forecasts on emerging technology have long predicted the advent and growing relevance of artificial intelligence (AI). Organizations are currently experimenting with AI at various levels of maturity. Those reaping the most benefit are already industrializing its capabilities; they are not spending more, they are investing more wisely.

AI-powered systems can now not only forecast client behaviour and purchasing patterns but also detect cyber threats, and most detection and threat intelligence vendors are incorporating AI into their solutions. At the same time, deploying AI extensively carries real risks.

In this exciting interview for CTO.inc, Dr. Anwita Maiti, visiting AI ethics researcher at AI company deepkapha.ai, and Tarry Singh, CEO, founder, and AI neuroscience researcher, both members of the HCAIM Consortium, explore anxiety about AI adoption, how CIOs and CTOs should manage AI risks, and AI adoption in the healthcare market.

The full interview is available here. An excerpt appears below.


Every new and emerging technology comes with associated fears, and AI is no exception. Do you think those fears prevent CIOs from adopting AI at scale? How long can they afford to put off implementing this technology?

Singh: That is not correct. I have been working with the board of the world's largest steel manufacturing company, and their thinking is mature and progressive. AI and ML come in handy for optimizing the production environment and for predictive maintenance. Our obsession should shift from big tech to mature companies using AI in real-world scenarios. We are engaged in some fascinating projects in conventional industries like oil and gas, grappling with deep geological subsurface problems. All of these are examples of mainstream AI usage at scale.
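Singh's mention of predictive maintenance lends itself to a brief illustration. The sketch below is hypothetical and not drawn from the projects he describes: it flags anomalous sensor readings with scikit-learn's IsolationForest, and the simulated readings, feature meanings, and contamination rate are all assumptions.

```python
# Hypothetical sketch of predictive maintenance via anomaly detection.
# The sensor data is simulated; column meanings and the contamination
# rate are assumptions, not values from a real plant.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated (vibration, temperature) readings from a production line,
# with a small batch of faulty readings mixed in.
normal = rng.normal(loc=[0.5, 70.0], scale=[0.05, 2.0], size=(1000, 2))
faulty = rng.normal(loc=[0.9, 85.0], scale=[0.10, 4.0], size=(20, 2))
readings = np.vstack([normal, faulty])

# Fit an unsupervised detector; contamination is the expected fraction
# of anomalous readings and would be tuned per plant in practice.
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(readings)  # -1 = anomaly, 1 = normal

n_flagged = int((labels == -1).sum())
print(f"Flagged {n_flagged} readings for maintenance review")
```

An unsupervised detector like this is a common starting point when labelled failure data is scarce, which is typical in heavy industry.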

[…]

AI risk management is a major concern when we deal with issues like ethics, bias and trust. Is the need for it imminent? Do CIOs and CTOs need a proper AI risk management program?

Dr. Anwita Maiti: Sadly, people are not concerned about AI risk management yet. In most organizations, AI ethics is sidelined. Google fired two of its employees when they spoke out about its neglect of ethics. We are already aware of Meta's biases. EY was recently fined $100 million after the SEC found its staffers cheating on an ethics exam.

An AI risk management model is certainly worth considering. Every organization should set up rules and regulations for AI ethics. Transparency, accountability and fairness must be built in at the pre-design stage of AI systems and carried through the design and deployment stages as well.

It is tough, but CIOs must make a deliberate effort to train the people who develop AI algorithms. Models should be tested on data from a diverse set of people to surface biases and stereotypes, as the sketch below illustrates. Besides computer and data scientists and engineers, social scientists should be included in developing AI and ML models to help address issues of ethics and bias.
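As a minimal sketch of the subgroup testing Maiti describes, one common check is to compare a model's accuracy across demographic groups; the data, group labels, and predictions below are placeholders, not taken from any real system.

```python
# Minimal sketch of a per-group bias check. All data here is
# placeholder: group labels and predictions are invented for
# illustration only.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "y_true": [1, 0, 1, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 1, 0, 1, 0, 1, 1, 1],
})

# Per-group accuracy; a large gap between groups is a red flag that
# the model performs worse for some population.
per_group = (
    df.assign(correct=df["y_true"] == df["y_pred"])
      .groupby("group")["correct"]
      .mean()
)
print(per_group)
print(f"Accuracy gap between groups: {per_group.max() - per_group.min():.2f}")
```

Accuracy is only one lens; in practice teams also compare error types (false positives versus false negatives) across groups, since those gaps often matter more than overall accuracy.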

Before a new AI tool is introduced to society, there needs to be some pre-launch education. Launching AI tools without forewarning only breeds speculation, confusion and disharmony.

Another crucial point is that both technical and non-technical people involved in the development of AI should work in tandem from the pre-design stage to mitigate risks that might arise later.
