
AI is widely misunderstood and still too rudimentary for us to worry about super-intelligent machines. But it's not too soon to contemplate the ethical implications of intelligent machines and systems.

AI has the potential to be as transformative to the world as electricity, by helping us understand the patterns of information around us. But it is not close to living up to the hype. The super-intelligent machines and runaway AI that we fear are far from reality; what we have today is a rudimentary technology that requires lots of training. What’s more, the phrase artificial intelligence might be a misnomer — because human intelligence and spirit amount to much more than what bits and bytes can encapsulate.

I encourage readers to go back to the ancient wisdoms of their faith to understand the role of the soul and the deeper self. This is what shapes our consciousness and makes us human, what we are always striving to evolve and perfect. Can this be uploaded to the cloud or duplicated with computer algorithms? I don’t think so.

What about the predictions that AI will enable machines to have human-like feelings and emotions? This, too, is hype. Love, hate and compassion aren't things that can be codified. That's not to say that a machine interaction can't seem human — we humans are gullible, after all. According to Amazon, more than 1 million people had asked their Alexa-powered devices to marry them in 2017 alone. I doubt those marriages, should Alexa agree, would last very long!

Today’s AI systems do their best to replicate the functioning of the human brain’s neural networks, but their emulations are very limited. They use a technique called deep learning. After you tell a machine exactly what you want it to learn and provide it with clearly labelled examples, it analyses the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data. So the more examples you give it, the more useful it becomes.
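To make the idea concrete, here is a minimal sketch of learning from labelled examples. It uses a 1-nearest-neighbour rule rather than a deep neural network, and the data and labels are invented for illustration, but it exposes the same dependency the paragraph describes: the model knows only the patterns in the labelled data it was given.

```python
# A minimal sketch of supervised learning from labelled examples.
# This is a 1-nearest-neighbour classifier, far simpler than deep
# learning, but the dependency is the same: predictions come only
# from patterns in the labelled data supplied by humans.

def train(examples):
    """'Training' here is just storing the labelled examples."""
    return list(examples)

def predict(model, point):
    """Label a new point with the label of the closest stored example."""
    def distance(example):
        features, _label = example
        return sum((a - b) ** 2 for a, b in zip(features, point))
    _features, label = min(model, key=distance)
    return label

# Hypothetical labelled examples: (features, label). Imagine image
# statistics labelled "cat" or "dog" by a human annotator.
data = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
        ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

model = train(data)
print(predict(model, (1.1, 1.0)))  # close to the "cat" examples
print(predict(model, (5.1, 4.9)))  # close to the "dog" examples
# A point far outside the supplied context is still forced into one
# of the known labels; the model cannot say "I don't know".
print(predict(model, (100.0, -3.0)))
```

Adding more labelled examples makes the stored pattern denser and the predictions more reliable, which is the sense in which "more examples" makes the system more useful.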

Herein lies a problem, though — an AI system is only as good as the data it receives. It is able to interpret them only within the narrow confines of the supplied context. It doesn’t “understand” what it has analysed — so it is unable to apply its analysis to other scenarios. And it can’t distinguish causation from correlation.
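The correlation-versus-causation point can be shown with a toy "rule learner"; all the names and data below are hypothetical. Trained on loan applicants whose zip code happens to line up perfectly with repayment, the learner keys on the zip code rather than on the causal factor, income.

```python
# Toy illustration of correlation vs. causation (invented data).
# The "learner" picks whichever single feature best separates the
# labels in the training set, a crude stand-in for what a
# statistical pattern-matcher does.

def fit_best_feature(rows):
    """Return (feature_index, value->label map) with fewest conflicts."""
    n_features = len(rows[0][0])
    best = None
    for i in range(n_features):
        mapping, errors = {}, 0
        for features, label in rows:
            v = features[i]
            if v not in mapping:
                mapping[v] = label
            elif mapping[v] != label:
                errors += 1
        if best is None or errors < best[1]:
            best = (i, errors, mapping)
    index, _errors, mapping = best
    return index, mapping

# (features, label) = ((income_band, zip_code), outcome).
# In this sample, every applicant in zip 1 repaid: a correlation,
# not a cause. Income only partially separates the labels.
training = [((3, 1), "repaid"), ((2, 1), "repaid"),
            ((1, 0), "defaulted"), ((2, 0), "defaulted")]

index, mapping = fit_best_feature(training)
print("model keyed on feature", index)   # picks zip_code (index 1)
# A high-income applicant from zip 0 is judged by the correlated
# feature, not the causal one:
print(mapping[(5, 0)[index]])            # predicts "defaulted"
```

The learner has no way to know that income, not geography, drives repayment; it simply latches onto whichever pattern fits the supplied data best.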

AI shines in performing tasks that match patterns in order to obtain objective outcomes. Examples of what it does well include playing chess, driving a car on a street and identifying a cancer lesion in a mammogram. These systems can be incredibly helpful extensions of how humans work, and with more data, the systems will keep improving. Although an AI machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists. And it won’t be able to empathise with a patient in the way that a doctor does.

This is where AI presents its greatest risk and what we really need to worry about — use of AI in tasks that may have objective outcomes but incorporate what we would normally call judgement. Some such tasks exercise much influence over people’s lives. Granting a loan, admitting a student to a university, or deciding whether children should be separated from their birth parents due to suspicions of abuse falls into this category. Such judgements are highly susceptible to human biases — but they are biases that only humans themselves have the ability to detect.

And AI throws up many ethical dilemmas around how we use the technology. It is being used to create killing machines for the battlefield: drones that can recognise faces and attack people. China is using AI for mass surveillance, wielding its analytical capabilities to assign each citizen a social-credit score based on their behaviour. In America, AI is mostly being built by white and Asian engineers, so it amplifies their inbuilt biases and misreads African Americans. It can lead to outcomes that prefer males over females for jobs and grant men higher loan amounts than women. One of the biggest problems we face with Facebook and YouTube is that you are shown more and more of the same thing based on your past views, which creates filter bubbles and a hotbed of misinformation. That’s all thanks to AI.
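The filter-bubble mechanism can be sketched in a few lines; the titles, topics and scoring rule below are invented. A naive recommender that ranks items purely by overlap with past views will keep surfacing whatever the user already watches most.

```python
# Sketch of a naive "more of the same" recommender (invented data).
from collections import Counter

def recommend(history, catalog, k=3):
    """Rank catalog items by how often the user has already
    watched that item's topic, the engagement-maximising rule."""
    seen_topics = Counter(topic for _title, topic in history)
    def score(item):
        _title, topic = item
        return seen_topics[topic]
    return sorted(catalog, key=score, reverse=True)[:k]

history = [("video A", "politics"), ("video B", "politics"),
           ("video C", "cooking")]
catalog = [("video D", "politics"), ("video E", "science"),
           ("video F", "politics"), ("video G", "cooking")]

for title, topic in recommend(history, catalog):
    print(title, topic)
# Politics dominates the slate; "science" never surfaces. Each
# click feeds back into history, narrowing the next slate further.
```

Run repeatedly with each recommendation appended to `history`, the loop converges on a single topic, which is the filter bubble in miniature.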

Rather than worrying about super-intelligence, we need to focus on the ethical issues about how we should be using this technology. Should it be used to recognise the faces of students who are protesting against the Citizenship (Amendment) Act? Should India install cameras and systems like China has? These are the types of questions the country needs to be asking.

Vivek Wadhwa is a distinguished fellow and professor at Carnegie Mellon University’s College of Engineering, Silicon Valley.

This article was republished with permission from the author.

Vivek Wadhwa is the coauthor of From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation, a new book on how companies can thrive in this era of rapid change.