
Ethical Challenges of Artificial Intelligence

AI is widely misunderstood and still too rudimentary to be worth worrying about. But it’s not too soon to contemplate the ethical implications of intelligent machines and systems. An AI system is only as good as the data it receives, it can interpret that data only within the narrow confines of the supplied context, and it cannot distinguish causation from correlation.

AI has the potential to be as transformative to the world as electricity, by helping us understand the patterns of information around us. But it is not close to living up to the hype. The super-intelligent machines and runaway AI that we fear are far from reality; what we have today is a rudimentary technology that requires lots of training. What’s more, the phrase artificial intelligence might be a misnomer — because human intelligence and spirit amount to much more than what bits and bytes can encapsulate.

I encourage readers to go back to the ancient wisdoms of their faith to understand the role of the soul and the deeper self. This is what shapes our consciousness and makes us human, what we are always striving to evolve and perfect. Can this be uploaded to the cloud or duplicated with computer algorithms? I don’t think so.

What about the predictions that AI will enable machines to have human-like feelings and emotions? This, too, is hype. Love, hate and compassion aren’t things that can be codified. That is not to say that a machine interaction can’t seem human; we humans are gullible, after all. According to Amazon, more than a million people asked their Alexa-powered devices to marry them in 2017 alone. I doubt those marriages, should Alexa agree, would last very long!

Today’s AI systems do their best to replicate the functioning of the human brain’s neural networks, but their emulations are very limited. They use a technique called deep learning. After you tell a machine exactly what you want it to learn and provide it with clearly labelled examples, it analyses the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.
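
To make that concrete, here is a minimal sketch of such a training loop in Python with PyTorch. The data and the model are invented for illustration, not drawn from any real system; the point is only the shape of the process: show the machine labelled examples, adjust its stored patterns until they match the labels, then apply those patterns to new inputs.

```python
import torch
import torch.nn as nn

# Hypothetical toy data: 200 examples with 4 features each, labelled 0 or 1.
X = torch.randn(200, 4)
y = (X.sum(dim=1) > 0).long()   # the hidden "pattern" we want it to learn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Show the model the labelled examples over and over; each pass nudges its
# parameters so its answers drift toward the supplied labels.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# The fitted patterns can now be applied to new inputs -- but only inputs
# from the same narrow context the training examples came from.
with torch.no_grad():
    print(model(torch.randn(5, 4)).argmax(dim=1))
```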

Herein lies a problem, though — an AI system is only as good as the data it receives. It is able to interpret them only within the narrow confines of the supplied context. It doesn’t “understand” what it has analysed — so it is unable to apply its analysis to other scenarios. And it can’t distinguish causation from correlation.
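
A small illustration of that last point, using made-up numbers: two quantities can be strongly correlated because a hidden third factor drives both, and nothing in the data alone tells a pattern-matcher that neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(25.0, 5.0, 1000)   # hidden common cause
ice_cream_sales = 3.0 * temperature + rng.normal(0.0, 5.0, 1000)
sunburn_cases = 2.0 * temperature + rng.normal(0.0, 5.0, 1000)

# A pattern-matcher sees a strong correlation between the two series and has
# no way to tell that neither one causes the other.
print(np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1])  # about 0.85
```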

AI shines at tasks that involve matching patterns to obtain objective outcomes. Examples of what it does well include playing chess, driving a car down a street and identifying a cancer lesion in a mammogram. These systems can be incredibly helpful extensions of how humans work, and with more data they will keep improving. Although an AI machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists. And it won’t be able to empathise with a patient in the way that a doctor does.

This is where AI presents its greatest risk and what we really need to worry about: the use of AI in tasks that may have objective outcomes but incorporate what we would normally call judgement. Some such tasks exercise much influence over people’s lives. Granting a loan, admitting a student to a university, or deciding whether children should be separated from their birth parents due to suspicions of abuse all fall into this category. Such judgements are highly susceptible to human biases, but they are biases that only humans themselves have the ability to detect.

And AI throws up many ethical dilemmas around how we use technology. It is being used to create killing machines for the battlefield: drones that can recognise faces and attack people. China is using AI for mass surveillance, wielding its analytical capabilities to assign each citizen a social-credit score based on their behaviour. In America, AI is mostly being built by white and Asian engineers, so it amplifies their inbuilt biases and misreads African Americans. It can lead to outcomes that prefer males over females for jobs and give men higher loan amounts than women. One of the biggest problems we face with Facebook and YouTube is that they show you more and more of the same thing based on your past views, which creates filter bubbles and a hotbed of misinformation. That is all thanks to AI.

Rather than worrying about super-intelligence, we need to focus on the ethical questions of how we should use this technology. Should it be used to recognise the faces of students who are protesting against the Citizenship (Amendment) Act? Should India install cameras and systems like China’s? These are the types of questions the country needs to be asking.

Vivek Wadhwa is a distinguished fellow and professor, Carnegie Mellon University’s College of Engineering, Silicon Valley.

This article was republished with permission from the author.

https://economictimes.indiatimes.com/tech/ites/why-we-need-to-focus-on-the-ethical-challenges-of-artificial-intelligence/articleshow/73010169.cms

Sankara Taps AI in Blindness Campaign

Long lines in front of eye doctors’ clinics do not include children who have never complained that they can’t see.

They are blind, and they don’t know it. Their world has always been hazy; they have never seen it any other way. Thousands of others are silently going blind from diabetes and don’t even know it. The challenge of helping these people increases manifold when they live in remote areas, where no specialized treatment can reach them.

It is in cases like these, when a mere selfie can alert doctors to impending visual problems, that Dr. Kaushik Murali, an ophthalmologist and eye surgeon at Sankara Eye Hospital Bangalore, is hoping artificial intelligence can be the answer. If the problem is nipped in the bud, the vision of the child or of a diabetic individual can be saved.

Murali and his team have annotated 8,000 retinal scan pictures, including a range of people with vision problems resulting from diabetes. This will help identify the beginnings of a specific cause of blindness: diabetic retinopathy, a complication of diabetes (most commonly type 2) that damages the retina. They are training the algorithm to identify key markers of diabetic retinopathy, such as nerve tissue damage, swelling, and hemorrhaging. The disease creates lesions in the back of the retina that can lead to total blindness. “Today the algorithm has 98 percent capability of identifying any patient who has a ROP change,” said Murali.

Seventy million people in India have diabetes, and 18 percent of them already have diabetic retinopathy, according to the International Diabetes Federation. By 2045, India is projected to have 134 million cases of diabetes, making it the country with the largest number of diabetics.

Many diabetic patients assume that early signs of the disease are simply minor vision problems; some don’t even know they are diabetic. In these cases, loss of vision is unnecessary: blindness is often preventable if diabetic retinopathy is caught early. Medications, therapies, exercise, and a healthy diet are highly effective at preventing further damage if the disease is diagnosed early enough.

The system is trained by deep learning. The program’s diagnosis for each image is compared with that of the ophthalmological panel, and the parameters of the function are adjusted to reduce the error margin. This process is repeated across the image set until the program can make an accurate diagnosis based on the intensity of the pixels in a retinal image. The results are extremely encouraging: the algorithm showed levels of sensitivity and specificity similar to, in fact slightly better than, those of a panel of ophthalmologists.
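
As a rough sketch of how such a comparison is scored (the labels below are invented, not Sankara’s data): sensitivity is the share of true cases the algorithm catches, and specificity is the share of healthy eyes it correctly clears.

```python
# Invented toy labels: 1 means "retinopathy present", 0 means "absent".
panel_labels = [1, 1, 1, 0, 0, 0, 0, 1]   # the ophthalmologists' verdicts
model_labels = [1, 1, 0, 0, 0, 1, 0, 1]   # the algorithm's verdicts

pairs = list(zip(panel_labels, model_labels))
tp = sum(1 for p, m in pairs if p == 1 and m == 1)  # sick, flagged
fn = sum(1 for p, m in pairs if p == 1 and m == 0)  # sick, missed
tn = sum(1 for p, m in pairs if p == 0 and m == 0)  # healthy, cleared
fp = sum(1 for p, m in pairs if p == 0 and m == 1)  # healthy, flagged

sensitivity = tp / (tp + fn)  # share of true cases the algorithm catches
specificity = tn / (tn + fp)  # share of healthy eyes it correctly clears
print(sensitivity, specificity)  # 0.75 0.75 on this toy data
```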

A similar campaign is spearheading the prevention of blindness in children: a lazy eye can be quickly caught by the machine and preemptive action planned.

Ritu Marwah is an award-winning author, chef, debate coach, and mother of two boys. She lives in the Bay Area and has deep experience in Silicon Valley start-ups as well as large corporations as a senior executive.

Don’t Believe the Hype About AI

To borrow a punch line from Duke professor Dan Ariely, artificial intelligence is like teenage sex: “Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.” Even though AI systems can now learn a game and beat champions within hours, they remain hard to apply to real business problems.

M.I.T. Sloan Management Review and Boston Consulting Group surveyed 3,000 business executives and found that while 85 percent of them believed AI would provide their companies with a competitive advantage, only one in 20 had “extensively” incorporated it into their offerings or processes. The challenge is that implementing AI isn’t as easy as installing software. It requires expertise, vision, and information that isn’t easily accessible.

When you look at well known applications of AI like Google’s AlphaGo Zero, you get the impression it’s like magic: AI learned the world’s most difficult board game in just three days and beat champions. Meanwhile, Nvidia’s AI can generate photorealistic images of people who look like celebrities just by looking at pictures of real ones.

Nvidia’s system used a technology called generative adversarial networks, which pits two AI systems against each other so that each learns from the other’s mistakes; AlphaGo Zero improved in a similarly adversarial way, by playing millions of games against itself. The trick was that before these systems went to battle, they received a great deal of careful setup. And, more importantly, their problems and outcomes were well defined.
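
To give a sense of what pitting two systems against each other looks like in code, here is a minimal generative adversarial network, vastly simpler than Nvidia’s and aimed at a toy two-dimensional target rather than celebrity photos: a generator learns to produce fakes while a discriminator learns to spot them, each improving off the other’s mistakes.

```python
import torch
import torch.nn as nn

# Toy target distribution the generator must learn to imitate: 2-D points
# clustered around (2, 2). A stand-in for "pictures of real celebrities".
def real_samples(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator's turn: learn to label real data 1 and fakes 0.
    real = real_samples(64)
    fake = generator(torch.randn(64, 4)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: produce samples the discriminator will call real.
    fake = generator(torch.randn(64, 4))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 4)))  # samples should now cluster near (2, 2)
```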

Most business problems can’t be turned into a game, however; you have more than two players and no clear rules. The outcomes of business decisions are rarely a clear win or loss, and there are far too many variables. So it’s a lot more difficult for businesses to implement AI than it seems.

Today’s AI systems do their best to emulate the functioning of the human brain’s neural networks, but they do this in a very limited way.  They use a technique called deep learning, which adjusts the relationships of computer instructions designed to behave like neurons. To put it simply, you tell an AI exactly what you want it to learn and provide it with clearly labelled examples, and it analyzes the patterns in those data and stores them for future application. The accuracy of its patterns depends on data, so the more examples you give it, the more useful it becomes.
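
The adjusting is ordinary arithmetic. A toy version with a single neuron-like unit (invented numbers, no library, no real system) shows the whole idea: compare the output with the label, then nudge each parameter in the direction that shrinks the error.

```python
# Labelled examples following the hidden rule y = 2x + 1.
examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # the unit's adjustable "relationships"
lr = 0.05         # how far each nudge moves them

for epoch in range(500):
    for x, y in examples:
        error = (w * x + b) - y   # how wrong the current parameters are
        w -= lr * error * x       # nudge each parameter against its error
        b -= lr * error

print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0, the hidden rule
```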

Herein lies a problem: An AI is only as good as the data it receives. And it is able to interpret that data only within the narrow confines of the supplied context. It doesn’t “understand” what it has analyzed, so it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from correlation. AI is more like an Excel spreadsheet on steroids than a thinker.

The bigger difficulty in working with this form of AI is that what it has learned remains a mystery — a set of indefinable responses to data.  Once a neural network is trained, not even its designer knows exactly how it is doing what it does. As New York University professor Gary Marcus explains, deep learning systems have millions or even billions of parameters, identifiable to their developers only in terms of their geography within a complex neural network. They are a “black box,” researchers say.

Speaking about the new developments in AlphaGo, Google/DeepMind CEO Demis Hassabis reportedly said, “It doesn’t play like a human, and it doesn’t play like a program. It plays in a third, almost alien, way.”

Businesses can’t afford to have their systems making alien decisions. They face regulatory requirements and reputational concerns and must be able to understand, explain, and demonstrate the logic behind every decision they make.

For AI to be more valuable, it needs to be able to look at the big picture and draw on many more sources of information than the computer systems it is replacing. Amazon is one of the few companies that has understood this and implemented AI effectively to optimize practically every part of its operations, from inventory management and warehouse operations to running data centers.

In inventory management, for example, purchasing decisions are traditionally made by experienced individuals, called buyers, department by department. Their systems show them inventory levels by store, and they use their experience and instincts to place orders. Amazon’s AI consolidates data from all departments to see the larger trends — and relate them to socioeconomic data, customer-service inquiries, satellite images of competitors’ parking lots, predictions from The Weather Company, and other factors. Other retailers are doing some of these things, but none as effectively as Amazon.
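
Here is a minimal sketch of that consolidation step, with hypothetical tables standing in for Amazon’s (whose actual data and pipeline are not public): signals kept in separate systems become usable the moment they are joined into one view a model can learn from.

```python
import pandas as pd

# Three hypothetical "islands of data", each living in its own system.
sales = pd.DataFrame({
    "week": [1, 2, 3],
    "store": ["A", "A", "A"],
    "units_sold": [120, 95, 140],
})
weather = pd.DataFrame({
    "week": [1, 2, 3],
    "store": ["A", "A", "A"],
    "rain_days": [1, 4, 0],
})
service = pd.DataFrame({
    "week": [1, 2, 3],
    "store": ["A", "A", "A"],
    "stockout_complaints": [0, 3, 1],
})

# Joining them yields one row per (week, store) carrying every signal --
# a single table a forecasting model can see the larger trends in.
features = (sales.merge(weather, on=["week", "store"])
                 .merge(service, on=["week", "store"]))
print(features)
```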

This type of approach is also the basis of Echo and Alexa, Amazon’s voice-based home appliances. According to Wired, by bringing all of its development teams together and making machine learning a corporate focus, Amazon is solving a problem many companies have: disconnected islands of data. Corporate data are usually stored in disjointed datasets in different computer systems. Even when a company has all the data needed for machine learning, they usually aren’t labelled, up-to-date, or organized in a usable manner. The challenge is to create a grand vision for how to put these datasets together and use them in new ways, as Amazon has done.

AI is advancing rapidly and will surely make it easier to clean up and integrate data. But business leaders will still need to understand what it really does and create a vision for its use. That is when they will see the big benefits.

The article has been posted here with the express permission of the author.