
Giving Back, Indian Education, and Artificial Intelligence

Of late, I have been reflecting on the meaning of life and trying to come to grips with the loss of my soulmate, Tavinder.  Over Christmas, I went on a four-day silence and meditation retreat in Southern India and then visited the ancient holy city of Varanasi.  Silence and meditation are amazing — and cleansing — and I recommend that you try it sometime.  There are retreats all over the world, offered by modern-day gurus who have repackaged ancient wisdom, and the messages and techniques are more or less the same.

I couldn’t, however, find the answers to my deepest questions, even after spending hours in the company of the Dalai Lama, visiting the holiest of religious sites, and reading several sacred texts.  All I could conclude was that there is more to life than we understand and that the best way to live it is by helping others and giving back to the world.  Life is unpredictable, and everything changes before you know it.  Money may be necessary for survival, but too much of it becomes a burden and leads to greed and unhappiness.  What you will be remembered for is not what you accumulated while you were alive, but what you gave to others.

This is why I made the decision to donate to India the most valuable intellectual asset I have, something I spent a decade creating: the curriculum that I teach to students at Carnegie Mellon University and the workshops that I charge leading companies six-figure sums for.  It is the most advanced course on exponential technologies, industry disruptions, innovation methods, and technology ethics in the world.  At CMU’s engineering school, which is the best in the U.S., the class is amongst the highest ranked.  And what I am proudest of is that my son Tarun, who teaches with me, is rated higher than most of the entire university’s tenured faculty — and me!  We have long waiting lists of students wanting to be admitted and get frequent emails from former students thanking us for changing their lives by teaching them to think bigger and focus on the opportunities to better humankind.

When I met Indian Prime Minister Narendra Modi last October and presented him with a grand plan for curing cancer and building a $200-billion medical industry, he was very supportive and asked his Principal Scientific Adviser, K. Vijay Raghavan, to make it happen.  What he wanted from me were more ideas on boosting the Indian economy to meet his target of an annual GDP of $5 trillion.  I shared some possibilities with him, but began to realize that it will take more than technology.  The key is for India to improve its education system and unleash the ability of its entrepreneurs to solve the grand challenges of humanity.  This is exactly what I teach: how to build trillion-dollar industries.  And it is what I am offering India, at Modi’s request.

While I was in Varanasi, I was delighted to receive a message over Twitter from India’s greatest mover and shaker, Amitabh Kant, CEO of the powerful think tank and government planning commission NITI Aayog.  He asked me to meet him and address a who’s who of Indian policy, which I did.  I was amazed at how open-minded and grounded Amitabh and his team were.  I have advised several heads of state and government innovation initiatives, including in the U.S., Chile, Japan, Malaysia, Russia, Hong Kong, and Mexico, and none is even close to matching this group’s vision and impact.

I offered Amitabh the curriculum, and he was very excited about accepting it.  But I felt like the dog chasing the bus: I had caught it, but now what did I do with it?  How do you convert a graduate engineering class and executive education program into something that can be taught to hundreds of thousands of students every year in a country in which the basic education system is weak?

This is going to be my next challenge.  I am not going to be teaching at CMU beyond this semester.  I plan to work instead with a who’s who of Indian education, including the chairman of its engineering education regulatory body, Anil Sahasrabudhe, to create a curriculum that helps India leap ahead of the rest of the world.  Instead of Indian students’ having to travel abroad for advanced education, my goal is to create something that students from all over the world, including the U.S. and Europe, flock to India for.  India’s engineering education may be stuck in the past, but it is not that far behind the West: all universities have dated curricula and ancient teaching methods.  And none is adequately teaching the convergence of technologies and their disruptive capabilities.

Is my goal possible?  Frankly, I don’t know, because everything is hard in India, and there are always unnecessary obstacles.  But I will give it my best.

I wrote my first article after many months, at the prodding of Malini Goyal of The Economic Times.  She kept asking me to comment on the risks of A.I. and superintelligence.  I told her that the fears were complete nonsense and urged her to think more deeply and look at her own spiritual values.  The premise that you can upload or recreate human consciousness assumes that there is no such thing as a soul.  Yes, A.I. may seem intelligent, but it is in no way “intelligent” as humans are, and it will never have human values or emotions.  And yes, my views are evolving as I think more deeply about life, the spirit, and humanity.

This was published with permission from the author.

Sankara Taps AI in Blindness Campaign

The long lines in front of eye doctors’ clinics do not include the children who have never complained that they can’t see.

They are blind, and they don’t know it. Their world has always been hazy. They have never seen it any other way. Thousands of others are silently turning blind with diabetes, and don’t even know it. The challenge to help these people increases manifold when they are in remote areas, where no specialized treatment can reach them.

It is in cases like these, when a mere selfie can alert doctors to impending visual problems, that Dr. Kaushik Murali, an ophthalmologist and eye surgeon at Sankara Eye Hospital, Bangalore, hopes artificial intelligence can be the answer. If the problem is nipped in the bud, the vision of the child or the diabetic individual can be saved.

Murali and his team have annotated 8,000 retinal scan pictures, including scans from a range of people with vision problems resulting from diabetes. These will help identify the beginnings of this specific cause of blindness, a disease called diabetic retinopathy, a common complication of type-two diabetes. They are training the algorithm to identify key markers of diabetic retinopathy, such as nerve tissue damage, swelling, and hemorrhaging. The disease creates lesions in the back of the retina that can lead to total blindness. “Today the algorithm has 98 percent capability of identifying any patient who has a ROP change,” said Murali.

Seventy million people in India have diabetes, and 18 percent of them already have diabetic retinopathy, according to the International Diabetes Federation. By 2045, India is projected to have 134 million cases of diabetes, making it the country with the largest number of diabetics.

Many diabetic patients assume that early signs of the disease are simply minor vision problems. Some don’t even know they are diabetic. In these cases, loss of vision is unnecessary: blindness often is preventable if diabetic retinopathy is caught early. Medications, therapies, exercise, and a healthy diet are highly effective in preventing further damage if the disease is diagnosed early enough.

The system is trained by deep learning. The program’s diagnosis for each image is compared with that of the ophthalmological panel, and the parameters of the function are adjusted to reduce the error margin for each image. This process is repeated image by image until the program can make an accurate diagnosis based on the intensity of pixels in the retinal image. The results are extremely encouraging: the algorithm showed levels of sensitivity and specificity comparable to, in fact slightly better than, those of a panel of ophthalmologists.
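The training loop described above (compare the program’s diagnosis with the panel’s, adjust the parameters to shrink the error, and repeat) can be sketched in a few lines of Python. This is a toy single-layer stand-in trained on synthetic pixel data, not Sankara’s actual deep-learning system; every number and name below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for labelled retinal images: 200 "images" of 64 pixel
# intensities each. Label 1 marks the hypothetical retinopathy cases,
# which here simply have brighter pixels on average.
X = rng.normal(0.0, 1.0, (200, 64))
y = (X.mean(axis=1) > 0).astype(float)

w = np.zeros(64)  # the model's adjustable parameters
b = 0.0

for _ in range(500):                    # repeated passes over the images
    p = 1 / (1 + np.exp(-(X @ w + b)))  # program's diagnosis per image
    err = p - y                         # gap vs. the panel's labels
    w -= 0.1 * X.T @ err / len(y)       # adjust parameters to shrink it
    b -= 0.1 * err.mean()

accuracy = ((p > 0.5) == y).mean()
print(f"agreement with panel: {accuracy:.0%}")
```

A real retinopathy model would use a deep convolutional network and thousands of expert-labelled scans, but the compare-adjust-repeat cycle is the same.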

A similar campaign is spearheading the prevention of blindness in children: a lazy eye can be caught quickly by the machine and preemptive action planned.

Ritu Marwah is an award-winning author, chef, debate coach, and mother of two boys. She lives in the Bay Area and has deep experience in Silicon Valley start-ups as well as in large corporations, where she has served as a senior executive.

Artificial Intelligence: Beyond the hype

To judge by the news headlines, it would be easy to believe that artificial intelligence (AI) is about to take over the world. Kai-Fu Lee, a Chinese venture capitalist, says that AI will soon create tens of trillions of dollars of wealth and claims that China and the U.S. are the two AI superpowers.

There is no doubt that AI has incredible potential. But the technology is still in its infancy; there are no AI superpowers. The race to implement AI has hardly begun, particularly in business. As well, the most advanced AI tools are open source, which means that everyone has access to them.

Tech companies are generating hype with cool demonstrations of AI, such as Google’s AlphaGo Zero, which learned one of the world’s most difficult board games in three days and could easily defeat its top-ranked players. Several companies are claiming breakthroughs with self-driving vehicles. But don’t be fooled: The games are just special cases, and the self-driving cars are still on their training wheels.

AlphaGo, the predecessor of AlphaGo Zero, developed its intelligence through self-play, a technique that pits two copies of an AI system against each other so that each learns from the other. The trick was that before the copies battled each other, they received a lot of coaching. And, more importantly, their problems and outcomes were well defined.

Unlike board games and arcade games, business systems don’t have defined outcomes and rules. They work with very limited datasets, often disjointed and messy. The computers also don’t do critical business analysis; it’s the job of humans to comprehend information that the systems gather and to decide what to do with it. Humans can deal with uncertainty and doubt; AI cannot. Google’s Waymo self-driving cars have collectively driven over 9 million miles, yet are nowhere near ready for release. Tesla’s Autopilot, after gathering 1.5 billion miles’ worth of data, won’t even stop at traffic lights.

Today’s AI systems do their best to reproduce the functioning of the human brain’s neural networks, but their emulations are very limited. They use a technique called deep learning: After you tell an AI exactly what you want it to learn and provide it with clearly labeled examples, it analyzes the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.

Herein lies a problem, though: An AI is only as good as the data it receives, and is able to interpret them only within the narrow confines of the supplied context. It doesn’t “understand” what it has analyzed, so it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from correlation.
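The correlation-versus-causation point is easiest to see in a tiny simulation: two series driven by a hidden common factor correlate strongly even though neither causes the other, and a pattern-matching system has no way to tell the difference. The scenario and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# A hidden common driver: citywide temperature over 1,000 days.
temperature = rng.normal(25, 5, 1000)

# Two series that both respond to temperature but have no causal link
# to each other.
ice_cream_sales = 2.0 * temperature + rng.normal(0, 3, 1000)
electricity_use = 1.5 * temperature + rng.normal(0, 3, 1000)

r = np.corrcoef(ice_cream_sales, electricity_use)[0, 1]
print(f"correlation: {r:.2f}")  # strong, yet neither series causes the other
```

A model trained on these data would happily use ice-cream sales to predict electricity use; only a human analyst would know the prediction breaks the moment the hidden driver changes.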

The larger issue with this form of AI is that what it has learned remains a mystery: a set of indefinable responses to data. Once a neural network has been trained, not even its designer knows exactly how it is doing what it does. They call this the black box of AI.

Businesses can’t afford to have their systems making unexplained decisions, as they have regulatory requirements and reputational concerns and must be able to understand, explain, and prove the logic behind every decision that they make.

Then there is the issue of reliability. Airlines are installing AI-based facial-recognition systems, and China is basing its national surveillance programs on the technology. AI is being used for marketing and credit analysis and to control cars, drones, and robots. It is being trained to perform medical data analysis and to assist or replace human doctors. The problem is that, in all such uses, AI can be fooled.

Google published a paper last December that showed that it could trick AI systems into recognizing a banana as a toaster. Researchers at the Indian Institute of Science have just demonstrated that they could confuse almost any AI system without even using, as Google did, knowledge of what the system has used as a basis for learning. With AI, security and privacy are an afterthought, just as they were early in the development of computers and the Internet.
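The sort of fooling these papers describe can be demonstrated even on a toy model: if you know (or can estimate) a classifier’s weights, a small, uniform nudge to every pixel in the right direction pushes an input across the decision boundary. The sketch below uses a simple linear model on synthetic data; it illustrates the idea and is not the method from Google’s paper or the Indian Institute of Science’s work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Train a toy linear classifier (a stand-in for an image model) with
# plain gradient descent on synthetic 32-"pixel" inputs.
X = rng.normal(0, 1, (300, 32))
true_w = np.linspace(-1, 1, 32)
y = (X @ true_w > 0).astype(float)

w, b = np.zeros(32), 0.0
for _ in range(800):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

# Attack one input: nudge every pixel just enough, in the direction that
# most reduces the model's score, to push it across the decision boundary.
x = X[0]
score = x @ w + b
eps = 1.1 * abs(score) / np.abs(w).sum()  # per-pixel perturbation size
x_adv = x - np.sign(w) * np.sign(score) * eps
flipped = ((x_adv @ w + b) > 0) != (score > 0)
print(f"per-pixel nudge of {eps:.3f} flipped the prediction: {flipped}")
```

The perturbation is tiny per pixel, but because it is aligned with the model’s weights its effects add up, which is exactly why a sticker-sized patch can turn a banana into a toaster for an image classifier.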

Leading AI companies have handed over the keys to their kingdoms by making their tools open source. Software used to be considered a trade secret, but developers realized that having others look at and build on their code could lead to great improvements in it. Microsoft, Google, and Facebook have released their AI code to the public for free to explore, adapt, and improve. China’s Baidu has also made its self-driving software, Apollo, available as open source.

Software’s real value lies in its implementation: what you do with it. Just as China built its tech companies and India created a $160 billion IT services industry on top of tools created by Silicon Valley, anyone can use openly available AI tools to build sophisticated applications. Innovation has now globalized, creating a level playing field—especially in AI.


Vivek Wadhwa is a distinguished fellow at Carnegie Mellon University’s College of Engineering. He is the co-author of Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back.

This article first appeared in Fortune magazine.


Don’t Believe the Hype About AI

To borrow a punch line from Duke professor Dan Ariely, artificial intelligence is like teenage sex: “Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.” Even though AI systems can now learn a game and beat champions within hours, they remain hard to apply to real business problems.

M.I.T. Sloan Management Review and Boston Consulting Group surveyed 3,000 business executives and found that while 85 percent of them believed AI would provide their companies with a competitive advantage, only one in 20 had “extensively” incorporated it into their offerings or processes. The challenge is that implementing AI isn’t as easy as installing software. It requires expertise, vision, and information that isn’t easily accessible.

When you look at well-known applications of AI such as Google’s AlphaGo Zero, you get the impression that it’s like magic: AI learned the world’s most difficult board game in just three days and beat champions. Meanwhile, Nvidia’s AI can generate photorealistic images of people who look like celebrities just by looking at pictures of real ones.

Nvidia’s system used a technology called generative adversarial networks, which pits two AI systems against each other so that they learn from each other; AlphaGo, similarly, improved through self-play, with copies of the program competing against one another. The trick was that before these systems battled each other, they received a lot of coaching. And, more importantly, their problems and outcomes were well defined.

Most business problems can’t be turned into a game, however; you have more than two players and no clear rules. The outcomes of business decisions are rarely a clear win or loss, and there are far too many variables. So it’s a lot more difficult for businesses to implement AI than it seems.

Today’s AI systems do their best to emulate the functioning of the human brain’s neural networks, but they do this in a very limited way.  They use a technique called deep learning, which adjusts the relationships of computer instructions designed to behave like neurons. To put it simply, you tell an AI exactly what you want it to learn and provide it with clearly labelled examples, and it analyzes the patterns in those data and stores them for future application. The accuracy of its patterns depends on data, so the more examples you give it, the more useful it becomes.

Herein lies a problem: An AI is only as good as the data it receives. And it is able to interpret that data only within the narrow confines of the supplied context. It doesn’t “understand” what it has analyzed, so it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from correlation. AI is more like an Excel spreadsheet on steroids than a thinker.

The bigger difficulty in working with this form of AI is that what it has learned remains a mystery — a set of indefinable responses to data.  Once a neural network is trained, not even its designer knows exactly how it is doing what it does. As New York University professor Gary Marcus explains, deep learning systems have millions or even billions of parameters, identifiable to their developers only in terms of their geography within a complex neural network. They are a “black box,” researchers say.

Speaking about the new developments in AlphaGo, Google/DeepMind CEO Demis Hassabis reportedly said, “It doesn’t play like a human, and it doesn’t play like a program. It plays in a third, almost alien, way.”

Businesses can’t afford to have their systems making alien decisions. They face regulatory requirements and reputational concerns and must be able to understand, explain, and demonstrate the logic behind every decision they make.

For AI to be more valuable, it needs to be able to look at the big picture and include many more sources of information than the computer systems it is replacing. Amazon is one of the few companies that have already understood and implemented AI effectively to optimize practically every part of their operations, from inventory management and warehouse operation to running data centers.

In inventory management, for example, purchasing decisions are traditionally made by experienced individuals, called buyers, department by department. Their systems show them inventory levels by store, and they use their experience and instincts to place orders. Amazon’s AI consolidates data from all departments to see the larger trends — and relate them to socioeconomic data, customer-service inquiries, satellite images of competitors’ parking lots, predictions from The Weather Company, and other factors. Other retailers are doing some of these things, but none as effectively as Amazon.

This type of approach is also the basis of Echo and Alexa, Amazon’s voice-based home appliances. According to Wired, by bringing all of its development teams together and making machine learning a corporate focus, Amazon is solving a problem many companies have: disconnected islands of data. Corporate data are usually stored in disjointed datasets in different computer systems. Even when a company has all the data needed for machine learning, they usually aren’t labelled, up-to-date, or organized in a usable manner. The challenge is to create a grand vision for how to put these datasets together and use them in new ways, as Amazon has done.
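The “islands of data” problem is easiest to see in miniature: two datasets that say little on their own become actionable once joined on a common key. The tables and the reorder rule below are invented for illustration; they are not Amazon’s actual systems.

```python
import pandas as pd

# Two hypothetical "islands": store inventory from one system, local
# weather forecasts from another, each keyed by city.
inventory = pd.DataFrame({
    "city": ["Seattle", "Austin", "Boston"],
    "umbrellas_in_stock": [120, 40, 80],
})
forecast = pd.DataFrame({
    "city": ["Seattle", "Austin", "Boston"],
    "rain_probability": [0.9, 0.1, 0.6],
})

# Joining them lets a simple rule act on a signal neither dataset
# contains alone: reorder where rain is likely and stock is low.
merged = inventory.merge(forecast, on="city")
merged["reorder"] = (merged["rain_probability"] > 0.5) & (
    merged["umbrellas_in_stock"] < 100
)
print(merged[["city", "reorder"]])
```

The hard part in practice is not the join itself but the grand vision: deciding which datasets belong together, keeping them labelled and current, and agreeing on the keys that connect them.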

AI is advancing rapidly and will surely make it easier to clean up and integrate data. But business leaders will still need to understand what it really does and create a vision for its use. That is when they will see the big benefits.

The article has been posted here with the express permission of the author.

Why You Need to Live in the Future — As I Do

I live in the future as it is forming and this is happening far faster than most people realise, and far faster than the human mind can comfortably perceive.

I live in the future. I drive an amazing Tesla electric vehicle, which takes control of the steering wheel on highways. My house, in Menlo Park, California, is a “passive” home that expends minimal energy on heating or cooling. With the solar panels on my roof, my energy bills are close to zero. I have a medical device at home, Healthcubed, made in New Delhi, that does the same medical tests as hospitals and provides me with immediate results. Because I have a history of heart trouble, I have all of the data I need to communicate with a doctor anywhere in the world, anytime I need to.

I spend much of my time talking to entrepreneurs and researchers about breakthrough technologies such as artificial intelligence and robotics. These entrepreneurs are building a better future.

The distant future is no longer distant. The pace of technological change is rapidly accelerating, and those changes are coming to you very soon. Look at the way smartphones crept up on us. Just about everyone now has one. We are always checking email, receiving texts, ordering goods online, and sharing our lives with distant friends and relatives on social media.

These technologies changed our lives before we even realised it. Just as we blindly follow the directions that Google Maps gives us—even when we know better—we will comply with the constant advice that our digital doctor provides. I’m talking about an artificially intelligent app on our smartphone that will have read our medical data and monitor our lifestyles and habits. It will warn us not to eat more gulab jamuns lest we gain another 10 pounds.

So you say that I live in a technobubble, a world that is not representative of the lives of the majority of people in the US or India? That’s true. I live a comfortable life in Silicon Valley and am fortunate to sit near the top of the technology and innovation food chain. So I see the future sooner than most people. The noted science-fiction writer William Gibson, who is a favourite of hackers and techies, once wrote: “The future is here. It’s just not evenly distributed yet”. But, from my vantage point at its apex, I am watching that distribution curve flatten, and quickly. Simply put, the future is happening faster and faster. It is happening everywhere.

Technology is the great leveller, the great unifier, the great creator of new and destroyer of old.

Once, technology could be put in a box, a discrete business dominated by business systems and some cool gadgets. It slowly but surely crept into more corners of our lives. Today the creep has become a headlong rush. Technology is taking over every part of our lives; every part of society; every waking moment of every day. Increasingly, pervasive data networks and connected devices are causing rapid information flows from the source to the masses—and down the economic ladders from the developed societies to the poorest.

Perhaps my present life in the near future, in the technobubble in Silicon Valley, sounds unreal. Believe me, it is something we will laugh at within a decade as extremely primitive.

We are only just commencing the greatest shift that society has seen since the dawn of humankind. And, as with all of the great shifts before it – from the use of fire to the rise of agriculture and the development of sailing vessels, internal-combustion engines, and computing – this one will arise from breathtaking advances in technology. This shift, though, is both broader and deeper, and it is happening far more quickly.

Such rapid, ubiquitous change has a dark side. Jobs as we know them will disappear. Our privacy will be further compromised. Our children may never drive a car or ride in one driven by a human being. We have to worry about biological terrorism and killer drones. Someone —maybe you—will have his or her DNA sequence and fingerprints stolen. Man and machine will begin to merge. You will have as much food as you can possibly eat, for better and for worse.

The ugly state of global politics illustrates the impact of income inequality and the widening technological divide. More people are being left behind and are protesting. Technologies such as social media are being used to fan the flames and to exploit ignorance and bias. The situation will get only worse—unless we find ways to share the prosperity we are creating.

We have a choice: to build an amazing future such as we saw on the TV series Star Trek, or to head into the dystopia of Mad Max. It really is up to us; we must tell our policy makers what choices we want them to make.

The key is to ensure that the technologies we are building have the potential to benefit everyone equally; to balance the risks and the rewards; and to minimise the dependence that technologies create. But first, we must learn about these advances ourselves and be part of the future they are creating.


This article is re-published here with the express permission of the author.