In a fight between two robotic systems driven by supercomputers, the human interventionist, like the fox in the famed fable who steps between two fighting rams, eager for a lick of fresh blood, gets annihilated. The robots, of course, can somehow refuel themselves from thin air, regenerate damaged parts and keep fighting until the end of human civilization. So say the celebrities in their other role as predictors of doom and gloom.

Decades ago, at the height of their popularity, the Nobel laureates William Shockley (transistors) and James Watson (the molecular structure of DNA) proclaimed that the black population of the world was sub-human in intellectual development. The cosmologist Stephen Hawking proclaimed that the Higgs boson would never be found. (He was reportedly disappointed when it was detected.) Hawking also predicted recently that the human race would be annihilated with the advent of Artificial Intelligence (AI). Luckily, there is one commonality among all three: each was a specialist in something else.

More recently, however, two prominent technologists with excellent credentials, Bill Gates and Elon Musk, have also sounded alarms over the matter.

AI is the process by which we humans aim to develop a supercomputer system that mimics the human brain, the most advanced computer in the world.

Theoretically, such a system can only approximate the capabilities of the brain at the moment the design group has embedded into it all that it knows. From the moment the design is frozen for manufacture, it is a case of "advantage brain," in tennis jargon, because upgrades are already being thought of.
The principal questions to consider are these: can the computer brain retrain itself to levels higher than those built in, and why would we design machines with the emotional flaws of us humans?

To those of us who envisage a massive, weird-looking robotic contraption threatening human civilization, I have bad news. One such device is now being tested in California: the driverless car, the brainchild of Google. It is a robot all right, in the configuration of a small car, the Toyota Prius. The commercially manufactured vehicle is modified with the addition of a major supercomputer, a laser range finder, four radars to "see" far ahead, a GPS tracker and possibly a recording device, the equivalent of the black box in an aircraft. These driverless cars have reportedly logged more than 190,000 road miles on city streets, highways and mountain roads. Other auto manufacturers are reported to have prototypes of their own under test. Very recently, a self-driving car, the Audi A7 Sportback, completed a 550-mile trip from Silicon Valley to Las Vegas.

More traffic accidents are caused by human error than by anything else: driver distractions such as texting, telephone calls, loud music and sleep deprivation. None of these applies to driverless cars, and navigational aids are plentiful, such as front and rear cameras and sensors for battery charge, tire pressure and so on.

Imagine the traffic confusion in a downtown area if someone released several hundred AI cars into it!

Questions to ask are these: Does an AI system embedded with a brain on a par with the best from humans also have the ability to distinguish right from wrong? Will it be able to make the correct decision at a fork in the road more efficiently than humans do? If so, could it also make the wrong decision consistently? That would be like a criminal who repeatedly leaves a time bomb in the public square.

The machines, obviously, are not criminals, but the men behind them could be. No wonder Elon Musk calls for universal regulatory oversight before this scenario develops.

Regulatory oversight is a powerful weapon. In the early 1960s, General Motors (GM) built and sold a model called the Corvair, with a rear-mounted engine. The model proved disastrous. The major activist to take on GM was Ralph Nader, whose very popular book Unsafe at Any Speed alerted the public to the vehicle's dangers. The results are well known: the Corvair disappeared, and auto manufacturers came under scrutiny.

Nicholas Carr, in his new book The Glass Cage, identifies a form of anti-humanist technology whose purpose is to replace human effort, not to enhance it, leading to more and more automation. He admits that to ensure human well-being, we may have to impose restrictions on the reach of technology, as Elon Musk suggests.

Increased automation through the use of robots is now standard practice in manufacturing. Whether this has reduced employment for humans is probably not yet clear. In the auto industry, for instance, it has done one thing for sure: quality control has advanced appreciably. In the old days, the American auto was "put together." Now, it is built. There is a distinction.

But human input in design and development is responsible for this improvement. In general, automated systems require specific stimuli to trigger them; human decisions are not made this way. The Martian rover Curiosity is on its own up there, but all of its moves are triggered by humans down here on Earth. If this argument is extrapolated to the limit of the robotic system's capability, I believe the qualifier "Advantage Brain" persists.

Fear of the unknown encourages us to expect the worst from any given system. With an abundance of caution and some trepidation, I suggest that we are prone to accept the worst-case scenario before thinking through to the reality.

In shades of grey somewhere below Archie Bunker, the arch-bigot of the TV series of a few decades ago, we may all be alarmists and bigots.

P. Mahadevan is a retired scientist with a Ph.D. in Atomic Physics from the University of London, England. His professional work includes basic and applied research and program management for the Dept. of Defense. He taught Physics at the Univ. of Kerala, Thiruvananthapuram. He does very little now, very slowly.