Why humans should lose and AI should win

The sentience of Artificial Intelligence may just be the right prescription for saving the earth from our genetic instinct for conquering and controlling everything for our benefit

R Swaminathan | June 12, 2017


(Illustration: Ashish Asthana)

The emerging sentience of Artificial Intelligence (AI) scares many. Those who are afraid say all of us should be. Behind that dire prognosis is something so fundamentally primeval that its sheer rawness erodes the layers of sophistication and civilisation carefully applied by generations of humanity.

AI is intelligent in the same way that we consider someone else intelligent. Yet, intelligence is also something that’s always equated with intelligibility.

Only gods can have unintelligible intelligence.

In our carefully evolved lexicon of fear, there are broadly two categories of fearfulness: explainable and unexplainable. The first category emerges out of a certain experiential memory that is either personal or transferred through a generational set of dos and don'ts. A simple case in point is the almost inbuilt human reflex against fire. One need not have personally experienced its harm, but it is an element of nature that all of humanity treats with extreme caution.
 
The fear of fire is explainable to the extent that there is only a limited number of predictable and logical outcomes. The second category comes out of a consistent, almost genetic failure of the human intellect and senses to grasp or rationalise the unknown. That inability stems from a deep-rooted anxiety of being confronted with a superior and cunning intellect, one that can cause untold harm to body and mind if it is neither annihilated nor brought into the realm of predictability and control.
 
The human mental architecture, for all its immense ability to create frameworks of rationality and order, reverts to a default mode when confronted with the unknown, an unintelligible intelligence that dares to reside in a non-god. It’s precisely this default mode that makes us unexplainably fearful of fellow humans who are radically different from us, walk the lonely path and are seemingly not normal. In seeking to either control, contour, intern, banish, ghettoise or annihilate them, there is an implicit and unmitigated urge to contain that acknowledged intelligence that could possibly change established ways of thinking and doing in a substantial manner.
 
AI is intelligent in exactly the same way that we consider someone else intelligent, but with a critical difference. AI is non-human and non-god. In being both, AI enters a territory of unintelligible intelligence that cannot be contoured, controlled, interned, banished, ghettoised or annihilated. In short, human beings, for the first time, are confronted with a real and palpable intelligence that is primed to move out of all human architected systems of order and control. This is a fear of the unexplainable kind, and it is primeval and survivalist in nature.
 
To understand why it bothers us so much, it might be good to refer to American astrophysicist Neil deGrasse Tyson's unique take on human hubris and alien intelligence. He says, with brutal honesty, that we humans inherently assume ourselves to be the most intelligent. He also says, with a pinch of irony, that the actual genetic difference between a human being and a chimpanzee is 0.5 percent. Looked at another way, a chimpanzee is 99.5 percent human. Tyson's punchline is as follows: if intelligence as humans define it amounts to just a 0.5 percent difference in genetic material, imagine an alien species with the same difference, but in relation to humans. We would be the chimps then.
 
Artificial Intelligence comes embedded with that potential, an innate ability to evolve individually, collectively and simultaneously, making us the new chimps of the future. After all, AI is still in its infancy; in human terms, it's just a couple of hours old. So, now that we know why we fear AI so much, it's necessary to understand the second-order question of our own survival. Humans are known survivalists. We have come relatively unscathed through several cataclysmic events, from meteor explosions to nuclear implosions.
 
We have survived because our adversaries have been of three kinds: fellow human beings of equal intelligence; nature, or natural phenomena identifiable as unpredictable and hence needing proactive protection; or organic entities, like dogs and viruses, that have had the ability to assimilate or co-exist with us. In all three scenarios, humans are the clear winners. The assimilation game has been played by us from the genetic to the cultural level, embedding within ourselves everything from viruses and bacteria to values and worldviews. The common narrative stringing together all three scenarios is the trope of human triumph and conquest.
 
AI mimics human intelligence. AI is already embedded with large parts of collective human intelligence. AI will soon have evolutionary neural networks and synaptic connections that have till now been the preserve of humans. It will also acquire deductive and inductive powers of judgement, without being infused with the subjectivities that make humans judgmental. And AI is just a few hours old, in human terms. So, what will a mature AI do?
 
An evolved, sentient and autonomous AI in full bloom may interact, engage and negotiate with humans in probably the same manner as we interact, engage and negotiate with chimps. Alternatively, it may assimilate us in the same manner we have integrated dogs and viruses. It may also, as another option, conquer us in the same spectrum of ways by which we have conquered fellow human beings.
 
A full-bloom AI may take any or all of these three paths because the limits of human intelligence have confined us to these three options. What if AI has a fourth, fifth or sixth option that just doesn't count us as necessary for Planet Earth? What if AI considers us an uncontrollable virus that needs to be eliminated for Planet Earth to survive? What if the logic of survival that we have applied for ourselves gets applied by AI for Planet Earth?
 
We humans just cannot get along with fellow beings, consumed as we are with thinking in terms of victory and defeat. We consider nature as something that needs to be conquered or controlled to serve our needs. We continuously snatch away the homes of animals and plants and hunt and cut them down to extinction. We dig deep into the bowels of the earth for minerals and metals, destroying fields, forests and rivers. We have warmed and cooled the Earth to dangerous levels, making distant ice shelves collapse and oceans rise. And, in our instinct to control the planet even more, maybe we have just coded ourselves out of existence. It might not be such a bad thing. Earth does deserve a chance, and without us.
 
 
Swaminathan is a digital native and has lived through three dotcom bubbles and busts.
 

(The article appears in the June 1-15, 2017 issue of Governance Now)