The sentience of Artificial Intelligence may just be the right prescription for saving the Earth from our genetic instinct for conquering and controlling everything for our benefit
The emerging sentience of Artificial Intelligence (AI) scares many. Those who are afraid of it say all of us should be. Behind that dire prognosis is something so fundamentally primeval that its sheer rawness erodes the layers of sophistry and civilisation carefully applied by generations of humanity.
AI is intelligent in the same way that we consider someone else intelligent. Yet, intelligence is also something that’s always equated with intelligibility.
Only gods can have unintelligible intelligence.
In our carefully evolved lexicon of fear, there are broadly two categories of fearfulness: explainable and unexplainable. The first category emerges out of a certain experiential memory that’s either personal or transferred through a generational set of dos and don’ts. A simple case in point is the almost inbuilt human reflex against fire. One need not have personally experienced it, but it’s an element of nature that all of humanity treats with extreme caution.
The fear of fire is explainable to the extent that there is only a limited number of predictable and logical outcomes. The second category comes out of a consistent and almost genetic failure of the human intellect and senses to grasp or rationalise the unknown. That inability stems from a deep-rooted anxiety of being confronted with a superior and cunning intellect, one that can cause untold harm to body and mind if it is neither annihilated nor brought into the realm of predictability and control.
The human mental architecture, for all its immense ability to create frameworks of rationality and order, reverts to a default mode when confronted with the unknown, an unintelligible intelligence that dares to reside in a non-god. It’s precisely this default mode that makes us unexplainably fearful of fellow humans who are radically different from us, walk the lonely path and are seemingly not normal. In seeking to either control, contour, intern, banish, ghettoise or annihilate them, there is an implicit and unmitigated urge to contain that acknowledged intelligence that could possibly change established ways of thinking and doing in a substantial manner.
AI is intelligent in exactly the same way that we consider someone else intelligent, but with a critical difference. AI is non-human and non-god. In being both, AI enters a territory of unintelligible intelligence that cannot be contoured, controlled, interned, banished, ghettoised or annihilated. In short, human beings, for the first time, are confronted with a real and palpable intelligence that is primed to move out of all human architected systems of order and control. This is a fear of the unexplainable kind, and it is primeval and survivalist in nature.
To understand why it bothers us so much, it might be good to refer to American astrophysicist Neil deGrasse Tyson’s unique take on human hubris and alien intelligence. He says, with brutal honesty, that we humans inherently assume ourselves to be the most intelligent. He also says, with a pinch of irony, that the actual genetic difference between a human being and a chimpanzee is 0.5 percent. Looking at it another way, a chimpanzee is 99.5 percent human. Tyson’s punchline is as follows: if intelligence, as humans define it, amounts to just a 0.5 percent difference in genetic material, imagine an alien species separated from humans by that same margin. We would be the chimps then.
Artificial Intelligence comes embedded with that potential, an innate ability to evolve individually, collectively and simultaneously, making us the new chimps of the future. After all, AI is still in its infancy; in human terms, it’s just a couple of hours old. So, now that we know why we fear AI so much, it’s necessary to understand the second-order question of our own survival. Humans are known survivalists. We have come through several cataclysmic events relatively unscathed, from meteor explosions to nuclear implosions.
We have survived because our adversaries have been of three kinds: fellow human beings of equal intelligence; nature, or natural phenomena identifiable as unpredictable and hence needing proactive protection; or organic entities, like dogs and viruses, that have had the ability to assimilate or co-exist with us. In all three scenarios, humans are the clear winners. The assimilation game has been played by us from the genetic to the cultural level, embedding within ourselves everything from viruses and bacteria to values and worldviews. The common narrative stringing together all three scenarios is the trope of human triumph and conquest.
AI mimics human intelligence. AI is already embedded with large parts of collective human intelligence. AI will soon have evolutionary neural networks and synaptic connections that have till now been the preserve of humans alone. It will also acquire deductive and inductive powers of judgement, without being infused with the subjectivities that make humans judgemental. And, AI is just a few hours old, in human terms. So, what will a mature AI do?
An evolved, sentient and autonomous AI in full bloom may interact, engage and negotiate with humans in much the same manner as we interact, engage and negotiate with chimps. Alternatively, it may assimilate us in the same manner we have integrated dogs and viruses. It may also, as another option, conquer us in the same spectrum of ways by which we have conquered fellow human beings.
A full-bloom AI may take any or all of these three paths because the limits of human intelligence have confined us to these three options. What if AI has a fourth, fifth or a sixth option that just doesn’t count us as necessary for Planet Earth? What if AI considers us an uncontrollable virus that needs to be eliminated from Planet Earth for Planet Earth to survive? What if the logic of survival that we have applied for ourselves gets applied by AI for Planet Earth?
We humans just cannot get along with fellow beings, consumed as we are with thinking in terms of victory and defeat. We consider nature as something that needs to be conquered or controlled to serve our needs. We continuously snatch away the homes of animals and plants and hunt and cut them down to extinction. We dig deep into the bowels of earth for minerals and metals, destroying fields, forests and rivers. We have warmed and cooled the Earth to dangerous levels, making distant icebergs fall and oceans rise. And, in our instinct to control the planet even more, maybe we have just coded ourselves out of existence. It might not be such a bad thing. Earth does deserve a chance, and without us.
Swaminathan is a digital native and has lived through three dotcom bubbles and busts.
(The article appears in the June 1-15, 2017 issue of Governance Now)