An eye on AI

How should policymakers look at it? Restrictions without understanding its potential, or safeguards to avert crises? Getting the balance right is the trick

Aparajita Gupta | September 21, 2018


#Artificial Intelligence   #Google Assistant   #COMPAS   #Amazon  
(Illustration: Ashish Asthana)

Google Assistant, Rekognition and Tay. All these, often seen in the news, have a common thread – they are powered by Artificial Intelligence (AI). The only difference is that while some have been in the news for the right reasons, others have made it to the headlines for all the wrong reasons. For instance, Google Assistant is an AI assistant that connects with several devices. Other companies are not far behind: to detect fraud, MasterCard and Visa are relying on machine learning algorithms. But there is a flip side too. The American Civil Liberties Union found that Amazon’s Rekognition, a facial recognition AI, falsely matched pictures of 28 members of the US Congress with those of arrested criminals. Microsoft came up with Tay, its AI chatbot that was meant to learn from conversations. But it all went wrong when the bot eventually picked up prejudices and posted racist and sexist messages on Twitter. Here’s another one. Facebook’s AI-powered translation service got a Palestinian man arrested by Israeli police for an innocent social media post. He had put up a picture of himself against a bulldozer with a caption that meant ‘good morning’, but the service translated it to mean ‘attack them’ or ‘hurt them’.

A World Economic Forum (WEF) poll that asked people whom they would trust in sensitive areas, a human or an AI, produced a clear choice. Asked whether they would choose a treatment prescribed by a human doctor or one prescribed by an AI if diagnosed with a life-limiting illness, 53% favoured the human doctor. Similarly, asked whether they would prefer a human judge or an AI judge if brought to trial on a false allegation of having committed a serious offence, 63% favoured the human judge.

Clearly, many don’t seem to trust AI over humans in areas of sensitive decision making, though the margin is not huge. But juxtapose this with the reality of AI and algorithms increasingly being used in sensitive areas like judicial proceedings and medicine.

A few years ago, an interesting case in the US caught everyone’s attention for the unique issue involved. Eric L Loomis was found guilty of fleeing from the police and driving a stolen car, and was sentenced to six years of imprisonment. What makes the case unique is that the court, among other factors, relied on the score given by a risk assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). According to the COMPAS score, Loomis was at a high risk of committing crime in the future. The Wisconsin supreme court upheld the decision, saying that since the COMPAS score was not the only factor behind the sentence, it could be used. But in an article, a criminal law scholar reasoned that such tools are based on group statistics and so cannot predict the behaviour of individuals. Humans, after all, are unpredictable.

Another sensitive area where AI is being used is medicine. From diagnosis to treatment, AI seems to be entering every field. IBM Watson is used in several countries to help doctors in patient diagnosis. Even in India, Manipal Hospitals are reportedly using IBM Watson for Oncology to help physicians offer personalised options for cancer care. But it was recently reported abroad that IBM Watson had given unsafe and inaccurate treatment suggestions.

While such applications may help human experts and better inform them, there is need for caution. As highlighted above, AI has gone wrong because many of its applications are still at an early stage of development. Further, the absence of clear laws and rules to regulate AI has complicated matters. What happens if something goes wrong? Who will be liable? People are already asking these questions. But let’s take a step back. Do people affected by the decisions of AI know that AI is being used? Even if they know, have they consented to it beforehand? These are some other questions being asked.

Thinking along these lines, the Artificial Intelligence Committee of the House of Lords (UK) provides useful insights in its report titled “AI in the UK: ready, willing and able?” One of its recommendations is about informing the public when significant or sensitive decisions are being made by AI. It says, “It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally. This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as to opt out of using such products should they have concerns.

“Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers.…The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms.” Such fair disclosure and transparency would be in people’s best interests. It would also allow them to consent to the use of AI in such matters.

The new General Data Protection Regulation in the European Union has gone a step further. Article 22 explicitly talks about the right of people not to be subject to decisions based solely on automated processing that have legal effects on them. This is subject to some exceptions.

In light of these developments, India would need to decide for itself. At this stage, some feel that it would not be a good idea to impose restrictions on the advancement of AI without understanding its true potential. At the same time, it is important to have safeguards in place to tackle instances where AI goes wrong in fields that can have a huge impact on humans. Getting the balance right is the trick. A good starting point could be to follow the recommendation of the UK’s Artificial Intelligence Committee and inform people when AI is taking sensitive decisions that affect them. Further, it would be a good idea to take informed consent before such use in certain areas.

This would not be an easy path. For a diverse country like India, the challenge would first be to raise general awareness about AI among people. The next would be to empower people to give informed consent to being subjected to AI decision making in critical fields. While the answer to ‘how to do it’ may be difficult, the question is worth pondering over.

Gandhiji had once said, ‘An eye for an eye will make the whole world blind.’ In today’s world that is witnessing the Fourth Industrial Revolution, an eye on AI may not be too much to ask for.

Gupta is a lawyer and currently a Young Professional with the Economic Advisory Council to the PM and NITI Aayog. The views expressed are personal.

(The article appears in the September 30, 2018 issue)
