An eye on AI

How should policymakers look at it? Restrictions without understanding its potential, or safeguards to avert a crisis? Getting the balance right is the trick

Aparajita Gupta | September 21, 2018


(Illustration: Ashish Asthana)

Google Assistant, Rekognition and Tay. All of these, often seen in the news, have a common thread – they are powered by Artificial Intelligence (AI). The only difference is that while some have been in the news for the right reasons, others have made it to the headlines for all the wrong reasons. For instance, Google Assistant is an AI assistant that connects with several devices. Other companies are not far behind: to detect fraud, MasterCard and Visa are relying on machine learning algorithms.

But there is a flip side too. The American Civil Liberties Union found that Amazon’s Rekognition, a facial recognition AI, falsely matched pictures of 28 members of the US Congress with those of arrested criminals. Microsoft came up with Tay, its AI chatbot that was meant to learn from conversations. But it all went wrong when the bot picked up prejudices and posted racist and sexist messages on Twitter. Here’s another one. Facebook’s AI-powered translation service got a Palestinian man arrested by Israeli police for an innocent social media post. He had put up a picture of himself beside a bulldozer with a caption that meant ‘good morning’, but the service translated it to mean ‘attack them’ or ‘hurt them’.

A World Economic Forum (WEF) poll asked people whom they would trust in sensitive areas – a human or an AI – and the results showed a clear preference. When asked whether they would choose a treatment prescribed by a human or one prescribed by an AI if diagnosed with a life-limiting illness, 53% favoured the treatment prescribed by a human doctor. Similarly, when asked whether they would prefer a human judge or an AI judge if brought to trial on a false allegation of a serious offence, 63% favoured a human judge.

Clearly, many don’t seem to trust AI over humans in areas of sensitive decision-making, though the margin is not huge. But juxtapose this with the reality of AI and algorithms increasingly being used in sensitive areas like judicial proceedings and medicine.

A few years ago, a case in the US caught everyone’s attention for the unique issue involved. Eric L Loomis was found guilty of fleeing from the police and driving a stolen car. He was sentenced to six years of imprisonment. What is unique about this case is that the court, among other factors, relied on the score given by a risk assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). According to his COMPAS score, Loomis was at high risk of committing crimes in the future. The Wisconsin Supreme Court upheld the decision, saying that the COMPAS score could be used since it was not the only factor behind the sentence. But in an article, a criminal law scholar argued that such tools are based on group statistics and therefore cannot predict the behaviour of individuals. After all, individual human behaviour is hard to predict.

Another sensitive area where AI is being used is medicine. From diagnosis to treatment, AI seems to be entering every area. IBM Watson is used in several countries to help doctors with patient diagnosis. Even in India, Manipal Hospitals are reportedly using IBM Watson for Oncology to help physicians give personalised options for cancer care. But it was recently reported abroad that IBM Watson had given unsafe and inaccurate treatment suggestions.

While such applications may help and better inform human experts, there is a need for caution. As highlighted above, AI has gone wrong because many of its applications are still at an early stage of development. Further, the absence of clear laws and rules to regulate AI has complicated matters. What happens if something goes wrong? Who will be liable? People are already asking these questions. But let’s take a step back. Do people affected by the decisions of AI know that AI is being used? Even if they know, have they consented to it beforehand? These are some of the other questions being asked.

Thinking along these lines, the Artificial Intelligence Committee of the House of Lords (UK) provides useful insights in its report titled “AI in the UK: ready, willing and able?” One of its recommendations is about informing the public when significant or sensitive decisions are being made by AI. It says: “It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally. This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as to opt out of using such products should they have concerns.

“Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers.…The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms.” Such fair disclosure and transparency would be in people’s best interests. It would also allow them to consent to the use of AI in such matters.

The new General Data Protection Regulation in the European Union has gone a step further. Article 22 explicitly provides for the right of individuals not to be subject to decisions based solely on automated processing that have legal effects on them. This right is subject to some exceptions.

In light of these developments, India will need to decide for itself. At this stage, some feel it would not be a good idea to impose restrictions on the advancement of AI without understanding its true potential. At the same time, it is important to have safeguards in place for instances where AI goes wrong in fields that can have a huge impact on people. Getting the balance right is the trick. A good starting point could be to follow the recommendation of the UK’s Artificial Intelligence Committee and inform people when AI is taking sensitive decisions that affect them. Further, it would be a good idea to take informed consent before such use in certain areas.

This will not be an easy path. For a diverse country like India, the first challenge would be to raise general awareness about AI among people. The next would be to empower people to give informed consent to being subjected to AI decision-making in critical fields. While the answer to ‘how to do it’ may be difficult, the question is worth pondering over.

Gandhiji had once said, ‘An eye for an eye will make the whole world blind.’ In today’s world that is witnessing the Fourth Industrial Revolution, an eye on AI may not be too much to ask for.

Gupta is a lawyer and currently a Young Professional with the Economic Advisory Council to the PM and NITI Aayog. The views expressed are personal.

(The article appears in the September 30, 2018 issue)
