… provided its challenges – legal accountability, data privacy and concerns over jobs – are addressed
Over the last decade, India has initiated a range of progress-driven policies to advance the health sector towards greater digitisation. Frontline health workers are now trained to adopt digital health solutions, evolving from traditional paper-based record keeping to a centralised digital portal, and now to mobile phone applications that allow real-time information upload, storing the data in universal registries. India is now set on its next groundbreaking step – the integration of Artificial Intelligence (AI) in the health sector.
However, before taking the leap, one must consider the key challenges and risks of developing AI-based solutions for healthcare in India. To what extent do these considerations diverge from, or align with, the ethical and operational concerns highlighted in global discourse? The World Health Organisation (WHO) calls for AI in health to be used in ways that are safe, ethical and equitable. This article explores the ethical challenges surrounding the implementation of AI in the healthcare sector.
Recently, an AI-based breast-cancer screening device was introduced that offers a non-invasive, low-cost heat-mapping solution and has been able to detect breast cancer up to five years earlier than a mammogram. Numerous such advanced AI-driven diagnostic tools are swiftly being integrated into the healthcare sector. AI applications have been developed for tasks ranging from scheduling an appointment with a doctor, to early detection of tumours, predicting cancer recurrence through risk scores, and analysing blood images. Initiatives are also underway to enhance national telemedicine platforms such as e-Sanjeevani with AI and machine learning models aimed at strengthening data collection and improving the quality of doctor-patient consultations.
There is vast and growing potential for both AI and robotics in healthcare. AI has exhibited a remarkable capacity to replicate and even surpass human capabilities, executing tasks with greater efficiency, speed and cost-effectiveness. It is increasingly able to take on sophisticated roles that previously depended on human intervention. Though this is a great mark of progress, integrating AI into healthcare is unlikely to be straightforward, as several concerns arise. While digitalisation is a promising first step towards creating interoperable digital systems, there are plenty of challenges to its adoption.
Legal Accountability
The use of AI is limited by its complexity, which makes its workings difficult for users to grasp. AI personalises treatment plans for patients based on unique patient data, but with intricate algorithms aggregating thousands of data points, its decision-making processes are often too complex to be retraced and explained to users. An imperative question then arises: who will be held responsible if an AI system makes an incorrect diagnosis that leads to patient harm? Conflicts arise in matters of accountability for AI errors. As AI systems become more sophisticated, their decision-making often exceeds the comprehension of their human operators, further complicating the question of responsibility in the event of adverse outcomes. In such cases, holding a human accountable for a decision made by an AI system based on complex algorithms and vast datasets seems unjust. Given that AI has causal agency without legal agency, should the AI developer, the deployer, or the AI system itself be held accountable?
Current Indian and international AI frameworks and legislation are yet to address the complex issue of assigning legal personhood to AI, and, if they do, how to manage the implications of such responsibility. If humans are held solely accountable, innovation may be stifled: healthcare providers fearing legal repercussions will become overly cautious in using AI, and the advantages of the new technology will be jeopardised. AI holds the potential to revolutionise healthcare, but for this potential to be realised, a balanced approach to accountability is necessary – one that recognises the distinct contributions of both humans and machines. Moreover, human oversight remains essential.
Diagnosing Errors
Although AI can process and analyse data at remarkable speed, its accuracy is only as good as the data it is trained on. The complexity of human physiology and the nuances of individual patient cases mean that AI systems may not capture every variable a human physician would take into account; many factors bearing on a patient's health fall outside the training data. Doctors, with their years of education and experience, are better at interpreting atypical or unforeseen symptoms, drawing on expertise, intuition and empathy that encompass a patient's health history, lifestyle and other relevant factors an AI may overlook. When AI misses these, the result can be errors with significant health repercussions for patients. Ensuring human involvement in the diagnostic process helps mitigate the risks of AI errors and maintains patients' confidence and trust.
Human Connection
The practice of medicine extends beyond the scientific application of clinical knowledge; it also encompasses the art of providing emotional support and comfort. While AI can assist by providing data-driven insights and suggestions, it lacks the human connection necessary for holistic patient care. Doctors possess a unique ability to cultivate an emotional bond with patients, to comprehend their concerns, and to make decisions that harmonise clinical treatment with individual patient preferences and values.
(Un)Employment
Robotic Process Automation (RPA) – software used to handle high-volume, repeatable tasks – is taking over work that previously required humans. Combined with AI, RPA can either directly assist people in performing non-routine tasks or automate those tasks entirely. While this is expected to help doctors, who are often buried under paperwork and administrative duties that limit time for patient care, it may also lead to job losses in certain healthcare roles – such as medical coding, administration and basic diagnostics – as they become automated.
Looking to the future, healthcare professionals will require digital skills such as data analysis and AI system management to remain competitive in the job market. While AI enhances efficiency and generates valuable insights, there is growing concern that excessive reliance on these systems may erode the critical thinking and clinical judgement of those in the industry.
Data Privacy
Beyond these technological and regulatory challenges, ethical and social considerations such as data privacy, informed consent and equity are key to the smooth progress of the AI industry. Regulatory frameworks should be set in motion to mitigate the potential misuse of AI. Patients need to trust that their data is being used responsibly and that AI systems are designed to benefit all individuals equally, without reinforcing existing biases. Used ethically, AI promises benefits to both healthcare workers and patients.
A balanced approach that combines the capabilities of AI with those of human practitioners is necessary for the healthcare sector to thrive. With a rising population and increased demand for healthcare services, AI has the potential to create a more affordable healthcare system by reducing the need for regular hospital visits, which can be expensive – particularly in rural areas, where transportation costs add up. Telemedicine, which enables diagnosis through video conferencing and virtual platforms with accurate records of patient history, reduces medical visits and accelerates the treatment process, thereby cutting much of the cost.
AI's capacity to expedite medical processes has also recently been extended to drug discovery. A study by the California Biomedical Research Association found that developing a drug takes an average of 12 years and costs $359 million, with a success rate of only 5 in 5,000. Meanwhile, a start-up supported by the University of Toronto created a supercomputer-based algorithm that screened millions of potential medications to identify candidates against the Ebola virus. This illustrates AI's potential to drastically reduce the time, effort and cost of drug development.
As new technologies emerge daily, it is more advantageous to embrace their utility than to retreat in fear. By utilising these innovations ethically and adhering to appropriate regulations, we can harness their potential to enhance our lives. Ultimately, the essence of technology lies in its capacity to serve humanity. Organisations such as the WHO, FDA and CDC provide valuable guidance on ethical practices, professional standards and policy advocacy. With the right safeguards in place, India can enjoy a flourishing healthcare ecosystem embedded with AI.
Dr Palakh Jain is Associate Professor at the School of Management at Bennett University and a Senior Visiting Fellow at Pahle India Foundation, Delhi.
Ms Surabhi Santhosh is a Research Assistant at Pahle India Foundation, Delhi.