AI surveillance comes with inherent bias

Without proper checks and balances, it may undermine privacy, accountability, and civil liberties, posing a challenge to liberal democratic frameworks

Nidhi Singh | July 28, 2025


#AI   #Society   #Technology   #Surveillance   #Gender  
(Illustration: Ashish Asthana)

Surveillance can primarily be understood as data collection for the accumulation of power. The integration of artificial intelligence (AI) adds a further layer of complexity. AI can be understood as ‘machine learning’: integrated systems that learn from already available responses through iteration, incorporating large amounts of information in order to discern patterns that explain current data and predict future outcomes. This has enabled different forms of surveillance, the boundaries of which remain quite ambiguous. 

Rapid technological development has led to the proliferation of AI technologies in almost all spheres of life, and research on AI has grown exponentially. Governments and industries employ a range of AI technologies to conduct surveillance of individuals, including smart city/safe city platforms, facial recognition systems, and smart policing systems. The use of these technologies is, nevertheless, often deliberately concealed by governments. 

The autocratic nature of AI-driven technologies can clash with the foundational principles of liberal democracies, which prioritise individual rights, the rule of law, and political pluralism. These technologies, if used without proper checks and balances, may undermine privacy, accountability, and civil liberties, posing a challenge to the prevailing global norm of organising political life within liberal democratic frameworks. 

AI-augmented surveillance, and the multifaceted challenges it poses, has also received much attention from feminist commentators. Feminist surveillance studies focus on the intersection of surveillance and gender. They demonstrate how algorithms can reproduce and further entrench existing biases, how techniques ostensibly belonging to the neutral realm of documentation and monitoring have played a key role in reinforcing power relations in the context of gender, race, and class, and how claims of scientific neutrality are untrue and serve as a ruse. Scholarly investigations reveal the inherent biases in biometric technologies, illustrating how preconceived notions and prejudices are embedded in their codes and distort their operation. 

The intersection of surveillance, AI, and the evolving role of technology in our lives presents a complex and interconnected web of themes. The term ‘surveillance’ finds its roots in the French verb ‘surveiller’ (to watch over, monitor, supervise) and is linked to the Latin term ‘vigilare’ (to watch, to stay alert). While surveillance encompasses a range of activities such as observation, supervision, and monitoring, its most prominent application is by states and organisations. The exercise of authority and power plays a central role in data collection through surveillance practices. 

However, surveillance is more than just hierarchical observation; it carries multifaceted dimensions. Surveillance is often a contentious issue, with profound implications for privacy, freedom, and societal activities. This extended reach of surveillance, as noted by Konrad Lachmayer and Normann Witzleb, transcends national borders and challenges conventional notions of privacy. 

In order to fully comprehend modern surveillance, it is imperative to consider the contextual nuances that shape its various forms and functions. This contextual understanding is essential for navigating the intricate landscape of surveillance in today's world. With the integration of AI in state surveillance, concerns arise about potential biases in data and the harm it may cause to the public. 

Algorithms are, for the most part, only partially transparent to users, and remain opaque for two reasons: the scale and complexity of big-data mining, and the limited incentive for those who control the databases to disclose the basis of algorithmic decision-making. A gender-blind approach to surveillance studies, according to the feminist care ethics paradigm, may prove insufficient to understand the “surveillant gaze”. At the empirical level, a gender-aware approach helps us emphasise difference and identity; at the normative level, it challenges the hegemonic focus on “privacy” understood in an atomistic, individual sense rather than acknowledging the contexts in which individuals are located. 

Feminist perspectives on state surveillance add depth to our understanding by highlighting the gendered assumptions in control mechanisms and the internalisation of control through self-regulation. The emergence of feminist state surveillance studies responds to the historical neglect of gender and sexuality issues in mainstream surveillance research. It seeks to redirect the focus of surveillance studies beyond privacy, security, and efficiency toward addressing ethical challenges tied to privilege, access, and risk. 

The integration of AI into surveillance has made the political landscape more complex, sparking debates about trust and legitimacy. Research reveals that in many developing countries, AI is increasingly employed as a tool of authoritarianism, albeit influenced by political and institutional conditions. Moreover, scholars have begun to draw connections between autocratic regimes and the use of AI. For example, a case study on Chinese facial-recognition AI demonstrates this linkage and suggests that other countries with autocratic tendencies might follow suit. However, AI surveillance is not limited to autocratic states; democratic nations, including the United States, Germany, Japan, and France, have also embraced AI surveillance technologies. These instances reflect the repressive potential inherent in AI systems, with their future use contingent on state intent and policy measures. 

Furthermore, AI systems are not neutral; they contain biases related to race, sexuality, social class, and more. OpenAI acknowledges the presence of bias within its systems, underscoring the concern that the widespread use of AI has the potential to perpetuate and reinforce gender bias. Feminist engagement with AI has demonstrated how such biases are inherent in AI-driven technologies and systems. 

Women, especially those from marginalised groups, are disproportionately affected by new AI-driven surveillance technologies. One example is Google Maps, which has enabled the dissemination on the internet of pictures of women employed as sex workers, increasing the risks faced particularly by women of colour, queer women, and indigenous women. Feminist observations, such as a 2011 report in Jezebel on Google images of sex workers being compiled into the book “Roadside Prostitute”, which objectified women without compensation, further highlight the gendered implications of AI-driven surveillance. 

The advancement and integration of AI are enabling and enhancing state surveillance methods. This development carries significant gendered implications due to the inherent biases present within AI systems. As AI continues to play a central role in surveillance, it becomes crucial to examine and address the gendered consequences stemming from these technologies.

Nidhi Singh is a Delhi-based researcher, and her research passions encompass feminism, global economy, and AI. 
