Nickeled & Dimed

Penny for your thoughts?

We are accepting articles at our new email address: cnes.ju@gmail.com

Is AI Rewriting Rights in India?

By — Hansin Kapoor

Abstract

In India’s high-stakes technological renaissance, artificial intelligence is a Pandora’s box brimming with promise but also peril. On the one hand, AI fuels innovations in healthcare, agriculture, and governance; on the other, it raises profound human rights dilemmas. UNESCO notes that AI is reshaping the way we work, interact, and live, and that without strong ethical guardrails, it risks reproducing real-world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms. In other words, the siren song of AI can lure us toward progress or into a pitfall. India’s diverse social tapestry intensifies this dilemma, as the “promises and perils of AI converge with the enduring principles of human dignity and justice” in India’s transforming society. This tension between innovation and rights sets the stage for a critical, interdisciplinary analysis.

Mirror or Mirage?

AI systems mirror society’s biases unless actively guarded against doing so, as machine learning algorithms often rely on historical data rife with stereotypes. If unchecked, they can perpetuate or even worsen inequalities. India’s AI-driven policing and welfare tools, for example, have come under fire. In the Samagra Vedika welfare program (Telangana), an AI database “flattened people’s lives by reducing them to numbers,” with opaque rules denying eligible citizens basic services. Similarly, the Delhi Police’s facial-recognition net reportedly “solved” nearly 99% of riot cases with AI, identifying 73% of suspects via algorithms; yet later reports show that over 80% of these cases ended in acquittals, suggesting grave errors.

This is fundamentally a justice problem. A Rawlsian perspective would demand that AI be designed under a “veil of ignorance,” treating all outcomes fairly regardless of identity. Yet entrenched data biases act like loaded dice. If algorithms hold a “funhouse mirror” up to society, the distortions can be deadly; as Amnesty puts it, “algorithmic systems…are opaque, and they flatten people’s lives by reducing them to numbers.” To ensure AI facilitates rather than frustrates equality, designers must bake in fairness and transparency. UNESCO’s recent ethics guidelines insist that AI actors promote social justice, fairness, and non-discrimination, and take an inclusive approach that makes AI’s benefits accessible to all. In India’s constitutional democracy, where Articles 14 (equality) and 15 (non-discrimination) reign, algorithmic bias is not just a technical glitch but a violation of basic rights. Unless we deliberately cleanse data and audit algorithms, the machine’s mirror will remain a dangerous distortion of social reality.

A Digital Panopticon

The Supreme Court of India has declared privacy intrinsic to life and liberty under Article 21. Yet the state’s new tools (facial-recognition cameras, emotion readers, and social-mapping AI) risk turning India into a digital Panopticon. Civil-society groups warn that the Delhi Police’s AI surveillance is an “illegal act of mass surveillance.” Thousands of CCTV cameras now feed AI that can identify individuals in crowds even without probable cause; such indiscriminate scanning breaches the principle of proportionality by collecting data not just on criminals but on ordinary citizens. This echoes Foucault’s nightmare of the all-seeing Panopticon, made worse because this one is invisible, unaccountable, and untouchable.

Meanwhile, India’s digital identity infrastructure, from Aadhaar to the new data laws, creates conflicting pressures. After K.S. Puttaswamy (2017), privacy is a fundamental right. Paradoxically, Aadhaar centralized the biometrics of 1.25 billion Indians, and proposed surveillance AI now stands to erode that right further. The Digital Personal Data Protection Act, 2023 stresses consent-based use of data but carves out broad exceptions for publicly available data. In practice, tech-hungry agencies can scrape social media or public registries under a claim of serving the public interest. UNESCO insists that the “right to privacy” and data protection must be guarded throughout the AI lifecycle. Absent clear limits, privacy, the “right to not be watched,” may become paper-thin. India must walk a tightrope, upholding constitutional dignity in a world where every face can be reduced to a digital string.

Behind the Black Box

Who watches the watchers? Modern AI is notoriously opaque; its decisions are often inscrutable even to its developers. This “black box” nature clashes with basic notions of justice and accountability. When an AI decides whether to approve a loan or enroll a child in school, that outcome can make or break a life. Yet affected individuals usually have no idea how the decision was made. Amnesty’s report on Samagra Vedika lamented a “regulatory vacuum” with “no transparency,” leaving those denied benefits in bureaucratic limbo with no remedy. A parallel problem arises in predictive policing: in Delhi’s recent cases, suspects were arrested solely on AI “matches” with video frames, often side profiles, with no human witness or corroborating evidence. Defense lawyers point out the absurdity: suspects were “identified” by facial-recognition technology (FRT) even when not clearly visible in the footage.

The ethical imperative is clear: AI systems must be auditable and explainable. UNESCO’s AI ethics recommendation demands that AI be traceable, with oversight and impact assessments. It also stresses transparency and explainability, so that affected people can challenge automated decisions. In practice this means logs, impact studies, and even “right to explanation” laws of the kind some U.S. jurisdictions have proposed. India’s own DPDP Act requires that any personal data used for automated decisions be “accurate, consistent and complete,” hinting at the need for oversight. But without strong enforcement or whistleblower protections, accountability remains elusive. Defendants often are not even informed that an algorithm “identified” them, and thus have no way to contest the flawed process. With accuracy rates plummeting into single digits, the stakes are life and liberty. A culture of transparency (open-source code, public audits, clear legal rules) is needed so that tech-driven decisions can be questioned. Otherwise, justice risks being reduced to an algorithmic lottery in which the haves, who control the code, continue to have their way while the have-nots bear the brunt.

Regulating the Uncanny

How can Indian law catch up with this runaway train? Some building blocks exist. The 2017 Puttaswamy judgment cemented privacy as foundational, giving courts a constitutional tool against overreach. In 2023, Parliament enacted the Digital Personal Data Protection Act to rein in data misuse, though civil libertarians worry its exemptions may dilute its bite. The government has also issued AI ethics frameworks: NITI Aayog calls for a policy that “protects human rights and privacy without stifling innovation,” and global bodies like UNESCO offer lodestar principles. These doctrines emphasize familiar values: do no harm, proportionality, accountability, and human oversight.

Yet gaps persist. India has no omnibus AI law or regulator; instead, it must graft rules onto existing statutes. For instance, if an AI-powered welfare eligibility algorithm denies benefits arbitrarily, which remedy applies: the Right to Information Act? The Digital Personal Data Protection Act? At least one Supreme Court case implies that any use of personal data must not violate dignity. The principle of “proportionality” (UNESCO’s first rule) demands that any AI interference be narrowly tailored to legitimate goals. In practice, however, Indian regulators rarely move at Internet speed. Civil-society petitions and public interest litigation will be needed to hold authorities to these standards. Meanwhile, multilateral commitments such as UN guidance and the OECD AI Principles push India to harmonize with global norms.

Technopolitical theory reminds us that law alone will not suffice; culture, public discourse, and philosophy must evolve too. We must counter the old Silicon Valley mantra that “science is value-neutral.” In reality, every line of code encodes a bias. Kantians would object that a society that sees people as mere data points violates human dignity. Rawlsians would demand that algorithmic rules be designed as if under a veil of ignorance, a demand especially important in a country as unequal as India. And Amartya Sen’s capability approach warns that automation must expand people’s real freedoms (health, education, livelihood), not restrict them. The ethos of AI in government, academia, and industry must therefore be human rights-centered by design, not as an afterthought.

Walking the Tightrope: Balancing Innovation with Rights

The challenge is to walk a tightrope: on one side lie utopian dreams of AI-driven prosperity; on the other, a dystopian sweatshop of surveillance and discrimination. The solution is neither techno-optimism nor Luddite rejection, but wisdom. India needs a new social contract for the digital age, one that includes laws and policies guaranteeing redressability (clearly defined appeals when AI errs), algorithmic impact assessments (pre-deployment audits of social effects), and multistakeholder governance that brings technologists, ethicists, and laypersons together. It also encompasses digital literacy, so that people can understand their rights in an increasingly automated world.

By drawing on its rich legal and philosophical heritage, from Ambedkar’s vision of equality to Nehru’s faith in science, India can craft an AI path that is both high-tech and high-touch. In the end, the test of any technology is how it treats the least powerful. India stands at a crossroads where it can either embrace AI as a servant of its constitutional ideals or allow it to become a master that ignores them. The path forward requires constant vigilance, creativity, and an unwavering commitment; only then can we turn this Pandora’s box into a true panacea.

About the Author

Hansin Kapoor is a third-year law student pursuing a B.A. (Hons.) in Criminology and Criminal Justice. He interrogates justice through the lens of victimology, wielding criminological critique to expose the silences law was designed to keep.

Source: Amnesty International, “India/Global: New technologies in automated social protection systems can threaten human rights”
