By: Ayushmaan
Abstract:
In the last decade, most public service delivery systems in India have been linked to Aadhaar identification. The system has improved the timeliness of public service delivery to a considerable extent; however, it also rests on a three-level algorithmic identification process that depends on poor-quality data, especially for Below Poverty Line (BPL) groups. Building on Justice D.Y. Chandrachud's dissent in the Aadhaar case, this piece argues that Aadhaar-based identification for public service delivery marks a shift from a deterministic identity process to a probabilistic one, which is never fully accurate. It therefore emphasizes the inaccuracy and poor quality of the data on which Aadhaar-based algorithms rely to establish identity. Through a case study of Jharkhand, the piece examines the heavy reliance on these algorithms, especially for BPL communities, and argues that probabilistic systems subject livelihoods to chance, requiring the adoption of an Algorithmic Impact Assessment (AIA) framework to secure beneficiaries' rights.
Introduction:
The Aadhaar program, run by the Unique Identification Authority of India (UIDAI), is the world's largest biometric identification system. Its motivation was simple: to make it easier for people to connect with the government and to cut out the corrupt middlemen and bureaucratic hurdles that had stopped welfare from reaching people in India. The Ministry of Consumer Affairs, Food and Public Distribution linked Aadhaar to the Public Distribution System (PDS), which serves more than 800 million Indians, to correct a longstanding problem of ghost beneficiaries: fictitious records that allowed corrupt PDS dealers to siphon off subsidised grain. The solution was to link Aadhaar numbers with ration cards, make biometric checks mandatory at points of distribution, and create a system in which only authenticated individuals could collect grain. In practice, however, a wide gap emerged between policy aims and ground realities. This piece focuses on Jharkhand, a state with a large tribal and Dalit population, where case studies over the last decade have reported that the system denied PDS subsidies to BPL households, leading in several instances to starvation deaths.
To explain this exclusion, the piece examines the Aadhaar identification system, which operates at three levels: biometric authentication, demographic seeding, and PDS rules. Poor data quality is the biggest problem, and it creates risk at every point of the process, from misspelled names during seeding to worn-out fingerprints at authentication points. Each type of data flaw can break critical service delivery, as the UIDAI committee report of 2009 highlighted. Collectively, this transforms entitlements into probabilistic outcomes that depend on the quality of the data and the performance of the algorithms in the system. For the most vulnerable groups in Jharkhand, these failures have been not minor administrative obstacles but threats to survival.
From Deterministic to Probabilistic Identity:
Historically, welfare identity checks were straightforward: physical documents such as ration cards, voter IDs, or driver's licences were issued by trusted authorities and accepted by the state as proof of identity. These papers could be forged, but verification was a clear, human process: if a dealer refused rations on presentation of a valid card, the refusal could be challenged directly. Aadhaar changes this to a probabilistic system. Biometric checks do not simply return "yes" or "no". The system compares a live scan against reference templates kept in the Central Identities Data Repository (CIDR) and produces a matching score that estimates how likely it is that the two templates come from the same person. UIDAI sets a threshold: above it, a match; below it, a non-match. A citizen's identity becomes a statistical probability produced by an algorithm, rather than a function of their presence or need.
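To make the shift concrete, the following is a minimal Python sketch of threshold-based matching as described above. The similarity scale, the authenticate function, and the 0.80 threshold are all illustrative assumptions; UIDAI's actual matcher, templates, and operating threshold are not public.

```python
# Illustrative sketch of threshold-based biometric matching (assumed scale).
# UIDAI's real matcher and threshold are not public; nothing here is official.

def authenticate(similarity: float, threshold: float = 0.80) -> str:
    """Turn a probabilistic similarity score into a binary decision.

    similarity: assumed score in [0, 1] comparing the live scan with the
                template stored in the CIDR.
    threshold:  operator-chosen cut-off. Moving it trades false accepts
                against false rejects; no setting eliminates both.
    """
    return "match" if similarity >= threshold else "non-match"

# The same genuine beneficiary can land on either side of the cut-off:
print(authenticate(0.91))  # "match": rations dispensed
print(authenticate(0.74))  # "non-match": a real person is turned away
```

The point of the sketch is that the decision is a function of a score and a threshold, not of the person's actual identity: a worn fingerprint lowers the score, and the same entitled citizen flips from "match" to "non-match".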
Justice D.Y. Chandrachud's dissent in the Puttaswamy case lays the legal foundation for challenging this probabilistic exclusion. He held that basic, inalienable rights should not be left to the probabilities of a technological system. Applying the proportionality test to the Aadhaar Act, he found it disproportionate because its exclusions could leave people to starve, violating the right to life under Article 21. A poor family at a ration shop may face food insecurity solely on account of an authentication failure. The dissent notes that "Biometric systems, by their very nature, work on a probabilistic model", and that fingerprints are affected by age, hard manual work, illness, and disability, among other factors. In India, this is particularly devastating for the marginalised and poor. He concludes: "Dignity and the rights of individuals cannot depend on algorithms or probabilities."
The 2009 UIDAI Committee Report warned that collecting good data would be hard. It noted that the system cannot reliably identify damaged or worn fingerprints, which are common among manual workers and the elderly, and that this reduces success rates. That warning makes the later mandatory use of the system more troubling. The Aadhaar process works in three linked stages, and bad data hits each one.

At the demographic seeding stage, the state matches PDS records against Aadhaar records through de-duplication to find "ghost" entries that have no matching Aadhaar number or whose details do not fit. Spelling errors, transliteration variants, and data-entry mistakes create mismatches and exclude real beneficiaries. At the biometric authentication stage, conducted through electronic Point of Sale (ePoS) devices, even UIDAI's Biometrics Standards Committee acknowledged in 2009 that fingerprints are difficult to collect and analyse reliably in India. It noted that fingerprint quality "has not been tested" adequately for Indian users, particularly those whose fingerprints are worn down by manual labour, which drives high false rejection rates. Authentication also requires the concurrent availability of electricity, internet connectivity, biometric sensors, and backend servers, a rare combination in rural areas. At the PDS rules stage, fallback options such as OTP-based checks or manual overrides are supposed to exist; given the scale of failures, however, officials are often disincentivised from using them, for fear of being accused of corruption or of deviating from mandatory Aadhaar authentication. In programs like PM-KISAN, the only remedy is to wait for corrections to work their way up through the system, which delays services. Scholars argue that such ad hoc, weakly implemented fixes cannot properly address exclusion risks of this breadth.
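A short sketch illustrates how seeding on exact name matches manufactures "ghosts". The records, names, and matching rule below are invented for illustration; actual state seeding pipelines are more elaborate and are not publicly documented in this detail.

```python
# Invented example of exact-match demographic seeding. Real seeding
# pipelines vary by state; all names and records here are hypothetical.

pds_records = {"RC-1001": "SUNITA DEVI", "RC-1002": "MOHAMMAD ANSARI"}
aadhaar_names = {"SUNITA DEBI", "MD. ANSARI"}  # transliteration variants

for ration_card, name in pds_records.items():
    if name in aadhaar_names:
        status = "seeded successfully"
    else:
        # To this rule, a spelling or transliteration gap looks identical
        # to a fictitious entry, so the card is flagged for cancellation.
        status = "flagged as ghost"
    print(f"{ration_card} ({name}): {status}")
```

Both families in this toy example are real, yet both are flagged: the matching rule cannot distinguish a transliteration gap from fraud.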
Jharkhand: When Algorithms Fail, People Starve:
Algorithmic exclusion is seen most starkly in Jharkhand. The best-known case of Aadhaar-linked hunger is that of Santoshi Kumari, an 11-year-old Dalit girl from Simdega district. Months before she died, the local administration cancelled her family's ration card because it was not linked to an Aadhaar number. Even though the family lived in poverty and presented themselves at the shop, the ration dealer would not give them food because the system would not accept a cancelled card. Santoshi's case was not unusual; it reflected a real pattern. In October 2017, Ruplal Marandi, a 60-year-old Adivasi man in Jharkhand's Deoghar district, died after his family was denied rations for two months because biometric authentication failed and he could not prove his identity. The Right to Food Campaign investigated 57 starvation deaths between 2015 and 2018 and found at least 19 directly tied to Aadhaar problems.
Further, a study discussed in the Economic Survey 2016-17, presented alongside the Union Budget, showed biometric failure rates of 49% in Jharkhand, 37% in Rajasthan, 6% in Gujarat, and 5% in Krishna district, Andhra Pradesh. The report stated plainly: "Failure to identify genuine beneficiaries results in exclusion errors." The very people Aadhaar was meant to help are being left out.
The Path Forward: Algorithmic Impact Assessment:
International legislation provides examples of how to govern large, risk-prone algorithmic systems. Canada's Algorithmic Impact Assessment (AIA) is a questionnaire-based test that scores systems on risk and impact, placing them into four levels according to what could go wrong. For a Level 4 automated decision system, the Canadian Directive on Automated Decision-Making requires that decisions not be fully automated: a human must review each decision before it becomes final, the system must be examined by independent experts, users must be told why a particular decision was made, and a detailed contingency plan must exist in case the system fails. Similarly, the European Union's AI Act classifies AI systems by risk, and the higher-risk categories must meet strict data-quality standards designed to ensure accuracy and reliability.
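To give a flavour of how such a rubric operates, here is a hypothetical Python sketch loosely modelled on Canada's impact-scoring approach. The questions, weights, and cut-offs are invented for illustration; the actual AIA questionnaire is far longer and more detailed.

```python
# Hypothetical AIA-style impact scoring, loosely modelled on Canada's
# Directive on Automated Decision-Making. Questions, weights, and cut-offs
# below are invented; the real questionnaire has dozens of questions.

def impact_level(affects_survival: bool, people_affected_millions: float,
                 decision_reversible: bool) -> int:
    score = 0
    score += 4 if affects_survival else 1                 # severity of harm
    score += 3 if people_affected_millions > 100 else 1   # scale of impact
    score += 3 if not decision_reversible else 0          # can errors be undone?
    if score >= 8:
        return 4  # Level 4: mandatory human review, independent audit
    if score >= 5:
        return 3
    return 2

# An Aadhaar-PDS-style system would plausibly score at the highest level:
print(impact_level(affects_survival=True,
                   people_affected_millions=800,
                   decision_reversible=False))  # -> 4
```

On any such rubric, a system that gates food for 800 million people on a probabilistic match would sit in the highest tier, triggering exactly the human-review and audit safeguards that the PDS rollout lacked.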
Conclusion:
India should adopt an AIA framework for welfare technologies. As it stands, the system treats people who fail identification as ghosts until proven real. An AIA framework should instead presume that beneficiaries are real until proven otherwise, shifting the burden of error from poor citizens onto the state itself. No welfare program should be cleared for Aadhaar linkage unless data-quality audits have been completed. The framework should mandate legally protected human overrides that cannot be blocked by administrative action. Most importantly, it must echo Justice Chandrachud's words: dignity and rights cannot depend on algorithms or probabilities. When life and death are at stake, algorithmic error should favour inclusion, not exclusion.
Author’s Bio:
Ayushmaan is a second-year LL.B. student at Jindal Law School. His research interests include public policy, development, and commercial and cyber law.
Image Source: https://www.newindianexpress.com/states/tamil-nadu/2024/Apr/14/faded-fingerprints-fail-dhobis-at-pds-shops-in-tamil-nadu

