By Sai Ruchitha
Abstract
India’s Digital Personal Data Protection (DPDP) Act, 2023 makes ‘informed consent’ the central tenet of lawful personal data processing. Yet the spread of AI and machine learning calls the practicality of informed consent into question, because machine learning systems are by nature opaque and dynamic. This article critically analyses the role and importance of informed consent within the DPDP framework for lawful data processing in India, its failure to address the realities of machine learning technology, and how it compares with international standards.
Introduction:
India’s new data protection regime, the Digital Personal Data Protection (DPDP) Act, 2023 and its accompanying DPDP Rules, establishes a “clear and citizen-centred framework” for lawful data use. The law is built on core principles of consent, data security and accountability, mirroring global standards such as the EU’s GDPR. Under Section 6 of the DPDP Act, any consent must be “free, specific, informed, unconditional and unambiguous”. Further, “data fiduciaries” are required to provide clear notices about the type of data collected and the reasons for its collection. An individual is therefore meant to be aware of the processing of their personal data before entering into a contract, in addition to having the option to revoke consent at any time. However, the proliferation of AI involves the application of machine learning techniques, in which complex algorithms collect, analyse, and process data in ways the individual cannot follow, so that consent is no longer truly ‘informed consent’. This is best illustrated by ‘Large Language Models (LLMs)’. These models predict linguistic patterns through a transformer-based architecture, and through these predictions they generate responses that enable them to create content, provide translations, and produce summaries. Their inner functioning, however, is unclear: data is often inferred, repurposed, and reprocessed, so an individual does not know how their data is actually being used. This gives rise to several legal issues. Through a comparative analysis of the DPDP Act alongside regulatory approaches under the GDPR and other major data protection regimes, this article examines the treatment of ‘automated decision-making’, ‘algorithmic transparency’, and the evolving legal thresholds for ‘meaningful consent’ in the age of AI.
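The next-token prediction at the heart of LLMs can be sketched with a toy example. Real LLMs use transformer networks trained on vast corpora; this bigram counter is only an illustrative simplification, and the corpus and function names here are hypothetical:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the model predicts the next word the model generates text".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in the training data.
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # → "model" ("model" follows "the" twice, "next" once)
```

Even in this trivial sketch, the model’s output is a statistical inference over patterns in the data rather than anything the data subject explicitly supplied, which is the property that complicates informed consent at scale.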
DPDP and Consent Governance: Regulatory Obligations and Unresolved Challenges
Section 6(1) of the DPDP Act states that consent must be free and unambiguous and strictly limited to the data necessary for the stated purpose. The law mandates detailed, user-understandable privacy notices: ‘data fiduciaries’ must explain, in clear language, what data is collected and why. The Rules further empower ‘Data Principals’ to submit requests for erasure of data that is no longer required for processing, and require that grievances be addressed within 90 days. Taken together, these provisions establish a robust notice-and-consent regime in which users are to be well informed about data collection and use in advance. While the Act articulates broad consent principles, the Rules prescribe the mechanics of consent notices, withdrawal, grievance redressal, and compliance obligations for data fiduciaries. Yet, despite recognising algorithmic processing as a source of heightened risk, the Rules stop short of mandating disclosures about the use of artificial intelligence, profiling logic, or automated decision-making in consent notices. Unlike the EU’s GDPR, the DPDP Act does not grant a right to opt out of fully automated decisions. Article 22 of the GDPR, by contrast, gives data subjects the right not to be subject to solely automated decisions that have legal or similarly significant effects on them, subject to narrow exceptions and safeguards. In India, a data subject would have no specific opt-out right so long as general consent was obtained. This legislative silence creates a gap: algorithmic bias or errors could go “unaddressed and unchecked” unless courts read new obligations into the law.
Nevertheless, the Rules do impose some accountability for AI use on large or sensitive processors designated as Significant Data Fiduciaries (SDFs). SDFs must designate Data Protection Officers, undergo yearly independent audits and, most importantly, perform Data Protection Impact Assessments (DPIAs) for high-risk processing. DPIAs under the DPDP framework must address “decision-making systems affecting data principals” as well as “large-scale profiling and data combination activities.” Hence, any SDF that uses AI for health analytics, credit scoring, and the like must assess its algorithms’ ethical implications, privacy risks to data principals, and fairness. The law thus acknowledges AI-driven processing as a potential risk and requires reasonable safeguards, but it still falls short of explicitly restricting ‘automated decisions’, raising further questions about ‘meaningful consent’ as well.
Re-evaluating “Informed” Consent in the Age of Artificial Intelligence:
Even with strong legal restrictions, AI and machine learning (ML) make genuinely informed consent difficult. Modern machine learning models, particularly deep learning models, frequently operate as opaque “black boxes.” Each layer of a neural network transforms the input data into a new representation, and the final layer produces a prediction. Because deep networks process information in successive layers, “it can become increasingly difficult to understand the decisions and inferences made at each level”. When a data principal, i.e. the user, cannot fully understand how their data will influence the outcome, how can they really consent to it in a fully informed way? Further, AI does not use only the data that users directly and explicitly provide; it also uses data derived or inferred from such inputs. An AI recommendation engine may use harmless clicks to infer sensitive preferences or traits; a healthcare AI may use wearable and laboratory data to forecast conditions that are not yet visible. Under the DPDP model, users consent to specific data uses for a predetermined purpose and duration. Machine learning systems, however, constantly gather and reuse data to enhance their models. By agreeing to “personalised product recommendations,” a user may unintentionally be permitting their profile to be combined with others and used for unrelated analytics. When users click “Agree” on lengthy consent terms, they remain unaware of the hidden inference engines at work. They may have no idea how much extra personal insight the AI gleans from their data, or how it might change their experience. This raises real concerns about whether the DPDP Act’s consent requirements can be genuinely implemented in AI-driven services.
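The layered transformation that makes deep models opaque can be sketched in a toy example. Everything here is hypothetical (the weights are random and the “trait score” is illustrative); the point is only that after a layer or two of mixing, no individual input is traceable in the output the user is judged by:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def layer(inputs, weights):
    # Each output mixes every input (a ReLU-activated dense layer), so
    # individual contributions become hard to trace after a few layers.
    return [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in weights]

# A user's "consented" data: e.g. three innocuous click counts.
user_clicks = [3.0, 0.0, 7.0]

w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = layer(user_clicks, w1)  # intermediate representation: opaque to the user
scores = layer(hidden, w2)       # inferred "trait scores" the user never provided

print(hidden)
print(scores)
```

The intermediate `hidden` vector is precisely the kind of derived representation a consent notice never describes, yet it is what downstream decisions are based on.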
Global Regulatory Responses to AI and Automated Decision-Making
By contrast, many other regimes have explicitly grappled with AI. On ‘consent’, both the DPDP Act and the EU’s GDPR require it to be “free, specific, informed”, but the GDPR goes further through Article 22, which permits solely automated decisions only with safeguards such as human intervention and gives the ‘data subject’ the right to contest the decision. Other international data privacy laws, such as China’s Personal Information Protection Law (PIPL), expressly require that automated decisions be transparent and fair; the PIPL grants individuals a limited right to refuse or challenge an AI-based decision that has a significant adverse effect on them. Many jurisdictions are therefore moving beyond simple notice-and-consent, demanding extra checks when data is processed by opaque algorithms. To bridge this gap, the DPDP framework would benefit from engaging more closely with developments in these major privacy regimes. Deploying “detailed and layered Consent Notices” that mention profiling, and moving past mere tick-boxes, would be significant in keeping ‘data subjects’ fully informed: a consent notice can then be specific about what it means to use the model and what the implications of giving consent are. Finally, to ensure AI models follow the ‘data minimisation’ principle expected under the DPDP Act, “cross-functional teams” (legal, technical, ethics) can be formed to oversee AI compliance and adapt as the DPDP Rules evolve.
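What a “layered” consent notice disclosing profiling might look like as structured data can be sketched as follows. The field names and values are entirely hypothetical, not prescribed by the DPDP Act, the Rules, or any regulator:

```python
# Hypothetical sketch of a layered consent notice as structured data.
consent_notice = {
    "purpose": "personalised product recommendations",
    "data_collected": ["clicks", "purchase history"],
    "retention": "until consent is withdrawn",
    # The "layer" that current DPDP consent notices typically omit:
    "automated_processing": {
        "profiling": True,
        "inference_examples": ["predicted preferences"],
        "human_review_available": True,
    },
    "withdrawal": "consent may be withdrawn at any time via account settings",
}

def discloses_profiling(notice):
    # A notice counts as "layered" here only if it separately discloses
    # automated processing alongside the basic purpose/collection layer.
    auto = notice.get("automated_processing")
    return bool(auto) and auto.get("profiling") is not None

print(discloses_profiling(consent_notice))  # → True
```

Checking notices against such a schema could give compliance teams a concrete, auditable test for whether profiling has actually been disclosed, rather than relying on prose review alone.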
Conclusion:
Going forward, the onus of filling these gaps in the DPDP framework will fall on governments across the world as well as on businesses. The Indian judiciary, for example, can take a firm stand on questions of fairness and necessity under the DPDP Act and hold businesses accountable for the algorithms they deploy. It is equally in businesses’ interest to manage these risks: alongside their privacy policies, they run a real risk of losing customer trust through perceived exploitation of data, and the resulting negative publicity is difficult to undo. Shortcomings in AI-based data systems are already visible today, and future regulation built on the DPDP framework is likely to tighten businesses’ obligations around consent, fairness, and necessity. India should therefore consider more stringent AI-specific regulation, as other jurisdictions such as the EU have done.
About the author:
Sai Ruchitha is a third-year BBA LLB student with an interest in data privacy and protection; her work analyses the current DPDP framework.

