India's DPDP Act — what AI teams need to know.
India's Digital Personal Data Protection Act 2023 — the DPDP Act — is the country's first comprehensive data protection law. It creates real, enforceable obligations for any organisation that collects or processes personal data of individuals in India. For AI teams, it is not a background compliance matter. It is a fundamental constraint on how AI systems can be designed, trained, and deployed.
This guide explains what the DPDP Act requires, how it affects AI systems specifically, what the obligations are for organisations that use personal data in AI, and what to do first. It also covers how the DPDP Act interacts with other regulations — particularly the EU AI Act — for organisations that operate across both India and European markets.
In this guide
- What the DPDP Act is and who it covers
- The key concepts AI teams need to understand
- Data fiduciary duties — what they mean in practice
- Consent under DPDP — what is required for AI
- Data principal rights and how AI systems must respond
- Special rules for children's data
- Cross-border data transfers
- Penalties for non-compliance
- How DPDP overlaps with the EU AI Act
- Where AI teams should start
1. What the DPDP Act is and who it covers
The Digital Personal Data Protection Act 2023 was passed by the Indian Parliament in August 2023. It applies to the processing of digital personal data — information about an identifiable individual — in two circumstances: where the processing takes place within India, or where it takes place outside India in connection with offering goods or services to individuals in India.
This extraterritorial reach is significant. An organisation based in Singapore or the UAE that processes personal data of Indian users is covered by the DPDP Act for that processing. An Indian organisation that processes data of Indian users is covered regardless of where the processing happens. A global SaaS platform with Indian customers is covered for those customers' data.
The Act does not apply to personal data processed for personal or domestic purposes, data made publicly available by the individual themselves, or data processed for certain national security and law enforcement purposes.
The Act is still being operationalised — the rules under the Act, which will provide detailed implementation guidance, are expected to be notified by the Indian government in 2025 or 2026. However, the core obligations in the Act itself are clear enough that organisations should already be designing their AI systems to comply.
Who needs to act now: Any organisation that collects, stores, processes, or uses personal data of individuals in India as part of an AI system — whether for training, inference, or output — needs to understand and plan for DPDP compliance. Waiting for the rules to be notified before starting is a mistake — the architectural decisions that enable compliance need to be made during system design, not after deployment.
2. The key concepts AI teams need to understand
The DPDP Act introduces specific terminology that AI teams need to understand before they can assess their obligations.
Personal data
Any data about an identifiable individual. This includes names, email addresses, phone numbers, financial data, health data, location data, device identifiers, and behavioural data. For AI systems, this typically includes training data that contains personal information, user inputs processed by the AI, and outputs that identify or relate to individuals.
Data fiduciary
The organisation that determines the purpose and means of processing personal data. If you decide to build an AI system that processes personal data — what data it uses, why, and how — you are the data fiduciary for that processing. The data fiduciary bears primary responsibility for compliance.
Data processor
An organisation that processes personal data on behalf of a data fiduciary. If you use a third-party AI platform or cloud service to process personal data, that provider is your data processor. You remain responsible as the data fiduciary — the data processor's obligations flow from your contract with them.
Data principal
The individual whose personal data is being processed. In most AI contexts, this is your user, customer, employee, or patient — the person the data is about. Data principals have enforceable rights under the Act that your AI systems must be able to honour.
Consent manager
An entity registered with the Data Protection Board that helps individuals manage their consent across multiple data fiduciaries. Consent managers are a novel feature of the DPDP Act — they create infrastructure for individuals to give, manage, and withdraw consent across organisations through a single interface.
3. Data fiduciary duties — what they mean in practice
The DPDP Act imposes a set of duties on data fiduciaries that have direct implications for how AI systems must be designed and operated.
Purpose limitation
Personal data collected for one purpose cannot be used for a different purpose without fresh consent. For AI systems, this means you cannot collect personal data for one stated purpose — say, improving a customer service application — and then use that data to train a different AI model for a different purpose without going back to the individual for consent. Purpose limitation must be designed into the data pipeline from the start.
Data minimisation
You can only collect and process the personal data that is actually necessary for the stated purpose. For AI training, this creates a real obligation — you cannot collect broad datasets of personal data on the basis that more data might be useful later. Each data element must be justified by the specific purpose for which it is collected.
Data accuracy
Personal data must be accurate and kept up to date where the accuracy of the data is likely to affect the data principal. For AI systems that make decisions about individuals — credit assessments, health risk scores, fraud scores — data accuracy is a direct input into the quality and fairness of the outcome. Inaccurate data produces inaccurate and potentially harmful AI decisions.
Storage limitation
Personal data must not be retained beyond the period necessary for the stated purpose. For AI systems, this means implementing data deletion workflows — not just for the raw data, but for model weights trained on that data in some circumstances, and for inferences stored about individuals. Storage limitation requires the system to be designed with retention periods and deletion mechanisms from the start.
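As a rough illustration of what "designed in from the start" means, retention can be expressed as policy in code rather than in a document alone. The sketch below is a minimal Python example — the category names, purposes, and periods are assumptions for illustration, not values prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention rule: each stored category of personal data is tied to a
# purpose and a maximum retention period, and records past that period are queued
# for deletion rather than kept "just in case".
@dataclass
class RetentionRule:
    data_category: str          # e.g. "inference_logs" (illustrative)
    purpose: str                # the stated purpose the data was collected for
    max_retention: timedelta    # how long that purpose justifies keeping it

def is_due_for_deletion(collected_at: datetime, rule: RetentionRule, now: datetime) -> bool:
    """Return True when a record has outlived the retention period for its purpose."""
    return now - collected_at > rule.max_retention

# Example: inference logs retained for 90 days after collection (an assumed period).
rule = RetentionRule("inference_logs", "model quality monitoring", timedelta(days=90))
```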
Reasonable security safeguards
Data fiduciaries must implement reasonable security safeguards to protect personal data. The Act does not define what "reasonable" means in technical terms — the rules will provide more detail — but the baseline is clear: access controls, encryption, monitoring, and incident response processes are expected. For AI systems that process sensitive personal data, the bar for "reasonable" is higher.
Breach notification
In the event of a personal data breach, the data fiduciary must notify the Data Protection Board and each affected data principal. The Act requires notification without delay — the rules will specify the exact timeline. For AI systems, this means having breach detection and response processes in place before deployment, not after the first incident.
Grievance redressal
Data fiduciaries must provide a mechanism for data principals to raise grievances about data processing. The grievance officer must respond within a prescribed period. For AI systems, this means building a feedback and complaint mechanism that connects to your data governance processes — if a user complains that an AI decision about them was wrong, you need a process to investigate and respond.
4. Consent under DPDP — what is required for AI
Consent is the primary lawful basis for processing personal data under the DPDP Act. For most commercial AI applications, consent will be required. The Act sets a high bar for what constitutes valid consent.
Valid consent under the DPDP Act must be:
- Free — not conditional on accessing a service. You cannot make consent a prerequisite for using your product if the processing is not necessary for the core service.
- Specific — for a particular purpose, not a blanket consent to process data for any purpose. Each AI use case needs its own consent.
- Informed — the individual must understand what they are consenting to. Notices must be in plain language and must describe the purpose of processing clearly.
- Unconditional — not bundled with other consents or terms. Consent cannot be buried in terms and conditions.
- Unambiguous — affirmative action is required. Pre-ticked boxes and implied consent do not satisfy the Act.
For AI systems, this creates specific design requirements. Consent notices must describe, in plain language, exactly how personal data will be used — including whether it will be used for AI training, what the AI will do with it, and how the AI's outputs might affect the individual. Generic privacy policy language that refers vaguely to "improving our services" is insufficient.
Consent can also be withdrawn at any time. When a data principal withdraws consent, processing must cease and the data must be deleted unless there is another lawful basis to retain it. For AI systems, this means building withdrawal mechanisms that are easy to use and that trigger automated deletion workflows — not just flagging the withdrawal for someone to action manually.
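A minimal sketch of what an automated withdrawal workflow might look like is shown below. The store, flag, and queue objects are hypothetical stand-ins for whatever systems actually hold and process the data, and the system names are assumptions.

```python
from datetime import datetime, timezone

def handle_consent_withdrawal(principal_id: str, purpose: str,
                              consent_store, processing_flags, deletion_queue) -> None:
    """On withdrawal: record it, stop further processing, and queue deletion jobs."""
    # 1. Record the withdrawal with a timestamp so the decision is auditable.
    consent_store.record_withdrawal(principal_id, purpose, at=datetime.now(timezone.utc))

    # 2. Immediately block pipelines from picking up this person's data for this purpose.
    processing_flags.block(principal_id, purpose)

    # 3. Queue deletion in every system that holds the data for this purpose,
    #    unless another documented lawful basis justifies retention.
    for system in ("primary_db", "training_corpus", "inference_logs", "backups"):
        deletion_queue.enqueue(system=system, principal_id=principal_id, purpose=purpose)
```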
Legitimate uses — the alternative to consent
The DPDP Act also recognises certain "legitimate uses" for which consent is not required. These include processing of data the individual has voluntarily provided for a specified purpose, processing required by law or a court order, processing for responding to medical emergencies, processing for employment-related purposes, and processing for certain state functions. For most commercial AI applications, these exceptions are narrow — consent will be the applicable basis in the majority of cases.
The practical implication for AI teams: Every AI system that processes personal data needs a clearly documented lawful basis for each category of processing. If consent is the basis, the consent mechanism must be purpose-specific, affirmative, and linked to a withdrawal mechanism. This is architecture work — it must be designed before the system is built, not added after deployment.
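One way to make that documentation concrete is to keep a machine-readable register of processing activities. The sketch below is illustrative only — the schema and field names are assumptions, not something the Act prescribes.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Hypothetical record documenting the lawful basis for one category of processing.
@dataclass
class ProcessingActivity:
    activity: str                                  # e.g. "fine-tune support model on chat transcripts"
    data_categories: list[str]                     # personal data involved
    lawful_basis: Literal["consent", "legitimate_use"]
    consent_notice_id: Optional[str] = None        # which notice the consent was given against
    withdrawal_mechanism: Optional[str] = None     # how the individual can withdraw

    def is_documented(self) -> bool:
        """Consent-based processing must be tied to a specific notice and a withdrawal route."""
        if self.lawful_basis == "consent":
            return bool(self.consent_notice_id and self.withdrawal_mechanism)
        return True
```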
5. Data principal rights and how AI systems must respond
The DPDP Act gives individuals four categories of rights over their personal data. AI systems that process personal data must be designed to honour these rights — which means building the technical capability to respond to rights requests into the system itself.
Right to access information
Data principals have the right to know what personal data is being processed about them, the purposes of that processing, and the names of the data processors and other data fiduciaries with whom the data has been shared. For AI systems, this means maintaining a complete and current record of what data is held about each individual, why it is being processed, and who has access to it — and being able to produce this information on request within the prescribed timeline.
Right to correction and erasure
Data principals can request that inaccurate or incomplete data be corrected, and that data be erased when it is no longer necessary for the purpose for which it was collected or when consent has been withdrawn. For AI systems, erasure is technically complex — it is not just a matter of deleting a database record. Data may exist in training datasets, model weights, logs, backups, and third-party systems. A credible erasure process must account for all of these.
Right to grievance redressal
Data principals have the right to have their grievances about data processing addressed by the data fiduciary within a prescribed period, and to escalate to the Data Protection Board if the grievance is not resolved. For AI systems, this means building a complaints process that can receive grievances about AI decisions — not just data handling — and route them to the appropriate person for investigation and response.
Right to nominate
Data principals have the right to nominate another individual to exercise their rights on their behalf in the event of death or incapacity. This is a novel right in Indian data protection law — it creates requirements for organisations to establish processes for accepting and verifying nominations.
| Right | What your AI system must be able to do |
|---|---|
| Access | Identify all personal data held about an individual across all systems — training data, inference logs, stored outputs — and produce a clear summary on request within the prescribed timeline. |
| Correction | Update inaccurate personal data in all systems where it is held — including retraining or adjusting models where the inaccurate data materially affected the model's outputs about that individual. |
| Erasure | Delete personal data from all systems — primary databases, training datasets, model weights where feasible, logs, backups, and third-party systems — and verify deletion has occurred. |
| Grievance redressal | Receive complaints about AI decisions, route them to a responsible person, investigate, respond within the prescribed timeline, and escalate to the Data Protection Board if unresolved. |
| Nomination | Accept, verify, and record nominations from data principals allowing another individual to exercise their rights on their behalf. |
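A simplified sketch of how a rights-request handler might dispatch these requests across systems follows. The request types mirror the table above; the per-system interface (`describe_data`, `correct`, `delete`) is a hypothetical assumption about your own internal tooling, not an existing API.

```python
from enum import Enum

class RightsRequest(Enum):
    ACCESS = "access"
    CORRECTION = "correction"
    ERASURE = "erasure"
    GRIEVANCE = "grievance"
    NOMINATION = "nomination"

def handle_rights_request(kind: RightsRequest, principal_id: str, systems: list) -> dict:
    """Dispatch a data principal's request to every system that holds their data."""
    if kind is RightsRequest.ACCESS:
        # Collect what each system holds so a plain-language summary can be produced.
        return {"summary": [s.describe_data(principal_id) for s in systems]}
    if kind is RightsRequest.CORRECTION:
        return {"corrected": [s.correct(principal_id) for s in systems]}
    if kind is RightsRequest.ERASURE:
        # Delete everywhere the data is held, and record verification of each deletion.
        return {"deleted": [s.delete(principal_id) for s in systems]}
    # Grievances and nominations route to a human owner rather than an automated job.
    return {"routed_to": "grievance_officer"}
```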
6. Special rules for children's data
The DPDP Act applies heightened protections to personal data of children — defined as individuals under eighteen years of age. For any AI system that processes data of minors, these rules create significant additional obligations.
Before processing a child's personal data, a data fiduciary must obtain verifiable consent from the child's parent or guardian. Consent from the child themselves is insufficient, and the consent must come from an identifiable adult — which in practice means implementing age verification and parental consent mechanisms, not just asking users to confirm their age.
Data fiduciaries are also prohibited from processing children's data in ways that are likely to harm the child's wellbeing, or from tracking, behavioural monitoring, or targeted advertising directed at children. For AI systems that use personalisation, recommendation engines, or behavioural analytics, these prohibitions have direct implications for how the system can be designed when children are or may be users.
Edtech companies, social platforms, gaming companies, and any organisation whose users may include minors need to take these rules seriously. The Data Protection Board is expected to pay particular attention to violations involving children's data.
7. Cross-border data transfers
The DPDP Act allows personal data to be transferred outside India, subject to the Indian government notifying countries to which transfers are restricted. This is the inverse of the GDPR approach — instead of listing approved countries, India will list restricted countries. Until the government notifies the list of restricted countries, cross-border transfers are generally permitted subject to the data fiduciary's other obligations under the Act.
For AI systems, this has practical implications. Cloud AI infrastructure hosted outside India, AI models trained on Indian personal data by overseas providers, and multinational AI pipelines that route Indian personal data through non-Indian systems are all affected — and the rules may change as the government notifies restrictions.
Organisations should document where personal data flows in their AI systems — including which cloud regions are used, which third-party AI providers process the data, and how data moves between systems. This data mapping will be essential both for DPDP compliance and for responding quickly if transfer restrictions are notified.
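A lightweight way to start is to record each hop that personal data takes through the AI pipeline and flag the flows that leave India. The sketch below is illustrative only — field names and the example values are assumptions.

```python
from dataclasses import dataclass

# Illustrative data-flow record: one entry per hop personal data takes through the pipeline.
@dataclass
class DataFlow:
    data_category: str        # e.g. "user prompts"
    source_system: str        # e.g. "web app (Mumbai region)"
    destination: str          # e.g. "hosted LLM API"
    destination_country: str
    processor: str            # the third party acting as data processor, if any

def flows_outside_india(flows: list[DataFlow]) -> list[DataFlow]:
    """Flag every flow that leaves India so it can be re-checked if restrictions are notified."""
    return [f for f in flows if f.destination_country.lower() != "india"]
```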
8. Penalties for non-compliance
The DPDP Act provides for significant financial penalties for non-compliance, to be determined by the Data Protection Board following an inquiry. The penalties are structured as follows:
Up to ₹250 crore — failure to take reasonable security safeguards to prevent personal data breaches.
Up to ₹200 crore — failure to notify the Data Protection Board and affected data principals of a personal data breach.
Up to ₹200 crore — non-compliance with obligations related to children's personal data.
Up to ₹150 crore — failure to comply with additional obligations applicable to significant data fiduciaries.
Up to ₹50 crore — failure to comply with data principal rights obligations — access, correction, erasure, grievance redressal.
Up to ₹50 crore — failure to comply with any other provision of the Act or its rules.
₹250 crore is approximately €28 million at current exchange rates — a material penalty for any regulated enterprise. For companies with significant Indian operations or large Indian user bases, the exposure is real.
It is worth noting that the Data Protection Board is an adjudicatory body with investigative powers. It can conduct inquiries, call for documents, and order remediation in addition to imposing financial penalties. Reputational damage from a public Board inquiry can be significant even if no financial penalty is ultimately imposed.
9. How DPDP overlaps with the EU AI Act
Organisations that operate AI systems in both India and European markets face overlapping obligations under the DPDP Act and the EU AI Act. Understanding how these overlap is important for designing compliance programmes that address both efficiently.
Common ground
Both regimes require data governance — documenting what personal data is used, why, and how. Both require technical security safeguards. Both require breach notification processes. Both give individuals rights over their data and the decisions made about them. For organisations subject to both, these common requirements can be addressed through shared governance infrastructure — a single data mapping exercise, a single risk assessment framework, a single breach response process — that is then tailored to the specific requirements of each regime.
Key differences
The EU AI Act focuses specifically on AI system risk — classifying systems by the risk they pose and requiring proportionate controls for high-risk systems. The DPDP Act focuses on data protection — how personal data is collected, used, and protected regardless of whether AI is involved. For AI systems that process personal data, both apply simultaneously, and both must be satisfied.
The consent frameworks differ in important ways. GDPR recognises multiple lawful bases for processing — consent, legitimate interest, contractual necessity, legal obligation, vital interests, and public task. The DPDP Act has a narrower set of lawful bases — consent and "legitimate uses" — which means that some processing that is lawful under GDPR on the basis of legitimate interest may require consent under DPDP. Organisations operating across both regimes need to map their processing activities against both frameworks and identify the bases that satisfy both.
The practical implication
For an AI system that processes personal data of both Indian and European users, the compliance programme needs to satisfy both regimes. In practice, this often means building to the higher standard on each point — which in most cases means GDPR-level data governance combined with EU AI Act technical requirements for high-risk systems, plus DPDP-specific consent mechanisms for Indian users. This is more work than satisfying either regime alone, but less work than satisfying them independently with separate infrastructure.
Not sure where your AI systems stand on DPDP?
Take the free DPDP Readiness Assessment — 18 questions, 5 minutes, immediate scored report across 6 compliance dimensions.
Take the Free Assessment →
10. Where AI teams should start
DPDP compliance for AI systems is not a legal project. It is a technical project with legal requirements. The work needs to be led by people who understand both the regulatory obligations and the architecture of the AI systems they are responsible for.
Step one — Map your personal data
Create a complete inventory of every category of personal data that your AI systems collect, process, store, or use. For each category, document: where it comes from, what it is used for, who has access to it, where it is stored, how long it is retained, and whether it is transferred outside India. This data map is the foundation of everything else.
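The data map does not need specialised tooling to begin with — a structured record per category, kept under version control, is enough to start. The Python sketch below mirrors the fields listed above; every value shown is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class DataMapEntry:
    category: str                   # e.g. "support chat transcripts"
    source: str                     # where it comes from
    purpose: str                    # what it is used for
    access: list[str]               # teams or services with access
    storage_location: str           # system and region where it is stored
    retention_days: int             # how long it is retained
    transferred_outside_india: bool

inventory: list[DataMapEntry] = [
    DataMapEntry(
        category="support chat transcripts",
        source="in-product chat widget",
        purpose="fine-tuning the support assistant",
        access=["ml-platform", "support-ops"],
        storage_location="object storage, ap-south-1",
        retention_days=180,
        transferred_outside_india=False,
    ),
]
```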
Step two — Establish lawful basis for each use
For each category of personal data in your inventory, identify the lawful basis under the DPDP Act. In most cases this will be consent. Document the lawful basis, and if it is consent, audit your existing consent mechanisms against the requirements — free, specific, informed, unconditional, unambiguous. Identify gaps.
Step three — Design and implement compliant consent mechanisms
For AI systems that rely on consent, design consent mechanisms that meet the DPDP standard. This means purpose-specific consent notices in plain language, affirmative consent actions, withdrawal mechanisms that trigger automated processing cessation and deletion, and auditable records of consent given and withdrawn.
Step four — Build data principal rights workflows
Design and implement processes for responding to access, correction, erasure, and grievance requests. For each type of request, map the technical steps required — which systems hold the relevant data, how it can be retrieved, how it can be corrected, how it can be deleted — and build the tooling to execute those steps within the prescribed timelines.
Step five — Implement security safeguards and breach response
Conduct a security assessment of your AI systems against a reasonable standard. Implement encryption at rest and in transit, access controls, monitoring and alerting for anomalous behaviour, and a documented breach response plan with notification timelines built in. Test the breach response plan before you need it.
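For the breach response plan specifically, it helps to track each incident against the notifications the Act requires. The sketch below is a minimal, hypothetical record of that kind — useful for tabletop exercises as much as for real incidents.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class BreachRecord:
    description: str
    detected_at: datetime
    affected_principals: list[str] = field(default_factory=list)
    board_notified_at: Optional[datetime] = None        # notification to the Data Protection Board
    principals_notified_at: Optional[datetime] = None   # notification to affected individuals

    def mark_board_notified(self) -> None:
        self.board_notified_at = datetime.now(timezone.utc)

    def outstanding_notifications(self) -> list[str]:
        """What still needs to happen, since the Act requires notification without delay."""
        pending = []
        if self.board_notified_at is None:
            pending.append("notify Data Protection Board")
        if self.principals_notified_at is None:
            pending.append("notify affected data principals")
        return pending
```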
Step six — Address children's data obligations if applicable
If your AI system may process data of users under eighteen, implement age verification and parental consent mechanisms, and review the system's personalisation and behavioural tracking features against the prohibitions on processing children's data.
Step seven — Document everything
The DPDP Act does not explicitly require a formal record of processing activities in the way GDPR does — but in practice, the ability to demonstrate compliance to the Data Protection Board will depend on having documented your data governance decisions, your consent mechanisms, your security measures, and your rights response processes. Documentation produced before an inquiry is far more credible than documentation produced in response to one.
The bottom line
The DPDP Act is not a future concern for Indian AI teams. The core obligations in the Act are clear, the Data Protection Board is being constituted, and the enforcement infrastructure is being built. The organisations that will find compliance manageable are those that treat it as an architectural question — something that must be designed into their AI systems from the start — rather than a compliance exercise to be completed after the system is built.
The good news is that DPDP compliance and good AI engineering are not in conflict. Systems designed with data minimisation, purpose limitation, consent management, and rights response built in are better systems — more trustworthy, more defensible, and more resilient to the regulatory and reputational risks that come with processing personal data at scale.
Start with the data map. Everything else follows from knowing what you have, where it is, and what you are doing with it.
Ready to make your AI systems DPDP compliant?
We build AI applications with DPDP, GDPR, and EU AI Act compliance embedded from day one. The scoping conversation takes 30 minutes.
Book a Scoping Call