India's approach to AI governance is taking shape — and it is deliberately different from the EU's comprehensive rulebook. Where the EU AI Act establishes a detailed risk-tiered framework with specific technical requirements, India has so far pursued a more principles-based, advisory-led approach that emphasises enabling innovation alongside managing risks. For domestic and international companies operating in India, understanding the current landscape — and what may be coming — is increasingly important.
What Is Already in Force: The DPDP Act
The Digital Personal Data Protection (DPDP) Act was passed by the Indian Parliament in August 2023, with its provisions brought into force in stages through government notification since then. While the DPDP Act is India's primary data protection framework rather than an AI-specific regulation, its implications for AI companies are substantial.
AI systems that process personal data of Indian residents — including training data pipelines, user-facing AI products, and enterprise AI tools — are subject to DPDP Act requirements. These include consent obligations, data principal rights (access, correction, erasure), and data fiduciary obligations. The Act includes provisions allowing the government to notify certain data categories as subject to localisation or cross-border transfer restrictions, which has direct implications for cloud-based AI services training on or processing Indian user data.
MeitY's Advisory-Led Approach
The Ministry of Electronics and Information Technology (MeitY) has issued several advisories related to AI deployment, most notably a March 2024 advisory requiring significant AI platforms to seek government approval before deploying AI models in India. This advisory drew substantial industry pushback and was subsequently withdrawn and revised, illustrating the iterative nature of India's current regulatory posture.
The revised approach has emphasised voluntary compliance and industry self-regulation, with MeitY working with industry bodies to develop codes of practice rather than imposing prescriptive technical requirements. This reflects a deliberate policy choice to avoid the compliance burden that has drawn criticism of the EU AI Act from smaller technology companies.
- DPDP Act (2023): India's primary data protection law — affects AI data pipelines and products
- MeitY has pursued advisory-based approach rather than prescriptive technical regulation
- IndiaAI Mission: ₹10,372 crore (~$1.25B) government initiative for AI infrastructure and development
- Digital India Act: Under development; expected to address AI governance — no confirmed passage date as of early 2026
The IndiaAI Mission
The Indian government announced the IndiaAI Mission in March 2024 with an approved outlay of ₹10,372 crore (approximately $1.25 billion) over multiple years. The mission has several pillars: building domestic AI compute infrastructure, developing Indian language foundation models, creating AI application datasets for public sector use cases, and funding AI research and skilling programmes.
The IndiaAI Compute initiative has included procurement of GPU clusters for publicly accessible compute, with the goal of lowering the cost barrier for Indian startups and researchers working on AI. The government's dual role as both regulator and AI developer creates governance questions that India's institutions are still working through.
The Digital India Act: Still in Development
The Digital India Act (DIA), which has been under development since 2022 and is intended to replace the Information Technology Act 2000, is expected to include provisions addressing AI governance. As of early 2026, the DIA has not been passed into law. Consultation drafts have proposed a principles-based framework rather than the EU's risk-tiered approach, though the specifics remain subject to change through the legislative process.
For companies with significant Indian operations, the practical posture is threefold: comply with the DPDP Act now, engage with MeitY advisories as they evolve, and prepare for the forthcoming DIA framework. Preparation matters most for companies operating AI systems in areas that any future framework is likely to classify as higher risk, such as hiring, credit, and healthcare.