ISACA Qatar

From Paris to New Delhi, India’s AI governance is being written in hospitals and banks

  • Writer: Dr. Jon Truby
  • 16 hours ago
  • 5 min read


The real test of AI diplomacy is whether it changes what regulators do on Monday morning. By that measure, India is becoming one of the more interesting countries to watch. When France and India co-chaired the AI Action Summit in Paris in February 2025, the language centred on inclusive, trustworthy and human-centric AI, with explicit concern for digital divides and capacity-building in developing countries.


At India’s AI Impact Summit in New Delhi in February 2026, that conversation moved further towards implementation, with the New Delhi Declaration endorsed by 92 countries and international organisations. But India’s real regulatory story is not in summit communiqués. It is in the way hospitals and banks are being used to build a sector-first model of AI governance.


That matters because India’s model is becoming clearer. The 2018 National Strategy for Artificial Intelligence set out the #AIforAll vision and identified healthcare, agriculture, education, smart cities and smart mobility as priority sectors. India’s 2025 AI Governance Guidelines then made the operational logic explicit: govern AI applications through the relevant sectoral regulators rather than regulate the underlying technology in one sweep, and do so through a techno-legal framework supported by digital public infrastructure, voluntary measures and targeted oversight. For countries that want practical governance without waiting years for one all-encompassing AI statute, that is a consequential choice.


In healthcare, India’s regulatory story starts with data governance. The National Health Policy 2017 envisioned an interoperable digital health ecosystem, and the Ayushman Bharat Digital Mission launched in 2020 to build that public architecture. Under the health data policy, sharing between Health Information Providers and Health Information Users is meant to run through a consent manager and a consent artefact, with records of consent and revocation kept in a way that enables audit. Cross-sector privacy law now reinforces that logic: the Digital Personal Data Protection Act requires consent to be free, specific, informed, unambiguous and limited to data necessary for the stated purpose, while the 2025 rules and the newly established Data Protection Board begin to operationalise that framework.


But health AI only becomes real regulation when software enters clinical decision-making, diagnosis or treatment. Here India does have binding law. The Central Drugs Standard Control Organization states that all medical devices in India are regulated under the Drugs and Cosmetics Act 1940 and the Medical Devices Rules 2017, and that the definition of a medical device includes software and accessories. Those rules classify devices by risk from Class A to D, while CDSCO’s software classification notes that software which drives or influences the use of a device automatically falls into the same risk class as that device. CDSCO’s 2025 guidance on Medical Device Software shows where the regime is heading on quality management, licensing and post-market oversight, although the guidance itself is still explicitly a draft for stakeholder comments rather than a binding legal instrument.


The AI Impact Summit sharpened that healthcare agenda. At the summit, the Health Ministry launched SAHI, the Strategy for Artificial Intelligence in Healthcare for India, and BODH, the Benchmarking Open Data Platform for Health AI. SAHI is described by the government as a national guidance framework for safe, ethical, evidence-based and inclusive AI adoption across the health system, covering governance, data stewardship, validation, deployment and monitoring. BODH is designed as a privacy-preserving benchmarking platform that can evaluate AI models on diverse real-world health data without sharing the underlying datasets.


This matters because India is moving from abstract principle to operational governance, with more attention to testing, validation and public accountability before deployment at scale.

Finance, however, is where India’s AI governance is already harder-edged. The Reserve Bank of India’s KYC directions permit AI in video-based customer identification, but only inside a tight control framework. Regulated entities using V-CIP must ensure end-to-end encryption, auditable customer consent, protection against foreign or spoofed IP addresses, geo-tagged video, liveness and spoof detection, high-accuracy face matching, reporting of forged identities as cyber events, and vulnerability assessment, penetration testing and security audits before and during live use. AI is allowed, in other words, but only inside a compliance architecture that leaves ultimate responsibility with the regulated entity.
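The shape of that compliance architecture can be sketched as a simple go-live gate. The control names below are an illustrative checklist mirroring the V-CIP requirements listed above, not RBI's official taxonomy.

```python
# Hypothetical pre-deployment checklist for an AI-assisted V-CIP system;
# each flag represents evidence that a control has been satisfied.
VCIP_CONTROLS = {
    "end_to_end_encryption": True,
    "auditable_customer_consent": True,
    "foreign_or_spoofed_ip_blocking": True,
    "geo_tagged_video": True,
    "liveness_and_spoof_detection": True,
    "high_accuracy_face_matching": True,
    "forged_identity_cyber_reporting": True,
    "va_pt_and_security_audit": False,  # pending: must pass before go-live
}

def vcip_go_live_gate(controls: dict[str, bool]) -> list[str]:
    """Return the controls still failing; deployment is blocked until empty."""
    return [name for name, passed in controls.items() if not passed]

gaps = vcip_go_live_gate(VCIP_CONTROLS)
assert gaps == ["va_pt_and_security_audit"]  # AI allowed only once this clears
```

The point of the gate is the one the article makes in prose: the model may be sophisticated, but responsibility for every control stays with the regulated entity, and a single failing control blocks live use.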


The digital lending regime carries the same philosophy. RBI’s digital lending guidelines, issued in September 2022, brought lending apps and their service providers into a stricter consumer-protection and data-governance frame. The RBI framework requires data collection to be need-based and conditional on prior, explicit consent, with clear privacy policies, disclosure of purpose, options to deny or revoke consent, limits on collection of mobile-phone data, restrictions on storing biometric data unless otherwise permitted, cyber-security compliance and storage of data on servers located in India. Separately, RBI’s published work on digital lending has argued that underwriting algorithms should be auditable, based on extensive and diverse data, and paired with ethical AI principles such as transparency, inclusion, impartiality, security and privacy. Even where every element is not yet hard law, the supervisory direction is clear.


India’s securities regulator has gone further on accountability. SEBI’s own papers record that since 2019 it has required market intermediaries, market infrastructure institutions and mutual funds to report their AI and machine learning applications. In 2025, SEBI amended its intermediary regulations after a Board process that proposed making any regulated person using AI tools, whether built in-house or procured from third parties, responsible for the privacy, security and integrity of investor data, for the outputs generated by those tools and for compliance with applicable law. SEBI also issued a consultation paper in June 2025 on broader guidelines for responsible AI and machine learning use in Indian securities markets. This is not anti-innovation. It is a clear allocation of liability.


Put these pieces together and India’s offer to the wider Global South becomes clearer. It is not a single, elegant AI code. It is a stack: domain regulation, consent architecture, auditability, benchmarking, human accountability and digital public infrastructure. India’s 2025 AI Governance Guidelines explicitly point to the Data Empowerment and Protection Architecture, first deployed in finance, as a model of permission-based data sharing through consent tokens and compliance by design. The same guidelines also call for algorithmic auditing, transparency frameworks and sector-specific regulation for sensitive or high-risk uses. That is perhaps India’s distinct contribution to the global debate after Paris and New Delhi. It is less about grand principle and more about institutional plumbing.
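The "compliance by design" idea behind DEPA-style consent tokens can be sketched as a request validator: a data request succeeds only if the token's scope, purpose and validity window all cover it. The field names here are assumptions for illustration, not the DEPA specification.

```python
from datetime import datetime, timedelta, timezone

def validate_request(token: dict, requested_fields: set[str],
                     purpose: str) -> bool:
    """Permission-based sharing: every condition is checked before any
    data moves, so an out-of-scope request fails structurally."""
    now = datetime.now(timezone.utc)
    return (token["expires_at"] > now            # token still valid
            and purpose == token["purpose"]       # purpose matches exactly
            and requested_fields <= token["scope"])  # fields within scope

# Illustrative token granted by a customer for a loan application.
token = {
    "purpose": "loan underwriting",
    "scope": {"bank_statements", "income"},
    "expires_at": datetime.now(timezone.utc) + timedelta(days=30),
}

assert validate_request(token, {"bank_statements"}, "loan underwriting")
assert not validate_request(token, {"gps_location"}, "loan underwriting")
```

Consent here is not a checkbox recorded once; it is a machine-checkable object enforced on every request, which is the "institutional plumbing" the article describes.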


That does not mean India has finished the job. Healthcare remains partly guidance-led, and some of the most important instruments, such as CDSCO’s software guidance and SEBI’s broader responsible-use framework, are still evolving. The sectoral model can also produce fragmentation, especially when liability, procurement standards and enforcement capacity are spread across multiple regulators. Yet for policymakers across the Middle East and North Africa, the Indian example is still instructive. Do not wait for a perfect omnibus AI law before acting. Start with the sectors where automated decisions touch life, livelihood and trust. Build consent, audit trails and clear accountability into the system early. After the summits, India’s real test is not diplomatic visibility. It is whether this sector-first model can let useful AI into hospitals and banks while keeping risky AI on a short leash.
