Artificial Intelligence and Legal Accountability: Allocating Liability for AI-Generated Harm Under Indian Law and Emerging Global Regulatory Frameworks
By Guru Legal
Keywords
AI accountability; artificial intelligence liability; product liability; negligence; EU AI Act; DPDP Act; algorithmic decision-making; autonomous systems; developer liability; deployer liability; AI governance; India; regulatory framework
Abstract
Artificial intelligence systems are increasingly deployed to make or assist high-stakes decisions in domains including finance, healthcare, employment, criminal justice, and autonomous vehicles. When AI-driven decisions cause harm (a misdiagnosis by a diagnostic algorithm, a wrongful loan refusal by a credit scoring model, an accident caused by an autonomous vehicle), the question of who bears legal responsibility is complex and often unresolved under existing legal frameworks. This article examines the liability of AI developers, deploying organisations, and end users under Indian law, with comparative reference to the EU AI Act and emerging global approaches. It argues that the existing doctrines of negligence, product liability, and contractual liability provide an incomplete basis for AI accountability, and that India needs a targeted legislative framework to allocate AI liability clearly, proportionately, and in a manner that incentivises responsible AI development and deployment.
I. Introduction
The deployment of AI systems across critical sectors of the economy has exposed a significant gap: human decision-making is subject to mature legal frameworks, while algorithmic decision-making operates in a comparative regulatory void. Where a human doctor misdiagnoses a patient, a human financial adviser provides unsuitable advice, or a human driver causes an accident, well-established legal doctrines of negligence, professional liability, and vicarious responsibility provide a framework for assigning accountability and providing redress. Where an AI system makes the same decisions, the attribution of legal responsibility is far less clear: the AI itself is not a legal person; the developer may be geographically and contractually remote from the deployment context; and the deploying organisation may lack the technical expertise to understand, let alone control, the AI’s decision-making process.
II. Existing Legal Frameworks and Their Limitations
Under Indian law, claims arising from AI-generated harm might be framed in negligence, under the Consumer Protection Act, 2019, under product liability principles recognised by Indian courts, or under contractual warranties. The law of negligence requires the plaintiff to establish a duty of care, breach of that duty, causation, and damage. Establishing negligence in the context of AI decision-making presents specific difficulties: the duty of care owed by an AI developer to the ultimate user of the AI’s output is not clearly established in Indian jurisprudence; the causal chain between the AI developer’s design choices and the specific harm suffered by the plaintiff may be attenuated by the intervening acts of the deploying organisation and the opacity of the AI system; and the standard of care applicable to AI development is not well-defined in the absence of mandatory technical standards.
The Consumer Protection Act, 2019 provides a more accessible avenue for individual consumers harmed by defective AI-powered products or services, with its product liability provisions in Chapter VI applying where a product is defective or a service is deficient. However, the Act’s application to pure AI decision-making, as distinct from AI embedded in a physical product, is uncertain, and its remedies may be inadequate for systemic harm caused by widely deployed AI systems.
III. The EU AI Act: A Model for Regulatory Classification
The EU AI Act (Regulation (EU) 2024/1689), which entered into force in August 2024, establishes a risk-based regulatory framework for AI systems that has significant implications for global AI governance. The Act classifies AI systems by risk level: unacceptable-risk systems (prohibited outright), high-risk systems (subject to stringent requirements including conformity assessment, transparency, and human oversight obligations), limited-risk systems (subject to transparency obligations), and minimal-risk systems (subject to voluntary codes of conduct). High-risk categories include AI used in biometric identification, critical infrastructure, educational and vocational training, employment, essential private and public services, law enforcement, migration control, and administration of justice.
For high-risk AI systems, the EU AI Act imposes obligations on both developers (providers) and deploying organisations (deployers). Providers must implement a quality management system, ensure the accuracy, robustness, and cybersecurity of their AI systems, and register their systems in a public EU database. Deployers must use AI systems in accordance with the provider’s instructions, monitor their operation, and report serious incidents. This dual framework, which allocates responsibilities to both the provider and the deployer, offers a useful model for Indian policymakers designing a domestic AI regulatory regime.
IV. The Path Forward for India
India does not currently have a dedicated legislative framework governing AI liability. The Digital Personal Data Protection Act, 2023 (DPDP Act) regulates the processing of personal data, including data processed by AI systems, but does not resolve the broader question of liability for AI-generated harm. The Ministry of Electronics and Information Technology has published an advisory framework on responsible AI, but it remains non-binding. A targeted AI governance framework for India should, at a minimum, establish a risk-based classification of AI systems; allocate liability between AI developers, deploying organisations, and users; introduce mandatory transparency and explainability requirements for high-risk AI systems; and create a regulatory body with the technical expertise to oversee AI compliance.
V. Conclusion
The rapid deployment of AI across critical sectors makes the development of a clear and proportionate liability framework for AI-generated harm a matter of urgency for India. The EU AI Act provides a well-developed model for risk-based AI regulation that India can adapt to its own legal and socio-economic context. A well-designed AI accountability framework will serve the dual purpose of protecting individuals from algorithmic harm and providing the regulatory certainty that responsible AI developers and deploying organisations require to invest with confidence in the Indian digital economy.
Bibliography
Consumer Protection Act, 2019 (India).
Digital Personal Data Protection Act, 2023 (India).
Regulation (EU) 2024/1689 (EU Artificial Intelligence Act).
Information Technology Act, 2000 (India).
Ministry of Electronics and Information Technology, Advisory on Responsible AI (India).