Civil Liability of Artificial Intelligence

AI systems have now advanced to the point of becoming an integral and inextricable part of our lives, both as individuals and as a society. In several countries, AI systems handle crucial private and public functions such as counting votes, approving loans, online advertising, and autonomous transportation. The development and subsequent commercialization of AI systems raise the question of how liability risks will play out in practice. Since even the best technology is not error-free, and since interaction between humans and robots continues to increase, domestic robots, self-driving cars, and other autonomous systems will inevitably cause harm to people and property. In light of this, the question of how accountability for decision-making by AI systems should be allocated has rightly drawn attention. However, as technical advancements begin to outpace legal action, it is not entirely clear how the law will treat AI systems.


As a preliminary step, several countries have considered whether the existing legal framework is sufficient to handle these questions of liability and accountability. Legal systems contain a well-defined (though not necessarily codified) body of laws that ascertain the civil, criminal and contractual liability of persons who have inflicted harm on another person (or property). Researchers working to understand the interplay of law and technology opine that traditional approaches to liability are inadequate for dealing with autonomous artificial agents due to a combination of two factors: unpredictability, and causal agency without legal agency. The unpredictability of AI systems, and the inability to clearly explain their functioning, make it difficult to measure the extent of human intervention and control.


Much of the discussion surrounding liability boils down to determining the legal status of AI systems. Some researchers have argued in favour of granting a separate legal status to AI systems (similar to that given to companies), with such status determined by the level of autonomous decision-making and “intelligence” of the AI system. On the other hand, several arguments reject the idea of bestowing AI systems with a separate identity, on the basis that the issue of liability can be dealt with under a strict product liability regime; it is argued that treating AI systems as legally fictitious persons (like corporations) does not actually solve the problem and could give rise to several new ones.


In terms of practical solutions, there appear to be multiple models. One is to upgrade or modify existing law to accommodate technological advancements related to AI; others have considered enacting separate legislation that specifically addresses the legal aspects of developing and deploying AI systems. In some cases, there have also been calls for the prior regulation of AI, i.e. that certain classes of new algorithms should not be permitted to be distributed or sold without approval from a government agency designed along the lines of the Food and Drug Administration of the USA, which would develop standards and ensure compliance with them.


In light of the above, the current chapter maps the approaches adopted by various jurisdictions to address the issue.