Recent advances in generative AI, from the release of ChatGPT in November 2022 to the integration of AI in public-facing products such as Microsoft’s Bing search or Google’s Bard, have led to a flurry of activity by governments seeking to stay one step ahead of technological developments over which they have very little control. China, for example, has released draft regulations on generative AI, the EU has a draft AI Act ready and waiting, and the United States has actively begun discussions on potential regulations and has released a blueprint for an ‘AI Bill of Rights’.

Each of these regulatory developments has been informed by the respective jurisdiction’s domestic economic, technological, and political context. What, then, can countries that have only just begun to consider regulating AI, such as India, do to ensure their regulations effectively protect individuals from potential harms without impeding innovation? At the outset, three principal aspects need to be considered: the purpose of regulation, building an accurate liability framework, and incorporating essential regulatory features.

The purpose of regulation

Countries will not have uniform reasons to regulate generative AI. For example, China’s draft regulations are geared towards preventing developments in AI technology that might undermine the government’s control over the domestic internet and tech space. The EU, on the other hand, places prevention of harm to individuals front and centre in its draft AI Act. The purpose of regulation therefore needs to be defined first, in order to ascertain what the regulations should contain.

At a foundational level, all generative AI regulations should attempt to protect individuals against potential harm. These could include violation of an individual’s privacy and data rights, discrimination in access to services, or exposure to patently false or misleading news and information. Protection against these, and similar harms, has to be non-negotiable. An international consensus is forming on the necessity of ensuring such protections, though the granular details of practical implementation are still far from clear.

Of late there has been significant debate on whether regulations should also include protection from second-order harms, such as violation of intellectual property rights and defamation. While this has not yet been settled in a definitive manner, it is likely that generative AI systems will be subject to at least IPR laws in the near future.

Apart from individual harms, regulations could also look at systemic harms, specifically the increased concentration of economic and market power in the hands of Big Tech. The fundamental requirements for developing generative AI systems - access to substantial amounts of data, and the computing power to process this data - are readily available only to Big Tech companies. Given that such companies are located in only two major economies - the United States and China - this concentration also creates significant geopolitical risk for other countries. Some countries might therefore seek to reduce such concentration by mandating data-sharing practices or by investing in public or open-access AI systems.

Liability framework

To correctly identify the entities responsible for any harm, it is necessary to outline the generative AI value chain. In any commercial generative AI system, there are four principal actors: Developers, who build the underlying system; Deployers, who build on the base model to create advanced functionalities for themselves or for third-party customers; Users, who can be individuals, corporates, or platforms that use the AI system either internally or through product offerings; and finally Recipients, the people who receive the output of the AI system.

Recipients, being the only passive actors in the entire AI value chain, cannot have any regulatory burden imposed on them. The remaining three actors - Developers, Deployers, and Users - could be separate entities, don multiple roles, or be the same entity, depending on the organisation in question. For example, Meta or Google can be all three, whereas OpenAI will most likely be only a Developer. An AI SaaS company, on the other hand, could be either a Deployer or a User. Given that a generative AI system, by its very nature, is likely to undergo at least some modification between its development and its eventual use, liability should be split across these three types of entities. Current discussions tend to focus on developers as the principal point of liability; however, this could prove both inadequate and unfair, especially if the base model has been modified at later stages of deployment. The exact apportionment of liability will depend on the AI system, its use, and the harms arising from its deployment, and can therefore only be decided on a case-by-case basis.

Regulatory features

There are two inter-related but distinct sets of issues - front end and back end - that need to be addressed by any potential regulations. Front-end issues - and therefore regulations - are consumer-facing, whereas back-end issues concern the datasets used and the training models employed for the AI system.

At the front end, the critical feature to be insisted upon is transparency, i.e., the recipient, who will most likely be an individual, should be made fully aware that they are interacting with an AI system or that the content they are consuming has been created by one. In the case of interactions, such as with chatbots, this could be achieved through a disclaimer right at the beginning stating that the ‘person’ talking to the individual is in fact an AI system. In the case of content, especially photos and videos, a watermark or label indicating that the ‘author’ is a generative AI system must be made compulsory.
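To make the two front-end measures concrete, the following is a minimal, purely illustrative Python sketch of how a hypothetical service might prepend a disclaimer to the first chatbot response and attach a machine-readable “AI-generated” label to content metadata. The function and field names are assumptions for illustration, not any platform’s actual API or any regulator’s prescribed format.

    # Illustrative sketch of front-end transparency measures (hypothetical names).

    AI_DISCLAIMER = "You are interacting with an AI system, not a human."

    def wrap_chat_response(model_output: str, first_turn: bool) -> str:
        """Prefix the disclaimer to the very first response in a conversation."""
        return f"{AI_DISCLAIMER}\n\n{model_output}" if first_turn else model_output

    def label_generated_media(metadata: dict, model_name: str) -> dict:
        """Attach a provenance label marking the 'author' as a generative AI system."""
        labelled = dict(metadata)
        labelled["generator"] = model_name   # which system produced the content
        labelled["ai_generated"] = True      # stand-in for a compulsory label or watermark
        return labelled

    if __name__ == "__main__":
        print(wrap_chat_response("Here is the information you asked for.", first_turn=True))
        print(label_generated_media({"title": "City skyline"}, model_name="hypothetical-image-model"))

In practice, a robust watermark would be embedded in the media itself rather than in editable metadata, but the sketch captures the basic disclosure obligation being discussed.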

At the back end, the focus should be on ensuring that the data used in an AI model’s training is as accurate, and as free from bias, as possible. There are already numerous suggestions on how this could be achieved, including embedding traceable elements in the training data so that its broad provenance can be known, and actively removing pre-defined toxic content from training datasets. Some researchers have suggested that including ‘synthetic data’ representing historically marginalised groups in training datasets could offset potential bias problems.
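As a rough illustration of the back-end ideas above, the sketch below keeps a traceable provenance tag on every training record and screens out pre-defined toxic content before training. The field names and the keyword-based toxicity check are simplifying assumptions; a real pipeline would use a trained classifier and a richer provenance scheme.

    # Illustrative sketch of training-data curation: provenance tags plus toxicity screening.

    BLOCKLIST = {"toxic_phrase_example"}   # placeholder for a real toxicity classifier

    def is_toxic(text: str) -> bool:
        return any(term in text.lower() for term in BLOCKLIST)

    def curate(records: list[dict]) -> list[dict]:
        """Keep only records with a known source and non-toxic text."""
        return [
            r for r in records
            if r.get("source") is not None and not is_toxic(r.get("text", ""))
        ]

    if __name__ == "__main__":
        raw = [
            {"text": "A neutral news paragraph.", "source": "licensed-news-corpus"},
            {"text": "contains toxic_phrase_example", "source": "web-scrape"},
            {"text": "Paragraph of unknown origin.", "source": None},
        ]
        print(curate(raw))   # only the first record survives curation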

Way forward

It must be noted that generative AI, given its very nature, is likely to always evolve faster than potential regulations. It might therefore be imprudent to treat ‘regulations’ in this context as something definitive. Instead, countries and regulators must aim for an oversight mechanism that is flexible enough to keep up with ever-changing technologies while providing baseline protections for individuals. This mechanism, however designed, should be grounded in the real harms that can emanate from widespread use of generative AI services and must aim to ‘level up’ AI systems by making them more effective for everybody, rather than ‘level down’ such systems by forcing them to adhere to substandard output requirements.

Small adaptations to technology-neutral legislation and regulations are easier to achieve than formal technology-specific regulations. Existing tech-neutral legislation, whether on IPR, competition, or data protection, can be tailored to cover most of the harms that currently arise from the use of generative AI systems. This is also likely to be far more effective and innovation-friendly than adopting AI-specific legislation. In the long run, the only effective way to regulate technologies like generative AI is not to regulate the technology itself, but to focus on specific use cases that could lead to significant harm.
