The emergence of generative AI has brought about a significant shift in the advertising industry through its robust contribution to content creation, customer engagement and backend operations. The speed and effectiveness of AI have drawn businesses towards its extensive use to fulfil their creative needs. For instance, we have already witnessed innovative AI-generated advertisements launched by Heinz and Nestle. Actors such as James Earl Jones have licensed their voices to AI startups to earn royalties, as he did for the voice of Darth Vader in Star Wars, and various other artists use AI to edit their album covers.

Businesses can choose from a wide range of tools, such as DALL-E 2, Jasper Art, NightCafe, Starry AI and Midjourney. These tools generate images from a text or image prompt using models trained on images gathered online or uploaded to the platform by users. Other similar platforms include ChatGPT for prose, Movio for marketing and Boomy for music.

The legal complexities

In advertising, generative AI tools can help automate the production of original content such as text, images, articles and marketing materials. However, before adopting generative AI, businesses and other stakeholders such as advertisers, marketers, lawyers and artists should be aware of the legal risks behind AI-generated content. Beyond advantages like cost savings, efficiency and higher productivity, the widespread use of generative AI brings inherent legal complications and obstacles that advertisers need to understand.

A report by the law firm Khaitan & Co and the Advertising Standards Council of India (ASCI) notes that legal ramifications such as copyright infringement demand serious attention in the age of AI. Discussions are also ongoing around the ownership of AI-generated content, data security and AI bias. Copyright questions around generative AI remain far from settled. For instance, the photographer Boris Eldagsen refused to accept a Sony World Photography Award, revealing that his winning image was a product of generative AI and sparking widespread debate.

The uncertainty

Under India's Copyright Act, 1957, original works receive copyright protection and the author is recognised as the sole creator. However, uncertainty prevails because AI is not recognised as a legal entity, raising concerns over ownership and infringement. As a result, advertisers and marketers find it difficult to claim legal ownership of AI-generated content.

Generative AI models mostly rely on two kinds of data: training data and user input. Both require due diligence to avoid copyright infringement. This unclear status of AI under copyright law demands serious consideration to ensure compliance and security.

The misrepresentation

Training generative AI models on data from the internet and other open sources can produce biased and misleading output. Limited data diversity can lead to critical issues such as the underrepresentation of cultures, genders and races and the perpetuation of historical stereotypes.

The effectiveness of AI tools depends on the quality of their training data. The dominance of English-language data and models raises concerns about the disadvantages this creates for non-English speakers. Global cooperation is therefore essential to overcome this linguistic dominance and the exclusion of various communities from AI development.

Mitigating the liabilities

According to the report, AI is not recognised as a legal entity in India, which makes AI-generated works created without human intervention ineligible for copyright protection. Advertisers therefore hold no legal ownership of such content and have limited options in cases of infringement by third parties. Marketing firms may also struggle to fully transfer ownership of AI-generated works to their clients if they are not the rightful owners themselves.

The report recommends regularly reviewing AI platform terms, securing proper authorisations for copyrighted content and avoiding prohibited inputs. Implementing a robust content-review process, deploying internal regulations and guidelines, and including AI disclaimers can significantly help mitigate potential liabilities.

It is also essential to safeguard confidential data through measures such as non-disclosure agreements and robust security controls. Upskilling the human workforce is recommended as well, so that human oversight and AI remain in balance, ensuring the responsible use of AI and minimising ethical implications.

The Indian government's awareness of AI is evident in the significant steps and initiatives of MeitY and NITI Aayog, and the recently proposed Digital India Act to regulate cyberspace could be a glimmer of light at the end of the tunnel.

Sources of Article

  • https://www.businessinsider.in/advertising/news/generative-ai-usage-advertising-comes-with-its-legal-implications/articleshow/102356474.cms
  • https://www.bloomberglaw.com/external/document/X955T1BO000000/copyrights-professional-perspective-navigating-legal-risks-with-
  • Photo by Wesley Tingey on Unsplash
