Generative AI is being used by people around the world to create and modify photos, videos, and audio in ways that greatly enhance learning, productivity, and creativity. As generated audiovisual content grows more common, it will be crucial for society to adopt new technologies and standards that help people understand the tools used to create the content they see online.

OpenAI, an American AI research organization, is developing new provenance techniques to improve the integrity of digital content, according to a recent blog post by the company. The technology will reportedly allow Internet users to confirm whether or not something was produced by artificial intelligence.

OpenAI says it is taking two main approaches to this challenge: collaborating with others to adopt, develop, and promote an open standard that can help users verify the tools used to create or edit various types of digital content, and developing new technology that specifically helps users identify content produced by OpenAI's tools.

Ensuring authenticity 

Standard methods should be used to exchange information about the creation process of digital content. Whether the content is the raw output from a camera or an artistic production from a tool like DALL·E 3, standards can help explain how it was generated and provide additional information about its sources in a way that is easily recognized across different contexts. 
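
To make this concrete, a C2PA manifest attaches structured, cryptographically signed assertions to a file. The sketch below shows, in simplified form, the kind of fields such a manifest carries; the field names follow the C2PA specification's conventions, but the values are hypothetical examples, not output from any real tool.

    # Simplified, illustrative C2PA manifest (hypothetical values).
    manifest = {
        "claim_generator": "DALL-E/3.0",      # the tool that produced the content
        "assertions": [
            {
                "label": "c2pa.actions",      # what was done to the content
                "data": {"actions": [{"action": "c2pa.created"}]},
            },
        ],
        "signature_info": {"issuer": "OpenAI"},  # who signed the claim
    }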

OpenAI has joined the Steering Committee of the Coalition for Content Provenance and Authenticity (C2PA). The C2PA standard is widely used for digital content certification, having been developed and adopted by a broad range of parties, including software companies, camera manufacturers, and online platforms. C2PA is a valuable tool for demonstrating where material originates. According to OpenAI, developing the standard is a critical component of its approach, and the company is eager to contribute to its growth.

"Earlier this year, we began adding C2PA metadata to all images created and edited by DALL·E 3, our latest image model, in ChatGPT and the OpenAI API. We will also integrate C2PA metadata for Sora, our video generation model, when the model is launched broadly," the blog post reads.
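
For readers who want to check this metadata themselves, one rough sketch is to call c2patool, the Content Authenticity Initiative's open-source command-line utility, and parse the manifest store it prints as JSON. The filename below is a placeholder, and the "active_manifest" and "manifests" field names reflect recent c2patool output, so treat them as an assumption about the installed version.

    # Minimal sketch: dump an image's C2PA manifest store with c2patool
    # (https://github.com/contentauth/c2patool); assumes the tool is installed.
    import json
    import subprocess

    result = subprocess.run(
        ["c2patool", "dalle3_image.png"],  # placeholder filename
        capture_output=True, text=True, check=True,
    )
    store = json.loads(result.stdout)

    # "active_manifest" names the most recent manifest in the store.
    active = store["manifests"][store["active_manifest"]]
    print(active.get("claim_generator"))   # e.g. the tool that made the image

Note that this only shows what the metadata claims; as the next section explains, the metadata can also be stripped, which is why it is one signal among several.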

An essential resource for trust-building 

As OpenAI noted in the blog post, "With increasing adoption of the standard, this information can accompany content through its lifecycle of sharing, modification, and reuse." People can still create deceptive content without this information (or remove it), but they cannot easily fake or alter it, making it an essential resource for building trust. "We think that in time, people will come to expect this kind of metadata, filling a crucial gap in digital content authenticity practices," they added. 

To promote the adoption and understanding of provenance standards, including C2PA, OpenAI and Microsoft are launching a $2 million societal resilience fund, the blog post states. The fund will support AI education and understanding through organizations such as Partnership on AI, International IDEA, and Older Adults Technology Services from AARP.

Efforts to enhance digital content integrity 

In addition to supporting C2PA, OpenAI is developing new provenance techniques to improve the integrity of digital content. The blog post states that this work includes tamper-resistant watermarking, which marks digital content such as audio with an invisible signal that aims to be hard to remove, and detection classifiers, tools that use artificial intelligence to assess the likelihood that content originated from generative models. According to the post, these technologies aim to be more resistant to attempts to remove signals about the source of content. OpenAI has also added audio watermarking to Voice Engine, its custom voice model, which is currently available only in a limited research preview.
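
OpenAI has not published the details of its watermarking scheme, but the general idea behind an inaudible audio watermark can be illustrated with a classic spread-spectrum approach: add a low-amplitude pseudorandom signal derived from a secret key, then detect it later by correlating against the same keyed signal. The sketch below is a toy version of that generic technique, not OpenAI's method.

    import numpy as np

    def embed_watermark(audio, key, strength=0.01):
        # Add a low-amplitude pseudorandom signal derived from `key`.
        rng = np.random.default_rng(key)
        return audio + strength * rng.standard_normal(audio.shape)

    def detect_watermark(audio, key, strength=0.01):
        # The mean correlation with the keyed signal estimates the
        # embedded strength; it stays near zero for unmarked audio.
        rng = np.random.default_rng(key)
        mark = rng.standard_normal(audio.shape)
        score = float(np.dot(audio, mark)) / audio.size
        return score > strength / 2

    # Toy demo on one second of stand-in "audio" at 48 kHz.
    clean = np.random.default_rng(0).standard_normal(48_000) * 0.1
    marked = embed_watermark(clean, key=1234)
    print(detect_watermark(marked, key=1234))  # True
    print(detect_watermark(clean, key=1234))   # False

A production watermark must survive compression, re-recording, and deliberate removal attempts; this toy version would not, which is exactly the robustness problem the blog post describes.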
