To combat the widespread abuse of deepfakes, Google, in collaboration with Jigsaw, today announced the release of a large dataset of visual deepfakes. The dataset has been incorporated into the new FaceForensics benchmark developed by the Technical University of Munich and the University Federico II of Naples.

In recent years, rapid advances in deep learning have given rise to technologies that can synthesize hyperrealistic images, speech, music, and even video. These technologies enable useful applications such as text-to-speech and the generation of training data for medical imaging. However, their misuse has led to deepfakes: video and audio clips manipulated by deep generative models.

Earlier, to fight the widespread misuse of deepfakes, Google released a dataset of synthetic speech to support the development of high-performance fake-audio detectors.

The new dataset released today was created by working with paid, consenting actors to record hundreds of videos over the past year. These videos were then processed with publicly available deepfake generation methods to produce thousands of deepfakes.

"The resulting videos, real and fake, comprise our contribution, which we created to support deepfake detection efforts directly," notes Google Research's Nick Dufour and Jigsaw's Andrew Gully in a blogpost. 

As part of the FaceForensics benchmark, this new dataset is now available to the research community for use in developing synthetic video detection methods.
