Seeing is believing. Or rather, it used to be, until the emergence of deepfake video. While this revolutionary technology is still in its infancy, it is often convincing enough to fool the casual viewer. Given the usual pace of technological progress, it won’t be long before even experts struggle to distinguish computer-manipulated deepfake video from the real thing, and the potential repercussions could be grave. Experts across the world are well aware of what’s at stake: civil disorder sparked by an inflammatory deepfake video, election results tainted by a computer-generated speech, fake ‘celebrity porn’, people framed for crimes they did not commit. The technology behind deepfakes is improving fast, and it may not be long before something goes wrong in the worst possible way.
It’s not possible to put the genie back in the bottle, and it would be wrong to stifle an emerging technology that could one day have a positive impact on fields from fashion to gaming. That is why it’s imperative that we seek out new methods of detecting deepfake video.
Deepfake apps normally utilise a Generative Adversarial Network (GAN), a form of machine learning in which two separate ML systems go head to head. The first, the generator, learns the properties of a training dataset and produces new instances (in the case of deepfakes, images of faces). The second, the discriminator, tries to tell these generated instances apart from real examples, and the generator improves until its output passes that check. Deepfake creators have several techniques to choose from: some superimpose a target’s face onto existing footage, others synthesise completely artificial faces, and a newer variant, Deep Video Portraits (DVP), raises the stakes further by recreating the facial expressions of one person on the likeness of another, producing more lifelike, almost undetectable footage.
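To make the adversarial setup concrete, here is a minimal sketch of a GAN training loop written in PyTorch. It learns a toy one-dimensional distribution rather than faces, and the network sizes, learning rates and training length are all illustrative; deepfake generators follow the same generator-versus-discriminator pattern at vastly larger scale.

```python
# A minimal GAN sketch (PyTorch): a generator and a discriminator
# trained against each other. The "real" data here is a toy 1-D
# Gaussian; deepfake systems apply the same loop to face images.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # discriminator outputs raw logits

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # samples from N(2.0, 0.5)
    fake = generator(torch.randn(64, 8))    # samples built from noise

    # 1) Train the discriminator to score real high and fake low.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator score fake high.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

The key design choice is the alternation: each network only improves because the other keeps improving, which is exactly why generated output grows steadily harder to detect.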
As of now, deepfakes often have a ‘tell’ that most of us can spot (our brains are very good at detecting when human faces seem ‘off’), but as the technology improves, we are bound to reach a point where humans can’t easily distinguish manipulated or artificially generated videos from the real thing. Even if that point is far away, increased access to deepfake software could lead to an avalanche of ‘fake’ news (and other content) that overwhelms human moderators and fact checkers and has real-world consequences.
Can we fight fire with fire? If ML and AI are being used to create deepfakes, could they also be used to detect them? That’s what several experts are counting on. Work on deepfake detection has been spurred on by fears that computer-generated videos could influence elections, target businesses, or foment civil disorder. In the United States, which has already seen a debate rage over alleged external interference in the 2016 elections, many fear deepfakes could be the next tool to be ‘weaponised’. With elections just a year away, the US government has tasked the Defense Advanced Research Projects Agency (DARPA) with developing better methods of detecting deepfake video. DARPA, along with the nonprofit research group SRI International, is focusing on spotting tampered footage through the inconsistencies the generation process leaves behind.
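What ‘using inconsistencies’ can mean in practice is illustrated by one cue documented in academic work (not necessarily DARPA’s own pipeline): abnormal blink rates. Early deepfake generators were trained largely on photographs of open-eyed subjects, so synthesised faces blinked far less often than real ones. The sketch below, which assumes OpenCV, dlib, SciPy and the standard shape_predictor_68_face_landmarks.dat model file, estimates a clip’s blink rate from the eye aspect ratio; the 0.2 blink threshold is an illustrative value.

```python
# Sketch of a blink-rate check: count how often the eye aspect ratio
# (EAR) dips below a closed-eye threshold, then normalise by duration.
# Requires opencv-python, dlib, scipy and the 68-point landmark model.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(eye):
    # Vertical/horizontal landmark distances; near zero when the eye closes.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def blink_rate(video_path, ear_threshold=0.2):
    """Return blinks per second of face-bearing footage."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eye_closed, face_frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2
        if ear < ear_threshold and not eye_closed:
            blinks, eye_closed = blinks + 1, True
        elif ear >= ear_threshold:
            eye_closed = False
        face_frames += 1
    cap.release()
    return blinks / (face_frames / fps) if face_frames else 0.0
```

An unusually low result would flag a clip for closer inspection; like any single cue, it weakens as generators learn to mimic natural blinking, which is why research teams combine many such signals.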
Meanwhile, Facebook, Microsoft and industry body Partnership for AI have joined forces with universities and research institutions to come up with the Deepfake Detection Challenge. This competition, which will run through to March 2020, will offer participants access to a dataset of unmodified video, as well as a subset of videos created using various AI techniques.
Google, meanwhile, has followed up a prior initiative to detect fake audio by launching a new dataset, which includes genuine clips of actors, as well as deepfakes generated from this footage. The data is available for download and has also been added to FaceForensics, a benchmark system that combines several AI / ML methods of detecting deepfakes. This is, however, just a fraction of the ongoing efforts. Other organisations looking at harnessing AI to defend against the deepfake threat include ZeroFox, which has created Deepstar, an open-source toolkit for verifying video. Another team of researchers, from the University of California, Berkeley, and the University of Southern California, is working on creating ‘soft-biometric’ markers for high-profile individuals using publicly available footage. These can then be used to ascertain whether a video has been manipulated, by checking whether those markers are present in the footage under scrutiny.
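As a rough illustration of the soft-biometric idea, the sketch below fits a one-class model to a target’s genuine mannerisms and flags clips that fall outside that envelope. It assumes scikit-learn, and the feature vectors are random stand-ins for the per-clip summaries of facial action units and head movements used in the actual research.

```python
# Sketch of a soft-biometric check: fit a one-class model to a target's
# genuine mannerisms, then flag clips that fall outside that envelope.
# The 16-D feature vectors below are random stand-ins for real per-clip
# summaries of facial action units and head movements.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in features for 200 authentic clips of the target individual.
authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(200, 16))

# Learn the envelope of the target's genuine motion signature.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(authentic_clips)

# A face-swapped clip copies the face but not the mannerisms, so its
# features land far from the envelope; predict() returns -1 for outliers.
suspect_clip = rng.normal(loc=3.0, scale=1.0, size=(1, 16))
verdict = model.predict(suspect_clip)[0]
print("possible deepfake" if verdict == -1 else "consistent with subject")
```

A real system would, of course, extract genuine motion features from verified footage of the individual, which is why the approach is best suited to heads of state, CEOs and other people with plenty of authentic public video.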
These collaborative efforts to combat the scourge of deepfakes can only be welcomed, and the decision of many teams to fund new datasets and make them available to other researchers presents a great opportunity for AI startups. Deepfakes can affect anyone, from the CEO of a firm to a head of state to the ordinary citizen; no one is immune, and the consequences could even be deadly. For this reason, it wouldn’t be unusual for deepfake-busting AI solutions to become a valuable asset for content hosts, governments, and businesses alike.
Source: IndiaAI