Facebook Will Identify Deepfake Images with Its Artificial Intelligence

Facebook has developed an artificial intelligence that aims to identify deepfake images and then trace their creators. A deepfake is a video in which a person’s voice and face are altered by artificial intelligence software so that the manipulated footage appears authentic. The technique is used mostly on public figures. Facebook’s system analyzes similarities across a collection of deepfakes to determine whether they share an origin, looking for distinctive patterns such as small specks of noise or subtle oddities in an image’s color spectrum.
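To make that idea concrete, here is a minimal sketch (not Facebook’s actual system) of fingerprint-based matching: each image’s high-frequency noise residual is extracted with a simple denoiser, and residuals are compared across images to see whether a group of deepfakes points to a shared source. The median filter, the normalization, and the cosine similarity used here are illustrative assumptions.

```python
# Illustrative sketch only: extract a noise "fingerprint" from each image and
# compare fingerprints to hint at whether two deepfakes share a common origin.
import numpy as np
from scipy.ndimage import median_filter

def extract_fingerprint(image: np.ndarray) -> np.ndarray:
    """Return the normalized noise residual of a grayscale image (values in [0, 1])."""
    smoothed = median_filter(image, size=3)   # crude denoiser
    residual = image - smoothed               # high-frequency pattern left by the generator
    return residual / (np.linalg.norm(residual) + 1e-8)

def fingerprint_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Cosine similarity between two residuals; higher values hint at a shared source."""
    fa, fb = extract_fingerprint(img_a), extract_fingerprint(img_b)
    return float(np.sum(fa * fb))
```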

By identifying these minor fingerprints in an image, Facebook’s artificial intelligence can infer details of how the neural network that created it was designed, such as the size of the model or how it was trained. “How could we, just by looking at a photo, know how many layers a deep neural network has or what loss function it was trained on?” says Tal Hassner, an applied science lead at Facebook AI.
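As a hedged illustration of that kind of “model parsing,” the sketch below shows a small, hypothetical network that maps an image fingerprint to estimates of the generator’s design, such as its depth and the loss it was trained with. The architecture, dimensions, and output heads are assumptions for illustration, not Facebook’s model.

```python
# Hypothetical "model parser": given a fingerprint vector, predict design
# properties of the generator that produced the image.
import torch
import torch.nn as nn

class ModelParser(nn.Module):
    def __init__(self, fingerprint_dim: int = 1024, num_loss_types: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(fingerprint_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.num_layers_head = nn.Linear(128, 1)               # regression: generator depth
        self.loss_type_head = nn.Linear(128, num_loss_types)   # classification: training loss

    def forward(self, fingerprint: torch.Tensor):
        features = self.backbone(fingerprint)
        return self.num_layers_head(features), self.loss_type_head(features)
```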

The complexity of artificial intelligence

Hassner and his colleagues tested the artificial intelligence on a database of 100,000 deepfake images generated by 100 different generative models, each producing 1,000 images. Some of those images were used to train the system, while others were withheld and presented to it as images of unknown origin.
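A rough sketch of how such an evaluation split might look follows. The 80/20 division at the model level is an assumption chosen to reflect the “unknown origin” test described here, not the team’s exact protocol.

```python
# Assumed layout: 100 generative models x 1,000 images each = 100,000 images,
# with some models held out entirely so their images arrive as "unknown origin".
import random

NUM_MODELS, IMAGES_PER_MODEL = 100, 1_000
model_ids = list(range(NUM_MODELS))
random.seed(0)
random.shuffle(model_ids)

train_models = set(model_ids[:80])    # fingerprints seen during training
heldout_models = set(model_ids[80:])  # never seen: tests generalization

dataset = [(model_id, image_idx)
           for model_id in range(NUM_MODELS)
           for image_idx in range(IMAGES_PER_MODEL)]

train_set = [s for s in dataset if s[0] in train_models]
test_set = [s for s in dataset if s[0] in heldout_models]
print(len(train_set), len(test_set))  # 80000 20000
```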

That helped test the artificial intelligence against its ultimate goal. “What we’re doing is looking at a photo and trying to estimate the design of the generative model that created it, even if we’ve never seen that model before,” Hassner says. He declined to share exactly how accurate the estimates were, but says “we are much better than random.”

A step forward for technology

“It’s a big step forward for fingerprinting,” says Nina Schick, author of the book Deep Fakes and the Infocalypse. But she points out, as Hassner and his colleagues do, that the AI only works on images that are entirely artificially generated, whereas many deepfakes are videos created by grafting one person’s face onto someone else’s body.

Schick also wonders how effective the AI would be outside laboratory settings, finding deepfakes in the wild. “The type of face detection models that we see are generally based on academic data sets and implemented in controlled environments,” says Schick.

Hassner declined to discuss how Facebook would use its new AI, but says that this kind of work is a cat-and-mouse game against the people who create deepfakes. “We are developing better identification models, while others are developing increasingly better generative models,” says Hassner. “I have no doubt that at some point there will be a method that completely fools us.” Although Facebook sometimes causes problems, there is no doubt that it is doing a great job of locating deepfakes.