How to Identify if a Photo is Fake with Google Assembler

Since the arrival of Photoshop and other image editors, retouching a photo has been a piece of cake. Some people create genuine works of art on their computers and even on their phones, but others cross the line. Now a new tool created by Google, called Assembler, makes it easy to tell whether an image is fake or has been modified.

The tool was developed by Jigsaw, an Alphabet unit dedicated to researching new technologies and their applications. Its latest project, Assembler, is now available to help journalists and fact-checkers verify whether an image is real or has been manipulated.


Assembler is essentially a combination of techniques that already exist. For example, it can detect changes in the brightness of different elements of an image, and it can tell whether some pixels were copied from another image even if the final texture looks different. It can also detect whether an image was generated with StyleGAN or a similar deepfake-creation tool. A rough idea of how such an ensemble of detectors might fit together is sketched below.
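Jigsaw has not published Assembler's internals, so the following is only a minimal toy sketch of the ensemble idea: several independent heuristics each score the image, and the scores are combined. The block-brightness and patch-duplication heuristics, the block sizes, and the simple averaging are all illustrative assumptions, not the real detectors.

```python
"""Toy sketch of an ensemble of image-forensics detectors.
Every heuristic and constant here is an assumption for illustration."""

import numpy as np

def brightness_inconsistency(img: np.ndarray) -> float:
    # Split the image into blocks and compare mean brightness;
    # spliced regions often have lighting that disagrees with the scene.
    gray = img.mean(axis=2)
    h, w = gray.shape
    blocks = [gray[y:y + 32, x:x + 32].mean()
              for y in range(0, h - 31, 32)
              for x in range(0, w - 31, 32)]
    spread = np.std(blocks) / (np.mean(blocks) + 1e-6)
    return float(min(spread, 1.0))

def copy_paste_score(img: np.ndarray) -> float:
    # Hash small patches and count near-duplicates; cloned pixels
    # tend to repeat exactly even after minor retouching.
    gray = (img.mean(axis=2) // 16).astype(np.uint8)
    seen, duplicates, total = set(), 0, 0
    h, w = gray.shape
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            key = gray[y:y + 8, x:x + 8].tobytes()
            duplicates += key in seen
            seen.add(key)
            total += 1
    # Scaled so that a modest cloned area already registers strongly.
    return min(10 * duplicates / max(total, 1), 1.0)

def ensemble_score(img: np.ndarray) -> float:
    # A plain average is one plausible aggregation; the real tool
    # may weight its detectors very differently.
    detectors = [brightness_inconsistency, copy_paste_score]
    return float(np.mean([d(img) for d in detectors]))

if __name__ == "__main__":
    fake = np.random.randint(0, 255, (256, 256, 3)).astype(float)
    fake[64:128, 64:128] = fake[0:64, 0:64]  # simulate a cloned region
    print(f"Manipulation score: {ensemble_score(fake):.2f}")
```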

After running all of these detectors, Assembler produces a single number: the probability that the image has been modified in some way. It is increasingly difficult for users to spot manipulated images by eye, so programs that do it automatically are more than welcome in the effort to curb misinformation spread through fake content.
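How the individual detector outputs are collapsed into one probability is not documented. A common pattern, shown here purely as an assumption, is to push the raw scores through a small calibrated model such as logistic regression; the weights and bias below are made up for illustration.

```python
import math

def combined_probability(scores, weights, bias=-2.0):
    # Weighted sum of detector scores passed through a sigmoid,
    # yielding a value in (0, 1) that reads as "probability modified".
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

# e.g. brightness, copy-paste, and GAN-detector scores for a suspect image
print(combined_probability([0.7, 0.9, 0.1], weights=[2.5, 3.0, 4.0]))
```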

In addition to the percentage, it offers detailed explanations of the modifications it has detected, giving more information about which elements of the image were changed and making it easier to work out why the modification was made. One common motive is the desire to sway public opinion in favor of a controversial idea when no arguments grounded in real events exist to defend it.

Low-resolution images, the tool's main enemy

During development, the team ran into some difficulties: the kinds of images journalists work with were barely represented in the datasets used to train the AI, and heavily compressed images were hard to analyze because they had been re-compressed many times. This happens, for example, when someone takes a screenshot, posts it to an Instagram Story, someone else screenshots that Story, and the result is then forwarded several times over WhatsApp.
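The generational loss that makes these images so hard to analyze is easy to reproduce. The small experiment below, using Pillow with example file names, simulates repeated screenshot/re-upload cycles; each save-and-reload round discards more of the high-frequency detail that forensic analysis depends on.

```python
import io
from PIL import Image

def recompress(img: Image.Image, rounds: int, quality: int = 70) -> Image.Image:
    # Each round simulates one screenshot or re-upload cycle.
    for _ in range(rounds):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img

original = Image.open("suspect.jpg").convert("RGB")  # example file name
degraded = recompress(original, rounds=8)
degraded.save("suspect_8_generations.jpg")
```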

To address this, they have gradually expanded the image database to improve detection, and they also use a reverse image search, similar to TinEye's, to look for the original image or a higher-resolution copy. For example, if we feed in an image that arrived via WhatsApp, the AI performs a reverse search to retrieve the original, which might have come from a tweet.
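One generic way reverse image lookup can work under the hood is perceptual hashing: a fingerprint coarse enough to survive recompression, so a degraded WhatsApp copy still matches its higher-resolution original. The average-hash sketch below is a standard textbook technique, not TinEye's or Assembler's actual code, and the file names are placeholders.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to 8x8 grayscale, then record which pixels are brighter
    # than the mean: a 64-bit fingerprint robust to scaling and JPEG noise.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing fingerprint bits; small distance = likely match.
    return bin(a ^ b).count("1")

# Example: match a degraded copy against a small library of originals.
library = {"tweet_original.jpg": average_hash("tweet_original.jpg")}
query = average_hash("whatsapp_copy.jpg")
best = min(library, key=lambda name: hamming(library[name], query))
print(best, hamming(library[best], query))
```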

Although the system is ideal for photos, its capabilities do not yet extend to video, where detecting deepfakes would be even more useful. For now it is available only to journalists and fact-checkers, but a version for the general public will likely follow, giving the rest of us a way to fight the problems created by photo-editing software.