Colorizing photos is great, but AI has a dark side

Lately we can't stop watching video after video about apps that work wonders with our photos: some boost their resolution and detail, others bring back the color, and others let us literally become someone else. However, these applications hide a danger beyond the obvious one of privacy. What is the dark side of AI in photos?


One of the clearest behaviors of artificial intelligence is that it hallucinates data, and no, we don't mean that as a joke. It follows from how these systems work: they learn a pattern in order to make predictions. In the case of sound, for example, we can have a model listen to several complete songs and then ask it to complete unfinished ones. The result? The creation of new works, although it will rarely succeed without massive training. The same goes for photos and videos: what the model delivers is based on a prediction, not on real information.
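To make this concrete, here is a purely illustrative Python sketch (not taken from any of these apps): a toy autoregressive predictor is fitted to a known signal and then asked to "complete" it. Every sample it adds is a pattern-consistent guess, never recorded data.

```python
import numpy as np

# Toy illustration: an autoregressive model "completes" a signal by
# predicting each next sample from the previous ones. The continuation
# looks plausible, but it is invented, not recorded.

signal = np.sin(np.linspace(0, 8 * np.pi, 400))  # the "known" recording

# Fit a linear predictor: next sample from the last `order` samples.
order = 10
X = np.stack([signal[i:i + order] for i in range(len(signal) - order)])
y = signal[order:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Complete" the missing part by feeding predictions back in.
completed = list(signal[-order:])
for _ in range(100):
    completed.append(float(np.dot(coeffs, completed[-order:])))

# Every value past the seed is a prediction, not real information.
print(completed[order:order + 5])
```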

Why is AI in photos and videos dangerous?

There is a topic nobody talks about: the preservation of our memories, whether personal or collective, and this is the big problem posed by AI photo manipulation. Under that premise, historical records can be altered for political reasons: people, symbols, or even entire archives can be erased at the source, which turns the much-desired digitization of media into a problem. A video from a crime scene? Voilà, the political dissident now appears in the footage so that he can be put on trial.

(Image: deepfake photos created with AI)

In a world where information flies by daily, and there is so much of it that verifying it all is impossible, fake news now has more power than ever thanks to what artificial intelligence lets us do with media: creating seemingly irrefutable false evidence that turns reporting the truth and debunking it into a titanic effort for journalists. At the same time, we find people destroying the past by "upgrading" old low-resolution images and films. Our children may no longer enjoy those works as they were made, but rather doppelgängers produced by artificial intelligence.

What's more, this writer has a family member who made the mistake of destroying old photos after giving them the AI enhancement treatment. The result? A set of family photos that look right but aren't: portraits of people who never really existed. Now imagine this applied to historical memory.

The importance of supervision

The problem with all these AI photo apps is that there is no supervision component. Every learning and inference process needs an evaluator: an element that tells the algorithm it has generated an erroneous result so that result is discarded as a basis for future conclusions. Many services give us no way to check whether the reconstruction is correct, and the general feeling is that, however powerful and impressive the AI may be, it often acts as a sworn enemy of reality: rather than reconstructing images, it delivers results that are partly invented and only half true.
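As a rough illustration of what that evaluator could look like, here is a hedged Python sketch. The `restore` model and the mean-squared-error metric are placeholders, not any real service's code; the point is the veto step that discards a bad reconstruction instead of accepting it as truth.

```python
import numpy as np

# Minimal sketch of the missing evaluator, under stated assumptions:
# `restore` stands in for any AI enhancement model, and mean squared
# error stands in for a real quality metric.

def evaluator(reconstruction: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared error against a trusted reference (0.0 = perfect)."""
    return float(np.mean((reconstruction - reference) ** 2))

def supervised_restore(restore, image, reference, max_error=0.01):
    reconstruction = restore(image)
    error = evaluator(reconstruction, reference)
    if error > max_error:
        return None, error  # rejected: must not feed future conclusions
    return reconstruction, error

# Usage with a trivial stand-in model (identity) on synthetic data:
rng = np.random.default_rng(1)
reference = rng.random((64, 64))
degraded = reference + rng.normal(0.0, 0.3, reference.shape)
result, error = supervised_restore(lambda img: img, degraded, reference)
print(result is None, round(error, 3))  # True: the evaluator vetoed it
```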

This capability should be built into the hardware itself; that is, there should be a mechanism that, faced with two images, can guarantee whether one resembles the other. However, such checks are completely absent from image reconstruction applications, especially those for old photos. After all, we have no original to compare against: it has been lost in the mists of time, and therefore what we get from AI-modified photos will never be a reconstruction of the real image.
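Where an original does survive, a perceptual similarity metric can play that comparison role. The sketch below uses SSIM (structural similarity) from scikit-image as one example of such a check; as noted above, it is useless precisely when no original remains.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Example comparison mechanism: SSIM scores how much one image
# structurally resembles another (1.0 = identical). This requires a
# surviving original, which is exactly what old photos often lack.

rng = np.random.default_rng(2)
original = rng.random((128, 128))
restored = np.clip(original + rng.normal(0.0, 0.05, original.shape), 0.0, 1.0)

score = structural_similarity(original, restored, data_range=1.0)
print(f"SSIM: {score:.3f}")  # near 1.0 = faithful; low = invented detail
```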

In summary, be careful with AI and historical memories: the massive use of these tools can lead to the creation of false ones.