
What is a deepfake? Artificial intelligence used to make fake videos | Editing and creation

Deepfake is a technology that uses artificial intelligence (AI) to create fake but realistic videos of people doing things they have never done in real life. The technique, which allows convincing video montages, has generated everything from pornographic content featuring celebrities to fictional speeches by influential politicians. Debates now circulate about the uses and consequences of the technology, for better or for worse.


The term deepfake appeared in December 2017, when a Reddit user with that name began posting fake sex videos of celebrities. Using deep-learning software, he applied the faces he wanted onto existing clips. The most widely circulated cases involved actresses Gal Gadot and Emma Watson. The word deepfake soon came to be used for a variety of videos edited with machine learning and other AI capabilities.

In the fake video, Obama's movements and speech are controlled by actor Jordan Peele. Photo: Reproduction / BuzzFeed


Special effects that create faces and scenes on screen are nothing new; cinema has been doing this for many years. The real turning point of the so-called deepfake lies in how easily it can be produced. Compared to what used to be necessary, the current method is simple and inexpensive. Anyone with access to the algorithms, some knowledge of deep learning, a good graphics processor and a large collection of images can create a convincing fake video.

How deepfakes are created

The software is built on open-source machine-learning libraries. According to an interview with the Motherboard website, the Reddit user used TensorFlow together with Keras, a deep-learning API written in Python. The creator feeds hundreds or even thousands of photos and videos of the people involved into a neural network, which processes them automatically. It works like training: the computer learns what a certain face looks like, how it moves, and how it reacts to light and shadow.

This training is done with the face in the original video and with the new face, until the program can find common points between the two and stitch one over the other. The procedure involves a kind of trick: the software receives an image of person A and processes it as if it were person B.
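The idea behind this trick — a shared encoder that learns features common to both faces, plus one decoder per person — can be illustrated with a toy linear version in plain NumPy. This is a conceptual sketch only: the dimensions and random data are made up for illustration, and real tools such as the TensorFlow/Keras setup mentioned above use deep convolutional networks trained on actual face images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face crops: two "people", 16-dim vectors.
faces_a = rng.normal(loc=1.0, size=(200, 16))
faces_b = rng.normal(loc=-1.0, size=(200, 16))

latent = 4
E = rng.normal(scale=0.1, size=(16, latent))   # shared encoder
Da = rng.normal(scale=0.1, size=(latent, 16))  # decoder for person A
Db = rng.normal(scale=0.1, size=(latent, 16))  # decoder for person B

def mse(X, D):
    """Mean squared reconstruction error through the autoencoder."""
    return float(np.mean((X @ E @ D - X) ** 2))

mse_start = mse(faces_a, Da)

lr = 0.01
for _ in range(500):
    # Train both autoencoders; the encoder E is shared, decoders are not.
    for X, D in ((faces_a, Da), (faces_b, Db)):
        Z = X @ E            # encode into the shared latent space
        err = Z @ D - X      # reconstruction error
        gD = Z.T @ err / len(X)
        gE = X.T @ (err @ D.T) / len(X)
        D -= lr * gD         # person-specific decoder update (in place)
        E -= lr * gE         # shared encoder update

mse_end = mse(faces_a, Da)

# The "swap": encode person A's faces, decode with person B's decoder.
swapped = faces_a @ E @ Db
```

After training, reconstruction of person A improves, and feeding A's encodings through B's decoder renders them "in the style of" person B — the same routing trick the deepfake software performs frame by frame.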

Deepfake is very recent and its definition is still fluid. In public discussion, the phenomenon is often conflated with technologies that have similar or complementary functions. There is, for example, a program announced by Adobe that can generate spoken lines in a person's voice from real samples. There are also experiments in facial reenactment, which recreates one person's speech and expressions on another person's face, and in lip sync, which produces videos of someone speaking from audio and images of their face.

In addition to the pornographic clips, other fake videos created with artificial intelligence that have gained notoriety feature former US President Barack Obama. In one, he calls then US President Donald Trump a "complete sh*t". In another, he delivers speeches that existed only as audio or in written form. There is also a Trump video produced with images and lines from a parody of the president on the comedy show Saturday Night Live.

Usually, videos of this type are not perfect, but they are realistic enough to fool many people. Malicious intent is not part of the definition of deepfakes, but it is part of the equation. The manipulation of politicians' images and voices is a warning sign. With such accessible tools, it becomes easier to spread false information that serves one's own interests, backed by supposed video evidence. This can pose a danger to democracy and society, even threatening the credibility of everything that is published.

In the case of fictitious porn videos, there are also complex legal problems of a more individual nature. Misleading creations can damage a person's life, famous or anonymous, and, for now, it is unclear what the courts can do about it. The videos released are not real; they are one person's face inserted onto another's body. But if the images manage to pass as true and there is no consent from the individual in question, how should the law deal with it?

Actress Gal Gadot's face was inserted into pornographic clips. Photo: Reproduction / SendVideos

Some have also raised questions about the possible trivialization of the term, similar to what happened with "fake news". The concern is that the word deepfake will be used so vaguely and casually that it becomes omnipresent, louder than the real impact of the technology. People with bad intentions could then take advantage of it to cast doubt on genuine evidence that does not please them.

Beneficial uses of technology

Not everything in the world of deepfakes is pessimism. There are examples of positive uses of the machine-learning algorithms that brought the new phenomenon to life. The technology's core lies in facial recognition and reconstruction, which suggests enormous potential. In fact, similar functions are already present in features that Internet users rely on daily.

Apple's Animoji and Samsung's AR Emoji map a person's face and reproduce their expressions in real time on virtual characters. On Instagram Stories and Snapchat, several filters detect and transform users' faces. There is even a filter that swaps faces between people in a photo.

iPhone X Animoji. Photo: Reproduction / YouTube

Cinema and the audiovisual industry as a whole could also benefit from a simpler method of creating special effects with faces, especially independent content producers with low budgets. Celebrities and digital influencers could sell their image to advertisers without having to attend filming.

In an interview with Mashable, however, the director of the MultiComp Lab at Carnegie Mellon University said the technology may have important applications beyond entertainment. According to the researcher, if developed to sufficient quality to operate in real time, the software could offer videoconferencing therapy to individuals who are not comfortable showing their faces, or enable job interviews free of gender or race bias.

How to recognize a deepfake

Now that deepfakes are part of our reality, it is essential to learn to identify them. We may eventually reach a point where that is impossible or very difficult, but today there are still details that help reveal a fake video. Pay attention to mouth movements and whether they correspond to what is being said. Also pay attention to the voice itself: do the intonation and tone sound normal?

Check the eyes to see if they blink. Most of the time, the algorithms do not reproduce this aspect well, nor the person's breathing. Also see whether the person moves naturally as a whole: recreations can struggle to fit all parts of the face to the rest of the body and to duplicate certain organic movements. And if the person in the video is someone you don't know well, look for other clips, preferably ones whose authenticity is certain, to compare.
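The blink check above can even be automated in a crude way. Real systems would extract a per-frame eye-openness measure with facial-landmark detectors; in the toy sketch below the signal, frame rate and threshold values are all made-up assumptions, and an unusually low blink rate is flagged as a warning sign.

```python
import numpy as np

def blink_rate(eye_openness, fps=30.0, closed_thresh=0.2):
    """Estimate blinks per minute from a per-frame eye-openness signal
    (1.0 = fully open, 0.0 = fully closed). A blink is a run of frames
    below closed_thresh; we count blink onsets."""
    closed = np.asarray(eye_openness) < closed_thresh
    # Transitions from open to closed, plus a blink already underway at frame 0.
    onsets = int(np.count_nonzero(closed[1:] & ~closed[:-1])) + int(closed[0])
    minutes = len(eye_openness) / fps / 60.0
    return onsets / minutes

# Synthetic 60-second clip at 30 fps: eyes open, with 15 short blinks.
signal = np.ones(1800)
for start in np.linspace(0, 1700, 15, dtype=int):
    signal[start:start + 4] = 0.05  # ~4 closed frames per blink

rate = blink_rate(signal)
# People typically blink roughly 15-20 times per minute; a far lower
# rate in a video is one of the cues described above.
suspicious = rate < 5
```

A clip whose subject never blinks would score near zero here, which is exactly the kind of detail early deepfakes tended to get wrong.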