In preparation for the US presidential election, Twitter has decided to declare "war" on manipulated content. As of March 5, a new policy comes into effect that prohibits the sharing of fake images and videos intended to deceive or harm users. Tweets containing this type of content will also carry a warning label to alert the public and curb the spread of false information.
The company led by Jack Dorsey defines "manipulated content" as content that has been "substantially edited" in a way that "significantly changes its composition, sequence, timing or framing", that has had visual or audio information added or removed, or that has been fabricated or simulated to depict a person. This category covers not only sophisticated deepfakes but also less elaborate edited videos.
According to the new policy, the context of the Tweets themselves will also be evaluated, including the text that accompanies the manipulated content and the profile of those who published or shared it. Tweets likely to cause harm to users of the platform will be removed. The social network indicates that threats to the safety and privacy of an individual or group are among the reasons that justify removing a Tweet. To limit the spread of manipulated content, the platform will also reduce the visibility of the Tweets in question, displaying a warning to other users before they share them.
The decision comes after Twitter collected feedback from its users in November last year and consulted experts on the best approach. About 90% of the 6,500 responses received supported excluding this type of content when it causes some form of harm. In addition, 70% of the users who gave their opinion said it would be unacceptable "not to take action against manipulated content".