Twitter has put out the call to its users: Help us define our deepfake policy. The social network created a new survey in multiple languages asking users what they want to see from Twitter’s new deepfake rules.
Between this move and CEO Jack Dorsey’s announcement that Twitter is removing all political ads from its platform, the social network seems to be trying to redefine its image as “the good one,” especially when compared to its much larger rival, Facebook.
Deepfakes are videos that have been heavily and almost undetectably manipulated to make it look like the subject of the video is saying or doing something that never actually happened. They are easy to make, hard to spot, and have become a looming problem in the run-up to the 2020 election. Last month, Congress pressed 11 major social media companies to draft plans to deal with the proliferation of deepfake videos on their platforms.
Crowdsourcing a policy for deepfakes is a new step for any social network, and a very different approach from the one taken by competitors, namely Facebook and Instagram. On those sites, the policy has been to make the offending media less visible, but not to remove it entirely. These policies were put to the test over the summer when a deepfake video of Mark Zuckerberg himself surfaced on Instagram. That post is still accessible today.
The crowdsourcing approach has its pitfalls as well, said Ryan Long, a nonresident fellow at the Stanford Center for Internet and Society. “Twitter really has to think about this issue,” Long told Digital Trends. “If they just kind of shove responsibility onto the users, then they’re not looking at the deeper issue of whether they want to have terms and policies that they should have.”
As we shape our approach to synthetic and manipulated media, we think it's critical to consider global perspectives. We want to hear from you.
— Twitter Safety (@TwitterSafety) November 11, 2019
Long said he fears a pile-on effect: that no matter what Twitter’s eventual policy is, a video will be flagged as fake or abusive simply because some segment of Twitter users doesn’t like it, even if the video is real — and the original poster will have no recourse. “There has to be some objective basis for doing this,” Long said. “The question is, how will they audit the truth?”
The current draft of Twitter’s new deepfake rules is short, and says that false media will be removed only if it threatens a person or group’s physical safety. However, in the survey, Twitter asks users how strongly they believe media should also be removed if it threatens a person’s mental health, reputation, or personal wealth, among other assets.
The survey also includes questions like, “I agree/disagree that Twitter will apply its policy fairly and consistently,” and “I agree/disagree that Twitter should do something about misleading or altered media.”
These queries nod not only at the ever-swirling discussion around what, exactly, a social media platform’s role is in spreading both news and misinformation, but also at Twitter’s own internal problems with enforcing its Terms of Service evenly across all users — a problem noted particularly by people, notably women, who say they have been harassed or bullied on the platform. The survey suggests a degree of self-awareness, and possibly the first steps toward the platform becoming a better kind of social network.
This stands in stark contrast to the ongoing scandals that Facebook has brought upon itself. Along with ongoing privacy violations — including two major ones that came to light in less than 24 hours last week — Facebook has come under fire for the way it treats political ads. CEO Mark Zuckerberg said the social network will allow blatantly false political ads, adding that it’s not up to the company to fact-check politicians. Twitter, on the other hand, is getting rid of political ads altogether.