6 Feb, 2020 14:15

Twitter hoards ministry-of-truth powers, making rules on ‘manipulated’ images with loopholes to ban anything

Twitter will remove or label “manipulated images and videos” on its platform in a bid to control disinfo, it has announced – though its new policy reads more like it’s setting itself up as a clairvoyant arbiter of truth.

“Synthetic or manipulated media that are likely to cause harm” will be removed – or at least plastered with warning labels – under new rules announced by Twitter on Tuesday.

Citing overwhelming demand for stricter content regulations, perhaps in a preemptive attempt to excuse what some have already interpreted as overreach, the microblogging platform laid out a lengthy list of criteria that would supposedly be considered before removing or labeling a tweet.

Media is considered “synthetic or manipulated” if it is edited to significantly change its meaning, sequence, or other attributes, and of course ‘deepfakes’ are right out. But “any visual or auditory information… that has been added or removed,” including subtitles or audio overdubbing, can also get content flagged. Technically, this creates a loophole that makes even content that has simply been translated from another language a potential target.


A tweet may be labeled as “deceptive” if the context in which it is shared “could result in confusion or misunderstanding” or “suggests a deliberate intent to deceive people.” No one can control another user’s (mis)interpretation of their tweet – some people are just easily confused – and similar rules have already been used to target politically charged satire and memes. No matter how clearly labeled, one man’s joke is inevitably declared another man’s fake news. But a deliberate intent to deceive people? How does Twitter propose to determine who is telling an innocent joke and who is maliciously trolling?

Making the viewer’s sense of humor the responsibility of the poster is likely to have a profoundly chilling effect on memes and other political humor, already besieged by ‘fact-checkers’ sinking their fangs into everything from the parody site Babylon Bee to the obviously photoshopped image of US President Donald Trump giving a Medal of Honor to a terrorist-killing military dog. With the Pentagon itself taking aim at “polarizing viral content” – i.e., political memes – and so-called “malicious intent” in a sinister project announced in September, Twitter may have unwittingly volunteered itself as the first battlefield in the War on Memes.

While tweets containing synthetic or deceptive content will merely get slapped with a warning label when the new rules take effect on March 5, content “likely to impact public safety or cause serious harm” is singled out for removal. This seemingly uncontroversial rule becomes menacingly vague on closer examination, listing “targeted content that includes tropes, epithets, or material that aims to silence someone” and “threats to the privacy or ability of a person or group to freely express themselves” among the categories of banned speech.

While this would seem to outlaw the tactics of groups like Sleeping Giants, whose literal goal is to get those they unilaterally deem ‘fascists’ deplatformed by ginning up outrage mobs against them, Twitter is unlikely to defend the victims of such groups, if past behavior is any indication.


This raises the question: what constitutes ‘serious harm’, or for that matter ‘public safety’, and who determines what is likely to result in it? Twitter has allowed faux-Iranian bots operated by the anti-Tehran Mojahedin-e Khalq (MEK) cult to run rampant on the platform, demanding an American invasion of “their” country – and some of their posts have been retweeted by Trump himself as “proof” the Iranian people want regime change.

The new rules leave a wide swath of content open to interpretation, giving Twitter carte blanche to determine the intent and likely repercussions of any given tweet. While no one wants to be flooded with deepfakes or other truly deceptive content, especially during an election season, in practice such rules have been applied unevenly to silence political and social viewpoints that diverge from ‘woke’ centrist orthodoxy. Giving Twitter the power to determine both truth and intention confers an authority the platform has already shown it cannot handle responsibly.


The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.
