Sputnik International

Users Decry Twitter 'Thought Police' as Platform Introduces Warnings Against 'Harmful Language'

© AP Photo / Matt Rourke. Twitter app icon on a mobile phone
In recent years Twitter has been making efforts to clean up harmful and abusive content on its social media platform. The company has so far relied on internal software and user-flagged rule-breaking tweets to stay ahead.

Twitter has announced a tool that prompts users to revise replies containing what it describes as "harmful" language before they are published.

The company said in a tweet from its support account on Tuesday that the new feature would first be rolled out as a "limited" experiment. After hitting send, users will be alerted if their reply contains language similar to posts that have previously been reported, and given the option to revise the message before it is published.

"When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful", Twitter said.

While the test comes as part of a broader attempt by Twitter to combat hateful posts on the social media platform, some users did not take well to the announcement, going so far as to describe it as "thought policing".

Gab, a self-professed "free-speech" alternative to Twitter, used the opportunity to advertise its own platform.

Others called for the introduction of an edit button. Currently, to change a tweet users must delete it and post it again.

There was some support for the measure, however, as well as calls for the site to counter "fake news".

In an interview with Reuters, a Twitter representative said that the policy is designed to get users to "rethink" comments before posting to ensure that they are in line with existing guidelines.

“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret”, said Sunita Saligram, Twitter’s global head of site policy for trust and safety.

Twitter's policies prohibit slurs, racist or sexist tropes, and degrading content, but until now enforcement has relied on users reporting offensive posts and on the company's own screening technology.
