Twitter is adding a new “Safety Mode” intended to protect people from seeing abusive posts.
The feature will automatically block accounts that use potentially harmful language, such as insults or hateful remarks, or that engage with tweets in repetitive or uninvited ways.
Those automatic blocks will stay in place for seven days, shielding the original poster from seeing those tweets. A blocked account is unable to send direct messages or replies to the user, and is also prevented from following their account or seeing their tweets.
Initially, the feature will roll out to a small group of users on iOS, Android and the web version of Twitter. Those people may be contacted by Twitter and asked about their experience with the feature.
Users will be given the option of turning the feature on or off. It also takes existing relationships into account, Twitter said, so that accounts people follow or regularly interact with will not be caught by the automatic blocking filters.
It will also provide information about who has been blocked and for how long, so that any erroneous automatic blocks can be undone. “We won’t always get this right and may make mistakes,” Twitter said in its announcement, adding that it would “regularly monitor the accuracy of our Safety Mode systems to make improvements to our detection capabilities”.
The company said it was introducing the feature as part of a broad push to encourage “healthy conversations” on its platform.
“While we have made strides in giving people greater control over their safety experience on Twitter, there is always more to be done,” said Katy Minshall, head of UK public policy at Twitter. “As part of our work in this space, today we’re introducing Safety Mode; a feature that allows you to automatically reduce disruptive interactions on Twitter, which in turn improves the health of the public conversation.
“Today’s rollout will be to a limited feedback group, so we can gain key insights ahead of a wider launch. We want to incorporate this feedback to ensure that the safety tools we’re developing truly empower people and make them feel comfortable engaging in the public conversation.”
The company said that before launch it had consulted a variety of people “with expertise in online safety, mental health, and human rights” on the feature. That also allowed the company to “think through ways to address the potential manipulation of our technology”, it said.