Twitter rolls out pre-tweet warning about entering ‘intense’ conversations

New Twitter feature will warn users about entering intense interactions on the platform (iStock/composite)

Twitter will begin warning users about entering into “heated or intense” interactions on the platform.

Pre-tweet alerts will offer a “heads up” if a conversation contains sensitive or controversial subjects, while a pop-up will warn users not to break Twitter’s rules.

The pop-up also encourages people to communicate with respect, check the facts, and be open to diverse perspectives.

The new system is designed to better support healthy conversation, according to Twitter.

“Ever want to know the vibe of a conversation before you join in?” read a post from the official Twitter Support account.

“We’re testing prompts on Android and iOS that give you a heads up if the convo you’re about to enter could get heated or intense.”

Twitter has faced criticism for providing a platform for harassment and abuse, prompting a number of measures aimed at reducing the toxicity of interactions.

A new “Safety Mode” was tested earlier this year, which automatically blocks accounts for seven days if they are suspected of trolling.

“When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier,” Twitter said.
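Twitter has not said how that assessment is made. Purely as an illustration, a scoring system of this kind might combine a content-toxicity score with an author–replier relationship signal, as in the Python sketch below; every name, signal and threshold is hypothetical rather than Twitter’s actual implementation.

    # Purely illustrative: Twitter has not published its implementation.
    # All names, signals and thresholds here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Interaction:
        toxicity: float        # content score from a text classifier, 0.0-1.0
        follows_author: bool   # does the replier follow the tweet's author?
        prior_replies: int     # how often this pair has interacted before

    def negative_engagement_score(x: Interaction) -> float:
        """Combine the content signal with the relationship signal:
        hostile text from a stranger is weighted as riskier than the
        same text from an account with an existing relationship."""
        relationship = 1.0
        if x.follows_author:
            relationship -= 0.4
        relationship -= 0.03 * min(x.prior_replies, 10)
        return x.toxicity * max(relationship, 0.1)

    def flag_for_safety_mode(x: Interaction, threshold: float = 0.6) -> bool:
        return negative_engagement_score(x) >= threshold

    # The same hostile reply is flagged from a stranger but not from
    # a long-standing mutual contact.
    print(flag_for_safety_mode(Interaction(0.8, False, 0)))   # True
    print(flag_for_safety_mode(Interaction(0.8, True, 12)))   # False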

Last year, anyone attempting to reply to a tweet with “harmful” language received a pop-up urging them to reconsider their choice of words.

A similar system was also launched by Facebook, with users receiving a “nudge” if they were about to post a comment containing offensive language.

Both social media firms rely on a combination of human moderators and artificial intelligence algorithms to moderate the vast amounts of content posted to their platforms.