Twitter rolls out pre-tweet warning about entering ‘intense’ conversations

New Twitter feature will warn users about entering intense interactions on the platform (iStock/ composite)

Twitter will begin warning users about entering into “heated or intense” interactions on the platform.

Pre-tweet alerts will offer a “heads up” if a conversation contains sensitive or controversial subjects, while a pop-up will warn users not to break Twitter’s rules.

The pop-up also encourages people to communicate with respect, check the facts, and be open to diverse perspectives.

The new system is designed to better support healthy conversation, according to Twitter.

“Ever want to know the vibe of a conversation before you join in?” stated a post by the official account for Twitter Support.

“We’re testing prompts on Android and iOS that give you a heads up if the convo you’re about to enter could get heated or intense.”

Twitter has faced criticism for providing a platform for harassment and abuse, prompting a number of measures aimed at reducing the toxicity of interactions.

A new “Safety Mode” was tested earlier this year, which automatically blocks accounts for a period of seven days if they are suspected of being a troll.

“When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier,” Twitter said.

Last year, anyone attempting to reply to a tweet with “harmful” language received a pop-up urging them to reconsider their choice of words.

A similar system was also launched by Facebook, with users receiving a “nudge” if they were about to post a comment containing offensive language.

Both social media firms rely on a combination of human moderators and artificial intelligence algorithms to moderate the vast amounts of content posted through their apps.
