Twitter will study ‘unintentional harms’ of its algorithm

Twitter logo displayed on laptop screen (AFP via Getty Images)

Twitter has launched a company-wide effort called the “Responsible Machine Learning Initiative” to study whether its algorithms cause unintentional harm.

According to the microblogging site, the initiative seeks to ensure “equity and fairness of outcomes” when the platform uses machine learning to make its decisions, a move that comes as social media platforms continue to face criticism over racial and gender bias amplified by their algorithms.

The company said it also seeks to enable better transparency about the platform’s decisions and how it arrives at them, while providing better agency and choice of algorithms to its users.


Twitter noted that its machine learning algorithms can impact hundreds of millions of Tweets per day, adding that “sometimes, the way a system was designed to help could start to behave differently than was intended.”

It said the aim of the new initiative is to study these subtle changes and use the knowledge to build a better platform.

In the upcoming months, the company’s ML Ethics, Transparency and Accountability (META) team plans to study the gender and racial bias in its image cropping algorithm.

This comes after several users pointed out last year that the platform’s automatic photo cropping appeared to favour the faces of white people over those of people with darker skin.

The team is also slated to conduct an analysis of content recommendations for users from different political ideologies across seven countries.

Twitter said its researchers would also perform a fairness analysis of the Home timeline recommendations across racial subgroups.
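Twitter has not published the methodology behind this fairness analysis. As a rough illustration only, one common starting point is a demographic-parity check, which compares how often each group’s content is surfaced; the group labels, data, and function names below are entirely hypothetical.

```python
# Hypothetical sketch of a demographic-parity check, one simple form of
# fairness analysis. This is NOT Twitter's actual method; the groups and
# data are illustrative placeholders.

def exposure_rate(recommended_flags):
    """Fraction of a group's candidate tweets that were recommended."""
    return sum(recommended_flags) / len(recommended_flags)

def parity_gap(groups):
    """Largest difference in recommendation rates between any two groups."""
    rates = [exposure_rate(flags) for flags in groups.values()]
    return max(rates) - min(rates)

# Illustrative data: 1 = candidate tweet surfaced in the Home timeline.
sample = {
    "group_a": [1, 1, 0, 1, 0],  # 60% recommended
    "group_b": [1, 0, 0, 0, 1],  # 40% recommended
}

print(round(parity_gap(sample), 2))  # 0.2 — group_a is surfaced more often
```

A real audit would be far more involved (controlling for content, engagement history, and sampling bias), but a gap metric like this is the kind of aggregate disparity such a study would report.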

“The META team works to study how our systems work and uses those findings to improve the experience people have on Twitter,” the company noted.

It added that its researchers are also building explainable ML solutions that can help users better understand the platform’s algorithms, what informs them, and how they impact the Twitter feed.

According to the microblogging platform, the findings from these studies may lead to changes at Twitter, such as removing problematic algorithms or building new standards into its design policies when a system has an outsized impact on particular communities.
