Meta chatbot criticised over antisemitic remarks

(Getty Images)

Meta’s new chatbot has attracted criticism after it appeared to post antisemitic remarks.

Last week, Facebook’s parent company announced BlenderBot 3, the latest version of its artificially intelligent chat system.

Meta admitted that the system was not yet perfect but said it would improve over time, since it learns from interactions with humans and from feedback about those conversations.

But some users appear to have already found those imperfections, including antisemitic remarks from the bot. Wall Street Journal reporter Jeff Horwitz shared screengrabs of the system saying that Jewish people were “overrepresented among America’s super rich”.

He also shared conversations in which the system appeared to suggest that Donald Trump was still president and that he should serve beyond the constitutional limit of two terms.

The system even seemed to criticise the company that made it, talking about misinformation on Facebook and the amount of fake news on the platform.

But at the same time, other users found the bot was progressive on issues such as racism. In a piece for Gizmodo, writer Mack DeGeurin found that conversations with the bot suggested it was actively anti-racist, and that it continued to express those views even after the conversation had seemingly moved on.

Meta did say that the system was able to remember conversations, and that it had been trained on a large amount of data, presumably meaning that the text it draws on in conversation comes from a wide range of sources.

AI experts have repeatedly cautioned that such systems carry over the biases present in the data used to train them, meaning they can reflect the racism and other prejudices of the society that produced that data.
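That mechanism can be illustrated in miniature. The Python sketch below builds a trivial next-word model from a deliberately skewed toy corpus; the corpus and the function name are hypothetical, and the example stands in for real training pipelines only loosely, but it shows how a model with no views of its own ends up echoing the statistics of whatever text it was trained on.

from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for web-scraped training text.
# The skewed pairings below play the role of societal bias in real data.
corpus = [
    "the engineer fixed the server",
    "the engineer wrote the code",
    "the nurse comforted the patient",
    "the nurse comforted the child",
]

# Count which word follows which: a minimal bigram "language model".
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the continuation seen most often in training, if any.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("nurse"))     # -> "comforted"
print(most_likely_next("engineer"))  # -> "fixed" (first of a tie)

A large neural network is vastly more sophisticated, but the principle the experts describe is the same: without curation and safeguards, a model’s output mirrors whatever skew its training data contains.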

In its announcement of the bot, Meta did stress that it could still make problematic comments and that the company was looking to improve its conversations over time.

“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the company wrote.

“Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”

In the same announcement, Meta said that the chatbot would be improved over time. It also noted that some people do not have “good intentions” when using such systems and that it had “developed new learning algorithms to distinguish between helpful responses and harmful examples”. “Over time, we will use this technique to make our models more responsible and safe for all users,” it said.