OpenAI chief Sam Altman accused of lying and ‘psychological abuse’

Last year's failed rebellion against Mr Altman was sparked by his alleged 'toxic culture' - Jason Redmond/AFP

OpenAI chief executive Sam Altman has been accused of fostering a “toxic culture of lying” and engaging in “psychological abuse” at the ChatGPT maker.

Two former board members who helped to briefly oust Mr Altman from the OpenAI board in November said they “stand by” the decision to sack him.

Helen Toner and Tasha McCauley, who left the board after the failed rebellion, claimed Mr Altman’s “long-standing patterns” of behaviour had “undermined the board’s oversight of key decisions and internal safety protocols”.

In an article for The Economist, the former board members claimed they had received warnings from senior staff about Mr Altman’s conduct. The complaints alleged Mr Altman “cultivated ‘a toxic culture of lying’” and engaged in “behaviour [that] can be characterised as psychological abuse”.


Their comments come days after the departure of Ilya Sutskever, an OpenAI co-founder who also backed the coup against Mr Altman before changing sides and advocating for his return.

OpenAI, the company behind the powerful AI chatbot ChatGPT, has the stated aim of building ever more powerful AI systems.

Its board had been established to keep watch over the company's developments in an effort to ensure it created safe AI tools. Some AI experts, including Mr Altman, have warned of the potentially disastrous consequences of out-of-control AI.

Mr Altman has sought to position OpenAI as a leader in the safe development of the technology. Last week the company signed an industry pledge not to develop AI that posed "intolerable risks" to society.

However, Ms Toner and Ms McCauley wrote that “developments since [Mr Altman] returned to the company – including his reinstatement to the board and the departure of senior safety-focused talent – bode ill for the OpenAI experiment in self-governance”.

Earlier this month Jan Leike, a senior OpenAI safety researcher, resigned from the company, claiming that “safety culture and processes have taken a backseat to shiny products”.

Ms Toner and Ms McCauley warned that OpenAI’s “self-regulation” was “unenforceable, especially under the pressure of immense profit incentives” and called for government regulation of AI businesses.

Last year’s boardroom coup against Mr Altman proved short-lived. He returned as chief executive within days of his ousting with the backing of hundreds of staff and major investors. A new board was formed, purged of his critics.

The reasons behind the coup have remained a mystery, with the board saying only that Mr Altman had not been "consistently candid". OpenAI hired a law firm to conduct a review of Mr Altman's sacking, which concluded that his behaviour "did not mandate removal".

OpenAI was contacted for comment.