OpenAI announces new safety board after angering employees by dismantling the old one

Stefan Wermuth—Bloomberg/Getty Images

Weeks after he dismantled the OpenAI team focused on AI safety, CEO Sam Altman says he will lead a new committee with the same charge.

The company, in a blog post Monday, announced the formation of the Safety and Security Committee, which it says will be responsible for making recommendations on critical safety and security decisions for all OpenAI projects.

The announcement follows the exit earlier this month of several key members of the company’s safety team, including cofounder Ilya Sutskever and Jan Leike. Leike was especially critical of OpenAI in his departure, accusing the company of neglecting “safety culture and processes” in favor of “shiny products.” He said he chose to leave because he had “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Given those criticisms, Altman’s oversight of the new safety committee is likely to draw scrutiny. Joining him on the committee are board chairman Bret Taylor and directors Adam D’Angelo and Nicole Seligman.


“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days,” the company wrote. “At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

The blog post also revealed OpenAI is in the process of training its “next frontier model,” a successor to the one that currently drives ChatGPT, saying “we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.”

News of the new safety committee comes less than 10 days after OpenAI dissolved its “Superalignment” team, which Leike and Sutskever led, folding its remaining members into broader research efforts at the company.

This story was originally featured on Fortune.com