Former OpenAI chief scientist launches own AI company

The former chief scientist and co-founder of OpenAI has announced the launch of his own artificial intelligence (AI) company, which he said would focus on safety.

Ilya Sutskever said he was launching Safe Superintelligence and that building safe AI was “our mission, our name, and our entire product roadmap”.

In a launch statement on the new company’s website, the firm said it would approach “safety and capabilities in tandem” as “technical problems to be solved”, and would “advance capabilities as fast as possible while making sure our safety always remains ahead”.

Some critics have raised concerns that major tech and AI firms are too focused on reaping the commercial benefits of the emerging technology and are neglecting safety principles in the process. Several former OpenAI staff members have raised the issue in recent months when announcing their departures from the company.

Elon Musk, a co-founder of OpenAI, has also accused the company of abandoning its original mission to develop open-source AI to focus on commercial gain.

In what appeared to be a direct response to those concerns, Safe Superintelligence’s launch statement said: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Mr Sutskever was involved in the high-profile attempt to oust Sam Altman as OpenAI chief executive last year. He was removed from the company’s board following Mr Altman’s swift return, and left the company in May this year.

He has been joined at Safe Superintelligence by former OpenAI researcher Daniel Levy and former Apple AI lead Daniel Gross; both are named as co-founders at the new firm, which has offices in California and Tel Aviv, Israel.

The trio said the company was “the world’s first straight-shot SSI (safe superintelligence) lab, with one goal and one product: a safe superintelligence”, calling it the “most important technical problem of our time”.