‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence

Photograph: The Washington Post/Getty Images

Sam Altman, CEO of OpenAI, the company that developed the controversial consumer-facing artificial intelligence application ChatGPT, has warned that the technology comes with real dangers as it reshapes society.

Altman, 37, stressed that regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity. “We’ve got to be careful here,” Altman told ABC News on Thursday, adding: “I think people should be happy that we are a little bit scared of this.

“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”


But despite the dangers, he said, it could also be “the greatest technology humanity has yet developed”.

The warning came as OpenAI released the latest version of its language AI model, GPT-4, less than four months after the original version was released and became the fastest-growing consumer application in history.

In the interview, the artificial intelligence engineer said that although the new version was “not perfect”, it had scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT maths test. It could also write computer code in most programming languages, he said.

Fears over consumer-facing artificial intelligence, and artificial intelligence in general, focus on humans being replaced by machines. But Altman pointed out that AI only works under direction, or input, from humans.

Related: The stupidity of AI

“It waits for someone to give it an input,” he said. “This is a tool that is very much in human control.” But he said he had concerns about which humans had input control.

“There will be other people who don’t put some of the safety limits that we put on,” he added. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”

Many users of ChatGPT have encountered a machine whose responses are defensive to the point of paranoia. In a demonstration for the TV news outlet, GPT-4 conjured up recipes from the contents of a fridge.

The Tesla CEO, Elon Musk, one of the first investors in OpenAI when it was still a non-profit company, has repeatedly issued warnings that AI or AGI – artificial general intelligence – is more dangerous than a nuclear weapon.

Musk voiced concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its AI ethics oversight division. “There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!” Musk tweeted in December. This week, Musk fretted, also on Twitter, which he owns: “What will be left for us humans to do?”

On Thursday, Altman acknowledged that the latest version uses deductive reasoning rather than memorization, a process that can lead to bizarre responses.

“The thing that I try to caution people the most is what we call the ‘hallucinations problem’,” Altman said. “The model will confidently state things as if they were facts that are entirely made up.

“The right way to think of the models that we create is a reasoning engine, not a fact database,” he added. While the technology could act as a database of facts, he said, “that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorize.”

What you get out depends on what you put in, the Guardian recently warned in an analysis of ChatGPT. “We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully.”