OpenAI Launches Faster and Cheaper AI Model With GPT-4o

(Bloomberg) -- OpenAI is launching a faster and cheaper version of the artificial intelligence model that underpins its chatbot, ChatGPT, as the startup works to hold on to its lead in an increasingly crowded market.

During a livestreamed event on Monday, OpenAI debuted GPT-4o. It’s an updated version of its GPT-4 model, which is now more than a year old. The new large language model, trained on vast amounts of data from the internet, will be better at handling text, audio and images in real time. The updates will be available in the coming weeks.

Asked a question verbally, the system can reply with an audio response in milliseconds, the company said, allowing for a more fluid conversation. In a demonstration of the model, OpenAI researchers and Chief Technology Officer Mira Murati held a conversation with the new ChatGPT using just their voices, showing that the tool could talk back. During the presentation, the chatbot also appeared to translate speech from one language to another almost instantaneously, and at one point sang part of a story upon request.

“This is the first time that we’re making a huge leap in the interaction and ease of use,” Murati told Bloomberg News. “We’re really making it possible for you to collaborate with tools like ChatGPT.”

The update will bring a number of features to free users that previously had been limited to those with a paid subscription to ChatGPT, such as the ability to search the web for answers to queries, speak to the chatbot and hear responses in various voices, and command it to store details that the chatbot can recall in the future.

The release of GPT-4o is poised to shake up the rapidly evolving AI landscape, where GPT-4 remains the gold standard. A growing number of startups and Big Tech companies, including Anthropic, Cohere and Alphabet Inc.’s Google, have recently pushed out AI models that they say match or surpass the performance of GPT-4 in certain benchmarks.

OpenAI’s announcement also comes the day before the Google I/O developer conference. Google, an early leader in the artificial intelligence space, is expected to use the event to unveil more AI updates after racing to keep pace with Microsoft Corp.-backed OpenAI.

In a rare blog post on Monday, OpenAI Chief Executive Officer Sam Altman said that while the original version of ChatGPT hinted at how people could use language to interact with computers, using GPT-4o feels “viscerally different.”

“It feels like AI from the movies; and it’s still a bit surprising to me that it’s real,” he said. “Getting to human-level response times and expressiveness turns out to be a big change.”

Two Times Faster

Rather than relying on different AI models to process different inputs, GPT-4o — the “o” stands for omni — combines voice, text and vision into a single model, allowing it to be faster than its predecessor. For example, if you feed the system an image prompt, it can respond with an image. The company said that the new model is two times faster and significantly more efficient.

“When you have three different models that work together, you introduce a lot of latency in the experience, and it breaks the immersion of the experience,” Murati said. “But when you have one model that natively reasons across audio, text and vision, then you cut all of the latency out and you can interact with ChatGPT more like we’re interacting now.”
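For developers, that single-model design is exposed through OpenAI’s standard chat-completions API rather than through separate endpoints for each modality. As a rough sketch (not drawn from the article), a request to the “gpt-4o” model can mix text and image inputs in one call; the snippet below assumes the OpenAI Python SDK’s existing chat interface, an API key set in the environment, and a hypothetical image URL.

```python
# Minimal sketch of a multimodal request to GPT-4o via the OpenAI Python SDK.
# The model name "gpt-4o" follows the announcement; the image URL is a
# placeholder for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One request carries both text and an image, rather than routing
# each modality through a separate model.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's happening in this picture?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```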

But the new model hit some snags. The audio frequently cut out as the researchers spoke during their demo. The AI system also surprised the audience when, after coaching a researcher through the process of solving an algebra problem, it chimed in with a flirtatious-sounding voice: “Wow, that’s quite the outfit you’ve got on.”

OpenAI is beginning to roll out GPT-4o’s new text and image capabilities to some paying ChatGPT Plus and Team users today, and is offering those capabilities to enterprise users soon. The company will make the new version of its “voice mode” assistant available to ChatGPT Plus users in the coming weeks.

As part of its updates, OpenAI said it’s also enabling anyone to access its GPT Store, which includes customized chatbots made by users. Previously, it was only available to paying customers.

Speculation about OpenAI’s next launch has become a Silicon Valley parlor game in recent weeks. A mysterious new chatbot caused a stir among AI watchers after it showed up on a benchmarking website and appeared to rival GPT-4’s performance. Altman offered winking references to the chatbot on X, fueling rumors that his company was behind it. On Monday, an OpenAI employee confirmed on X that the mystery chatbot was indeed GPT-4o.

The company is working on a wide range of products, including voice technology and video software. OpenAI is also developing a search feature for ChatGPT, Bloomberg previously reported.

On Friday, the company quelled some of the rumors by saying it wouldn’t imminently launch GPT-5, a much-anticipated version of its model that some in the tech world expect to be radically more capable than current AI systems. It also said that Monday’s event wouldn’t unveil a new search product, a tool that could compete with Google. Google’s stock ticked higher on the news.

But after the event wrapped, Altman was quick to keep the speculation going. “We’ll have more stuff to share soon,” he wrote on X.

©2024 Bloomberg L.P.