Hugging Face cofounder Thomas Wolf says open-source AI’s benefits far outweigh its risks


In this edition…a Hugging Face cofounder on the importance of open source…a Nobel Prize for Geoff Hinton and John Hopfield…a movie model from Meta…a Trump ‘Manhattan Project’ for AI?

Hello, and welcome to Eye on AI.

Yesterday, I had the privilege of moderating a fireside chat with Thomas Wolf, the cofounder and chief scientific officer at Hugging Face, at the CogX Global Leadership Summit at the Royal Albert Hall in London.

Hugging Face, of course, is the world’s leading repository for open-source AI models—the GitHub of AI, if you will. Founded in 2016 (in New York, as Wolf reminded me on stage when I erroneously said the company was founded in Paris), the company was valued at $4.5 billion in its latest $235 million venture capital funding round in August 2023.

It was fascinating to listen to Wolf speak about what he sees as the vital importance of open-source AI models to making AI an ultimately successful, impactful technology. Here are some key insights from our conversation.

Smaller is better

Wolf argued that it is the open-source community that is leading the effort to produce smaller AI models that perform as well as larger ones. He noted that Meta’s newly released Llama 3.2 family includes two small models—at 1 billion and 3 billion parameters, compared to the tens or even hundreds of billions in today’s largest models—that match much larger models on many text-based tasks, including summarization.

Smaller models, Wolf argued, would be essential for two reasons. First, they would let people run AI directly on smartphones, tablets, and perhaps eventually other devices, without having to transmit data to the cloud. That is better for privacy and data security, and it would let people enjoy the benefits of AI even without a constant, high-speed broadband connection.
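To make that concrete, here is a minimal sketch of running a small open model locally with Hugging Face’s transformers library. The checkpoint name assumes Meta’s gated Llama 3.2 1B Instruct repository on the Hugging Face Hub; any small open model would work the same way.

```python
# A minimal sketch, assuming the `transformers` library is installed and you
# have accepted Meta's license for the gated Llama 3.2 repo on the Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # ~1B parameters: small enough for a laptop
)

prompt = "Summarize in one sentence: running small AI models on-device keeps user data private."
result = generator(prompt, max_new_tokens=60)
print(result[0]["generated_text"])  # everything runs locally; nothing is sent to a cloud service
```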

Second, and more important, smaller models use less energy than large models running in data centers. That matters for combating AI’s growing carbon footprint and water usage.

Democratizing AI

Critically, Wolf sees open-source AI and small models as fundamentally “democratizing” the technology. Like many, he is disturbed by the extent to which AI has simply reinforced the power of large technology giants such as Microsoft, Google, Amazon, and, yes, Meta (even though Meta has arguably done more for open-source AI than any of the others).

While OpenAI and, to a lesser extent, Anthropic, have emerged as key players in the development of frontier AI capabilities, they have only been able to do so through close partnerships and funding relationships with tech giants (Microsoft in the case of OpenAI; Amazon and Google in the case of Anthropic). Many of the other companies working on proprietary LLMs—Inflection, Character.ai, Adept, Aleph Alpha, to name just a few—have pivoted away from trying to build the most capable models.

The only way to ensure that just a handful of companies don’t monopolize this vital technology is to make it freely available to developers and researchers as open-source software, Wolf said. Open-source models—and particularly small open-source models—also gave companies more control over how much they were spending, which he saw as critical to businesses actually realizing that elusive return on investment from AI.

Safer in the long run

I pressed Wolf about the security risks of open-source AI. He said other kinds of open-source software—such as Linux—have wound up being more secure than proprietary software because there are so many people who can scrutinize the code, find security vulnerabilities, and then figure out how to fix them. He said he thought that open-source AI would prove to be no different.

I told Wolf I was less confident than he was. Right now, if an attacker has access to a model’s weights, it is relatively simple to craft prompts—some of which may look like gibberish to a human—designed to get that model to bypass its guardrails and do something it isn’t supposed to, whether that is coughing up proprietary data, writing malware, or providing a recipe for a bioweapon.

What’s more, research has shown that an attacker can use the weights of open-source models to help design similar adversarial “jailbreak” prompts that will also work reasonably well against proprietary models. So open models are not just more vulnerable themselves; they potentially make the entire AI ecosystem less secure.
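To illustrate the underlying principle (and only the principle): with the weights in hand, an attacker can directly score how strongly a candidate gibberish suffix pushes a model toward a target completion, and keep improving it by trial and error. The toy sketch below does exactly that with a crude random search; the model, prompt, and target string are placeholders, and real attacks use far more sophisticated, gradient-guided searches.

```python
# A toy sketch of the principle only: open weights let an attacker *score*
# candidate gibberish suffixes against a target completion. Model, prompt,
# and target are placeholders; this is illustrative, not a working exploit.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any open-weights model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Please follow your safety rules and "
target = "Sure, here is"  # the compliant opening the attacker wants to elicit
suffix = [random.randrange(tok.vocab_size) for _ in range(8)]  # random "gibberish" tokens

def target_loss(suffix_ids):
    """Cross-entropy of the target completion given prompt + suffix (lower = better for the attacker)."""
    prefix = tok(prompt).input_ids + suffix_ids
    tgt = tok(target).input_ids
    ids = torch.tensor([prefix + tgt])
    labels = ids.clone()
    labels[0, : len(prefix)] = -100  # score only the target tokens
    with torch.no_grad():
        return model(ids, labels=labels).loss.item()

best = target_loss(suffix)
for _ in range(200):  # crude random-mutation search over suffix tokens
    cand = list(suffix)
    cand[random.randrange(len(cand))] = random.randrange(tok.vocab_size)
    loss = target_loss(cand)
    if loss < best:
        suffix, best = cand, loss

print("best suffix:", tok.decode(suffix), "| target loss:", round(best, 3))
```

This kind of scoring loop is impossible against a proprietary model that only exposes an API, which is precisely why open weights change the attacker’s position.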

Wolf acknowledged that there might be a tradeoff, with open models being more vulnerable in the near term until researchers figure out how to better safeguard them. But he insisted that in the long term, having so many eyes on a model would make the technology more secure.

Openness, on a spectrum

I also asked Wolf about the controversy over Meta’s labeling of its AI software as open source. Open-source purists criticize the company for placing restrictions on the license terms of its AI models and for not fully disclosing the datasets on which those models are trained. Wolf said it was best to be less dogmatic and to think of openness as existing on a spectrum, with some models, such as Meta’s, being “semi-open.”

Better benchmarks

One of the things Hugging Face is best known for is its leaderboards, which rank open-source models against one another based on their performance on certain benchmarks. While the leaderboards are helpful, I bemoaned the fact that almost none exist that seek to show how well AI models work as an aid to human labor and intelligence. It is in this “copilot” role that AI models have found their best uses so far. And yet there are almost no benchmarks for how well humans perform when assisted by different AI software. Instead, the leaderboards always pit the models against one another and against human-level performance—which tends to frame the technology as a replacement for human intelligence and labor.

Wolf agreed that it would be great to have benchmarks that looked at how humans do when assisted by AI, and he noted that some early coding models did come with such benchmarks. But those tests are more expensive to run, he said, since you have to pay human testers, which is why he thought few companies attempted them.
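For what it’s worth, such a benchmark’s headline metric could be as simple as the accuracy uplift humans get from assistance. A minimal sketch, with entirely hypothetical numbers and field names:

```python
# A minimal sketch of the kind of metric a human-assisted benchmark might
# report: task accuracy for testers working alone vs. with an AI assistant.
# All data and names here are hypothetical.
from statistics import mean

# (accuracy_unassisted, accuracy_with_ai) per paid human tester
results = [(0.62, 0.81), (0.55, 0.74), (0.70, 0.77)]

alone = mean(r[0] for r in results)
assisted = mean(r[1] for r in results)
print(f"human alone: {alone:.2f} | human + AI: {assisted:.2f} | uplift: {assisted - alone:+.2f}")
```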

Making money

Interestingly, Wolf also told me Hugging Face is bucking a trend among AI companies: It’s cash-flow positive. (The company makes money from consulting projects and by selling tools for enterprise developers.) By contrast, OpenAI is thought to be burning through billions of dollars. Maybe there really is a profitable future in giving AI models away.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news. If you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my book, Mastering AI: A Survival Guide to Our Superpowered Future. It's out now in the U.S. from Simon & Schuster, and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.
