A global race to regulate AI has put the booming industry on the defensive

Illustration by Bratislav Milenkovic for Fortune

By midnight, all the cookies had been eaten. The vending machine was out of coffee. The sandwiches ordered in for dinner were long gone. Still, about 700 lawmakers remained holed up inside the European Union’s soaring executive chamber in Brussels. Finally, as the sun rose on an icy Dec. 8, 2023, they staggered home to nap and shower before returning for a further 17 hours, until—out of exhaustion or expediency—they agreed on a package of laws about one of the thorniest issues on which they had ever voted: artificial intelligence.

The marathon negotiations over the EU’s AI Act, as it’s now known, were among the most intensive in the bloc’s memory, and produced the world’s first rule book for a sprawling new technology with seismic implications. The EU’s 27 member nations were hardly alone in trying to pull off this legislative feat. President Joe Biden last October issued an executive order setting out guidelines to protect Americans against discrimination and massive job losses caused by AI—a placeholder while Congress and U.S. agencies like the Department of Commerce attempt to turn the order into hard rules. Meanwhile Japan, the U.K., South Korea, and other major economies have all issued similar proposals, ranging from cracking down on misinformation and deepfakes to protecting people’s privacy and requiring disclosure of the data used to train AI models. And at an AI safety summit in the U.K. last November, convened by Prime Minister Rishi Sunak, 28 countries agreed to fight “serious, even catastrophic, harm” from the technology.

The deluge of rules and the potential for more have opened a new battlefront for companies developing and using artificial intelligence. How the fine print is interpreted and enforced will have a far-reaching impact on the tech industry’s future by dictating the kinds of products businesses can create and how they can harness customer data. At stake are hundreds of billions of dollars in future corporate revenue and market value. So, too, are national interests. Countries are weighing the protection of their citizens from dangers like privacy violations, bioweapons, and large-scale cyberattacks against a desire to become leaders in the technology in order to boost their economies and their militaries in the all-important global arms race. “This is so important, so transformative, not only for us, but for the whole world,” says Dragos Tudorache, the EU’s lead AI negotiator.

For regulators in the U.S. and Europe, the conundrum is this: Regulating too lightly could create unexpected dangers, while regulating too heavily could stifle innovation—the central argument investors and tech execs push in lobbying for companies to be given relatively free rein.

What’s more, with AI changing at lightning speed, regulations currently under discussion around the world could soon be obsolete. “You’re trying to regulate a moving target that very few people are technically competent to understand, and which government agencies have a hard time attracting the right skills to effectively regulate well,” says Bill Whyman, president of Tech Dynamics, a Washington, D.C.–based advisor to tech firms. “You don’t know what you are going to get in the end, when you open that Pandora’s box of regulations.”

When EU politicians opened that Pandora’s box, they found two issues almost guaranteed to ignite fierce arguments among the union’s 27 countries. First was facial recognition—something many lawmakers in Europe, with its strict privacy laws, saw as violating individual rights and reinforcing entrenched biases. Ultimately, EU lawmakers agreed to limit the use of AI facial recognition to law enforcement, and even then only for the purpose of solving serious crimes, like terrorism.

Second came the debate over how big AI companies must be before they should be compelled to abide by the full raft of EU regulations. “That really got the alarm bells ringing at these companies,” says Bram Vranken, campaigner for Corporate Europe Observatory, a Brussels-based NGO that monitors companies’ activities in the European capital. Vranken’s group found that Google, Microsoft, and other tech giants spent millions lobbying EU lawmakers during the months leading up to the AI vote.

Tudorache, the lead EU negotiator, says he was bombarded by text messages and emails from lobbyists and industry groups parroting the talking points of tech companies, “requesting hundreds and hundreds and hundreds of meetings,” he says. That is no surprise to AI experts. “These companies have more power than ever before,” says Luciano Floridi, founding director of Yale University’s new Digital Ethics Center. “They are richer than a couple of hundred countries around the world.”

The specter looming over the Brussels negotiations was whether companies’ AI foundation models—general-purpose systems like the one behind the chatbot ChatGPT, capable of being put to an almost endless variety of uses—could cause grave harm in the hands of malicious actors. That fear has grown enormously with the explosive popularity of ChatGPT, which OpenAI launched in 2022. France and Germany argued hard to protect their own AI champions, specifically Paris-based startup Mistral AI (founded by former Meta and Google scientists) and Aleph Alpha in Germany, each of which has raised hundreds of millions of dollars in venture capital.

In the end, the EU act requires full compliance from the biggest tech giants (largely American), while likely giving Europe’s startups years to grow before they are subjected to the same rules. “We’re in the very early stage of regulating an absolutely novel phenomenon,” says Robert Spano, an attorney who specializes in AI-related law at Gibson Dunn & Crutcher in Paris and London. “Probably for the next decade it will be two steps forward, one step back. There will be a lot of disputes as to what the regulation requires.”

Unlike the EU’s AI Act, the Biden administration’s executive order is a set of guidelines for about 50 federal agencies to follow. Under the order, companies would need to watermark AI-generated content, test their models for potential security risks, and invest in more AI engineers. But critics say the order lacks enforcement mechanisms and downplays human rights concerns. While Europe tightly restricts facial recognition software, the U.S. will rely on its federal standards and technology agency, the National Institute of Standards and Technology (NIST), to test those new applications. “That does NOT sound reassuring at all,” says Don’t Spy EU, an NGO monitoring AI regulations in the U.S. and EU, in an analysis of Biden’s executive order. Likewise, the group criticized the EU for allowing security services to use facial recognition, saying, “There is no way to predict how law enforcement will in fact employ these systems.”

To democratic and authoritarian governments alike, the AI race is crucial to cementing their countries’ political clout in the years ahead. “Whoever becomes leader in this sphere will become the ruler of the world,” Russian President Vladimir Putin said as far back as 2017. Likewise, China has declared AI technology a key national strategy. It exports AI facial recognition technology to its allies abroad, and at home bans Western AI apps like OpenAI’s ChatGPT and Google’s Bard. China’s government also requires companies to show that their algorithms reflect “core socialist values,” and tightly controls the data fed into AI machines. That ensures that Chinese chatbots do not, for example, tell users that the Chinese dream is to move to America—as two apps did several years ago.

Despite the clashing—indeed, hostile—ideologies, companies insist the hodgepodge of rules worldwide should be standardized as much as possible to make it easier to do business. But in today’s deeply divided world, that’s “a daunting challenge,” says Chris Meserole, executive director of the Frontier Model Forum, an industry group created in January by OpenAI, Microsoft, Google, and emerging AI powerhouse Anthropic, to push for favorable laws.

The ink had barely dried on the EU’s AI Act when Margrethe Vestager, one of the EU’s top officials, with the fanciful title of Executive Vice President for a Europe Fit for the Digital Age, landed in San Francisco in early January to lay out its implications for Big Tech. In a two-day blitz around the Bay Area, Vestager—a Danish politician who has waged antitrust cases against Google and Apple over the past decade in Brussels—raced between meetings with CEOs Tim Cook of Apple, Sundar Pichai of Google, Nvidia’s Jensen Huang, and OpenAI’s Sam Altman.

Pausing for an hour to meet with journalists, she said she had told all those men that AI laws could wait no longer. “When ChatGPT launched, all of a sudden, more or less everybody on the planet realized this is something really massively important,” she said. When skeptical journalists asked why Big Tech would listen to lawmakers 5,000 miles away, Vestager said bluntly, “If there is not compliance, we stand ready to open noncompliance cases,” adding that under the AI Act, EU countries could fine errant businesses up to $30 million or between 2% and 6% of their global revenues, or order the breakup of those vastly powerful companies.

It might take years to come to that—if it ever does. Europe’s laws come into full effect only in 2026, and U.S. and U.K. versions are lagging behind that. The activity in Brussels, however, has fired the starting gun. Says Whyman, of Tech Dynamics: “The odds of getting it right are hard. But that doesn’t mean we should do nothing.”


Regulatory Roulette

Governments worldwide have floated or voted on AI regulations, creating a smorgasbord of rules. Here’s how they’ve tackled some of the top issues:

Facial recognition
This is among the most contentious types of AI and is heavily criticized by civil rights groups. The EU’s AI Act would restrict its use to law enforcement for tracking serious crimes like terrorism. But President Biden’s executive order is more flexible, instructing a federal agency to evaluate new systems as they are launched.

Copyright
Another hot-button issue, with authors and artists filing a slew of lawsuits in U.S. courts claiming that several companies used their content to train algorithms without their consent. The EU would require companies to disclose the data they use, while Biden’s order instructs the U.S. Patent and Trademark Office to study the issue and report back.

Deepfakes
Global regulators largely agree on this issue. Both the EU and Biden want to crack down on the proliferation of manipulated AI-generated images, or deepfakes. Both would require companies to watermark fake, or synthetic, content so that users can tell real images from fabricated ones; a simplified sketch of the underlying idea appears below.
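To make “watermarking” concrete, here is a minimal, purely illustrative Python sketch of one classic technique: hiding a label in the least-significant bits of an image’s pixels. This is not the scheme any regulator has mandated, nor what OpenAI, Google, or others actually deploy (production systems rely on far more robust cryptographic and statistical methods); the function names and the “synthetic” label are invented for illustration, and the sketch assumes the Pillow imaging library is installed.

```python
# Illustrative only: hide/recover a text label in an image's
# least-significant bits (LSB steganography). Real provenance and
# watermarking systems are far more robust than this toy sketch.
from PIL import Image

MARK = "synthetic"  # hypothetical label for AI-generated content


def embed_watermark(src_path: str, dst_path: str, mark: str = MARK) -> None:
    """Hide `mark` in the lowest bit of each pixel's red channel."""
    img = Image.open(src_path).convert("RGB")
    pixels = list(img.getdata())
    payload = mark.encode("utf-8")
    # Bit string: a 16-bit payload length, then the payload bytes.
    bits = f"{len(payload):016b}" + "".join(f"{b:08b}" for b in payload)
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    marked = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite the red channel's low bit
        marked.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(marked)
    out.save(dst_path, "PNG")  # lossless format, so the hidden bits survive


def read_watermark(path: str) -> str:
    """Recover a label embedded by embed_watermark."""
    pixels = Image.open(path).convert("RGB").getdata()
    bits = "".join(str(r & 1) for r, _, _ in pixels)
    length = int(bits[:16], 2)  # first 16 bits give the payload length
    data = bits[16:16 + 8 * length]
    return bytes(
        int(data[i:i + 8], 2) for i in range(0, len(data), 8)
    ).decode("utf-8")
```

Calling embed_watermark("photo.png", "photo_marked.png") and then read_watermark("photo_marked.png") would return "synthetic." The fragility is the point: simply re-saving the file as a JPEG destroys those low-order bits, which is one reason watermarking is harder to mandate in practice than on paper.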

This article appears in the February/March 2024 issue of Fortune with the headline, “AI’s battle shifts to the corridors of power.”
