Congress took on AI regulation – and raised a lot more questions than answers

OpenAI CEO Sam Altman testified before the Senate Judiciary Committee Tuesday to discuss possible regulation of artificial intelligence and next steps. The bottom line? Many questions; few, if any, answers.

The hearing featured Altman, New York University Emeritus Professor of Psychology and Neural Science Gary Marcus, and IBM (IBM) Chief Privacy & Trust Officer Christina Montgomery.

Senator John Kennedy (R-La.) pressed on the practical steps that Congress can take – and Marcus and Altman obliged.

"Number one, a safety review, like we use with the [Federal Drug Administration] prior to widespread deployment," said Marcus. "If you're going to introduce something to 100 million people, somebody has to have their eyeballs on it... Number two, a nimble monitoring agency to follow what's going on, not just pre-review, but also post as things are out there in the world, with the authority to call things back."


Marcus added that there should also be funding focused on AI safety research and building an "AI constitution."

For his part, Altman also called for a dedicated AI regulator and for regulation of AI overall.

"I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards," he said. "Number two, I would create a set of safety standards...We can give your office a longer list of things that we think are important there, but (there should be) specific tests a model has to pass before it can be deployed in the world."

Altman also said that it will be important to include "independent audits" from "experts who can say that the model is or isn't in compliance with state and safety thresholds."

Yahoo Finance spoke to two outside experts in the hearing's immediate aftermath.

Part of the reason regulation is so hard to pin down is "the very nature of technologies like generative AI," said Wasim Khaled, CEO and co-founder of Blackbird.AI. And when AI does facilitate harm, assigning blame is difficult, which makes regulation and accountability even trickier.

Khaled added: "For example, if an AI system causes harm, who’s at fault? Is it the creators? The operators? The training data sources? The system itself? The consequences of getting it wrong have never been higher.”

John Winner, CEO of Kizen, also weighed in, saying that the hearing was an important first step, albeit a very early one.

"There are still a lot of open questions on how we ensure AI is used to advance humanity and benefit everyone," he said. "With how fast AI is moving, it's great to see the involvement and commitment from leading companies and the US government to ensure this is used properly."

New models needed? OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky)

'You can create ten new agencies'

The idea of an AI-specific agency received a lot of airtime, but it, too, raised more questions than answers. IBM's Montgomery, for example, initially drew pushback from senators over her concern that building a new agency could take too long and that the agency could end up limited in its ability to regulate the new age of AI – in part because of a lack of proper funding and other resources, a point on which some lawmakers came to her defense.

"Most of my career has been in enforcement and, I'm telling you, you can create ten new agencies, but if you don't give them the resources – I'm not just talking about dollars, I'm talking about scientific expertise –you guys will run circles around us," said Senator Richard Blumenthal (D-Conn.), who convened the hearing. "It isn't just the [AI models] that will run circles around [those agencies], but it's the scientists in your company. For every success story in government regulation, you can think of five failures."

Still, the government is going to need to do something – and NYU's Marcus noted that there are certain types of AI usage that are more important to regulate than others, like those in the medical world.

"We have systems that hallucinate things, and they're going to hallucinate medical advice," Marcus told the Senators. "Some of the advice they'll give us is good, some of it bad. We need really tight regulation around that, the same with psychiatric advice – people are using these things as kinds of ersatz therapists. I think we need to be very concerned about that."

But the challenge for lawmakers remains: How?

Allie Garfinkle is a Senior Tech Reporter at Yahoo Finance. Follow her on Twitter at @agarfinks and on LinkedIn.
