OpenAI chief executive Sam Altman appeared before Congress on Tuesday morning to testify about the dangers posed by emerging artificial intelligence technologies, including his company’s ChatGPT AI chatbot.
The hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law offered congressional members the chance to question Mr Altman and other tech leaders about the “urgent” need to create regulations around AI.
Senators questioned Mr Altman and the other witnesses, Gary Marcus, Professor Emeritus at New York University, and Christina Montgomery, chief privacy and trust officer at IBM, about the need for AI regulations.
Mr Altman spoke about the dangers of artificial intelligence harming the integrity of future elections, manipulating individuals’ opinions, limiting access to certain information and infringing copyright, among other things.
The OpenAI CEO offered possible solutions, such as creating an international regulatory committee or agency led by the US.
“My worst fears are that [the AI industry] cause significant harm to the world,” Mr Altman said.
Ahead of the hearing, Committee Chairman Richard Blumenthal (D-CT), said, “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls.”
AI is facing ‘the perfect storm'
Sam Altman says he is concerned about election misinformation
Artificial intelligence is ‘bull in a china shop'
Lawmakers compare the need for legislation to the mistakes of social media
Delaying artificial intelligence advancements by six months is unlikely
Sam Altman Congress hearing live: Watch the stream
15:02 , Anthony Cuthbertson
Here’s the live stream if you want to watch along:
Hello and welcome...
11:08 , Anthony Cuthbertson
to The Independent’s live coverage of OpenAI boss Sam Altman’s first ever appearance before Congress.
The tech executive will be appearing alongside Gary Marcus, Professor Emeritus at New York University, and Christina Montgomery, chief privacy officer at IBM, to face questions from the Senate Judiciary Subcommittee on Privacy, Technology & the Law.
The hearing will begin at 10am local time (3pm BST), and we’ll be bringing you all the build-up for what could be a critical day for establishing AI rules and limits in the US.
Sam Altman Congress hearing live: How to watch
13:57 , Anthony Cuthbertson
There’s just over an hour to go until OpenAI CEO Sam Altman appears before Congress, alongside New York University Professor Gary Marcus and IBM executive Christina Montgomery.
We’ll have a live stream of the hearing for you to watch right here, pinned to the top of this page, as soon as it’s available.
It’s not clear yet how long it will last, but these hearings tend to go on for a few hours, with most or all of the senators wanting to use their maximum time to air their concerns and pose their questions.
Sam Altman Congress hearing live: What to expect from the Senate Committee
14:04 , Anthony Cuthbertson
Today’s hearing will see Sam Altman testify before Congress for the first time ever, following in the footsteps of many of his high-profile peers in the tech industry.
The Senate Judiciary Subcommittee on Privacy, Technology & the Law is expected to ask him about AI risks and how best to establish safeguards to protect against them.
Here’s what US Senators Richard Blumenthal and Josh Hawley, Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology & the Law, had to say ahead of today’s hearing:
Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls. This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology. I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.
Senator Richard Blumenthal (D-CT)
Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security. This hearing marks a critical first step towards understanding what Congress should do.
Senator Josh Hawley (R-MO)
Sam Altman Congress hearing live: Opening remarks from committee chair
15:11 , Anthony Cuthbertson
The hearing is underway, with US Senator Richard Blumenthal, Chair of the Senate Judiciary Subcommittee, saying in his opening remarks that the hearing aims to “demystify and hold accountable the technology”.
He begins by using AI voice-cloning software to read out a text in his own voice. The text was written by OpenAI’s ChatGPT.
It’s pretty convincing. The AI begins: “Too often we have seen what happens when technology outpaces regulation.”
Senator Blumenthal says it may seem amusing, but he fears what would happen if the same technology were used to mislead people into thinking he had endorsed Vladimir Putin.
His biggest immediate fear, however, is the “looming new industrial revolution” that will leave millions unemployed when AI displaces their jobs.
Sam Altman Congress hearing live: Senator warns AI could be as devastating as the atom bomb
15:16 , Anthony Cuthbertson
Senator Blumenthal ends his opening remarks by saying “the ideas that we develop from this discussion will provide a solid path forward” for establishing regulation for the AI industry.
He adds that it will be the first of many hearings on artificial intelligence.
Senator Josh Hawley, ranking member of the committee, adds some remarks.
“We could be looking at one of the most significant technological innovations in human history”, he says.
He speculates it could be as great as the advent of the printing press, or as devastating as the atom bomb.
Sam Altman Congress hearing live: ChatGPT boss says AI could be ‘printing press moment’
15:26 , Anthony Cuthbertson
Sam Altman and the other witnesses are sworn in, before the OpenAI boss gets underway with his opening remarks.
“It’s really an honour to be here,” he says.
“OpenAI was founded on the belief that AI has the potential to improve nearly every aspect of our lives... We think it can be a printing press moment.”
AI Congress hearing live: Artificial intelligence is ‘bull in a china shop'
15:34 , Anthony Cuthbertson
Christina Montgomery, chief privacy officer at IBM, is up next to give her opening remarks.
She says the era of AI should not be part of the “move fast and break things” culture that has driven Silicon Valley since its early days.
Gary Marcus, Professor Emeritus at New York University, follows up by saying he is worried about the rapid development of artificial intelligence.
“We have built machines that are like bulls in a china shop: powerful, reckless and difficult to control,” he says.
AI Congress hearing live: ‘We are facing the perfect storm'
15:37 , Anthony Cuthbertson
The professor continues: “We are facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability... The choices we make now will have lasting effects, for decades, maybe even centuries.”
Seven-minute rounds of questioning will now follow.
AI Congress hearing live: AI is ‘bomb in a china shop'
15:41 , Anthony Cuthbertson
Senator Blumenthal agrees with Professor Marcus’s “perfect storm” assessment, but says it is less like a bull in a china shop and more like a “bomb in a china shop”.
AI Congress hearing live: ‘Testing labs’ suggested
15:48 , Anthony Cuthbertson
Sam Altman will face the first questions, with Senator Blumenthal suggesting to him that “independent testing labs” for AI tools could be one idea to help regulate the technology.
He also asks what his thoughts are about mass job displacement. Altman agrees that there will be significant disruption, but adds that he is “very optimistic about how great the jobs of the future will be.”
AI Congress hearing live: ChatGPT boss says technology ‘can go quite wrong'
15:53 , Anthony Cuthbertson
The OpenAI boss is asked about the potential risks of AI.
“My worst fears are that we cause significant harms to the world,” he responds. “If this technology goes wrong, it can go quite wrong.”
It’s worth noting that Altman has admitted in the past to being a doomsday prepper - specifically prepping for an AI apocalypse.
You can read all about that here:
AI expert says we need a ‘cabinet-level’ position or more to regulate
16:07 , Ariana Baio
Gary Marcus, a leading voice in AI, author and professor, advised the Senate Judiciary Committee that the US, or the world, may need a completely new agency to regulate the technology.
“My view is that we probably need a cabinet-level organisation within the US to address this,” Mr Marcus told the committee on Tuesday.
Mr Marcus explained that AI will likely be a massive part of the future, but given how fast-moving and complicated it is, the government cannot rely on current legislation and agencies to understand and thoughtfully regulate it.
Sam Altman says his worst fear is causing ‘significant harm to the world'
16:30 , Ariana Baio
“My worst fear is that we cause significant harm to the world,” Sam Altman, the CEO of OpenAI told the Senate Judiciary Subcommittee on Privacy, Technology and the Law.
OpenAI is one of the leading companies in artificial intelligence. The company created DALL-E and ChatGPT.
Sam Altman says he is concerned about election misinformation
16:31 , Ariana Baio
Multiple members of the Senate Judiciary Subcommittee raised concerns about the future of election misinformation regarding artificial intelligence, citing foreign intervention with social media in the 2016 election.
Sam Altman said he is “quite concerned about the impact [AI] can have on elections” due to the technology’s limitations.
Several reports have shown how ChatGPT can generate false information and cite misinformation in its answers - a phenomenon known as “hallucination”.

Mr Altman said ChatGPT has been developed to refuse to answer harmful requests, and is monitored to ensure false information is not consistently presented as truth.
AI expert says the key to regulating AI is transparency
16:45 , Ariana Baio
Gary Marcus, an AI expert, said the key to understanding AI systems and regulating them is to ask companies to be more transparent about how they train the models.
“What [AI] is trained on has biases for the system,” Mr Marcus said during the Senate Judiciary Subcommittee hearing on Tuesday.
Mr Marcus encouraged companies with AI tools to provide transparent information that would explain what their system is trained on to help people determine whether the information it is providing is biased.
OpenAI CEO says company is working on copyright model
16:46 , Ariana Baio
Sam Altman, the CEO of OpenAI, said that the company is working on how to handle copyright in its systems.
“Creators deserve control over how their creations are used,” Mr Altman said on Tuesday.
AI tools like ChatGPT have been accused of stealing artists’ work and repurposing it as original content.
Mr Altman said the company was working to create a new copyright model to give artists credit, compensation and consent.
AI leaders and experts agree Section 230 does not apply to them
16:50 , Ariana Baio
Sam Altman, Christina Montgomery and Gary Marcus all agreed that Section 230 does not apply to their platforms and indicated new regulatory legislation is needed.
Section 230 provides online computer services with immunity from liability for harmful content uploaded to their platforms by users.
Sam Altman speaks to AI-generated images
17:05 , Ariana Baio
The CEO of OpenAI spoke about AI-generated images, such as the fabricated one of Donald Trump being arrested that circulated when he was indicted.
Sam Altman said an easy way to fix misinformation spread online from images like this is to label them as “generated.”
OpenAI wants fewer people to use it
17:12 , Ariana Baio
OpenAI CEO Sam Altman said multiple times throughout Tuesday’s hearing that ChatGPT needs fewer people to use it, because the company does not have enough GPUs for everyone to use it.

Mr Altman made it very clear to the Senate Judiciary Subcommittee that OpenAI is not an advertising-based platform, so having more users does not benefit the company.
Mr Altman’s comments came as lawmakers raised concerns about AI using personal data to capture people’s attention and hold it for as long as possible.
Though he recognised that AI is being used in marketing, he clarified that was not OpenAI’s goal.
VOICES: AI isn’t falling into the wrong hands – it’s being built by them
17:15 , Ariana Baio
VOICES: The loudest voices in the AI discourse tend to exhibit a startling level of misplaced certainty about what AI is capable of and how the technology can be controlled.
These are essential, urgent questions that will require input from a wide diversity of voices, especially of those who are most likely to be imperilled by the use of AI.
So far, the earliest victims of malign forms of AI, such as inaccurate facial recognition tools or deepfake porn generators, have been members of historically marginalised groups, including women and people of colour. But these groups are woefully underrepresented among the loudest voices in the AI debate.
Arthur Holland Michel reports:
Witnesses encourage lawmakers to think of the future
17:23 , Ariana Baio
Sam Altman, Christina Montgomery and Gary Marcus all agreed that artificial intelligence is nowhere near as advanced as it will become, so lawmakers need to write legislation that can adapt as the technology develops.
Mr Altman and Mr Marcus agreed that the best way to start is to create a regulatory agency or independent commission to examine the complicated world of AI and draw up a set of regulations for the companies that create the technology.

Ms Montgomery suggested that rules targeting the risks posed by AI, rather than the technology itself, would be a better approach.

But all three witnesses believe Congress should devise guidelines that can be applied to the future of automatically generated content and more.
AI could harm local news
17:40 , Ariana Baio
Senator Amy Klobuchar (D-MN) drew attention to how generative AI could harm local news by encouraging users to ask ChatGPT rather than read a newspaper.

Ms Klobuchar asked Sam Altman, the CEO of OpenAI, how the platform will compensate the smaller news organisations that ChatGPT pulls information from to generate answers.
“Do you understand that this could be exponentially worse in terms of local news content if they’re not compensated?” Ms Klobuchar asked.
“Because what they need is to be compensated for their content and not have it stolen,” she added.
Mr Altman said OpenAI would be willing to help local news.
He said, “We would certainly like to.”
ACLU encourages Congress to implement AI safeguards
17:45 , Ariana Baio
Flawed automated decision-making systems are increasingly being used by the government to make important decisions in key areas of our lives.
Catch up on today's hearing to see @acluidaho's Ritchie Eppink discuss the dangers of AI and why Congress must implement safeguards now. https://t.co/rkk34HQJ5g
— ACLU (@ACLU) May 16, 2023
AI expert says medical advice through AI could be harmful
17:50 , Ariana Baio
Professor Gary Marcus, an expert in artificial intelligence, said lawmakers should be concerned about medical advice given through AI systems like ChatGPT.
Mr Marcus said there need to be “tight regulations” around what medical advice AI may or may not generate, as it could lead users to believe they have a medical condition or rely on it for medical guidance.
Senator Kennedy asks witnesses for guidance on rules
17:55 , Ariana Baio
Senator John Kennedy (R-LA) asked the witnesses what rules they would implement if they were “kings and queens” for the day, under the hypotheticals that Congress doesn’t understand AI and could harm it by creating regulations.
Sen. Kennedy (R-LA) asks panel what reforms they'd make as “queen or king for a day” if 3 hypotheticals are true:
1. Congress doesn't understand AI
2. Congress could “hurt” AI w/regulations
3. There's likely a “berserk wing” of AI community that could use AI to “kill all of us” pic.twitter.com/V7B2kus4Qk
— The Recount (@therecount) May 16, 2023
AI can change jobs of the future
18:12 , Ariana Baio
Lawmakers and witnesses repeatedly debated the negative impact AI could have on the jobs of the future during Tuesday’s hearing.
Asked if he thought AI could harm most jobs, OpenAI CEO Sam Altman said he felt optimistic about how AI will change jobs.
“I believe there are far greater jobs on the other side of this,” Mr Altman told members of the Senate Judiciary Subcommittee.
Christina Montgomery, chief privacy and trust officer at IBM, agreed with Mr Altman.
“The most important thing that we can be doing and should be doing now is prepare the workforce of today and the workforce of tomorrow of partnering with the AI technologies,” Ms Montgomery said.
Gary Marcus, a professor at New York University and an expert in AI, slightly disagreed, saying far down the line AI could “replace” most jobs.
Sam Altman says AI can go ‘quite wrong’ without regulations
18:25 , Ariana Baio
While testifying before Congress, Sam Altman said, “If this technology goes wrong, it can go quite wrong” when speaking about the harms that artificial intelligence can have.
OpenAI CEO Sam Altman on AI: "If this technology goes wrong, it can go quite wrong."
In other words, now that OpenAI is in the lead, Congress needs to regulate their competitors pic.twitter.com/LZ8XH9tVtF
— Genevieve Roch-Decter, CFA (@GRDecter) May 16, 2023
Sam Altman says OpenAI could use advertisers in the future
18:45 , Ariana Baio
When asked if OpenAI would use advertisers in the future, CEO Sam Altman said it wasn’t completely out of the question.
Mr Altman previously said during Tuesday’s hearing that OpenAI was not interested in collecting personal data and using it in OpenAI’s technology models because it was not advertising-focused.
However, when asked if advertising could ever be an option Mr Altman said, “I wouldn’t say never.”
Mr Altman said he prefers to make money through a “subscription-based model”.
Lawmakers compare the need for legislation to the mistakes of social media
19:01 , Ariana Baio
From the beginning of the hearing, a similar sentiment has been expressed by many lawmakers: now is the time to act.
Senator Richard Blumenthal (D-CT) started the hearing by explaining how lawmakers cannot make the same mistake that they made by failing to regulate social media before it became harmful to children and young people.
Delaying artificial intelligence advancements by six months is unlikely
19:25 , Ariana Baio
During Tuesday’s hearing, lawmakers brought up a letter that circulated in the tech community earlier this year calling for companies like OpenAI to pause the development of powerful AI systems for six months to allow lawmakers to catch up.

The letter, signed by industry leaders including Elon Musk, called AI systems a “profound risk to society and humanity”.

However, none of the three witnesses believed that pausing AI development for six months would help the situation.