Sam Altman, CEO of OpenAI, called for a new agency to regulate and license AI, which was met with both support and skepticism in the US Senate.
In the first US Senate hearing on “Oversight of AI: Rules for Artificial Intelligence,” it quickly became clear that the witnesses broadly agreed: OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and deep learning critic Gary Marcus all called on the US Senate to regulate artificial intelligence.
Senators expressed similar sentiments, some surprised by Altman’s call for regulation: “I can’t remember when we last had companies come and plead with us to regulate them,” said US Senator Dick Durbin during the hearing. Montgomery reminded the Senate that IBM’s position had not changed: “Trust is our license to operate. We have called for precision regulation of AI for years now. AI should be regulated at the point of risk.”
OpenAI CEO calls for new US agency for AI
Specifically, all three advocated issuing licenses for companies to operate AI models above a certain level of risk; Marcus and Altman also called for the creation of a new US regulatory agency for AI and the establishment of an international regulatory body, likened sometimes to CERN and sometimes to the International Atomic Energy Agency. Existing authorities or litigation under existing laws could be a tool, but “they give not enough coverage, are too slow to protect the things we care about,” Marcus said.
A new AI agency would need to conduct safety reviews before and after AI models are deployed, and be a “nimble monitoring agency” able to track AI developments and recall products, Marcus said. He also called for more investment in AI safety research and more transparency from companies like OpenAI.
Altman warned against stifling small AI startups and the open-source community with overly broad regulations. Instead, he suggested that companies whose AI models exceed a certain capability threshold would have to obtain a license from the new agency, and could lose that license if they failed to meet safety standards. The models would also have to be tested for certain capabilities before they could be used, such as the ability to replicate themselves or break out of a system. Altman also advocated that these processes be audited by independent, external parties.
Montgomery, on the other hand, argued that regulation could be handled by existing agencies and repeatedly referred to the EU AI Act in her regulatory proposals. What is needed, she said, is a precision regulation approach to AI with rules governing specific use cases rather than the technology itself. This would also need a clear definition of the risks depending on the use case and different rules for different risks. The focus needs to be on transparency and accountability, she said.
“Pandora’s box does need more than words”
Altman and Marcus pointed to immediate dangers, such as election meddling or other forms of targeted influence, as well as dangers that could only emerge with the advent of artificial general intelligence (AGI). Licenses are needed primarily for what AI models will one day be able to do, not just for what they can do now, Altman said. As for a possible moratorium on AI training, such as for GPT-5, he stressed that his company currently sees no reason to stop training new AI models, and instead intends to continue conducting extensive safety testing before release.
Most US senators seemed open to the idea of a new AI agency, with some openly supporting it, including Peter Welch and Richard Blumenthal, who cautioned that “Pandora’s box does need more than words like ‘licensing’ and ‘new agency.’”
“There is some real hard decision-making, how to frame the rules to fit the risk. First do no harm, make it effective, make it enforceable, make it real,” Blumenthal said. “We need to grapple with the hard questions here.” Those questions were raised in the hearing, he said, but have yet to be answered. What is clear, he said, is that “enforcement really does matter,” and that any new agency that might be created should be adequately resourced with money and, most importantly, capable scientists.