WASHINGTON — Two tech executives on Tuesday urged lawmakers on Capitol Hill to keep artificial intelligence “under the control of people” and establish an emergency brake to ensure such systems can’t cause harm to humans.
One of those executives, Microsoft President Brad Smith, testified that a “safety brake” is specifically needed for AI systems that manage critical infrastructure like power grids and water systems.
“Maybe it’s one of the most important things we need to do so that we ensure that the threats that many people worry about remain part of science fiction and don’t become a new reality. Let’s keep AI under the control of people. It needs to be safe,” Smith said during a Senate Judiciary subcommittee hearing on ways to regulate AI.
“If a company wants to use AI to, say, control the electrical grid or all of the self-driving cars on our roads or the water supply … we need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that’s needed.”
Microsoft is the largest investor in OpenAI, the maker of the popular AI chatbot ChatGPT. Smith testified alongside William Dally, the chief scientist and senior vice president of the chipmaker Nvidia, and Boston University law professor Woodrow Hartzog.
Dally also told senators that “keeping a human in the loop” is critical to ensure the robots don’t run amok.
“AI is a computer program, it takes an input, it produces an output. … And so anytime that there’s some grievous harm that could happen, you want a human being between the output of that AI model and the causing of harm,” Dally said.
“And so I think as long as we’re careful about how we deploy AI, to keep humans in the critical loops, I think we can assure that the AI won’t take over and shut down our power grid or cause airplanes to fall out of the sky,” he continued.
Tuesday’s panel marked the third such AI-focused hearing hosted by Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., the leaders of the Judiciary subcommittee on privacy, technology and the law.
The hearing came just days after Blumenthal and Hawley unveiled their one-page legislative framework for regulating AI — a document that was referenced repeatedly throughout the meeting.
The bipartisan framework, among other things, calls for the creation of an independent oversight body that AI companies would need to register with; makes clear that Section 230 does not apply to AI and allows companies to be held legally liable for harms, including election interference and explicit deepfake imagery of real people; and requires that companies inform users that they are interacting with an AI model or system, or to watermark AI-generated deepfakes.
“Make no mistake. There will be regulation. The only question is how soon and what,” Blumenthal said in his opening remarks.
Microsoft’s Smith praised the Blumenthal-Hawley blueprint, specifically the provision calling for a new oversight body. But given that AI covers so many areas, Smith said it will be necessary for all federal agencies that enforce the law to get up to speed on AI.
“Let’s have an agency that is independent and can exercise real and effective oversight over this category,” Smith said. He added: “I don’t think we want to move the approval of every new drug from the FDA to this agency. So by definition, the FDA is going to need … to have the capability to assess AI.”
Separately on Tuesday afternoon, the leaders of a Commerce and Science subcommittee, Sens. John Hickenlooper, D-Colo., and Marsha Blackburn, R-Tenn., held a hearing focused on how AI companies can boost transparency and the public’s trust.
The main event will come on Wednesday. That’s when Senate Majority Leader Chuck Schumer, D-N.Y., will hold his first AI Insight Forum, where all 100 senators will get a chance to hear from some of the biggest names in tech and the AI space. They include Elon Musk, CEO of SpaceX, Tesla and X, formerly known as Twitter; Mark Zuckerberg, CEO of Meta; Microsoft co-founder Bill Gates; and Sam Altman, CEO of OpenAI, the maker of the AI chatbot ChatGPT.
It will be “one of the most important meetings Congress has held in years as we welcome the top minds in AI,” Schumer said in a speech on the Senate floor.
In an interview, Blumenthal said Schumer’s AI forums, which will continue through the fall, are working “in tandem” with committees like his that are holding hearings and drafting legislation.
“He’s educating members. We’re trying to produce legislation. The two are very much in tandem and closely aligned,” Blumenthal said. “But in our framework, we have some very important details on issues like transparency, testing watermarks and the like, with enforcement.”
Scott Wong is a senior congressional reporter for NBC News.