Bloomberg Law
Aug. 8, 2023, 9:30 AM

AI Influencers Pound Capitol Hill Hallways to Shape Legislation

Oma Seddiq
Elizabeth Kim
Reporters

A parade of experts has been flooding Washington to educate Congress on the benefits and harms of artificial intelligence and how lawmakers should regulate the rapidly evolving technology.

“We’re visiting hundreds of people,” said Sen. Todd Young (R-Ind.), who’s part of the Senate’s newly formed bipartisan group spearheading efforts on AI. “We want this to be highly consultative.”

The AI boom has jolted the usually sluggish pace on Capitol Hill, spurred by a mix of fear and excitement over the technology’s promise to reshape global economies and national security. Lawmakers and interest groups are keen to influence any federal rules on the subject. Since May, a flurry of hearings and briefings has featured almost 50 researchers, advocates, government officials, and industry executives, and hundreds of meetings have taken place behind the scenes. More hearings and legislation are being teed up for when Congress returns from recess in September.

Senate Majority Leader Chuck Schumer (D-N.Y.) points to Sen. Todd Young (R-Ind.) during a news conference in July 2022 about their efforts to enact another technology initiative, the CHIPS and Science Act.
Eric Lee/Bloomberg via Getty Images

These groups told Bloomberg Government they’ve been impressed with the speed and vigor of lawmakers’ AI study-up efforts. They say Congress is largely approaching the issue pragmatically and trying to avoid past failures to rein in tech giants. But some have raised concerns over what regulation might eventually look like, stressing that any new rules must target AI’s biggest risks, include oversight, and account for a wider range of threats. They also want Congress to pivot to action—fast.

OpenAI, IBM Urge Senate to Act on AI After Past Tech Failures

Congress is being “admirably proactive,” said Samir Jain, vice president of policy at the Center for Democracy and Technology, a digital rights advocacy group that testified on AI in May and has been talking with congressional offices each week. “They’re asking many of the right questions. I think they’re still figuring out what direction they want to take.”

Companies across industries have been rolling out AI guidelines they say will help ensure transparency and privacy. Tech giants, including Amazon.com Inc., Alphabet Inc.'s Google, Meta Platforms Inc., and Microsoft Corp., agreed to work with the White House on adopting AI safeguards, the Biden administration announced in July.

Biden Vows to Stay ‘Vigilant’ on AI as Firms Unveil Safeguards

Still, many advocates and officials say federal rules on AI from Congress are crucial. They’ve warned lawmakers of the threat of AI-driven weapons, job loss, disinformation, and bias.

Senate Majority Leader Chuck Schumer (D-N.Y.), who’s led the AI charge in the chamber, has promised to host forums with more specialists when lawmakers return in September. He’s working to release comprehensive legislation in the coming months.

AI Rules Must Balance Innovation, Safeguards, Schumer Says

At the same time, other senators and House members are putting out a range of proposals. One bipartisan, bicameral bill (H.R. 5077), released as recess began, aims to keep the US’s competitive edge by creating a hub of computational tools and resources for a swath of researchers and students. Meanwhile, Rep. Ted Lieu (D-Calif.), along with Reps. Ken Buck (R-Colo.) and Anna Eshoo (D-Calif.), has introduced a bill (H.R. 4223) that would establish a national commission to provide recommendations on AI.

“If we get this wrong, we’d need another act of Congress to correct it,” said Lieu, whose district includes Beverly Hills and Venice, a hub of tech startups. “It’s important that we have a coalition of experts advise Congress in a transparent process, in a transparent manner, because of how complicated and fast-moving AI is.”

AI Research Infrastructure Gains Bipartisan Congress Support

Risky Business

Congress’ AI education sprint began with a high-profile May 16 hearing where Sam Altman, chief executive officer of OpenAI, developer of ChatGPT, urged lawmakers to set rules that address the technology’s risks and maximize its benefits. The hearing marked Altman’s first congressional appearance since ChatGPT’s launch last year reshaped the public’s sense of what AI is capable of.

Altman’s testimony was followed by eight more hearings and three Senate briefings on topics ranging from human rights to global competition and intellectual property. Witnesses routinely encouraged lawmakers to regulate the technology based on its riskiest uses, such as providing a medical diagnosis or reviewing an asylum application.

AI Disinformation Drives Lawmaker Fears About 2024 ‘Wild West’

A one-size-fits-all framework for AI would be impractical and ineffective, according to Joshua New, a senior fellow at IBM Policy Lab who helped prepare the company’s May testimony. Besides OpenAI and IBM Corp., executives at Google, Anthropic PBC and Hugging Face Inc. also testified over the past few months.

Google Asks US for Guidance on Artificial Intelligence Patents
Tech Industry Embraces AI ‘Watermarks’ To Combat Fake Content

Following talks with at least 100 developers, executives, scientists and advocates, Schumer in June unveiled an AI legislative framework that he said would support the technology’s growth while setting guardrails against dangers.

Yet other congressional proposals, including creating a federal agency to set AI rules or requiring industry licensing, have given some observers pause over concerns that they may hinder the US’s technological lead.

“We don’t want our peer innovators and authoritarian regimes to get ahead of the US,” said Dewey Murdick, executive director at Georgetown University’s Center for Security and Emerging Technology, who testified to Congress in June.

Looking Past ‘Shiny’ Objects

Officials and groups educating lawmakers have also advised that future AI legislation should bake in oversight, address lesser-known risks, and include perspectives from a wider range of voices.

Generative AI, the class of models that produce text, images, and audio, is the “shiny object that everyone is focused on,” Jain said. While responding to generative AI is crucial, experts said, lawmakers should also focus on ensuring AI systems don’t exacerbate social inequities or discrimination when used, for example, in job hiring or housing access.

Oversight mechanisms such as audits, for instance, remain poorly defined. “There is a lack of a clear definition of what an audit is or what an audit entails, and that’s worrisome to me,” said Rumman Chowdhury, a Harvard University AI fellow who testified in June.

More voices also need to be included in the conversation on AI regulation, some witnesses said. Small and local communities aren’t immune to AI’s harms but have been underrepresented in the national narrative, according to Chowdhury.

“We’re seeing these big, moneyed institutions—very traditionally powerful, even if they are academic—but how do we make space for other people who have been doing the work for many years?” Chowdhury said. She pointed to Indigenous-run data organizations that can help reduce bias in machine learning as an example of the kind of stakeholders lawmakers should reach out to.

Sen. Martin Heinrich (D-N.M.), who’s helped advance AI efforts in the chamber, said he wants to hear more from “all comers,” including startups, academics, and entertainers and artists in the music and film industries, where AI is already taking hold, to help guide future legislation.

“We need to think through all those use cases and then try to come up with a legislative response that speaks to as many of them as we can build consensus around,” Heinrich, who also founded the Senate AI caucus, said.

Need for Action

Groups that have spoken to lawmakers are optimistic that action will follow. Yet worries are percolating that AI could lose the national spotlight if Congress gets stuck in the learning process, the issue turns partisan, or other priorities—such as next year’s elections—take center stage.

“The technology is, obviously, getting integrated so quickly in people’s lives in so many different areas that it would behoove Congress to pass something this session,” Jain said.

AI Threats Confront Eager Congress Grappling With Learning Curve

Lawmakers don’t need to start from scratch, hearing witnesses have testified. Government agencies, such as the Federal Trade Commission and the Education Department, can set further guardrails on AI, experts say, and relevant laws and rules can be built upon.

“We don’t have to reinvent the wheel here,” New said. “We do have the capacity to do this.”

Lawmakers acknowledge the urgency behind AI regulation but also concede they have more work to do.

Sen. Josh Hawley (R-Mo.), ranking member of the Senate Judiciary Committee’s privacy, technology, and law subcommittee, which recently hosted two hearings on AI, said more is on tap.

“We’re not done by any stretch,” Hawley said.

— With data visualization by Seemeen Hashem, Cordelia Gaffney, and Jonathan Hurtarte.

To contact the reporters on this story: Oma Seddiq at oseddiq@bloombergindustry.com; Elizabeth Kim in Washington at ekim534@bloomberg.net

To contact the editors responsible for this story: Michaela Ross at mross@bgov.com; Robin Meszoly at rmeszoly@bgov.com
