Deeply divided partisan politics, powerful industry lobbying, and a complex, slow-moving legislative process have kept Washington from enacting major AI and technology regulation.
In contrast, Europe has moved quickly on privacy, competition, and artificial intelligence, most recently with the ambitious AI Act. California and other states are attempting to keep up, with mixed success.
Congress’s deadlock on AI regulation has pushed Washington to rely on executive orders, blueprints, and administrative memoranda. Earlier this week, the Biden administration issued the first national security memorandum on AI, ordering the Pentagon, the intelligence agencies, and other national security institutions to harness the most powerful AI tools and to put “guardrails” around their use.
The memorandum follows a directive in Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which requires developers to create standards, tools, and tests to ensure that AI systems are safe and to mitigate risks such as the use of AI to engineer dangerous biological materials.
Although optimists say that US executive actions position the country as a leader in AI governance, executive orders are limited—they set lofty goals but lack resources and can be reversed by the next president on day one.
US state regulators are trying to fill the vacuum left by a stagnant Congress and limited executive authority. Where federal law does not preempt state law, states can step in to fill regulatory gaps. California did so when it established a de facto nationwide standard for Internet data privacy with its California Consumer Privacy Act. Because the Internet has no physical boundaries and crosses state lines, such state regulation effectively sets the rules nationwide.
California’s legislature took a bold step toward setting a new standard by passing SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which targets doomsday scenarios. Under the proposed law, developers of large artificial intelligence models would become liable for “catastrophic” harms. The bill highlights a key question in AI regulation: should the focus be on the models and their developers, or on the uses and applications of the technology?
California’s “doomsday bill” sought to prohibit AI models that pose an unreasonable risk. Developers would have had to ensure that users could not inflict “critical harm” or access a model’s “hazardous capabilities.” “Critical harm” includes an AI system creating chemical, biological, radiological, or nuclear weapons capable of causing mass casualties; executing a massive cyberattack on critical infrastructure; or causing severe physical damage that, if carried out by a human, would be a crime. The bill would have applied to all large-scale AI systems that cost at least $100 million to train.
After fierce opposition from startups, tech giants, and several Democratic House members, California Governor Gavin Newsom vetoed the bill. The Democratic governor told a Silicon Valley audience that while California must lead in regulating AI in the face of federal inaction, the proposal “can have a chilling effect on the industry.”
While the AI safety community saw the veto as a significant and disheartening setback, some remain optimistic that it could pave the way for a practical, comprehensive AI regulatory bill with a greater chance of success. Instead of focusing on hypothetical doomsday scenarios, such as those raised by the Future of Life Institute’s call for an AI “pause,” a new California AI safety effort could target real-world issues such as deepfakes, privacy violations, copyright infringement, and the impact of AI on the labor force.
Such an approach has already succeeded in Colorado. The “Colorado AI Act” is the first comprehensive state-level AI legislation. Officially titled “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems,” it obligates developers and deployers to exercise reasonable care in protecting consumers from known or reasonably foreseeable risks of “algorithmic discrimination” arising from the intended or actual use of high-risk AI systems. A “high-risk AI system” is one that plays a significant role in making a “consequential decision,” meaning a decision that materially affects a consumer’s access to, or the cost and terms of, a product, service, or opportunity.
Others believe that California’s failure to enact an AI safety bill will prompt Congress to recognize the need for federal legislation. Recent Nobel laureate Geoffrey Hinton has warned of AI’s growing dangers. During his time in the tech industry, this “Godfather” of AI built the foundations of the neural-network machine learning that mimics human intelligence. As he and others ramp up their push for AI regulation, the question remains: can Congress pass an AI act that protects both innovation and safety?
Hillary Brill is a Senior Fellow at CEPA’s Digital Innovation Initiative. She is the founder of HTB Strategies, a legislative advocacy and strategic-planning practice whose clients range from Fortune 500 companies to public interest and academic organizations. She is also a Senior Fellow at Georgetown Law’s Institute for Tech Policy & Law, where she teaches her novel curriculum on Technology Policy Practice, e-commerce, and Copyright Law.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.