After a massive blackout swept the northeastern US in 2003, Congress passed the Energy Policy Act of 2005. It authorized the Federal Energy Regulatory Commission to oversee reliability rules for the electricity grid.
Rather than the Commission developing and enforcing standards, Congress handed the pen to a decades-old industry association, the North American Electric Reliability Corporation. The group creates standards and conducts its own enforcement. The government, through the Commission, provides oversight, approving standards and requesting new ones. Today, electric grid operators must follow reliability standards that cover everything from cybersecurity to geomagnetic disturbances.
It’s a good model for AI.
Unfortunately, current US policy is trending in a different direction. The US Senate's AI Roadmap, released in May 2024, encourages rules for AI deployers, such as banks or hospitals. Noticeably absent are recommendations for regulating AI developers along the lines of the European Union's AI Act or California's SB-1047.
This approach is imbalanced. It imposes obligations on technology deployers, not technology developers. It's the automobile equivalent of regulating the driver, not the automaker. While it's important that drivers follow the speed limit, we still want automobiles with airbags and reliable brakes. Or, to use an electric grid analogy: we require hospitals to have backup generators, but we also require power providers to meet reliability standards.
The recent Delta-CrowdStrike drama underlines this dangerous developer vs. deployer imbalance. Delta faces investigations and lawsuits for its flight cancellations. CrowdStrike will likely (legally) walk away — even though its software update caused the crisis. Imagine a world where autonomous AI agents and systems are integrated into healthcare, finance, and other sectors. The security and reliability of these systems will be paramount.
The Biden Administration is attempting to improve software development through voluntary pledges, Federal procurement, and strongly worded reports. All are insufficient for AI. Congressional action is required. If lawmakers step up, what could they do?
The first option is a traditional regulatory regime: a new or existing government agency would receive authority to create and enforce standards on AI developers. California's original SB-1047 and some proposed model legislation take this path.
The challenge will be political viability. California's SB-1047 originally proposed a new state division with the authority to create and enforce standards on AI developers. Reasonable and unreasonable criticism followed. Industry, unsurprisingly, opposed the bill, and it has since been dramatically changed.
At the Federal level, a California-style approach looks impossible. Dominant California Democrats favor policy approaches that resemble Brussels more than Washington, with the notable exception of Speaker Emerita Nancy Pelosi. In Congress, Republicans' historical opposition to regulation, combined with industry lobbying, will block SB-1047-style legislation.
In contrast, a self-regulatory model could prove politically palatable. Bipartisan and industry support exists for similar proposals in other sectors. For example, a new bill proposes a non-government regulator to create cybersecurity standards for water utilities. Republicans sponsored the bill. Industry supported it. By leveraging industry expertise to create standards, the government can concentrate on oversight and on identifying risks.
AI developers will prefer the status quo, and when it comes to influencing legislators, small water utilities and tech industry giants are in different leagues. But in the long run, AI developers may see self-regulation as the lesser of two evils. Ideally, this approach would avoid prescriptive regulation or outright bans, allowing innovation and safety to coexist.
AI developers have already laid the groundwork by making safety and security commitments and establishing the Frontier Model Forum. These efforts could evolve into a future AI standards and enforcement body, similar to the electric grid's industry organization.
Aside from industry objections, expect other challenges. AI safety and security advocates will have valid concerns about self-policing, regulatory capture, and lax standards. To address these concerns, advocates should lobby for legislation that includes strong oversight and allows civil society to weigh in. Congress must create or designate a technically competent government agency or commission to provide oversight.
This is not a perfect solution, nor one that anyone is ready for today. The electric grid’s self-regulation evolved over decades. But as AI continues to progress, the stakes are too high to hold out for flawless solutions.
Christopher Covino is Senior Cyber Policy and Strategy Advisor at The Rosslyn Group, AI Policy Fellow with the Institute for AI Policy and Strategy, and Visiting Analyst at The Future Society.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.