With David Sacks, a Silicon Valley venture capitalist and technologist, now tapped to serve as Artificial Intelligence (AI) and Cryptocurrency Czar in the incoming Trump administration, we thought we would offer him a friendly memorandum to support his mission on the AI front: regulating the future of this all-important technology while balancing innovation, trade policy, national security, workers’ interests, geopolitics, and more. Here is a list of dos and don’ts, in no particular order of importance, for the incoming AI Czar.
Encourage American dynamism and entrepreneurship.
The race for AI is global, and thus far the United States leads the world in AI research as well as in AI applications across industry, the military, science, and development. Sustaining that leadership requires not only smart regulation but also enabling high-skilled engineers to come to the United States to study and build their companies. This is a race for the future, and the U.S. government can put its foot on the pedal to ensure continued leadership.
Consider nuanced, granular regulation.
There is a race to develop and deploy AI tools, but also a race to regulate them. On the latter front, avoid the temptation to regulate for regulation’s sake. Plenty of laws are already on the books protecting consumer safety and citizen welfare. Assess whether they cover AI applications and, where they do, simply provide guidance. A patchwork of state-level regulations makes compliance an expensive headache.
Provide an Artificial Intelligence-safety net for the American worker.
The risk of AI disrupting the employment status quo is quite high. In March 2023, Goldman Sachs predicted that 300 million knowledge workers’ jobs could be displaced worldwide. While creative destruction runs its course and the actual numbers may differ, affected workers will need transitional assistance with career planning, retraining, and job placement. Allocating government funds for career safety nets and reskilling initiatives is taxpayer money well spent. But AI is not like the trade deals of the past, where trade adjustment assistance was simply handed out. Such support is necessary, but it should be far more strategic and carefully thought through.
Take national security concerns seriously. Artificial Intelligence in the wrong hands, including those of geopolitical rivals, is something to worry about.
Bad guys, nefarious actors, and automation no-goodniks will also be using AI tools. Ensure the good guys stay several steps ahead. Increase the government’s human capacity. And let’s not forget that previous generations of pathbreaking technologies were seeded by government labs.
Avoid regulatory capture. Don’t do Big Tech’s bidding please.
Regulatory capture is a problem in any industry, but AI presents its own tricky challenges. Technical expertise resides largely within the AI companies themselves, and not so much outside them. In other industries, academia and government research labs have comparable expertise; in AI, the leading labs all come from industry itself. The technology is also moving fast, faster than the ability of our authorities to rein it in. The prize for the winners is quite high, dominating the future of production, consumption, and even opinion, and so is the temptation to lobby the regulators and influence regulation.
Don’t let activism rule the agenda.
Many claims circulate about the harms of AI; some are true, and some are hypothetical. AI alarmism comes both from the usual suspects, activists and non-governmental organizations, and from technology circles that imagine apocalyptic AI scenarios. The trick is to pick the right AI risks to regulate.
Ensure responsible usage of Artificial Intelligence.
Clarify the rules for the industry and don’t keep the AI sector guessing. Where there is regulation, provide guidance on the steps needed for compliance. The field moves fast, so retain regulatory flexibility through mechanisms such as periodic review of the rules. That is the essence of smart regulation.
Encourage open-source technology.
Open source allows for crowdsourced innovation, expands the talent pool, and makes inexpensive alternatives available. In short, open source provides optionality. Relatedly, don’t dismiss out of hand a public option in Artificial Intelligence. If you want a successful moonshot, consider a few nationalization ideas to keep everyone “buying American.” After all, government programs like DARPA created the internet and many other pioneering technologies that are the precursors to today’s Artificial Intelligence.
Shut down impractical ideas fast.
For example, some experts have floated an international AI regulator analogous to the International Atomic Energy Agency. Given the poor job the United Nations and other multilateral institutions have done with the mandates they currently have, that is probably not such a great idea. And AI moves so fast that international institutional lethargy and kumbaya moments are not going to cut it.
Lastly, don’t throw away the good work of the Biden administration.
In AI, the United States operates from a position of strength, built on strong foundations laid by industry, academia, the investment community, and, yes, the previous administration. For example, leadership in AI is correlated with leadership in the hardware sector, and there have been some good moves on semiconductor policy. So don’t throw away any of the good work of the Biden administration.
Good luck with the new gig and remember that we cannot just prompt our way to AI leadership.
James Cooper is a Professor of Law at California Western School of Law in San Diego. Kashyap Kompella, CFA is founder of the AI advisory company AI Profs and Visiting Faculty at the Institute of Directors. Cooper and Kompella are the authors of the recently published “A Short and Happy Guide to Artificial Intelligence for Lawyers.”