Ten Principles for AI Policy

ARI’s approach to AI policy stems from the following ten core principles.

1. Navigate uncertainty

Over the past few years, the size and capabilities of frontier AI models have grown exponentially, and the pace of progress has surprised experts. However, there is still disagreement about how quickly AI will advance in the coming years. The only thing we can be certain of is that no one can say with high confidence what trajectory the technology will take, leaving open possibilities ranging from technological stagnation to hypergrowth. A key challenge for policymakers will be to suspend judgment about which prediction is “right” and instead craft policies that work across a variety of scenarios. ARI aims to help them navigate this difficult terrain.

2. Build regulatory muscle

Right now, we cannot be certain what AI capabilities will emerge in the next few years, what opportunities and risks they will create, or what regulatory and mitigation strategies will be available. Given this uncertainty, it is important that the federal government build the capacity to regulate AI effectively, so that oversight can adapt to emerging risks as the technology evolves. This “regulatory muscle” includes people, institutions, processes, and laws. The best way to start building it is to address the societal harms AI is already causing, such as nonconsensual deepfakes, algorithmic bias, and security risks, while recognizing the technology’s benefits.

3. Maintain America’s lead

AI can strengthen democracies by enhancing education, accelerating market economies, and improving national security. However, AI can also advance authoritarianism by automating state surveillance and propaganda. Fortunately, the U.S. and its allies lead the world in the development of advanced AI, thanks to our free and open societies (which attract top talent from around the world) and highly interconnected market economies (which enable the intense specialization that advanced semiconductor supply chains require). U.S. policymakers have a duty to maintain that lead. We must take care not to over-regulate in ways that would cause the U.S. to fall behind, and we must avoid handing our advantages to geopolitical competitors like China and Russia. Finally, the U.S. should take a global leadership role in coordinating with allies to set the rules of the road.

4. Upskill government

There is an urgent need for a significant infusion of AI expertise across all branches and levels of government to ensure informed policymaking and effective governance in the face of rapidly advancing AI technologies. This expertise is essential for anticipating future challenges, shaping strategic responses, and harnessing AI’s potential to benefit society.

5. Avoid regulatory capture

In building any new regulatory structures, we must be vigilant against regulatory capture. With a technology as advanced and complex as AI, most of the expertise lies in industry, and it is understandable that policymakers become over-reliant on industry voices. Policymakers should listen to industry for its expertise and perspective, but they should not let it dominate the conversation; civil society must also participate. This was a key motivation for founding Americans for Responsible Innovation: to contribute independent expertise to policymakers without financial conflicts of interest. ARI can also promote best practices in institutional design and oversight to mitigate this risk.

6. Regulate according to risk

The term “AI” encompasses a wide variety of software systems, from so-called “Good Old-Fashioned AI” (GOFAI) to state-of-the-art multimodal large language models. There is no one-size-fits-all approach, so we must start by distinguishing different types of AI, understanding their use cases and risks, and then regulating according to those risk levels. Policy efforts should focus on systems that pose specific risks to the public. When possible, these risks should be addressed through voluntary market-shaping mechanisms (e.g., government purchasing and standards setting) rather than regulatory mandates.

7. Move quickly

Inaction by Congress will harm both innovation and public safety. To thrive, the American AI field needs R&D investment, standards setting, regulatory harmonization, and public trust. The longer we wait, the harder the rapid pace of innovation will make it to steer the technology’s trajectory in the public interest. Absent leadership from Congress, states and other countries will develop a patchwork of regulations that makes compliance more costly. And if the technology advances quickly without adequate guardrails, a crisis could trigger a loss of public confidence, leading to overregulation and missed opportunities for progress.

8. Counteract market concentration

Because AI foundation models are very expensive to train, the market tends naturally toward winner-take-all dynamics. Market concentration in the AI sector would lead to worse outcomes both economically (fewer benefits to consumers) and politically (greater risk of regulatory capture). We must strive to level the playing field, for example by empowering researchers outside industry to contribute to the field and by ensuring that a variety of businesses can compete fairly.

9. Embrace economic transformation, but soften the landing

AI could drive rapid progress, but also job displacement on an unprecedented scale. We cannot stop this economic transformation, nor should we try. But we must pay attention to the needs of people whose career skills are devalued by these changes, and find ways to give them a soft landing.

10. Adopt AI in government responsibly

One of the best ways for the U.S. government to address the challenges of the 21st century, including those posed by AI, is to adopt AI tools responsibly and integrate them into its systems and processes. For example, AI is sparking an arms race in cybersecurity: cybercriminals are wielding increasingly powerful offensive capabilities, which organizations and law enforcement must counter by adopting AI-powered defenses. At the same time, hasty adoption of AI in government could produce systems that exacerbate inequalities or break when conditions change. We need processes for responsible AI adoption so that government can keep up with the rapid pace of change.