Our Priorities

To advance the cause of responsible innovation, ARI advocates for thoughtful, targeted policies that address AI's risks and opportunities from multiple angles.

Cross-Cutting Policy Recommendations

In addition to policies that address specific harms and risks, ARI recommends the following measures that cut across issue areas and improve the U.S. government’s ability to lead on AI.

  • Establish an AI Auditing Oversight Board (AIAOB), modeled on the successful Public Company Accounting Oversight Board (PCAOB), to ensure integrity in external audits of AI labs.
  • Establish a U.S.-led AI Suppliers Group, modeled on the Nuclear Suppliers Group and composed of responsible democracies leading in AI and semiconductors, to coordinate on safety and export controls.
  • Update the Federal Acquisition Regulations (FAR) to incorporate considerations of risks presented by the use of AI systems.
  • Increase funding for the Commerce Department’s National Institute of Standards and Technology (NIST) to help develop regulatory capacity on AI.
  • Democratize access to AI for academic researchers by generously funding the National AI Research Resource (NAIRR) and federal testbeds.
  • Bring more AI experts into federal agencies by expanding personnel hiring flexibility.

Current Harms

Government is falling behind in addressing the harms already arising from AI systems, including algorithmic bias, electoral interference, and labor market disruption.

  • Promote public trust by requiring disclosure when automated systems (e.g., chatbots) impersonate humans.
  • Require the use of AI model “nutrition labels” to clarify intended use cases and minimize the usage of models in contexts for which they are not well suited.
  • Safeguard privacy and human dignity by prohibiting the distribution of non-consensual AI-generated intimate images.
  • Allow content creators to quickly and easily opt out of having their work product included in datasets used to train AI models.
  • Prevent non-consensual deepfakes and strengthen individual control over the use of personal data, image, and likeness.
  • Safeguard the integrity of federal elections by prohibiting distribution of deceptive AI-generated audio or visual content to influence an election or solicit funds.

National Security

We must preserve America's lead in the global race for AI innovation while improving our defenses against AI-powered cyber, robotic, chemical, and biological weapons from state and non-state actors.

  • Apply “Know Your Customer” regulations to U.S.-based cloud providers to prevent countries of concern and malicious actors from accessing cloud resources to bypass export controls and train powerful AI models.
  • Create reporting requirements for large data centers to allow tracking of large compute clusters.
  • Increase funding for the Commerce Department’s Bureau of Industry and Security to encourage development and efficient administration of export controls on cutting-edge semiconductors used to develop large AI models.

Emerging Risks

As AI capabilities increase over the coming years, misuse and misalignment of powerful AI systems could present dangerous risks that we must try to mitigate.
  • Create an incident reporting database to help regulators and experts track emerging risks and dangerous unexpected behavior from AI systems.
  • Impose reporting requirements on the largest AI models to facilitate federal monitoring of capabilities, communicate suggested policy actions to relevant bodies throughout the government, and reduce risks.
  • Develop new benchmarks to test generative AI on extreme capabilities such as facilitating development of chemical, biological, radiological, and nuclear weapons.
  • Require pre-clearance for the most advanced frontier model training runs.
  • Conduct post-deployment auditing of advanced AI models to assess conformity with standards.
  • Require screening of DNA synthesis requests to prevent proliferation of dangerous biological material.
  • Fund research in interpretability, robustness, safety, and security of AI systems.