The biggest U.S. AI companies all have strong views about what the country’s incoming “AI Action Plan” should look like, but they don’t all want the same things.

With the deadline for submissions having passed on Saturday, now is a good time to compare what OpenAI, Anthropic, Microsoft and Google had to say. (Meta has presumably also made a submission, but, unlike its peers, it has not publicized its proposals.)

So here is that comparison. (For brevity’s sake, we have not included the submissions of lobbying groups and investors, nor those of various institutes and think tanks—but we are including a list of links to these proposals at the bottom of this piece.)

AI laws

As we wrote last week, OpenAI’s submission called on the Trump administration to rescue it and its peers from a likely flood of disparate state-level AI laws; more than 700 such bills are currently pending. But it doesn’t want federal legislation. Rather, OpenAI (which was loudly calling for AI legislation a year or two ago) now wants a narrow, voluntary framework that would pre-empt state regulation. Under this deal, AI companies would get juicy government contracts and a heads-up on potential security threats, and the government would get to test the models’ new capabilities and evaluate them against foreign models. (Notably, most of the top AI firms, including OpenAI, had already voluntarily committed to doing this when the Biden administration was in power.)

Google also wants the pre-emption of state laws with a “unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive.” However, it isn’t against the idea of federal AI regulation, as long as it focuses on specific applications of the technology and doesn’t hold AI developers responsible for the tools’ misuse. Interestingly, Google used this opportunity to push for a new federal privacy policy that would also pre-empt state-level efforts, on the basis that this affects the AI industry too.

Google wants the administration to engage with other governments on the issue of AI legislation, to push back against any laws that would require companies to divulge trade secrets, and to establish an international norm under which only a company’s home government would get to deeply evaluate its models.

Export controls and China

The big AI companies all urged the Trump administration to revise the “AI diffusion” rule that the Biden administration introduced in January in an attempt to stop China from routing unlawful imports of powerful U.S. equipment through third countries. But they want different things.

OpenAI would like more countries added to the rule’s top tier, which allows uncapped imports of U.S. AI chips, as long as those countries “commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens.” (It calls this “commercial diplomacy.”) Most countries are currently in the rule’s second tier; fewer than 20 are in the top tier.

Microsoft also wants the number of countries that qualify for the Diffusion Rule’s Tier 1 category expanded. Meanwhile, it wants more resources devoted to helping the Commerce Department enforce the portion of the Diffusion Rule stating that cutting-edge AI chips can be exported and deployed only in data centers that the U.S. government certifies as trusted and secure. It says this would prevent Chinese companies from accessing the most powerful AI chips through a burgeoning gray market of small data-center providers in Asia and the Middle East that don’t ask too many questions about who, exactly, is renting time on their servers. (Microsoft has not yet published its full submission on the U.S. AI Action Plan; instead, it published a blog post from its president, Brad Smith, laying out what it thinks the Trump administration should do about the Diffusion Rule.)

Anthropic wants countries in Tier 2 to face even tighter controls on the number of Nvidia H100s they can import. The Claude-maker also wants U.S. export controls expanded so China cannot get its hands on Nvidia’s less-powerful H20 chips, which Nvidia specifically designed for the Chinese market to get around existing U.S. export controls.

Google doesn’t like the AI diffusion rule at all, arguing that it imposes “disproportionate burdens on U.S. cloud service providers,” even if its national-security goals are valid.

OpenAI has also suggested a global ban on Huawei chips and Chinese “models that violate user privacy and create security risks such as the risk of IP theft,” which is being widely interpreted as a dig at DeepSeek.

Copyright

OpenAI scorned Europe’s AI Act, which gives rightsholders the ability to opt out of having their works automatically used to train AI models. It urged the Trump administration to “prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress.”

Google, meanwhile, called for balanced copyright laws, as well as privacy laws that automatically grant an exemption for publicly available information. It also suggested a review of “AI patents granted in error,” especially because Chinese companies have recently been scooping up increasing numbers of U.S. AI patents.

Infrastructure

OpenAI, Anthropic and Google all called for streamlined permitting of transmission lines, to encourage a faster energy buildout to support new AI data centers. Anthropic also called for an additional 50 gigawatts of power capacity in the U.S., dedicated solely to AI use, by 2027.

Security and government adoption

OpenAI called on the government to speed up cybersecurity approvals of the top AI tools so agencies can more easily test their use. It proposed public-private partnerships to develop national-security models that would otherwise have no commercial market, such as models for classified nuclear tasks.

Anthropic also suggested speeding up procurement procedures to get AI embedded into government functions. Notably, it called for strong security-evaluation roles for the National Institute of Standards and Technology and the U.S. AI Safety Institute, both of which have been hit hard by the Trump administration’s mass firings.

Google argued that national-security agencies should be allowed to use commercial storage and compute for their AI needs. It also called on the government to free up its datasets for commercial AI training, and to mandate open data standards and APIs across different government cloud deployments to enable “AI-driven insights.”

AI’s effects

Anthropic urged the administration to keep a close eye on labor markets and prepare for big changes. Google also said shifts were coming, arguing that they would require broader AI skills across the workforce. It also asked for more funding for AI research and a policy to make sure U.S. researchers have access to enough compute power, data, and models.

Other submissions included those from: the Future of Life Institute, Internet Works, the News Media Alliance, the Association of American Publishers, the Authors Alliance, the Business Software Alliance, the Securities Industry and Financial Markets Association, the American National Standards Institute, the Center for AI and Digital Policy, a16z, the Center for Data Innovation, the ARC Prize Foundation, the R Street Institute, the Abundance Institute, and the Foundation for American Innovation.

This story was originally featured on Fortune.com