Trump Takes Aim at State AI Laws in Draft Executive Order
In a significant move that could dramatically reshape the landscape of artificial intelligence regulation in the United States, President Donald Trump is reportedly poised to sign a sweeping executive order designed to dismantle state-level efforts to govern AI. The draft order, which WIRED has obtained and reviewed, reveals an aggressive strategy: leveraging federal lawsuits and threatening to withhold crucial federal funding from states that implement their own AI safety and transparency measures. The initiative marks a decisive escalation in the ongoing debate over how to balance rapid technological innovation with the growing need for accountability and safety in AI development.
Currently titled "Eliminating State Law Obstruction of National AI Policy," the executive order underscores a clear intent to establish a "minimally burdensome national standard" for AI, explicitly rejecting what it describes as "50 discordant State ones." Sources familiar with the matter indicate that Trump could sign this order as early as this week, though a White House spokesperson has characterized discussions about potential executive orders as "speculation." Should it proceed, the order would empower the federal government to directly challenge state regulations that are deemed to impede a unified national AI policy.

At the heart of the draft order is the directive for US attorney general Pam Bondi to establish an "AI Litigation Task Force." This specialized unit would be charged with suing states in federal court for enacting AI regulations that purportedly infringe upon federal laws, particularly those concerning free speech and interstate commerce. The legal arguments are expected to hinge on the First Amendment, suggesting that some state regulations might compel AI models to alter "truthful outputs" or force developers to "report information in a manner that would violate the First Amendment or any other provision of the Constitution." Additionally, the Commerce Clause, which grants Congress the power to regulate commerce among states, would be a key legal battleground, with the argument that diverse state regulations create an undue burden on AI companies operating across state lines.
The order points to recently enacted AI safety laws in California and Colorado as prime examples of the kind of state-level initiatives it seeks to combat. These laws typically mandate that AI developers publish transparency reports detailing how their models are trained, among other provisions aimed at increasing accountability and understanding of AI systems. Such requirements are often seen by proponents as essential for public trust and mitigating potential harms like bias or misinformation. However, for large technology companies and their lobbying groups, these state laws represent a "patchwork" approach that stifles innovation and creates an unwieldy compliance burden.
Powerful Big Tech trade groups, including Chamber of Progress – which counts venture capital firm Andreessen Horowitz, Google, and OpenAI among its backers – have been at the forefront of lobbying efforts against these state-led regulations. Their argument is consistent: a fragmented regulatory landscape across 50 states hinders the agility and scalability required for AI innovation, potentially undermining the United States' competitive edge in the global AI race. These groups advocate instead for a "light-touch" set of federal laws that would provide a consistent framework without imposing what they view as overly restrictive mandates. David Sacks, the White House special adviser for AI and crypto who would work with the AI Litigation Task Force, embodies this perspective, signaling a clear alignment between the administration and industry concerns.
Beyond direct legal challenges, the draft order outlines another powerful lever: the withholding of federal funding. It instructs the Department of Commerce to develop guidelines that could render states ineligible for funding from programs designed to expand access to high-speed internet. The Broadband Equity, Access, and Deployment (BEAD) Program, administered by the Commerce Department, is a more than $42 billion initiative aimed at bridging the digital divide by investing in broadband infrastructure. Threatening to cut off access to these funds represents a substantial economic cudgel, potentially forcing states to reconsider their AI regulatory stances to avoid losing out on vital infrastructure development. Cody Venzke, senior policy counsel at the American Civil Liberties Union (ACLU), has sharply criticized this aspect, stating, "Both the law and the Constitution prevent the President from unilaterally attaching strings to federal funds, especially when the stakes are this high."
The executive order's aggressive posture also reflects broader political and ideological concerns articulated by Trump himself. On Truth Social, Trump recently decried the "overregulation" of AI and accused some states of embedding "DEI ideology into AI models, producing 'Woke AI.'" The draft order appears to directly address these allegations, calling on the Federal Trade Commission (FTC) to declare that states cannot pass laws that manipulate AI outputs. This framing suggests a perceived threat to the neutrality or "truthfulness" of AI systems, aligning with a broader cultural critique that certain ideologies are being enforced through technological means.
The pushback against state AI regulations is not new. Silicon Valley has been intensifying its pressure on state lawmakers. For instance, a super PAC funded by Andreessen Horowitz, OpenAI cofounder Greg Brockman, and Palantir cofounder Joe Lonsdale recently launched a campaign targeting New York Assembly member Alex Bores, the author of a state AI safety bill. Concurrently, House Republicans have reignited efforts to pass a blanket moratorium on states introducing AI laws, following the failure of an earlier version of the measure. These concerted actions from industry and allied political factions underscore the powerful forces aligned against a state-by-state approach to AI governance.
The legal basis for challenging state laws under the Commerce Clause has been a focal point for industry advocates. Andreessen Horowitz's head of AI policy and its chief legal officer, for example, published a letter arguing that several state AI laws raise significant concerns under this constitutional provision. The argument often centers on the idea that AI models and services inherently transcend state borders, so disparate regulations would create an unmanageable compliance burden, impeding the free flow of innovation and commerce.
In addition to the punitive measures against states, the draft order also calls upon White House senior AI advisers to draft legislation establishing a comprehensive federal regulatory framework for AI. This dual approach – dismantling state efforts while simultaneously proposing a federal alternative – aims to consolidate regulatory authority at the national level, presumably under a more industry-friendly set of guidelines.
Critics like the ACLU’s Venzke contend that while a national standard might appear appealing for consistency, the approach outlined in the draft order could fundamentally undermine public trust in AI. "If the president wants to win the AI race, the American people need to know that AI is safe and trustworthy," Venzke stated. "This draft only undermines that trust." The concern is that by aggressively pushing back against state efforts to ensure transparency and accountability, the federal government might be perceived as prioritizing corporate interests over public protection, potentially fostering an environment where AI development outpaces ethical considerations and safeguards.
The proposed executive order highlights a fundamental tension in American governance: the balance between federal authority and states’ rights, particularly in rapidly evolving technological domains. While proponents argue for the necessity of a unified national strategy to foster innovation and global competitiveness, opponents emphasize the role of states in responding to local needs, protecting consumers, and experimenting with diverse regulatory approaches. Should this executive order be signed, it would initiate a protracted legal and political battle, setting a precedent for how emerging technologies are governed and potentially redefining the roles of federal and state governments in shaping the future of artificial intelligence in the United States. The stakes are immense, impacting not only the future of AI development but also the very structure of regulatory power in the nation.