Ever since OpenAI released ChatGPT to the public in 2022, the buzz around artificial intelligence (AI) hasn’t stopped. With the release of competing models, the spiraling number of uses and the quick integration into people’s daily lives, the technology remains at the center of much debate. Today, three years later, AI still feels largely new. People disagree on its effects and offer conflicting answers to basic questions, such as how much use is too much, creating widespread confusion.
Part of the problem is that while individuals and firms have made their decisions and taken their stances, the government has largely remained too divided to pass much actionable policy. Like the general public, lawmakers remain divided along party and state lines when it comes to their stance on AI. The current administration sides with deregulation, believing that restrictive laws stifle innovation and hinder the U.S. in its race with China. Proponents of a stricter AI policy, however, argue for protecting consumers from harm and misinformation, as well as for regulations to safeguard the resources needed to run large models.
So, should the U.S. regulate?
The answer has become a legal battlefield where state governments and the federal executive branch are currently locked in a standoff. In the absence of a unified federal law, some states have decided they cannot afford to wait to pass policy. Recently, California enacted the Transparency in Frontier Artificial Intelligence Act (SB 53), which represents the most significant state-level intervention to date.
The law targets large developers with hundreds of millions of dollars in revenue, requiring them to report regularly on a defined list of catastrophic risks: risks that could lead to mass casualties or massive property damage. It imposes heavy fines for violations and creates a legal framework in which developers must disclose their safety protocols and protect whistleblowers who report dangerous practices. By setting such high stakes, California has discarded the previous standard of voluntary transparency and installed a mandatory legal framework.
California is not alone in this state-led movement. Colorado and Illinois have also enacted their own mandates, designed to stop algorithmic discrimination in areas like hiring and housing. While these mandates fill the gap left by the absence of national-level policy, their overlapping efforts to protect citizens from bias also create what some have called a “patchwork of regulation.”
For a startup trying to break into the industry, navigating fifty different sets of rules is a daunting task. This is the primary concern cited by federal policymakers, who worry that fragmented rules will cause the U.S. to lose its competitive edge against global rivals like China.
In response, the federal government has recently taken an aggressive stance against this state-level expansion. In December, the White House issued an executive order aimed at eliminating state-law obstruction of national AI policy. The order frames these state laws as burdensome and even goes as far as to label them ideologically biased. To force states back in line, the administration has threatened to withhold federal funding from those that refuse to align with the national preference for a light-touch approach. It even directed the creation of a special task force within the Department of Justice to sue states and overturn their AI safety laws in court.
Meanwhile, proponents of regulation point to the European Union (EU) as the gold standard for how regulation should be handled in the U.S. The EU AI Act uses a risk-based hierarchy that bans certain unacceptable uses of AI, such as social scoring and other forms of behavioral monitoring, while strictly regulating high-risk applications in industries like healthcare and law enforcement.
For many American legal experts and firms, the EU rules are seen as a blueprint that the U.S. should mirror to ensure that technology serves human interests rather than the other way around. They argue that without these guardrails, we risk a future where misinformation and algorithmic bias become permanent features of our social fabric.
There is also the matter of infrastructure. While we often think of AI as something living on the web, it requires a massive amount of real-world facilities to function. Large models often require data centers that can consume as much electricity as a small city. This has created a secondary front for regulation.
Even the recent federal push to deregulate AI includes exceptions for state power over data center infrastructure. States argue that they must regulate the physical footprint of AI to protect their energy grids and ensure that this digital revolution does not lead to local environmental crises.
This conflict over AI policy mirrors the legal precedent set in aviation law. The federal government maintains control over the design of an aircraft engine to ensure uniformity across borders, while states are left to regulate the local environment of the airport itself. With AI, we are watching the struggle over who gets to define the engine and who gets to define the environment. This has effectively turned the regulatory debate into a legal civil war between the federal government’s desire for national dominance and the states’ desire for safety.
The debate over whether the U.S. should regulate ultimately comes down to a choice between two types of risk. On one hand, there is the risk of falling behind in a global technological race that will define the next century of economic power. On the other, there is the risk of deploying powerful systems without any public accountability, potentially leading to widespread misinformation or critical safety failures.
A nuanced approach would likely involve a compromise that has yet to be reached. The federal government could establish a national standard for safety that provides the uniformity businesses need to grow, while still allowing states to protect their citizens from specific local harms.
However, as of early 2026, we are still far from that middle ground. Instead, we are entering a period of intense litigation, where the courts, rather than the voters, may decide how AI is governed in this country. Until a comprehensive federal law is passed, the United States will remain caught between a federal government pushing for speed and states pulling for safety, leaving the rest of us to live inside an experiment whose rules have yet to be written.