Across the United States, a growing number of state lawmakers are moving to draw a firm legal line between humans and artificial intelligence. In recent months, several states have passed or proposed legislation that would explicitly deny AI systems the status of legal “personhood,” a concept that carries significant rights and responsibilities under American law. Supporters call the effort a matter of common sense and accountability, while critics say it raises deeper questions about how the law defines a “person” in an era of rapidly advancing technology.
Being recognized as a person in the United States has never been a purely biological matter. Legal personhood determines who or what can own property, enter contracts, sue or be sued, and claim constitutional protections. Historically, access to those rights has expanded and shifted over time, often through intense political and moral conflict. Today, that long-running debate has reached a new frontier: whether artificial intelligence should ever qualify as a legal person.
Several states are acting preemptively. Oklahoma, Idaho, Utah, North Dakota, and Ohio are among those where lawmakers have introduced or enacted measures that would bar any government entity from recognizing AI as a legal person. In Oklahoma, Rep. Cody Maynard described the motivation bluntly, arguing that “AI is a man-made tool, and it should not have any more rights than a hammer would.” He said the legislation is meant to counter growing public confusion about AI systems and to ensure that responsibility for harm caused by AI remains with the humans and companies behind it.
At the heart of the issue is how American law defines “personhood,” a concept that legal scholars emphasize is far more flexible than many assume. Katherine Forrest, co-chair of the Global AI Group at Paul, Weiss, Rifkind, Wharton & Garrison LLP, noted that “in American legal history, the definition of person has had a very flexible meaning over time.” Early in U.S. history, full legal rights were reserved for a narrow group of white male property owners, while women, enslaved people, and Indigenous populations were granted limited or no recognition as legal persons.
Those definitions have evolved, but they remain contested. Today, one of the most visible debates over personhood centers on abortion law, with some states arguing that unborn babies should be recognized as legal persons. Michael Froomkin, a law professor at the University of Miami, said that “a person is actually less well defined than you might think,” pointing out that legal ambiguity has long been part of the concept.
The law has also extended certain forms of personhood to non-human entities. Corporations are the most prominent example. In the late 19th century, court decisions such as Santa Clara County v. Southern Pacific Railroad (1886) established that corporations enjoy certain constitutional protections. Over time, corporate personhood became a practical legal tool. As Froomkin explained, treating corporations as persons is “convenient” for purposes like contracts and property ownership because it advances specific social and economic goals.
That precedent is one reason lawmakers are wary of allowing similar recognition for AI. Legal experts warn that granting AI personhood could complicate or undermine accountability. Sital Kalantry, a law professor at Seattle University, said that if an AI system—such as a self-driving car—were considered a legal person, it could become harder to hold manufacturers or developers responsible for harm. “The worry could be that you couldn’t get to the person making that self-driving car,” she said.
Froomkin echoed that concern, warning that recognizing AI as a person could allow companies to argue that they are not responsible for the actions of systems they designed. “That wouldn’t help anybody,” he said, especially when victims are seeking compensation or justice.
Supporters of state legislation say that is precisely why action is needed now, even though truly sentient AI remains in the realm of science fiction. Popular culture often imagines advanced, human-like machines, such as those portrayed in films like I, Robot. But experts stress that current AI lacks consciousness, intent, or moral agency. Kalantry described the debate over AI rights as premature, saying that today’s systems “don’t have the characteristics of anything that’s human.”
The push for state-level laws is unfolding alongside a broader political conflict over who should regulate AI. President Donald Trump signed an executive order in December 2025 calling for limits on state-by-state AI regulation, arguing that a patchwork of laws could hinder innovation and burden startups. While the order did not address AI personhood directly, it reflects the administration’s preference for minimal regulation.
That tension highlights a deeper challenge. AI already raises unresolved legal questions about liability and consumer protection, along with concerns over misinformation and mental health harms, including what some researchers call “AI psychosis.” Froomkin noted that even basic questions, such as whether a chatbot should be treated as a product or a service, can dramatically affect how lawsuits and regulations apply.
For now, most experts agree that AI is far from being treated as a legal equal to humans. Forrest emphasized that the central goal remains ensuring human control over technology. As states move to block AI personhood, they are not only addressing a hypothetical future but also reinforcing a longstanding legal principle: tools, no matter how advanced, remain the responsibility of the people who create and deploy them.
