By Jared Schroeder
Texas this month became the first state to criminalize deepfakes — the practice of using manipulated video or other digital fabrications to make it appear that people said or did things they never actually said or did. What concerns Texas lawmakers are deepfake videos, especially those used for political purposes.
It’s a shame such good intentions, designed to thwart an emerging threat to democracy, are likely to be struck down by the courts. Without such a law, partisans can use artificial intelligence to create deepfake videos so convincing that we literally will not be able to believe our own eyes.
While text can easily be used to mislead, video clips tend to be more believable. Video puts the viewer in the moment. If politically motivated deepfakes become commonplace, our trust in the information we encounter will falter. We simply will not know whether what we are seeing actually happened. Truth could become whatever the deepfake puppeteers want it to be.
Deepfakes, which are becoming more popular and believable, are only part of the massive amount of false information already flowing online. We already had a truth problem; misleading videos will only make it worse.
So, why is the Texas law likely to fail? The way we have come to interpret First Amendment safeguards for free expression severely limits lawmakers’ options to thwart emerging technologies like deepfakes. There is very little lawmakers can do without crossing into protected First Amendment territory.
The state’s deepfakes law, which was added to the election code, criminalizes “creating a deepfake video” or causing “a deepfake video to be published or distributed within 30 days of an election.”
Such restrictions seem reasonable enough. But they cross free expression lines.
First, the U.S. Supreme Court has protected intentionally false speech. Texas’s deepfake law criminalizes distributing a video that is essentially a lie for political benefit.
In 2012, in United States v. Alvarez, the justices struck down the Stolen Valor Act, which criminalized falsely claiming to have earned military honors. “The remedy for speech that is false is speech that is true,” the court reasoned. “This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth.”
The court came to a very similar conclusion in an earlier defamation case, Gertz v. Robert Welch, Inc., finding that, “under the First Amendment, there is no such thing as a false idea. However pernicious an opinion may seem, we depend for its correction not on the conscience of judges and juries, but on the competition of other ideas.”
It is difficult to find any redeeming value in a fake video that is intended to mislead people, but our free-expression system generally is not based on the perceived value of a message.
Second, in Citizens United v. Federal Election Commission, the court struck down a law that limited campaign expenditures by certain groups. While a minority of justices argued the law helped democratic society by protecting it from being distorted by certain speakers, five justices found the law unconstitutionally limited the exchange of ideas.
Texas’s law seeks to do something similar. It seeks to safeguard political discussion by limiting certain speech. The Supreme Court has long rejected such efforts.
Finally, the law makes publishers liable for making videos available. This would seem to mean that Facebook or YouTube, by allowing a deepfake to be published on their forums, would be liable. That appears to conflict with Section 230 of the Communications Decency Act, the federal law that shields online forums from liability for how their services are used.
So what can be done about political deepfakes? There are no easy options.
We can prioritize information literacy — teaching people how to spot deepfakes and how to verify what they see. We can also wait for detection technology to catch up; both the government and the private sector are working to create deepfake detection software.
Finally, we can rethink how we understand free expression — to allow some limited restrictions on messages that are created purely to deceive and manipulate citizens. This approach, however, is problematic, since the Supreme Court has generally rejected any attempt to limit the flow of information.
Jared Schroeder is an assistant professor of journalism at Southern Methodist University.