A long-running legal battle over the limits of government involvement in online speech moderation is drawing to a close, marking a significant moment in the ongoing debate over censorship and the First Amendment. The case, Missouri v. Biden—which reached the U.S. Supreme Court under the name Murthy v. Missouri—is being resolved through a 10-year consent decree that bars several federal agencies from pressuring social media companies to suppress lawful speech.
Under the agreement, key public health and security entities, including the Office of the Surgeon General, the Centers for Disease Control and Prevention, and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, will be barred from coercing or significantly influencing platforms’ content moderation decisions. The settlement aims to draw a clearer line between permissible government communication and unconstitutional interference in the digital public square.
The case has been closely watched since its inception, largely because it touched on the evolving relationship between federal agencies and major technology companies. Plaintiffs, including the states of Missouri and Louisiana along with a group of social media users, argued that government officials had overstepped constitutional boundaries by encouraging or pressuring platforms to remove or downrank certain viewpoints.
When the dispute reached the Supreme Court in 2024, the justices declined to rule on the broader constitutional questions. Instead, they determined that the plaintiffs lacked standing to seek a preliminary injunction, effectively siding with the Biden administration on procedural grounds. However, the ruling left the underlying issues unresolved, allowing the litigation to continue in lower courts.
That continuation proved consequential. Discovery efforts, combined with congressional investigations and the release of internal communications known as the “Twitter Files,” revealed a complex web of interactions among federal agencies, nonprofit organizations, and major social media platforms. These materials were cited by critics as evidence of coordinated efforts to shape online discourse, particularly around contentious topics such as the origins of COVID-19 and the reporting on Hunter Biden.
The legal landscape shifted again following the election of Donald Trump, who made free speech protections a central component of his administration’s early policy agenda. Shortly after taking office, Trump issued an executive order directing federal agencies to avoid actions that could infringe on constitutionally protected expression and to review past conduct for potential overreach. Members of his cabinet, including Marco Rubio, moved quickly to align agency practices with the directive.
The consent decree now formalizes those policy goals into enforceable legal constraints. It prohibits federal officials from using threats, implicit or explicit, to influence companies such as Facebook, Instagram, X, LinkedIn, and YouTube in their handling of user content. It also restricts behind-the-scenes efforts to steer moderation policies, a practice that had drawn scrutiny during earlier phases of the litigation.
Supporters of the settlement view it as a milestone in reasserting constitutional protections in the digital age. Eric Schmitt, who initiated the lawsuit during his tenure as Missouri’s attorney general, has framed the outcome as a structural check on what he and others describe as an entrenched network of federal influence over online platforms.
Legal advocates involved in the case argue that the agreement reinforces a fundamental principle: the government cannot circumvent the First Amendment by acting indirectly through private intermediaries. In their view, the settlement sets a precedent that could shape how future administrations interact with technology companies, particularly in areas involving public health, national security, and election integrity.
At the same time, the resolution does not end the broader controversy. Critics of the lawsuit have long maintained that coordination between government agencies and social media platforms can serve legitimate purposes, such as combating misinformation during emergencies or addressing foreign interference campaigns. They warn that overly restrictive rules could hinder those efforts.
Adding another layer of complexity, the current administration is now facing accusations similar to those raised in the original case. Advocacy groups and some lawmakers claim that federal officials have pressured platforms to limit content related to protests against Immigration and Customs Enforcement operations in major cities. While these allegations have not yet resulted in litigation on the scale of Missouri v. Biden, they underscore the persistent tension between free expression and government engagement with digital platforms.
The conclusion of this case represents both an endpoint and a starting point. It closes one of the most prominent legal challenges to government involvement in online speech while simultaneously setting the stage for future disputes. As social media continues to serve as a primary forum for political and cultural debate, the boundaries defined by this settlement are likely to be tested, interpreted, and contested for years to come.
