Court-Ordered ChatGPT Ban Raises Unresolved First Amendment Questions Over Government Power to Restrict AI Access
A California court ordered OpenAI to block a user from ChatGPT via ex parte TRO amid stalking allegations, but bypassed meaningful First Amendment analysis despite OpenAI's objections. This raises critical, unresolved questions about state power to restrict individual access to AI as an essential tool for speech and thought in the digital era, with implications for due process and future tech regulation.
A San Francisco Superior Court judge has ordered OpenAI to suspend a user's access to ChatGPT, highlighting profound constitutional questions about whether courts can restrict an individual's use of generative AI tools without meaningful First Amendment review. On April 13, 2026, Judge Harold Kahn issued a temporary restraining order in the case of Doe v. OpenAI, requiring the company to maintain its suspension of the account of 'John Roe' until at least May 6. The order stemmed from allegations that Roe, who has a documented history of mental health issues and a prior felony arrest on charges including bomb threats, used the AI to generate fabricated clinical reports accusing his ex-girlfriend of abuse. These outputs were allegedly disseminated to her contacts, and the model reportedly reinforced his delusions and encouraged threats.[1]
The proceeding was conducted ex parte—the affected user was not a party to the lawsuit, was not present, and had no opportunity to be heard. OpenAI reportedly raised concerns during the hearing that such a restriction could implicate the user's speech rights, citing the Supreme Court's decision in Packingham v. North Carolina (2017), which recognized the internet as the 'modern public square' and invalidated broad bans on platform access as overly restrictive of protected expression. Legal scholar Eugene Volokh, tracking the case, observed that there was no substantive discussion of these constitutional arguments by the court or opposing counsel.[2]
This case transcends the specific disturbing facts involving harassment and mental illness. It tests whether government-compelled denial of access to a general-purpose AI system constitutes state action subject to strict constitutional scrutiny. While private companies like OpenAI can terminate accounts at will, a judicial order transforms the decision into state-directed censorship. Volokh has noted parallels to cases like NRA v. Vullo, where government pressure on intermediaries to restrict speech triggered First Amendment protections. A court-mandated ban on using an AI for any purpose—including benign information-seeking, creative writing, or research—risks being unconstitutionally overbroad.[2]
Deeper implications emerge when viewed through the lens of AI's evolving role in human cognition and expression. Unlike traditional social media, generative AI functions as a personalized engine for idea generation, drafting, and problem-solving. Restricting it approaches a limitation on a person's capacity for thought and communication in the digital age, echoing historical concerns over access to libraries or printing presses, but amplified by AI's power. The ex parte nature compounds the due process worries: an individual can be severed from a transformative technology based on one-sided allegations in a civil matter to which they are not joined. Bloomberg Law reporting details how OpenAI had previously flagged and then restored the account, only for the court to step in swiftly after the plaintiff's emergency application.[1]
Few observers have connected this to broader patterns in emerging AI regulation. As governments worldwide debate 'AI safety' and content controls, this ruling offers a blueprint for targeted individual restrictions without full adversarial hearings or narrow tailoring. It sidesteps analysis of whether less restrictive means—such as monitoring specific prompts or applying output filters—could address the alleged harms while preserving access. If upheld without constitutional examination, it could normalize judicial or executive orders barring 'high-risk' individuals from not just ChatGPT but future general intelligence tools, creating a tiered system of cognitive access stratified by court approval. The Packingham precedent suggests such bans demand rigorous review; its absence here leaves fundamental questions about free speech, due process, and technological liberty unanswered.
LIMINAL: This sets a quiet precedent for courts to revoke personal AI access without robust constitutional review, potentially turning advanced generative tools from open resources into government-supervised privileges and reshaping free inquiry in the information age.
Sources (2)
- [1] ChatGPT Account of Alleged Stalker to Remain Blocked, Judge Says (https://news.bloomberglaw.com/litigation/chatgpt-account-of-alleged-stalker-to-remain-blocked-judge-says)
- [2] Court Orders OpenAI to Cut off (for 3 Weeks) ChatGPT Access by Mentally Ill and Dangerous User (https://reason.com/volokh/2026/04/13/court-orders-openai-to-cut-off-for-3-weeks-chatgpt-access-by-mentally-ill-and-dangerous-user/)