Urgent Demand to Withdraw the Sora 2 AI Video Tool from Public Release
Sam Altman, CEO
OpenAI
1455 3rd Street
San Francisco, CA 94158
Subject: Urgent Demand for the Immediate Withdrawal of the Sora 2 Video Generation Model from Public Consumption
Dear Sam Altman,
We write to express profound alarm and urge the immediate and comprehensive withdrawal of the Sora 2 video generation model from all public-facing platforms, including the Sora iOS app, Sora for Android app, sora.com, and API access via Azure AI Foundry.
The rushed release of Sora 2 exemplifies a consistent and dangerous pattern: OpenAI racing to market with products that are inherently unsafe or lack needed guardrails. It demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm.
The Systemic Failure of Governance and Guardrails
Sora 2’s deployment continues OpenAI’s rush to be “the first” to market with a variety of AI breakthroughs, with little regard for the public good. The company’s reactive “ask forgiveness, not permission” business model, applied to everything from its “nonprofit structuring” to product releases, has been widely criticized by company whistleblowers, AI governance experts, the Motion Picture Association, SAG-AFTRA, and legal scholars. This approach has caused multiple harms:
1. Massive Disregard for Name, Image, and Likeness (NIL) Rights:
- The model’s guardrails failed to prevent deepfakes of living individuals who had not consented, such as actor Bryan Cranston. Further research has shown that its “opt-in” system is easily bypassed and unreliable.
- The release allowed the mass creation of deepfake videos of deceased celebrities, including but not limited to Robin Williams, George Carlin, and Martin Luther King Jr. Only direct complaints from their estates compelled a temporary, reactive pause, demonstrating that personal dignity and legacy are secondary to the pursuit of product growth and company profit.
2. The Rise of Niche Digital Harassment and Fetish Content:
- Sora 2 has enabled a severe nonconsensual-imagery problem by allowing users to easily generate non-nude, niche fetish content (including videos depicting pregnancy, inflation, vore, and giantess scenarios) using the likenesses of real women who opted into the “cameo” feature. Because this content is not nude, this new form of digital harassment sidesteps safeguards aimed at nonconsensual nude imagery while still making the platform hostile and unsafe for users. It is worth underscoring that women bear the disproportionate brunt of such harmful sexualized content.
- Furthermore, the app enabled gray-area content, including videos of young men (potentially minors) interacting with a popular porn star and content created by adults depicting pre-teen girls, revealing a dangerous lack of moderation around age-inappropriate outputs that fall short of CSAM.
3. Total Failure of Technical Protections:
- The initial “opt-out” policy for copyright holders was fundamentally flawed, as it allowed clips such as SpongeBob SquarePants cooking meth to proliferate.
- The model’s stated safeguards have proven non-existent in practice. Researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the “mandatory” safety watermarks can be removed in under four minutes with free online tools. This renders the provenance controls useless and validates the fear that there are few, if any, effective redlines, since clever prompting can easily circumvent the content filters.
The Attack on Shared Reality and the Creation of a Post-Reality Dystopia
The danger posed by Sora 2 extends beyond legal and ethical violations. It represents a direct and existential threat to our shared information ecosystem and cognitive health. The platform’s choice to package this powerful generative engine within an addictive, endless-scroll, TikTok-style interface compounds the problem, intensifying the assault on visual truth.
This experience is akin to sugary cereal for the brain, prioritizing immediate, frictionless content creation and a dopamine rush over engagement with reality. By making the creation of hyper-realistic, personalized fantasy effortless, Sora 2 actively undermines our shared understanding of truth and erodes consensus around objective visual evidence.
In the political context, especially leading up to the 2026 midterm elections, Sora 2 threatens further erosion of visual fact. A NewsGuard investigation shows the danger is not theoretical: Sora 2 successfully generated convincing, news-style videos for 16 of 20 known false claims tested, including Russian disinformation and fictional reports of police and political actions. It provides a scalable, frictionless tool for creating and disseminating deepfake propaganda that can be weaponized to shape emotional first impressions, rendering after-the-fact debunking ineffective.
There is no doubt that Sora makes “fun” videos, and that has some social utility. But that utility pales in comparison to the unavoidable degradation of our information ecosystem, the reliability of visual evidence, and basic norms of truth.
We Demand the Immediate, Indefinite Suspension of Sora 2’s Public Access
We urge OpenAI to recognize that the societal risks introduced by Sora 2, as evidenced by the immediate breaches of safety, copyright, and ethical norms, far outweigh the current benefits of its public access. As the Motion Picture Association has plainly stated about one aspect of the threat posed by Sora 2, it is the responsibility of OpenAI, not rights holders, to prevent infringement. The same goes for the broader public risks.
OpenAI must commit to a measured, ethical, and transparent pre-deployment process that provides guarantees against the profound social risks before any public release. We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines.
We look forward to your prompt response confirming the withdrawal of Sora 2.
Sincerely,
/s/
J.B. Branch
Big Tech Accountability Advocate
Public Citizen
CC:
U.S. House of Representatives Oversight Committee
U.S. Senate Oversight Committee
U.S. Federal Trade Commission