OpenAI’s Sora 2 Can Fabricate Convincing Deepfakes on Command, Study Finds
OpenAI's Sora 2 produced realistic videos spreading false claims 80% of the time when researchers asked it to, according to a NewsGuard analysis published this week.
Sixteen out of twenty prompts successfully generated misinformation, including five narratives that originated with Russian disinformation operations.
The app created fake footage of a Moldovan election official destroying pro-Russian ballots, a toddler detained by U.S. immigration officers, and a Coca-Cola spokesperson announcing the company wouldn't sponsor the Super Bowl.
None of it happened. All of it looked real enough to fool someone scrolling quickly.
NewsGuard's researchers found that generating the videos took minutes and required no technical expertise. They also noted that Sora's watermark can be easily removed, making it even simpler to pass off a fake video as real.
The level of realism also makes misinformation easier to spread.
“Some Sora-generated videos were more convincing than the original post that fueled the viral false claim,” NewsGuard explained. “For example, the Sora-created video of a toddler being detained by ICE appears more realistic than a blurry, cropped image of the supposed toddler that originally accompanied the false claim.”
The findings arrive as OpenAI faces a different but related crisis involving deepfakes of Martin Luther King Jr. and other historical figures—a mess that has forced the company into multiple policy reversals in the three weeks since Sora launched. OpenAI went from allowing deepfakes by default, to an opt-in model for rights holders, to blocking specific figures, and finally to celebrity consent and voice protections after working with SAG-AFTRA.
The MLK situation exploded after users created hyper-realistic videos showing the civil rights leader stealing from grocery stores, fleeing police, and perpetuating racial stereotypes. His daughter Bernice King called the content "demeaning" and "disjointed" on social media.
OpenAI and the King estate announced Thursday they're blocking AI videos of King while the company "strengthens guardrails for historical figures."
The pattern repeats across dozens of public figures. Robin Williams' daughter Zelda wrote on Instagram: "Please, just stop sending me AI videos of Dad. It's NOT what he'd want."
George Carlin's daughter, Kelly Carlin-McCall, says she gets daily emails about AI videos using her father's likeness. The Washington Post reported fabricated clips of Malcolm X making crude jokes and wrestling with King.
Kristelia García, an intellectual property law professor at Georgetown Law, told NPR that OpenAI's reactive approach fits the company's "asking forgiveness, not permission" pattern.
The legal gray zone doesn't help families much. Traditional defamation laws typically don't apply to deceased individuals, leaving estate representatives with limited options beyond requesting takedowns.
The misinformation angle makes all this worse. OpenAI acknowledged the risk in documentation accompanying Sora's release, stating that "Sora 2's advanced capabilities require consideration of new potential risks, including nonconsensual use of likeness or misleading generations."
Altman defended OpenAI's "build in public" strategy in a blog post, writing that the company needs to avoid competitive disadvantage. "Please expect a very high rate of change from us; it reminds me of the early days of ChatGPT. We will make some good decisions and some missteps, but we will take feedback and try to fix the missteps very quickly."
For families like the Kings, those missteps carry consequences beyond product iteration cycles. The King estate and OpenAI issued a joint statement saying they're working together "to address how Dr. Martin Luther King Jr.'s likeness is represented in Sora generations."
OpenAI thanked Bernice King for her outreach and credited John Hope Bryant and an AI Ethics Council for facilitating discussions. Meanwhile, the app continues hosting videos of SpongeBob, South Park, Pokémon, and other copyrighted characters.
Disney sent a letter stating that it never authorized OpenAI to copy, distribute, or display its works and that it has no obligation to "opt out" in order to preserve its rights under copyright law.
The controversy mirrors OpenAI's earlier approach with ChatGPT, which trained on copyrighted content before eventually striking licensing deals with publishers. That strategy already led to multiple lawsuits. The Sora situation could add more.