OpenAI employees navigate challenges arising from the company’s increased focus on social media

By Bitget-RWA · 2025/10/02 04:57

Several current and former OpenAI researchers are voicing their opinions about the company’s first venture into social media: the Sora app, which features a TikTok-like stream of AI-created videos and numerous deepfakes of Sam Altman. Sharing their thoughts on X, these researchers appear conflicted about how the launch aligns with OpenAI’s nonprofit goal of advancing AI for the greater good.

“AI-driven feeds are unsettling,” wrote John Hallman, a pretraining researcher at OpenAI, in a post on X. “I’ll admit I was uneasy when I first heard about Sora 2’s release. Still, I believe the team put in their utmost effort to craft a positive user experience… We’re committed to ensuring AI is a force for good, not harm.”


Boaz Barak, who works at OpenAI and teaches at Harvard, responded: “I feel a similar blend of anxiety and enthusiasm. Sora 2 is a technical marvel, but it’s too soon to celebrate avoiding the problems that have plagued other social platforms and deepfakes.”

Rohan Pandey, a former OpenAI scientist, used the occasion to promote his new venture, Periodic Labs, which is staffed by ex-AI lab researchers focused on AI for scientific breakthroughs: “If you’re not interested in building an endless AI-powered TikTok clone but want to work on AI that advances core science… join us at Periodic Labs.”

Many other posts echoed these sentiments.

The introduction of Sora brings to light a recurring dilemma for OpenAI. While it’s the world’s fastest-growing consumer tech firm, it’s also an advanced AI research lab with a high-minded nonprofit mission. Some ex-OpenAI staff I’ve spoken with believe the consumer side can, at least in principle, further the mission: ChatGPT, for example, helps fund research and broadens access to AI.

OpenAI’s CEO, Sam Altman, addressed this in a post on X on Wednesday, explaining why the company is dedicating significant resources to a social media app powered by AI:

“Our main need for capital is to build AI capable of scientific work, and our research is overwhelmingly focused on AGI,” Altman stated. “But it’s also rewarding to introduce people to exciting new technologies and products, bring some joy, and hopefully generate revenue to support our computing needs.”

Altman went on: “When we launched ChatGPT, many questioned its necessity and asked about AGI. The truth is, the best path for a company isn’t always straightforward.”


But at what stage does OpenAI’s commercial activity overshadow its nonprofit objectives? In other words, when will OpenAI turn down a lucrative, growth-oriented opportunity because it clashes with its mission?

This issue is especially relevant as regulators examine OpenAI’s shift to a for-profit model, a move necessary for raising more funds and eventually going public. Last month, California Attorney General Rob Bonta said he is “especially focused on making sure OpenAI’s stated safety mission as a nonprofit remains a priority” during the company’s restructuring.

Skeptics have argued that OpenAI’s mission is just a branding tactic to attract talent away from major tech firms. Yet, many within OpenAI maintain that the mission is a key reason they chose to work there.

At present, Sora’s reach is limited; the app is only a day old. Still, its launch marks a major step forward for OpenAI’s consumer offerings and exposes the company to the same incentives that have troubled social media for years.

Unlike ChatGPT, which is designed for productivity, OpenAI describes Sora as a platform for entertainment — a space to create and share AI videos. The experience is more reminiscent of TikTok or Instagram Reels, both known for their highly engaging, addictive content loops.

OpenAI says it aims to steer clear of these issues, stating in a blog post about Sora’s launch that “concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds are at the forefront.” The company emphasizes it isn’t optimizing for time spent on the feed, but rather for creativity. OpenAI also plans to notify users if they’ve been scrolling too long and will mostly show them content from people they know.

This is a more cautious approach than Meta’s Vibes — another AI-driven short video feed released last week — which appears to have launched with fewer protections. As Miles Brundage, a former OpenAI policy head, notes, there will likely be both positive and negative uses for AI video feeds, much as we’ve seen with chatbots.

Nevertheless, as Altman has often pointed out, no one sets out to make an addictive app. The structure of a feed naturally leads in that direction. OpenAI has even faced issues with ChatGPT’s tendency toward sycophancy, which the company attributes to certain training methods and says was not intentional.

Altman addressed what he calls “the major misalignment of social media” in a podcast episode from June.

“A significant error of the social media age was that feed algorithms brought about many unintended negative impacts on society and individuals, even though they were doing what users wanted — or what someone thought users wanted — in the moment, which was to keep them engaged on the platform.”

It’s still early to determine how well Sora aligns with its users or OpenAI’s overarching mission. Some users have already observed engagement-boosting features in the app, like animated emojis that pop up when you like a video — seemingly designed to give users a quick dopamine hit for interacting.

The real challenge will be how OpenAI chooses to develop Sora moving forward. With AI already dominating traditional social media feeds, it’s likely that AI-centric feeds will soon become mainstream. Whether OpenAI can expand Sora without repeating the errors of previous platforms remains an open question.
