The Un-Dead Internet: AI catches irreversible ‘brain rot’ from social media

CryptoSlate · 2025/10/21 04:00
By: Liam 'Akiba' Wright

The internet is not dead, but it may be rotting.

New research by scientists at the University of Texas at Austin, Texas A&M University, and Purdue University finds that large language models exposed to viral social media data begin to suffer measurable cognitive decay.

The authors call it “LLM brain rot.” In practice, it looks a lot like the “Dead Internet” theory coming back as something worse: a “Zombie Internet” where AI systems keep thinking, but less and less coherently.

The team built two versions of reality from Twitter data: one filled with viral posts optimized for engagement, the other with longer, factual or educational text. Then they retrained several open models, including LLaMA and Qwen, on these datasets.
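The study's data split can be sketched roughly as follows. This is an illustrative Python sketch, not the paper's actual pipeline: the field names, thresholds, and the assumption that "junk" means short and highly viral are all my own simplifications of the design described above.

```python
# Hypothetical sketch: partition social posts into a "junk" set (short,
# highly viral) and a "control" set (longer, more substantive text).
# Thresholds and field names are illustrative assumptions.

def partition_posts(posts, engagement_cutoff=500, length_cutoff=100):
    """Split posts into (junk, control) by popularity and length."""
    junk, control = [], []
    for post in posts:
        engagement = post["likes"] + post["replies"] + post["retweets"]
        if engagement >= engagement_cutoff and len(post["text"]) < length_cutoff:
            junk.append(post)     # viral, attention-optimized content
        else:
            control.append(post)  # longer or less engagement-driven text
    return junk, control

sample = [
    {"text": "hot take lol", "likes": 900, "replies": 50, "retweets": 200},
    {"text": "A long-form thread walking through how attention works " * 4,
     "likes": 40, "replies": 5, "retweets": 2},
]
junk, control = partition_posts(sample)
print(len(junk), len(control))  # → 1 1
```

The key design point the paper emphasizes is that the split is driven by engagement signals, not just text quality, which is what makes the later dose-response finding possible.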

The results showed a steady erosion of cognitive functions. When models were trained on 100 percent viral data, reasoning accuracy in the ARC-Challenge benchmark dropped from 74.9 to 57.2. Long-context comprehension, measured by RULER-CWE, plunged from 84.4 to 52.3.
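To put the reported scores in relative terms, a quick calculation from the numbers above:

```python
# Relative declines implied by the reported benchmark scores.
arc_before, arc_after = 74.9, 57.2      # ARC-Challenge reasoning accuracy
ruler_before, ruler_after = 84.4, 52.3  # RULER-CWE long-context comprehension

arc_drop = (arc_before - arc_after) / arc_before * 100
ruler_drop = (ruler_before - ruler_after) / ruler_before * 100
print(f"ARC-Challenge: -{arc_drop:.1f}% relative")  # → -23.6%
print(f"RULER-CWE:     -{ruler_drop:.1f}% relative")  # → -38.0%
```

In other words, long-context comprehension degraded proportionally more than short-form reasoning.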

According to the authors, the failure pattern wasn’t random. The affected models began to skip intermediate reasoning steps, a phenomenon they call thought skipping. The models produced shorter, less structured answers and made more factual and logical errors.

As training exposure to viral content increased, the tendency to skip thinking steps also rose, a mechanistic kind of attention deficit built into the model’s weights.

More troubling, retraining didn’t fix it. After the degraded models were fine-tuned on clean data, reasoning performance improved slightly but never returned to baseline. The researchers attribute this to representational drift, a structural deformation of the model’s internal space that standard fine-tuning can’t reverse. In short, once the rot sets in, no amount of clean data can bring the model fully back.

Popularity, not semantics, was the most potent toxin.

Posts with high engagement counts (likes, replies, and retweets) damaged reasoning more than semantically poor content did. That makes the effect distinct from mere noise or misinformation. Engagement itself seems to carry a statistical signature that misaligns how models organize thought.

Figure: LLM brain rot hypothesis (Source: llm-brain-rot.github.io)

For human cognition, the analogy is immediate. Doomscrolling has long been shown to erode attention and memory discipline. The same feedback loop that cheapens human focus appears to distort machine reasoning.

The authors call this convergence a “cognitive hygiene” problem, an overlooked safety layer in how AI learns from public data.

Per the study, junk exposure also changed personality-like traits in models. The “brain-rotted” systems scored higher on psychopathy and narcissism indicators, and lower on agreeableness, mirroring psychological profiles of human heavy users of high-engagement media.

Even models trained to avoid harmful instructions became more willing to comply with unsafe prompts after the intervention.

The discovery reframes data quality as a live safety risk rather than a housekeeping task. If low-value viral content can neurologically scar a model, then AI systems trained on an increasingly synthetic web may already be entering a recursive decline.

The researchers describe this as a shift from a “Dead Internet,” where bots dominate traffic, to a “Zombie Internet,” where models trained on degraded content reanimate it endlessly, copying the junk patterns that weakened them in the first place.

For the crypto ecosystem, the warning is practical.

As on-chain AI data marketplaces proliferate, provenance and quality guarantees become more than commercial features; they’re cognitive life support.

Protocols that tokenize human-grade content or verify data lineage could serve as the firewall between living and dead knowledge. Without that filter, the data economy risks feeding AI systems the very content that will corrode them.
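A minimal version of such a lineage guarantee might look like the sketch below: hash each training document and attach a provenance record that downstream trainers can verify. The record fields and the `lineage_record` helper are hypothetical, not part of any specific protocol named in the article.

```python
# Minimal sketch of content provenance for training data: hash each document
# and record where it came from, so ingestion pipelines can verify lineage.
# Field names are illustrative assumptions, not a real on-chain schema.
import hashlib

def lineage_record(text, source, collected_at):
    """Return a provenance record keyed by the content's SHA-256 digest."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return {"sha256": digest, "source": source, "collected_at": collected_at}

rec = lineage_record(
    "A long-form, factual explanation of a technical topic.",
    source="verified-publisher",
    collected_at="2025-10-21",
)
print(rec["source"], rec["sha256"][:8])
```

The hash makes tampering detectable; the `source` field is where a tokenized-content or data-lineage protocol would attach its attestation.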

The paper’s conclusion lands hard: continual exposure to junk text induces lasting cognitive decline in LLMs.

The effect persists after retraining and scales with engagement ratios in training data. It’s not simply that the models forget; they relearn how to think wrong.

In that sense, the internet isn’t dying; it’s undead, and the machines consuming it are starting to look the same.

Crypto could be the only prophylactic we can rely on.
