The Un-Dead Internet: AI catches irreversible ‘brain rot’ from social media
The internet is not dead, but it may be rotting.
New research by scientists at the University of Texas at Austin, Texas A&M University, and Purdue University finds that large language models exposed to viral social media data begin to suffer measurable cognitive decay.
The authors call it “LLM brain rot.” In practice, it looks a lot like the “Dead Internet” theory coming back as something worse: a “Zombie Internet” where AI systems keep thinking, but less and less coherently.
The team built two versions of reality from Twitter data: one filled with viral posts optimized for engagement, the other with longer, factual or educational text. Then they retrained several open models, including LLaMA and Qwen, on these datasets.
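To make the setup concrete, here is a minimal sketch of how such a split might be built, assuming a list of posts with engagement metadata. The field names, thresholds, and selection rule are illustrative assumptions, not the paper’s exact recipe:

```python
# Hypothetical sketch: partition a corpus of posts into a "junk" bucket
# (short, highly viral) and a "control" bucket (longer, substantive),
# loosely mirroring the study's engagement-based selection.

def engagement_score(post: dict) -> int:
    """Sum the popularity signals attached to a post (fields are assumed)."""
    return post.get("likes", 0) + post.get("replies", 0) + post.get("retweets", 0)

def split_corpus(posts: list[dict], viral_cutoff: int = 500, min_len: int = 100):
    junk, control = [], []
    for post in posts:
        text = post.get("text", "")
        if engagement_score(post) >= viral_cutoff and len(text) < min_len:
            junk.append(text)     # short and high-engagement
        elif len(text) >= min_len:
            control.append(text)  # longer, lower-virality text
    return junk, control
```

Mixing the two buckets at different ratios (100 percent junk, 50/50, and so on) is what allows the dose-response effect described next to show up.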
The results showed a steady erosion of cognitive functions. When models were trained on 100 percent viral data, reasoning accuracy on the ARC-Challenge benchmark dropped from 74.9 to 57.2. Long-context comprehension, measured by RULER-CWE, plunged from 84.4 to 52.3.
According to the authors, the failure pattern wasn’t random. The affected models began to skip intermediate reasoning steps, a phenomenon they call “thought skipping.” The models produced shorter, less structured answers and made more factual and logical errors.
As training exposure to viral content increased, the tendency to skip thinking steps also rose, a mechanistic kind of attention deficit built into the model’s weights.
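As a rough sketch of how that tendency could be measured, one could count explicit reasoning steps in a model’s answers and track how many fall short of an expected chain length. The step heuristic here is an illustrative assumption, not the study’s actual diagnostic:

```python
import re

# Illustrative proxy for "thought skipping": treat numbered items, "Step N"
# lines, or bullet lines in a chain-of-thought answer as reasoning steps,
# falling back to non-empty lines when no markers are present.

def count_reasoning_steps(answer: str) -> int:
    lines = [ln.strip() for ln in answer.splitlines() if ln.strip()]
    marked = [ln for ln in lines if re.match(r"(step\s*\d+|\d+[.)]|-)", ln, re.I)]
    return len(marked) if marked else len(lines)

def skip_rate(answers: list[str], expected_steps: int) -> float:
    """Fraction of answers with fewer reasoning steps than expected."""
    if not answers:
        return 0.0
    short = sum(1 for a in answers if count_reasoning_steps(a) < expected_steps)
    return short / len(answers)
```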
More troubling, retraining didn’t fix it. After the degraded models were fine-tuned on clean data, reasoning performance improved slightly but never returned to baseline. The researchers attribute this to representational drift, a structural deformation of the model’s internal space that standard fine-tuning can’t reverse. In short, once the rot sets in, no amount of clean data can bring the model fully back.
Popularity, not semantics, was the most potent toxin.
Posts with high engagement counts (likes, replies, and retweets) damaged reasoning more than semantically poor content did. That makes the effect distinct from mere noise or misinformation. Engagement itself seems to carry a statistical signature that misaligns how models organize thought.
For human cognition, the analogy is immediate. Doomscrolling has long been shown to erode attention and memory discipline. The same feedback loop that cheapens human focus appears to distort machine reasoning.
The authors call this convergence a “cognitive hygiene” problem, an overlooked safety layer in how AI learns from public data.
Per the study, junk exposure also changed personality-like traits in models. The “brain-rotted” systems scored higher on psychopathy and narcissism indicators, and lower on agreeableness, mirroring psychological profiles of human heavy users of high-engagement media.
Even models trained to avoid harmful instructions became more willing to comply with unsafe prompts after the intervention.
The discovery reframes data quality as a live safety risk rather than a housekeeping task. If low-value viral content can neurologically scar a model, then AI systems trained on an increasingly synthetic web may already be entering a recursive decline.
The researchers describe this as a shift from a “Dead Internet,” where bots dominate traffic, to a “Zombie Internet,” where models trained on degraded content reanimate it endlessly, copying the junk patterns that weakened them in the first place.
For the crypto ecosystem, the warning is practical.
As on-chain AI data marketplaces proliferate, provenance and quality guarantees become more than commercial features; they’re cognitive life support.
Protocols that tokenize human-grade content or verify data lineage could serve as the firewall between living and dead knowledge. Without that filter, the data economy risks feeding AI systems the very content that will corrode them.
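As a simple illustration of what that filter could look like at the data layer, the sketch below admits a document into a training corpus only if its content hash appears in a registry of attested sources. The registry here is a plain set standing in for whatever on-chain attestation a real protocol would expose; everything about the interface is hypothetical:

```python
import hashlib

def content_hash(text: str) -> str:
    """Stable fingerprint for a document's exact contents."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def filter_attested(documents: list[str], attested: set[str]) -> list[str]:
    """Keep only documents whose hash is registered as verified provenance."""
    return [doc for doc in documents if content_hash(doc) in attested]

# Usage: in practice the registry would be populated from an attestation
# source (for example, a lineage contract); here it is built by hand.
registry = {content_hash("a verified, human-written document")}
corpus = ["a verified, human-written document", "an unverified viral post"]
print(filter_attested(corpus, registry))  # -> only the attested document
```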
The paper’s conclusion lands hard: continual exposure to junk text induces lasting cognitive decline in LLMs.
The effect persists after retraining and scales with engagement ratios in training data. It’s not simply that the models forget; they relearn how to think wrong.
In that sense, the internet isn’t dying; it’s undead, and the machines consuming it are starting to look the same.
Crypto could be the only prophylactic we can rely on.