AI and the Open Web: The Dark Internet, the Dead Internet, and the Bot Problem

By Yuriy Zhar · 7 min read
I think the public web is turning synthetic: bots flood content, trust drops, and humans retreat into private spaces. Clawdbot shows how identity hijacking scales. The next wave of platforms may chase proof of humanity, with privacy tradeoffs.

Claims that the internet is “dying” often sound like melodrama. When I hear the Dead Internet Theory, the idea that more and more content and engagement comes from bots instead of people, I get why people roll their eyes. It sounds like another internet panic story. But when I look at what’s happening, I don’t think the difference is philosophical. I think it’s mechanical. Bots don’t participate in the web the way humans do. They don’t show up with context and history. They simulate it. They inflate it. And more and more, they fill it.

There’s another way to describe the same shift that feels less like a conspiracy and more like a plain description: the Dark Forest theory. The idea is simple: open spaces become hostile, extractive, and noisy, so people retreat into private or semi-hidden spaces where identity, context, and trust still work. The forest looks empty not because life is gone, but because life is hiding. For me, “dark” and “dead” aren’t opposites. They’re the same direction seen from two angles: the public web fills with automation, and the human web goes underground.

The web used to be held together by something fragile and hard to fake: human presence. Not just content, but costly signals. Time spent. Reputation risked. Relationships built over years. Arguments where it actually mattered who you were and what you said yesterday. That’s what made a forum thread feel alive. That’s what made a comment section feel like a community. That’s what made a blog feel like a voice.

Bots can copy the surface now. They can copy pacing. They can copy the outrage cycle. They can copy the comforting reply. They can copy the “authentic” confession. But bots don’t have skin in the game. They don’t lose face. They don’t get tired. They don’t care if they ruin a space, because they were never really in that space to begin with. So the center of gravity shifts. Metrics start replacing meaning. Engagement starts replacing presence. And when the incentives reward volume, speed, and repetition, automation wins by default.

I think the Dead Internet framing keeps spreading because the numbers keep moving the wrong way. Imperva’s bot reports have shown for years that a huge portion of traffic is automated, and that malicious automation is a large and growing share. Their 2025 report describes malicious bots as 37% of all internet traffic in 2024, humans as 49%, and the rest as “good bots.” The important part for me isn’t the exact percentage; it’s what it does to everything downstream.

This doesn’t stay inside security dashboards. It changes the economics and the epistemology of the web. If cheap automation can mass-produce content, reviews, comments, and engagement, real creators get crowded out. If ranking systems can be fed synthetic signals at scale, discovery becomes a game of manipulation instead of a map to quality. And when users keep hitting spam, impersonation, and that uncanny feeling of interchangeable voices, they learn something that’s basically correct: you can’t assume a post, a review, or a debate involves real people. Once you can’t trust what’s human, you start treating everything as potentially fake. That cynicism is rational, but it also eats the web from the inside.

If I want one clean example of how bot-amplified ecosystems turn chaos into a business model, I look at the Clawdbot, Moltbot, OpenClaw fiasco. In early 2026, a viral “agentic” AI assistant project, first called Clawdbot, then Moltbot, then OpenClaw, blew up in visibility and at the same time turned into a magnet for impersonation, handle squatting, scams, and malware-laced knockoffs. The rebrand confusion didn’t just happen alongside the scams; it fed them. Scammers exploited naming drift and account transitions to push fake tokens and fake identities, while the hype cycle did the distribution work for them. Security reporting went further and described fake developer tooling used to deliver malware under the project’s name. For me the point isn’t “one project had a messy week.” The point is structural. In an environment saturated with automated amplification and low-friction identity theft, confusion becomes an attack surface. Bots don’t just spread content. They industrialize impersonation.

Even if you personally avoid obvious spam, I don’t think you can avoid the bigger feedback loops. Wired reported, based on TollBit’s tracking, that AI scrapers jumped from about one in 200 website visits in early 2025 to about one in 50 by Q4 2025, and that a meaningful share ignored robots.txt. Wired also reported on Cloudflare describing the scale of AI scraping pressure, including claims of hundreds of billions of blocked AI-bot requests over a short span. The details matter, but the shape matters more. Humans publish hoping humans will read. Bots scrape to feed models and products. Platforms optimize for engagement signals that bots can manufacture cheaply. Then humans leave because the space feels fake. And the emptier it gets, the easier it is for automation to dominate what remains.
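For context on why “ignored robots.txt” matters: the file itself is only a polite request, not an enforcement mechanism. Here is a minimal sketch, using user-agent tokens that some AI crawlers publicly document (GPTBot, ClaudeBot, CCBot); a real deny list would depend on which crawlers actually show up in your logs:

```
# robots.txt: a polite request, not a lock
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# everything else may still crawl
User-agent: *
Allow: /
```

Nothing in that file is enforced; honoring it is voluntary, which is exactly why the TollBit and Cloudflare numbers look the way they do.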

That’s the “dead” dynamic: synthetic activity replacing human activity. And it’s also the “dark” dynamic: humans retreating to spaces where bots have less access and where context is harder to counterfeit.

I think the real cost shows up in how people adapt. When the open web becomes scraped, ranked, baited, and impersonated, people stop writing in public. They move into private chats, invite-only groups, and closed communities. They share less, or they share like they’re performing for an algorithm instead of talking to people. I don’t see that as a moral failure. It looks like self-defense. But the loss is real. The public commons gets weaker. Discovery gets worse. New voices get buried. Online “culture” turns into disposable sludge that’s easy to generate, hard to verify, and not worth caring about.

And this is why I don’t think the internet gets “fixed” in the simple sense. Bots aren’t going away, and the tooling that enables them won’t be uninvented. What can change is the shape of the social layer on top. If the public web keeps rewarding imitation at scale, I think the next wave of social platforms won’t try to be maximally open. They’ll try to be maximally legible. The pitch will be simple: fewer anonymous drive-bys, fewer synthetic crowds, fewer fake “people,” and more costly proof that there’s a real human on the other side. That can look like social media where authenticity is the core feature, not a moderation afterthought. It’s an internet designed around who is speaking and what they can credibly prove, not just what went viral.

But I also think there’s an ugly trade hidden inside that promise. The fastest way to make authenticity cheap for users is to make identity expensive in privacy. We already route huge parts of the web through centralized sign-in rails and delegated access patterns. OAuth 2.0 is basically the backbone authorization framework here, and identity layers ride on top of it. If “prove you’re real” becomes the default gate, a lot of platforms will take the most convenient route: more ID checks, more document uploads, more biometric-ish verification, and more dependence on a handful of intermediaries that can vouch for you. The bot problem becomes the excuse for extracting more identity and more data, with the soothing story that it’s all “for trust.”
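To make the “centralized rails” point concrete, here is a minimal sketch of the OAuth 2.0 authorization code flow that most delegated sign-in builds on. The endpoints, client ID, and scopes below are hypothetical placeholders, not any specific provider’s API:

```typescript
// Minimal OAuth 2.0 authorization code flow (RFC 6749), as a sketch.
// All URLs, client IDs, and secrets below are hypothetical placeholders.

const AUTH_SERVER = "https://id.example.com"; // the centralized identity rail
const CLIENT_ID = "my-app";
const REDIRECT_URI = "https://my-app.example.com/callback";

// Step 1: send the user to the provider to prove who they are.
function buildAuthorizationUrl(state: string): string {
  const params = new URLSearchParams({
    response_type: "code",
    client_id: CLIENT_ID,
    redirect_uri: REDIRECT_URI,
    scope: "openid profile",
    state, // CSRF protection, echoed back on the redirect
  });
  return `${AUTH_SERVER}/authorize?${params}`;
}

// Step 2: after the provider redirects back with ?code=..., exchange it for tokens.
async function exchangeCodeForToken(code: string, clientSecret: string) {
  const res = await fetch(`${AUTH_SERVER}/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: REDIRECT_URI,
      client_id: CLIENT_ID,
      client_secret: clientSecret,
    }),
  });
  if (!res.ok) throw new Error(`token exchange failed: ${res.status}`);
  return res.json(); // access_token and friends; exact shape depends on the provider
}
```

Whatever the provider, the structure is the same: every “prove you’re real” check built this way terminates at whoever operates the authorize and token endpoints, and that is where the identity and the data pile up.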

At the same time, I think the counter-movement is already visible. People are trying to make trust portable without making privacy the price. Decentralized identity standards like DIDs and Verifiable Credentials are built around presenting cryptographically verifiable claims without one permanent identity provider acting as a choke point. Some “Web3” variants push further by anchoring identifiers or attestations to blockchains or other distributed ledgers, trying to make identity and reputation something you carry rather than something a platform rents back to you. In parallel, content authenticity efforts like C2PA try to make media provenance tamper-evident, so “real content” can be checked by cryptographic history instead of vibes or platform authority. And newer social protocols, whether federated like ActivityPub or portability-driven like the AT Protocol, keep circling the same core idea: decouple identity and distribution so one company can’t unilaterally define what’s real, visible, or allowed.
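To show what a portable, verifiable claim roughly looks like, here is a sketch shaped like a W3C Verifiable Credential with DIDs for the issuer and subject. The DIDs, dates, claim, and proof values are illustrative placeholders, and real credentials vary by DID method and signature suite:

```typescript
// A W3C Verifiable Credential, sketched as a typed object.
// DIDs, dates, the claim, and proof values are illustrative placeholders.

interface VerifiableCredential {
  "@context": string[];
  type: string[];
  issuer: string;                // a DID, not a platform account
  issuanceDate: string;
  credentialSubject: Record<string, unknown>;
  proof: {
    type: string;                // signature suite used by the issuer
    created: string;
    verificationMethod: string;  // key reference resolvable from the issuer's DID
    proofValue: string;
  };
}

const personhoodCredential: VerifiableCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:example:issuer123",
  issuanceDate: "2026-01-15T00:00:00Z",
  credentialSubject: {
    id: "did:example:holder456",
    isHuman: true,               // the claim being attested, nothing more
  },
  proof: {
    type: "DataIntegrityProof",
    created: "2026-01-15T00:00:00Z",
    verificationMethod: "did:example:issuer123#key-1",
    proofValue: "z3FXQ...placeholder",
  },
};
```

The property that matters is that anyone can check the signature against key material resolved from the issuer’s DID, without asking a central platform for permission. That is what “trust you carry with you” means mechanically.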

I don’t think any of these paths is guaranteed. Each one fails in a different way. Centralized authenticity can harden into surveillance and document-hoovering. Decentralized authenticity can drift into complexity, fragmentation, and crypto theater. Provenance systems can be stripped, bypassed, or ignored at the UI layer. The part I find most interesting is that the bot flood forces the decision. Platforms and users are going to pick some model of trust. The open question is what we optimize for when we choose, and which tradeoffs people will still accept once the web is visibly crowded with synthetic humans.
