
Bot-vs-human frame

The bot-vs-human frame is the assumption that the most important question an origin can answer is "is this request from a person or a program?". Cloudflare's 2026-04-21 argument is that the frame is increasingly misleading: the meaningful question is whether the client is behaving in ways the site can support — intent and behavior, not species.

The pull-quote

"There are wanted bots and there are unwanted humans."

The statement flips the bot-management framing. A search-engine crawler that returns attributed traffic is a wanted bot. A human running credential-stuffing scripts through a residential proxy is an unwanted human. Neither case is resolved by answering "bot or human"; each turns on behavioral questions: is this attack traffic, is the crawler's load proportional to the attribution it returns, are ads being gamed.
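The flipped framing can be sketched as an admission policy that never consults a bot/human flag. Everything below is illustrative — the `Request` fields and thresholds are hypothetical stand-ins for signals an origin might compute, not any real Cloudflare API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    # All fields are hypothetical stand-ins for origin-side signals.
    matches_attack_signature: bool = False
    is_crawler: bool = False
    crawl_rate: float = 0.0          # requests/min from this client
    attributed_traffic: float = 0.0  # visits/min the client sends back
    ad_interaction_anomalous: bool = False

def admit(req: Request) -> str:
    """Behavior-based decision; 'bot or human' never appears."""
    if req.matches_attack_signature:
        return "block"      # unwanted, whether human- or script-driven
    if req.is_crawler and req.crawl_rate > 10 * max(req.attributed_traffic, 1.0):
        return "throttle"   # crawl load out of proportion to attribution
    if req.ad_interaction_anomalous:
        return "challenge"  # ads being gamed
    return "serve"          # wanted bots and wanted humans alike
```

Note that every branch tests a behavior; the species of the client is irrelevant to the outcome.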

Why the frame is breaking down

Historically, "bot" traffic and "human" traffic were architecturally distinct: humans used browsers (user agents) that rendered pages, ran JavaScript, fetched subresources, viewed ads. Bots fetched raw data without rendering. The browser was the mediation layer between publisher pixel control ("present my content the way I designed it") and user agency ("let me read it the way I want").

Four recent shifts collapse that distinction:

  1. AI assistants acting on a human's behalf. A user asking an assistant to book concert tickets is indistinguishable, from the origin's point of view, from a human doing it manually.
  2. AI agents bypass the browser entirely. They fetch raw data without rendering, running scripts, or viewing ads — breaking the implicit monetization contract.
  3. Zero-trust corporate proxies route employee traffic, making human requests look like bot infrastructure.
  4. Automated accessibility (screen readers, voice-driven browsing) makes the browser-rendering assumption unreliable even for humans.

The result: any "bot detector" that trains on legacy signals misclassifies in both directions (a wanted AI assistant blocked as a bot; attack traffic through a residential proxy allowed as human).

What the better question is

The post argues origins should ask about behavior, not identity: is this traffic attack-shaped, is the crawler proportional to attribution, is this user expected from this country, are ads being gamed. These are all questions that an anonymous credential of the right shape — "I'm not abusive, I'm under rate limit, I have good history with this service" — can answer without requiring identity.
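The interface such a credential exposes can be sketched as a toy verifier. Real Privacy Pass successors use blinded cryptographic tokens; this dict-based stand-in (attribute names included) is hypothetical and only illustrates the shape of the question the origin gets to ask:

```python
# Attributes an origin might require; names are illustrative.
KNOWN_ATTRIBUTES = {"not_abusive", "under_rate_limit", "good_history"}

def verify_credential(credential: dict, required: set) -> bool:
    """Accept iff the credential proves every required scoped attribute.

    No user identifier is ever consulted: by construction the
    credential carries none, so the check is anonymity-preserving.
    """
    assert "user_id" not in credential  # anonymity invariant
    proven = set(credential.get("attributes", [])) & KNOWN_ATTRIBUTES
    return required <= proven
```

The origin learns only "this client is under its rate limit and not abusive," never who the client is — which is exactly enough to answer the behavioral questions above.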

See concepts/identity-vs-behavior-proof for the fuller treatment of "prove behavior, not identity" as the design posture.

Two client populations, two answers

The post splits the post-bot-vs-human space into two populations:

  • Identifiable infrastructure (search crawlers, AI training pipelines, cloud platforms): tolerate attribution because reliable attributed access is worth it. Served by Web Bot Auth — cryptographic identity via HTTP Message Signatures.
  • Distributed low-volume clients (humans, AI assistants, researchers, scrapers behind residential proxies): need anonymity while proving behavior. Served by Privacy Pass successors (ARC / ACT) — anonymous credentials proving scoped attributes.

The two branches are complementary, not competing: one population values identity, the other values anonymity, and the web-protection architecture needs distinct primitives for each.
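The two-primitive split amounts to routing on what the request carries. A minimal dispatch sketch, with verification stubbed out: the `Signature` / `Signature-Input` headers are the real ones defined by HTTP Message Signatures (RFC 9421), while the `private-token` key and the routing logic itself are illustrative assumptions, not a documented API:

```python
def route_auth(headers: dict) -> str:
    """Route a request to the primitive suited to its population.

    Identifiable infrastructure signs requests (HTTP Message
    Signatures); distributed low-volume clients redeem anonymous
    tokens. Actual cryptographic verification is stubbed out here.
    """
    if "signature" in headers and "signature-input" in headers:
        # Web Bot Auth branch: verify the signature against the
        # crawler's published public key (stub).
        return "web-bot-auth"
    if "private-token" in headers:
        # Privacy Pass branch: redeem the anonymous credential (stub).
        return "privacy-pass"
    # Neither primitive present: fall back to behavioral signals alone.
    return "unauthenticated"
```

The point of the sketch is that the branches never merge: the signing path yields an attributed identity, the token path yields only scoped attributes, and neither is asked to do the other's job.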
