CONCEPT Cited by 1 source
Client-server model¶
The client-server model is the foundational deployment pattern of the Internet: a client makes a request to a server, which responds with a resource. The architecture is deliberately asymmetric and stateless at the request layer — what made the Web scalable is also what creates the web-protection problem Cloudflare's 2026-04-21 post reframes.
The load-bearing property: context is asymmetric¶
The server sees a request. It does not see the context that produced the request: whether a keyboard-and-mouse human drove a browser to click, whether a script is archiving responses for later indexing, whether an AI agent is fetching raw data to feed into a model. From Figure 3 of the post:
"Two different client contexts that send requests to servers. Each server only sees a request, but not the end-user behind it."
This is not a bug — it's the openness property that made the Web successful. Many kinds of clients can exist; the network can evolve without each server needing to know exactly what software is on the other end. It is also the reason bot-vs-human is an unsolvable classification problem: the server has no ground truth to distinguish the two contexts beyond what the client voluntarily sends.
Why it matters for rate limiting and abuse prevention¶
The context gap forces origins to make access-control decisions on partial signals:
- IP address (required to respond).
- TLS handshake (required to establish session).
- Active voluntary headers (User-Agent, credentials).
- Server-observed context (edge location, time of day).
These are the signals taxonomized in concepts/passive-vs-active-client-signals. They are imprecise because the client is outside the server's control; the server can only infer intent from what leaks across the request boundary.
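The asymmetry can be made concrete. A minimal sketch (Python; the function and field names are illustrative, not from the post) of everything an origin observes about a single request, and of two different client contexts the origin cannot tell apart:

```python
# Sketch: the server's view of one HTTP request. A human driving a
# browser and a script archiving pages can emit byte-identical requests,
# so any classifier over this view must answer the same for both.

def server_view(remote_addr: str, headers: dict) -> dict:
    """Everything an origin can observe about a single request."""
    return {
        "ip": remote_addr,                        # required to respond
        "user_agent": headers.get("User-Agent"),  # voluntary, spoofable
        "auth": headers.get("Authorization"),     # voluntary credential
    }

# Two client contexts, identical bytes on the wire:
human  = server_view("203.0.113.7", {"User-Agent": "Mozilla/5.0"})
script = server_view("203.0.113.7", {"User-Agent": "Mozilla/5.0"})

assert human == script  # no ground truth to distinguish the contexts
```

The server's decision function takes only this dict as input; the context that produced it never crosses the wire.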
Capacity vs. trust decisions¶
The same asymmetric-context problem takes two forms:
- Capacity: a server provisioned for 100 rps must drop half the requests if 200 rps arrive. Random drop is feasible but unfair. Better signals let the server prioritize — but the signals are limited by the client-server asymmetry.
- Trust: a server wants to distinguish attack traffic from legitimate traffic, ad-fraud clicks from real views, scraping from attributed crawling. All require inferring intent from a request, not from the client context.
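The capacity form can be sketched as an overload policy. This is an illustrative toy, assuming a hypothetical coarse `trusted` flag standing in for whatever signal the origin actually has: with no signal the only fair option is a random drop; with a signal, the drop can be prioritized.

```python
import random

CAPACITY = 100  # requests the origin can serve per second

def random_drop(requests):
    """No signals: drop uniformly at random. Feasible but unfair."""
    return random.sample(requests, min(CAPACITY, len(requests)))

def prioritized_drop(requests):
    """With a coarse trust signal: serve trusted requests first."""
    ranked = sorted(requests, key=lambda r: r["trusted"], reverse=True)
    return ranked[:CAPACITY]

# 200 rps arrive at a 100-rps origin: half must be dropped either way.
incoming = [{"id": i, "trusted": i % 2 == 0} for i in range(200)]

assert len(random_drop(incoming)) == CAPACITY
served = prioritized_drop(incoming)
assert len(served) == CAPACITY
assert all(r["trusted"] for r in served)  # the signal decides who survives
```

The limit the post emphasizes is that the quality of `trusted` is bounded by the client-server asymmetry: it must be derived from the same partial signals listed above.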
The post's argument: the asymmetry is permanent, so web protection must shift from inferring identity from partial signals to asking the client to present proof of the property the origin cares about — which is what anonymous credentials and patterns/anonymous-attribute-proof do.
Scale consequences¶
A side consequence of the client-server asymmetry is that a website can increase capacity by deploying additional servers or adding a CDN; horizontal scaling works. The client side also scales: more clients, each making more requests, without needing to negotiate a protocol change with every server. This is the openness that let the Web evolve.
The same openness means there is no built-in mechanism to authenticate what kind of client is connecting, only that a TLS connection exists. Mutual TLS would close that gap but doesn't scale to the open Web, since ordinary clients don't carry certificates. Anonymous credentials fill the gap without forcing identity-first authentication onto every Web interaction.
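A minimal sketch of the attribute-proof idea, using an HMAC tag as a stand-in for the real cryptography: the token attests a property ("attributed-crawler"), carries no client identity, and the origin verifies only the property it cares about. Real anonymous-credential schemes use blind signatures so tokens are unlinkable even to the issuer, and the verifier does not share a symmetric key with the issuer; all names here are illustrative.

```python
import hashlib
import hmac
import secrets

# Simplification: issuer and verifier share a key (symmetric stand-in
# for the issuer's signing key). Real schemes avoid this.
ISSUER_KEY = secrets.token_bytes(32)

def issue(attribute: str) -> tuple[bytes, bytes]:
    """Issuer attests an attribute; the token names no client."""
    nonce = secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, nonce + attribute.encode(), hashlib.sha256).digest()
    return nonce, tag

def origin_verify(attribute: str, nonce: bytes, tag: bytes) -> bool:
    """Origin checks the property it cares about, and nothing else."""
    expected = hmac.new(ISSUER_KEY, nonce + attribute.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue("attributed-crawler")
assert origin_verify("attributed-crawler", nonce, tag)
assert not origin_verify("human-browser", nonce, tag)  # wrong attribute fails
```

The point of the sketch is the information flow, not the crypto: the origin learns one bit ("holder has the attribute") instead of reconstructing an identity from passive signals.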
Seen in¶
- sources/2026-04-21-cloudflare-moving-past-bots-vs-humans — explicitly walks through the client-server model (Figures 1-3) as the foundation against which bot-management and rate-limiting arguments are built.
Related¶
- concepts/passive-vs-active-client-signals — taxonomy of signals available despite the context asymmetry.
- concepts/bot-vs-human-frame — why the asymmetry makes binary bot-vs-human classification unsolvable.
- concepts/fingerprinting-vector — what happens when origins try to infer client-kind from passive signals.
- concepts/thundering-herd — the capacity side of the asymmetric-context problem.