CONCEPT
Passive vs active client signals¶
Cloudflare's 2026-04-21 post taxonomizes the signals an origin server observes from a client into three categories: passive client signals, active client signals, and server signals. The taxonomy clarifies which signals are reliable, which are spoofable, and which carry privacy cost.
The three categories¶
- Passive client signals — required for the request to exist; the client cannot avoid revealing them:
  - IP address — every packet carries one.
  - TLS handshake — cipher choices, extensions, SNI, JA3/JA4 fingerprints: all emitted as a side effect of the handshake. The client has no practical choice about sending them; they are attack-resistant in the sense that omitting them means no session. They are also the basis of most passive fingerprinting today.
- Active client signals — voluntarily sent by the client, usually invisible to the end user:
  - User-Agent header — a self-declaration; trivially spoofable.
  - Authentication credentials — cookies, bearer tokens, OAuth session headers.
  - Custom headers — Accept-Language, Accept-Encoding, X-Requested-With.
  - Client-injected signatures (e.g. RFC 9421). Active signals carry strong information when they are cryptographically bound (RFC 9421 signatures) and weak information when they are just self-declarations.
- Server signals — observed by the server, not sent by the client:
  - Edge / POP location handling the request.
  - Local time at the edge.
  - Geo inference from the IP.
  - Traffic patterns aggregated across requests. These are independent of client cooperation; useful for anomaly detection, but they don't identify the client.
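The passive category can be made concrete: a JA3 fingerprint is just a hash over ClientHello fields the client has to send for the handshake to work at all. A minimal sketch of the JA3 construction (the field values below are hypothetical, not from a real capture):

```python
import hashlib

def ja3(tls_version, ciphers, extensions, curves, point_formats):
    """JA3: MD5 over the five ClientHello field lists (dash-joined, then comma-joined)."""
    fields = [str(tls_version)] + [
        "-".join(str(v) for v in xs) for xs in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Hypothetical ClientHello values (GREASE values stripped, as JA3 specifies):
print(ja3(771, [4865, 4866, 4867], [0, 11, 10, 35], [29, 23, 24], [0]))
```

Note that JA3 is order-sensitive: two clients offering the same ciphers in a different order hash differently, which is exactly what makes it a fingerprint rather than a capability list.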
Mitigation vs. tracking¶
The critical normative distinction the post draws: the same signals that origins use for mitigation decisions (should I allow this request?) can also serve as tracking vectors (can I identify and follow this user across sessions?). A TLS fingerprint that helps detect a script kiddie also helps serve a targeted ad.
"The same information creates fingerprint vectors that can be used by the server for different purposes such as personalized advertising. This transforms a mitigation vector into a tracking vector."
There is no cryptographic distinction between the two uses — only a policy / deployment distinction. This is why fingerprint-heavy bot management drifts into tracking systems even when mitigation was the intent; see concepts/fingerprinting-vector.
Why the taxonomy matters for anonymous credentials¶
Anonymous credentials are an attempt to introduce a fourth class of signal — active signals that prove a specific attribute without identifying the client. Unlike the existing active-signal class (cookies, bearer tokens, User-Agent), anonymous credentials:
- Carry cryptographic weight (they are verifiable, not self-declarations).
- Are explicit (the client presents them in response to a question, not as ambient leakage).
- Are scoped (valid for one check, not a universal identifier).
- Are unlinkable (multiple redemptions cannot be correlated back to issuance).
The post's framing: instead of stacking passive signals and server signals to infer client intent, ask the client to present a cryptographic proof of the specific attribute the origin cares about. This moves the trust question from "what can I fingerprint" to "what did I ask the client to prove" — which is a design choice, not a cryptographic constraint.
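The unlinkability property can be sketched with a toy blind-signature flow, the mechanism behind Privacy Pass-style tokens. This is textbook blind RSA with a toy key and no padding — purely to show why the issuer cannot link a redeemed token back to its issuance, not a usable implementation:

```python
import hashlib
import secrets
from math import gcd

# Toy RSA issuer key built from two Mersenne primes. Illustration only --
# real deployments use full-size keys and a padded scheme (e.g. RFC 9474).
p, q = 2**127 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))          # issuer's private exponent

# Client: hash the attribute being attested into a token value m...
m = int.from_bytes(hashlib.sha256(b"attribute: passed-check").digest(), "big") % n

# ...and blind it with a random factor r before sending it to the issuer.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n           # the issuer only ever sees this

# Issuer: signs the blinded value without learning m.
blinded_sig = pow(blinded, d, n)

# Client: unblinds. (m, sig) is a valid token the issuer has never seen,
# so a later redemption cannot be correlated back to this issuance.
sig = (blinded_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m                 # verifiable with the public key alone
```

The scoping property comes from the same place: the token attests one hashed attribute, so it answers exactly one question rather than serving as a universal identifier.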
Relationship to verified bots¶
Web Bot Auth is the identity branch of the post-bot-vs-human split: it uses an active signal (RFC 9421 HTTP Message Signatures) that is cryptographically bound, so it's verifiable rather than self-declared. This is the pattern the post argues for generally: shift weight from passive + server signals to explicit active signals with cryptographic weight.
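The shape of an RFC 9421 signature is easy to sketch: build a signature base over a few derived components, sign it, and emit `Signature-Input` and `Signature` headers. The sketch below uses `hmac-sha256`, one of RFC 9421's registered algorithms — Web Bot Auth uses asymmetric keys, so treat the shared-secret choice here as a stdlib-only simplification:

```python
import base64
import hashlib
import hmac

def sign_request(method: str, authority: str, path: str, key: bytes, keyid: str, created: int):
    """Build an RFC 9421-style signature base over derived components and sign it."""
    params = f'("@method" "@authority" "@path");created={created};keyid="{keyid}";alg="hmac-sha256"'
    base = "\n".join([
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": {params}',   # the covered components and metadata
    ])
    tag = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return {
        "Signature-Input": f"sig1={params}",
        "Signature": f"sig1=:{base64.b64encode(tag).decode()}:",
    }

headers = sign_request("GET", "example.com", "/", b"shared-secret", "bot-key-1", 1618884473)
```

Because the signature covers the method, authority, and path, it is bound to this request — unlike a User-Agent string, it cannot be copied onto arbitrary traffic, which is what makes it a verifiable rather than self-declared active signal.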
Seen in¶
- sources/2026-04-21-cloudflare-moving-past-bots-vs-humans — canonical articulation with the three-category taxonomy and the mitigation-vs-tracking framing.
Related¶
- concepts/client-server-model — why the asymmetry forces inference from signals in the first place.
- concepts/fingerprinting-vector — the mitigation-turned-tracking failure mode of passive signals.
- concepts/bot-vs-human-frame — why no signal set fully answers the binary classification question.
- concepts/verified-bots — cryptographically-bound active signals (Web Bot Auth) as the identity branch of the post-bot-vs-human architecture.