
Passive vs active client signals

Cloudflare's 2026-04-21 post taxonomizes the signals an origin server observes from a client into three categories: passive client signals, active client signals, and server signals. The taxonomy clarifies which signals are reliable, which are spoofable, and which carry privacy cost.

The three categories

  1. Passive client signals — required for the request to exist. The client cannot avoid revealing them:

     • IP address — every packet carries one.
     • TLS handshake — cipher choices, extensions, SNI, JA3/JA4 fingerprints: all emitted as a side effect of the handshake. The client has no practical choice about sending them; they are attack-resistant in the sense that omitting them means no session. They are also the basis of most passive fingerprinting today.

  2. Active client signals — voluntarily sent by the client, usually invisible to the end user:

     • User-Agent header — a self-declaration; trivially spoofable.
     • Authentication credentials — cookies, bearer tokens, OAuth session headers.
     • Custom headers — Accept-Language, Accept-Encoding, X-Requested-With.
     • Client-injected signatures (e.g. RFC 9421). Active signals carry strong information when they are cryptographically bound (RFC 9421 signatures) and weak information when they are just self-declarations.

  3. Server signals — observed by the server, not sent by the client:

     • Edge / POP location handling the request.
     • Local time at the edge.
     • Geo inference from the IP.
     • Traffic patterns aggregated across requests. These are independent of client cooperation; useful for anomaly detection, but they don't identify the client.
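The passive TLS signal can be made concrete. Below is a minimal sketch of a JA3-style fingerprint: the server joins fields it observed in the ClientHello and hashes them. The field values here are illustrative, not taken from the post; real implementations parse them from the raw handshake bytes.

```python
import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """JA3-style hash: MD5 over comma-separated fields, each list field
    dash-joined from the decimal values seen in the ClientHello."""
    ja3_string = ",".join([
        str(tls_version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ])
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Hypothetical ClientHello values: the client emits these as a side effect
# of negotiating TLS, whether or not it wants to be identified.
fp = ja3_fingerprint(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0])
```

Because every client that wants a session must send these fields, the fingerprint is available to the server without any cooperation, which is exactly what makes it both attack-resistant and privacy-costly.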

Mitigation vs. tracking

The critical normative distinction the post draws: the same signals that origins use for mitigation decisions (should I allow this request?) can also serve as tracking vectors (can I identify and follow this user across sessions?). A TLS fingerprint that helps detect a script kiddie also helps serve a targeted ad.

"The same information creates fingerprint vectors that can be used by the server for different purposes such as personalized advertising. This transforms a mitigation vector into a tracking vector."

There is no cryptographic distinction between the two uses — only a policy / deployment distinction. This is why fingerprint-heavy bot management drifts into tracking systems even when mitigation was the intent; see concepts/fingerprinting-vector.
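A hypothetical sketch of that dual use: the same fingerprint value serves equally well as a key for a mitigation lookup and for a tracking profile, and nothing in the value itself distinguishes the two purposes. All names and values below are illustrative.

```python
# One observed fingerprint (e.g. a JA3 hash); the value is made up.
fingerprint = "e7d705a3286e19ea42f587b344ee6865"

# Mitigation use: should this request be allowed?
known_attack_tools = {"e7d705a3286e19ea42f587b344ee6865"}

# Tracking use: who is this client, across sessions?
ad_profiles = {"e7d705a3286e19ea42f587b344ee6865": {"segment": "sports"}}

def should_block(fp):
    return fp in known_attack_tools

def profile_for(fp):
    return ad_profiles.get(fp)
```

The difference between the two functions is policy, not cryptography: both consume the identical input.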

Why the taxonomy matters for anonymous credentials

Anonymous credentials are an attempt to introduce a fourth class of signal: active signals that prove a specific attribute without identifying the client. Unlike the existing active-signal class (cookies, bearer tokens, User-Agent), anonymous credentials:

  • Carry cryptographic weight (they are verifiable, not self-declarations).
  • Are explicit (the client presents them in response to a question, not as ambient leakage).
  • Are scoped (valid for one check, not a universal identifier).
  • Are unlinkable (multiple redemptions cannot be correlated back to issuance).

The post's framing: instead of stacking passive signals and server signals to infer client intent, ask the client to present a cryptographic proof of the specific attribute the origin cares about. This moves the trust question from "what can I fingerprint" to "what did I ask the client to prove" — which is a design choice, not a cryptographic constraint.
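A toy sketch of the "prove one attribute" shape, using an HMAC token so it runs with only the standard library. It illustrates scoping (the token attests a single attribute, not an identity) but NOT unlinkability: a real anonymous-credential scheme uses blind signatures or oblivious PRFs so the issuer cannot correlate redemption back to issuance. The function names, attribute strings, and shared-key model are all assumptions for illustration.

```python
import hashlib
import hmac
import secrets

# Assumption: issuer and verifier share this key. Real schemes use
# asymmetric or oblivious-PRF constructions instead of a shared secret.
ISSUER_KEY = secrets.token_bytes(32)

def issue(attribute):
    """Issue a token attesting one attribute (e.g. 'is-human'), nothing else."""
    nonce = secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, nonce + attribute.encode(), hashlib.sha256).digest()
    return nonce, tag

def verify(attribute, nonce, tag):
    """The origin learns only whether the attribute holds, not who the client is."""
    expected = hmac.new(ISSUER_KEY, nonce + attribute.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue("is-human")
```

The verifier's question is fixed at issuance time; presenting the token answers that question and no other, which is the scoping property from the list above.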

Relationship to verified bots

Web Bot Auth is the identity branch of the post-bot-vs-human split: it uses an active signal (RFC 9421 HTTP Message Signatures) that is cryptographically bound, so it's verifiable rather than self-declared. This is the pattern the post argues for generally: shift weight from passive + server signals to explicit active signals with cryptographic weight.
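A minimal sketch of RFC 9421's signature-base construction for such a bot signature. It uses the RFC's hmac-sha256 algorithm for brevity; Web Bot Auth deployments use asymmetric keys so origins can verify against a published public key. The covered components, key id, timestamp, and secret below are illustrative.

```python
import base64
import hashlib
import hmac

def build_signature_base(components, params):
    """RFC 9421 signature base: one '"name": value' line per covered
    component, closed by the "@signature-params" line."""
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

components = {"@method": "GET", "@authority": "origin.example"}
params = '("@method" "@authority");created=1700000000;keyid="bot-key-1";alg="hmac-sha256"'
base = build_signature_base(components, params)

shared_key = b"illustrative-shared-secret"
sig = base64.b64encode(hmac.new(shared_key, base.encode(), hashlib.sha256).digest()).decode()

# The client would then send headers of the shape defined by RFC 9421:
#   Signature-Input: sig1=("@method" "@authority");created=1700000000;keyid="bot-key-1";alg="hmac-sha256"
#   Signature: sig1=:<sig>:
```

Because the signature covers named request components, the origin verifies a claim bound to this request rather than trusting a self-declared User-Agent string.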
