
Verification cache with revocation feed

Pattern

Combine a high-hit-rate local verification cache with a [[concepts/revocation-feed-subscription|revocation feed subscription]] from the token authority. Clients:

  1. Cache successful verification results locally.
  2. Subscribe (via poll) to the authority's revocation feed.
  3. On revocation notifications, prune the cache of any matching entries.
  4. On connectivity loss past a threshold, dump the entire cache: fail closed, forcing every verification to round-trip to the authority.
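The four steps above can be sketched as a small client-side cache. This is an illustrative sketch, not Fly.io's API: the names (`verify_remote`, `on_feed_poll`) and the threshold value are assumptions.

```python
import time

class VerificationCache:
    """Client-side verification cache with revocation-feed pruning (sketch)."""

    def __init__(self, disconnect_threshold=60.0):
        self.cache = {}                            # token id -> verification result
        self.disconnect_threshold = disconnect_threshold
        self.last_feed_contact = time.monotonic()  # when we last heard from the feed

    def verify(self, token_id, verify_remote):
        # Step 4: past the threshold, dump everything and fail closed.
        if time.monotonic() - self.last_feed_contact > self.disconnect_threshold:
            self.cache.clear()
        # Step 1: serve repeat verifications from the local cache.
        if token_id in self.cache:
            return self.cache[token_id]
        result = verify_remote(token_id)           # cold miss: round-trip to the authority
        if result:
            self.cache[token_id] = result
        return result

    def on_feed_poll(self, revoked_ids):
        # Steps 2-3: each successful poll proves feed liveness and prunes revoked entries.
        self.last_feed_contact = time.monotonic()
        for tid in revoked_ids:
            self.cache.pop(tid, None)
```

Note that feed liveness, not verification traffic, drives the fail-closed timer: a quiet feed that still answers polls keeps the cache valid.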

The result: most verifications never hit the authority (Fly.io reports over 98% cache hits), revocation still lands promptly, and disconnection degrades safely.

Canonical instance: Fly.io tkdb

Fly.io's implementation (source: sources/2025-03-27-flyio-operationalizing-macaroons):

  • Cache hit rate: "over 98%".
  • Cache coverage property: unique to Macaroons — the chained-HMAC construction means verifying a parent covers any descendant (attenuated) token the same client presents later.
  • Revocation propagation: the tkdb verification API exports an endpoint that "provides a feed of revocation notifications". Clients poll it; on arrival they prune.
  • Fail-closed on disconnect: "If clients lose connectivity to tkdb, past some threshold interval, they just dump their entire cache, forcing verification to happen at tkdb."
  • Blacklist is not distributed: "We certainly don't want to propagate the blacklist database to 35 regions around the globe." The authority keeps the blacklist; clients only hear revocation events.
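The cache-coverage property is easiest to see in code. In the chained-HMAC construction, each attenuation derives the new signature as HMAC(previous signature, caveat), so a descendant's signature is a deterministic function of its parent's, and a verifier that has already checked the parent can check any descendant locally. A minimal sketch of the idea, not Fly.io's tkdb code:

```python
import hmac, hashlib

def mint(root_key, identifier):
    # Base signature of a freshly issued token.
    return hmac.new(root_key, identifier.encode(), hashlib.sha256).digest()

def attenuate(sig, caveat):
    # Each added caveat chains: new sig = HMAC(previous sig, caveat).
    return hmac.new(sig, caveat.encode(), hashlib.sha256).digest()

def verify_full(root_key, identifier, caveats, presented_sig):
    # The authority recomputes the whole chain from the root key.
    sig = mint(root_key, identifier)
    for c in caveats:
        sig = attenuate(sig, c)
    return hmac.compare_digest(sig, presented_sig)

def verify_descendant(cached_parent_sig, extra_caveats, presented_sig):
    # A cache holding a verified parent signature can check any
    # further-attenuated token without the root key or the authority.
    sig = cached_parent_sig
    for c in extra_caveats:
        sig = attenuate(sig, c)
    return hmac.compare_digest(sig, presented_sig)
```

This is the property JWTs lack: a public-key signature verifies one fixed payload, so an attenuated variant is a new token requiring its own verification.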

Why this is better than alternatives

  • vs. no cache: every verification round-trips to the authority; transoceanic hops on the auth path; authority becomes the capacity bottleneck for the entire platform.
  • vs. cache + TTL expiry: revocations land late (up to the TTL). A TTL short enough for prompt revocation defeats the point of the cache; one long enough to preserve the hit rate makes logout cosmetic, since revoked tokens keep verifying until they expire.
  • vs. replicated blacklist to all regions: storage and bandwidth scale with token population; every verifier holds a copy of the full blacklist; updates are write-amplified.

Feed-based revocation scales with revocation rate (small and bursty), not token-population size.
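One shape that gives this scaling: the authority keeps an append-only log of revocation events, and clients poll with a cursor. This is a hypothetical sketch; the source doesn't specify tkdb's actual feed format.

```python
class RevocationFeed:
    """Authority-side revocation feed (hypothetical sketch).

    Storage and bandwidth grow with the number of revocation events,
    not with the number of live tokens.
    """

    def __init__(self):
        self.events = []                  # append-only log of revoked token ids

    def revoke(self, token_id):
        self.events.append(token_id)      # the blacklist stays here, at the authority

    def poll(self, cursor):
        # Clients poll with their last cursor; responses carry only new events.
        new = self.events[cursor:]
        return new, cursor + len(new)
```

Because each poll returns only the events since the client's cursor, a quiet period costs almost nothing, and a revocation burst costs bandwidth proportional to the burst.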

Preconditions

  • Token construction allows cache-safe verification (Macaroons' chained-HMAC is the ideal case; JWTs with public-key verification are cacheable but don't inherit verifiability across attenuated forms).
  • Authority can expose a feed endpoint — and the feed's notification identifiers (nonces, token-IDs) match what caches key on.
  • Clients tolerate fail-closed behavior on authority disconnect (bounded outage amplification, but no stale positive verifications).

When it's wrong

  • A rapidly rotating token population where the revocation rate approaches the issue rate — the feed's scaling advantage degrades.
  • Extreme latency budgets that can't absorb even the ~2% cold-miss round-trips.

Seen in
