# Deploy frequency vs caching
Deploy frequency vs caching is the structural tension between two second-order goods: higher deploy frequency (faster iteration, faster product velocity, faster bug-fix rollout) and effective caching (lower bandwidth, faster page loads, lower cost).
Every deploy invalidates cached assets. If you deploy every minute, assets live in client caches for at most a minute; if you deploy every day, at most a day. Cache lifetime is capped directly by the deploy interval.
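A minimal sketch of that cap, with hypothetical numbers (the function names and the hourly-visitor assumption are illustrative, not from the source):

```python
SECONDS_PER_DAY = 86_400

def max_cache_lifetime_s(deploys_per_day: int) -> float:
    """Upper bound on how long a cached asset stays usable: the next
    deploy changes the asset URL and orphans the cached copy."""
    return SECONDS_PER_DAY / deploys_per_day

def full_redownloads_per_day(deploys_per_day: int, visits_per_day: int) -> int:
    """A client re-downloads at most once per visit, and only when a
    deploy happened since its last visit."""
    return min(deploys_per_day, visits_per_day)

print(max_cache_lifetime_s(1440))        # deploy every minute -> 60 s
print(full_redownloads_per_day(10, 24))  # hourly visitor, 10 deploys/day
```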
## The compounding in 2026
Cloudflare's 2026-04 framing (see sources/2026-04-17-cloudflare-shared-dictionaries-compression-that-keeps-up-with-the-agent) names three trends that make this tension worse than it was:
- Pages are heavier — 6–9%/year growth per the HTTP Archive Web Almanac. Larger payloads per cache miss.
- More automated clients — agentic actors were ~10 % of Cloudflare requests in March 2026, up ~60 % YoY. Automated clients hit endpoints repeatedly; each cache miss is paid multiple times.
- Higher deploy frequency — AI-assisted development compresses the interval between commits; CI/CD pipelines ship continuously; bundler re-chunking (bundler chunk invalidation) triggers full re-downloads per deploy.
All three trends compound in the same direction: more redundant bytes on the wire per cycle. "Ship ten small changes a day, and you've effectively opted out of caching."
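As a back-of-envelope illustration of why the trends multiply rather than add, here is a toy one-year model; the multipliers are assumptions loosely anchored to the figures above, not measurements:

```python
# Hypothetical one-year multipliers; only the compounding structure
# matters, not the exact values.
heavier_pages = 1.075   # 6-9%/yr page-weight growth, midpoint (assumed)
more_requests = 1.6     # agentic traffic up ~60% YoY (per the source)
more_deploys = 2.0      # deploy frequency doubling (assumed)

redundant_bytes_multiplier = heavier_pages * more_requests * more_deploys
print(redundant_bytes_multiplier)  # each trend multiplies the waste
```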
## Why the tension is structural (not fixable by tuning)
Classical cache-busting (content-hashed filenames on versioned bundles) works exactly as designed: it guarantees no stale content is served. The trade-off is the feature: safety against stale content is incompatible with reusing most of the payload across deploys, given URL-keyed caches.
Tuning HTTP Cache-Control headers (longer max-age, immutable directives, ETag revalidation) doesn't help. The cached entry isn't expiring; it's orphaned, because each deploy's HTML references a new hashed URL. A longer max-age on a URL that is never requested again is a no-op.
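The cache-busting mechanism is easy to see in miniature. A sketch, assuming a typical content-hash naming scheme (the URL layout and helper name are hypothetical):

```python
import hashlib

def hashed_url(name: str, content: bytes) -> str:
    """Content-hashed asset URL, the classical cache-busting scheme."""
    stem, _, ext = name.rpartition(".")
    digest = hashlib.sha256(content).hexdigest()[:8]
    return f"/assets/{stem}.{digest}.{ext}"

v1 = hashed_url("app.js", b"console.log('v1');")
v2 = hashed_url("app.js", b"console.log('v2');")
# One changed byte yields a new URL; the cached v1 entry is simply
# never requested again, so no Cache-Control tuning on it can help.
print(v1, v2)
```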
## Ways out
Only three classes of fix exist:
- Don't deploy as often. Reduces product velocity. Often not acceptable.
- Deploy smaller diffs that don't require re-chunking. Requires bundler + release-process engineering investment; partial mitigation at best because dependency bumps + feature work + security patches still trigger re-chunks.
- Send only the diff on the wire. This is what shared-dictionary compression + RFC 9842 provide: the previously-cached version becomes the dictionary, so the new version is compressed against the old and only the diff is transferred. Deploy frequency can stay high; caching effectiveness is preserved at the bytes-on-the-wire layer even when it's lost at the URL-keyed-cache layer.
Cloudflare's framing: option 3 is the only one that doesn't force a velocity trade-off. "Delta compression helps both sides of that equation by reducing the number of bytes per transfer, and the number of transfers that need to happen at all."
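The principle behind option 3 can be sketched with zlib's preset-dictionary support; real RFC 9842 deployments use Brotli or Zstandard dictionaries, but the mechanism is the same. The previously cached version is supplied as the dictionary, so only the novel bytes of the new version cost anything on the wire. The bundle contents below are hypothetical:

```python
import zlib

def deflate(data: bytes, zdict: bytes = None) -> bytes:
    c = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(level=9)
    return c.compress(data) + c.flush()

def inflate(data: bytes, zdict: bytes = None) -> bytes:
    d = zlib.decompressobj(zdict=zdict) if zdict else zlib.decompressobj()
    return d.decompress(data) + d.flush()

# v1: the bundle the client already has cached (hypothetical contents)
v1 = b"".join(b"function f%d(){return %d;}\n" % (i, i * i) for i in range(200))
# v2: the next deploy -- mostly identical, one small addition
v2 = v1 + b"function feature(){return 42;}\n"

full = deflate(v2)             # what the client pays today: the full payload
delta = deflate(v2, zdict=v1)  # what it pays with v1 as the shared dictionary

assert inflate(delta, zdict=v1) == v2  # lossless round trip
assert len(delta) < len(full)          # only the diff crosses the wire
print(len(full), len(delta))
```

The same asymmetry is why deploy frequency can stay high under this scheme: each new version is cheap precisely because it overlaps heavily with the one the client already holds.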
## Seen in
- sources/2026-04-17-cloudflare-shared-dictionaries-compression-that-keeps-up-with-the-agent — the agentic-web framing of why deploy-frequency-vs-caching matters more in 2026 than it did five years ago, and shared-dictionary compression as the only way out that doesn't force a velocity trade-off.