
PATTERN

DB-Routed Request Proxy

Shape

A reverse proxy makes its per-request upstream-selection decision by querying a database rather than by consulting a static configuration file or pre-loaded in-memory map. The routing table lives in the database; the proxy does a DB lookup per request (optionally cached for a short TTL) and proxies to the selected upstream.

request → proxy → SELECT upstream FROM routes WHERE host=?
              proxy_pass → chosen upstream
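A minimal sketch of the lookup script in OpenResty Lua, the runtime the canonical instance below uses. The routes(host, upstream) schema comes from the diagram above; the lua-resty-mysql wiring, the $route_upstream variable, the replica address, credentials, and timeouts are all illustrative assumptions, not any deployment's actual code.

    -- route.lua, run per request via access_by_lua_file (sketch; names hypothetical).
    -- Assumes the location declares `set $route_upstream "";` and that the
    -- `upstream` column holds IP:port values proxy_pass can use directly.
    local mysql = require "resty.mysql"

    local db, err = mysql:new()
    if not db then
        ngx.log(ngx.ERR, "failed to create mysql object: ", err)
        return ngx.exit(ngx.HTTP_BAD_GATEWAY)
    end
    db:set_timeout(50)  -- ms; the lookup sits on the request path, so fail fast

    local ok, cerr = db:connect{
        host = "10.0.0.10", port = 3306,  -- a read replica (assumed address)
        database = "routing", user = "proxy", password = "secret",
    }
    if not ok then
        ngx.log(ngx.ERR, "mysql connect failed: ", cerr)
        return ngx.exit(ngx.HTTP_BAD_GATEWAY)
    end

    -- The routing table lives in the DB: one SELECT per request, keyed on Host.
    local rows, qerr = db:query("SELECT upstream FROM routes WHERE host = "
        .. ngx.quote_sql_str(ngx.var.host) .. " LIMIT 1")
    db:set_keepalive(10000, 100)  -- hand the connection back to the pool

    if not rows then
        ngx.log(ngx.ERR, "routing query failed: ", qerr)
        return ngx.exit(ngx.HTTP_BAD_GATEWAY)
    end
    if not rows[1] then
        return ngx.exit(ngx.HTTP_NOT_FOUND)  -- no route for this hostname
    end

    ngx.var.route_upstream = rows[1].upstream  -- consumed by proxy_pass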

When to use

  • The routing table is large (thousands to millions of keys) and won't fit comfortably in an nginx map or similar static structure.
  • The routing table changes frequently or needs near-instant propagation — waiting for a config-regeneration cadence is unacceptable.
  • Cold-restart cost of loading the static map is unacceptable.
  • You already have a DB you trust for the routing state and can accept the availability dependency on it.

Why it beats the static-map alternative

Static maps at edge proxies (nginx's ngx_http_map_module, HAProxy's map files, Envoy's static config) have two failure modes at scale:

  1. Propagation latency — the map has to be regenerated and reloaded whenever the table changes. Regeneration cadence becomes user-visible publish latency.
  2. Cold-restart cost — the process has to load the map on startup. For very large maps this is slow; for rolling restarts under traffic it causes user-visible slowness.

Moving the table into a DB makes both problems disappear:

  • Publish latency collapses to DB-write-visible time.
  • Cold restart is fast because the proxy doesn't load state.

Cost: a new availability dependency

The proxy now depends on the DB. Three standard mitigations (the first two are sketched after this list):

  1. Retry to a different replica on error — the proxy treats the DB as a pool of read replicas and reconnects on query failure.
  2. Short-TTL cache — absorb transient DB unavailability. See patterns/cached-lookup-with-short-ttl.
  3. CDN in front — let an edge cache survive a total proxy outage. See patterns/cdn-in-front-for-availability-fallback.
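A sketch of how mitigations 1 and 2 compose, in the same OpenResty idiom as above: a lua_shared_dict (here named route_cache, declared in the http block) absorbs the lookup, and on connect or query failure the script moves to the next read replica. The replica addresses, credentials, 50 ms timeout, and 5-second TTL are all assumptions for illustration.

    -- Hypothetical route lookup with short-TTL cache + replica failover.
    -- Requires `lua_shared_dict route_cache 64m;` in the http block.
    local mysql = require "resty.mysql"

    local REPLICAS = { "10.0.0.10", "10.0.0.11", "10.0.0.12" }  -- assumed addresses
    local cache = ngx.shared.route_cache

    local function lookup(host)
        -- Mitigation 2: a short-TTL cache absorbs transient DB unavailability.
        local upstream = cache:get(host)
        if upstream then
            return upstream
        end

        -- Mitigation 1: treat the DB as a pool of replicas; on error, try the next.
        for _, addr in ipairs(REPLICAS) do
            local db = mysql:new()
            if db then
                db:set_timeout(50)  -- ms; fail fast, move on
                local ok = db:connect{ host = addr, port = 3306,
                    database = "routing", user = "proxy", password = "secret" }
                if ok then
                    local rows = db:query("SELECT upstream FROM routes WHERE host = "
                        .. ngx.quote_sql_str(host) .. " LIMIT 1")
                    db:set_keepalive(10000, 100)
                    if rows then
                        if not rows[1] then
                            return nil  -- definitive miss: no route for this host
                        end
                        cache:set(host, rows[1].upstream, 5)  -- 5-second TTL
                        return rows[1].upstream
                    end
                end
            end
        end
        return nil  -- every replica failed; caller decides how to degrade
    end

    return lookup

Note the asymmetry: a successful query with no row returns immediately (a miss is an answer), while a failed connect or query falls through to the next replica, so one dead replica doesn't turn every unknown host into an outage.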

Canonical wiki instance

GitHub Pages — 2015 rearchitecture. Per request, an ngx_lua script in access_by_lua_file queries a MySQL read replica for the hostname's fileserver-pair assignment, sets $gh_pages_host + $gh_pages_path, and lets stock nginx proxy_pass http://$gh_pages_host$request_uri forward to the backend. Disclosed: < 3 ms p98 in Lua including the MySQL call, across millions of HTTP requests per hour. The three mitigations above are all deployed. Source: sources/2025-09-02-github-rearchitecting-github-pages.
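On the nginx side, the disclosed wiring reduces to an access-phase script plus a variable proxy_pass. A hedged sketch of that shape, reusing the disclosed variable names ($gh_pages_host, $gh_pages_path) but otherwise hypothetical; this is not GitHub's actual configuration.

    # Sketch of the wiring (not GitHub's actual config): the access-phase Lua
    # script does the MySQL lookup and fills in the variables; stock proxy_pass
    # then consumes them. Assumes $gh_pages_host is set to an IP:port (a
    # variable proxy_pass would otherwise need a `resolver` for hostnames).
    http {
        lua_shared_dict route_cache 64m;  # backs the short-TTL cache (size assumed)

        server {
            listen 80;

            location / {
                set $gh_pages_host "";    # declared empty so Lua can assign them
                set $gh_pages_path "";

                access_by_lua_file /etc/nginx/lua/route.lua;

                proxy_pass http://$gh_pages_host$request_uri;
            }
        }
    }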

Replaces the pre-2015 shape where a cron regenerated an nginx map file every 30 minutes — new sites waited up to 30 minutes to publish, and nginx cold restarts were slow from loading the massive map.

Trade-offs vs. alternatives

  • vs. static nginx map — gains instant propagation, no cold-restart penalty, and no scaling ceiling from a map that must fit in one machine's memory. Loses: an availability dependency on the DB.
  • vs. gossiped routing state (e.g. Fly's fly-proxy with RIB gossip + FIB cache) — DB-routed is simpler (one source of truth, no convergence protocol) but scales worse (DB QPS bottleneck vs. distributed state) and has stricter availability coupling.
  • vs. config-service push (e.g. Envoy xDS) — config-push avoids per-request DB load, but adds a control-plane propagation layer; DB-routed is a point on the simpler end of the spectrum.