PATTERN Cited by 1 source
DB-Routed Request Proxy¶
Shape¶
A reverse proxy makes its per-request upstream-selection decision by querying a database rather than by consulting a static configuration file or pre-loaded in-memory map. The routing table lives in the database; the proxy does a DB lookup per request (optionally cached for a short TTL) and proxies to the selected upstream.
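The shape can be sketched as a minimal nginx configuration. This is an illustrative sketch, not configuration from any of the cited systems: the route_cache dict, the route.lua path, and the $upstream_host variable are all hypothetical names, while lua_shared_dict, access_by_lua_file, and proxy_pass with a variable are real OpenResty/nginx directives.

```nginx
http {
    lua_shared_dict route_cache 64m;   # optional short-TTL lookup cache
    resolver 127.0.0.1;                # required when proxy_pass uses a variable

    server {
        listen 80;

        location / {
            set $upstream_host "";              # filled in per request
            access_by_lua_file conf/route.lua;  # the DB lookup happens here
            proxy_pass http://$upstream_host$request_uri;
        }
    }
}
```

Note the proxy itself carries no routing table: the Lua access-phase script sets $upstream_host from a database lookup, and stock proxy_pass does the rest.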
When to use¶
- The routing table is large (thousands to millions of keys) and won't fit comfortably in an nginx map or similar static structure.
- The routing table changes frequently or needs near-instant propagation — waiting for a config-regeneration cadence is unacceptable.
- Cold-restart cost of loading the static map is unacceptable.
- You already have a DB you trust for the routing state and can accept the availability dependency on it.
Why it beats the static-map alternative¶
Static maps at edge proxies (nginx ngx_http_map_module,
HAProxy's map files, Envoy's static config) have two failure
modes at scale:
- Propagation latency — the map has to be regenerated and reloaded whenever the table changes. Regeneration cadence becomes user-visible publish latency.
- Cold-restart cost — the process has to load the map on startup. For very large maps this is slow; for rolling restarts under traffic it causes user-visible slowness.
Moving the table into a DB makes both problems disappear:
- Publish latency collapses to DB-write-visible time.
- Cold restart is fast because the proxy doesn't load state.
Cost: a new availability dependency¶
The proxy now depends on the DB. Three standard mitigations:
- Retry to a different replica on error — the proxy treats the DB as a pool of read replicas and reconnects on query failure.
- Short-TTL cache — absorb transient DB unavailability. See patterns/cached-lookup-with-short-ttl.
- CDN in front — let an edge cache survive a total proxy outage. See patterns/cdn-in-front-for-availability-fallback.
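The first two mitigations can be sketched as an ngx_lua access-phase script (the stack used in the canonical instance). This is a hedged reconstruction, not disclosed code from any source: the replica addresses, database credentials, table and column names, TTL value, and the route_cache shared dict are all assumptions. It assumes the surrounding nginx config declares `lua_shared_dict route_cache` and `set $upstream_host ""`.

```lua
-- route.lua: per-request routing lookup with a short-TTL cache
-- and retry-to-a-different-replica on error. Illustrative sketch only.
local mysql = require "resty.mysql"

local REPLICAS = {                       -- hypothetical replica pool
    { host = "10.0.0.11", port = 3306 },
    { host = "10.0.0.12", port = 3306 },
}
local CACHE_TTL = 30                     -- seconds; absorbs brief DB blips

local cache = ngx.shared.route_cache
local host  = ngx.var.host

-- Mitigation 2: serve the cached upstream if still fresh.
local upstream = cache:get(host)
if upstream then
    ngx.var.upstream_host = upstream
    return
end

-- Mitigation 1: DB lookup, falling over to the next replica on error.
for _, replica in ipairs(REPLICAS) do
    local db = mysql:new()
    db:set_timeout(100)                  -- ms; keep the tail latency short
    local ok = db:connect{
        host = replica.host, port = replica.port,
        database = "routing", user = "proxy", password = "secret",
    }
    if ok then
        local rows = db:query(
            "SELECT upstream FROM routes WHERE hostname = " ..
            ngx.quote_sql_str(host) .. " LIMIT 1")
        db:set_keepalive(10000, 50)      -- return connection to the pool
        if rows then
            if rows[1] then
                upstream = rows[1].upstream
                cache:set(host, upstream, CACHE_TTL)
                ngx.var.upstream_host = upstream
                return
            end
            return ngx.exit(ngx.HTTP_NOT_FOUND)  -- unknown hostname
        end
    end
    -- connect or query failed: fall through and try the next replica
end

ngx.exit(ngx.HTTP_BAD_GATEWAY)           -- all replicas failed, nothing cached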
Canonical wiki instance¶
GitHub Pages — 2015 rearchitecture. Per
request, an ngx_lua script in
access_by_lua_file queries a MySQL read
replica for the hostname's fileserver-pair assignment, sets
$gh_pages_host + $gh_pages_path, and lets stock nginx
proxy_pass http://$gh_pages_host$request_uri forward to the
backend. Disclosed: < 3 ms p98 in Lua including the MySQL
call, across millions of HTTP requests per hour. The three
mitigations above are all deployed.
Source: sources/2025-09-02-github-rearchitecting-github-pages.
Replaces the pre-2015 shape, where a cron job regenerated an nginx
map file every 30 minutes — new sites waited up to 30 minutes
to publish; nginx cold-restarts were slow from loading the
massive map.
Trade-offs vs. alternatives¶
- vs. static nginx map — gains instant propagation, no cold-restart penalty, no storage-capped-by-single-machine scaling ceiling. Loses: availability dependency on the DB.
- vs. gossiped routing state (e.g. Fly's fly-proxy with RIB gossip + FIB cache) — DB-routed is simpler (one source of truth, no convergence protocol) but scales worse (DB QPS bottleneck vs. distributed state) and has stricter availability coupling.
- vs. config-service push (e.g. Envoy xDS) — config-push avoids per-request DB load, but adds a control-plane propagation layer; DB-routed is a point on the simpler end of the spectrum.
Related¶
- systems/github-pages, systems/nginx, systems/ngx-lua, systems/mysql — canonical stack.
- patterns/cached-lookup-with-short-ttl — required partner pattern in practice.
- patterns/cdn-in-front-for-availability-fallback — second required partner pattern at the outage-survivability layer.
- concepts/availability-dependency — the cost side of the trade.