PlanetScale — Patterns for Postgres Traffic Control¶
Summary¶
Josh Brown's 2026-04-21 PlanetScale post is the canonical
Go-language implementation companion to the two earlier
Traffic Control posts —
the 2026-04-11 queue-health post (mixed-workload-contention
framing) and the 2026-03-31 Ben Dicken graceful-degradation post
(user-perceived-priority framing). Where those two posts framed
what Traffic Control does and why, Brown's post canonicalises
how to wire it up in a typical Go web service: five composable
tagging patterns that each target a distinct failure mode
(service isolation, route isolation, deployment canary, SaaS tier,
background jobs), two operational-integration patterns
(Enforce-mode SQLSTATE 53000 error handling, Warn-mode
[[concepts/postgres-notice-warning-channel|OnNotice]]
observability), and a layered-composition framing showing that
all five axes can be active simultaneously and are AND-ed by the
Traffic Control engine.
The load-bearing architectural move is the
context.Context-threaded SQLCommenter tag-propagation pattern —
canonical new patterns/context-threaded-sql-tag-propagation —
which is the Go-idiomatic counterpart to the ORM-middleware
pattern canonicalised on
patterns/query-comment-tag-propagation-via-orm from the 2022
Coutermarsh + Ekechukwu Rails post. Go has no global request-local
storage idiom comparable to Rails's CurrentAttributes or
thread-locals; instead, tags ride on context.Context through
every function that touches the database, and wrapper
QueryContext / ExecContext methods render them to SQL at the
last possible moment.
Key takeaways¶
-
Two application-side helpers are load-bearing substrate: appendTags(query, tags) → taggedQuery (deterministic sort + URL-encode + SQLCommenter format) and tagsFromContext(ctx) → map[string]string (copy-on-read for thread-safety). Every pattern in the post layers on top of these two primitives. Deterministic ordering matters because the Postgres plan cache is comment-bytes-sensitive in some configurations — the same logical tag set must produce the same SQL text. Verbatim canonical shape:

```go
func appendTags(query string, tags map[string]string) string {
	if len(tags) == 0 {
		return query
	}
	parts := make([]string, 0, len(tags))
	for k, v := range tags {
		parts = append(parts, fmt.Sprintf("%s='%s'", k, url.QueryEscape(v)))
	}
	sort.Strings(parts) // deterministic order
	return query + " /*" + strings.Join(parts, ",") + "*/"
}
```
-
Service-isolation via Postgres username + application_name — Pattern 1: the coarsest isolation axis. A dedicated Postgres role per microservice makes username='pscale_api_123abc' available to Traffic Control as a budget key; application_name=myapp in the connection string (or set via ParseConfig / SetRawConnect on the driver) is a second, orthogonal axis. "This also helps in incident response: you can immediately cap a service's resource share without redeploying anything." The connection-string axis is load-bearing — it works even if the application emits zero SQLCommenter tags, because Postgres itself (not the app) is the source of truth for the tag value. Canonicalises the three auto-populated SQLCommenter tags already canonicalised via the 2026-03-24 enhanced-tagging-in-insights post (application_name, username, remote_address) as the baseline attribution surface that's available before any app-level discipline is deployed.
-
Route-isolation via HTTP middleware — Pattern 2: the /api/export CSV-report endpoint should not be able to kill the /api/checkout flow. Canonical new patterns/route-tagged-query-isolation: an HTTP middleware injects route=<r.Pattern> + app=web into the request context, and wrapper QueryContext / ExecContext methods render them. Verbatim canonical implementation:

```go
func SQLTagMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tags := tagsFromContext(r.Context())
		route := strings.ReplaceAll(strings.ReplaceAll(r.Pattern, "{", ":"), "}", ":")
		tags["route"] = route
		tags["app"] = "web"
		ctx := contextWithTags(r.Context(), tags)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```

The { / } → : replacement is because Go 1.22+ HTTP patterns use {param} placeholders; Traffic Control rule strings don't benefit from the brace syntax. Operational payoff: "the violation graph in Traffic Control will show you exactly which route tag is hitting limits."
-
Deployment / canary tag via startup environment variable — Pattern 3: DEPLOYMENT_TAG=new_checkout_v2 (or a git SHA, 96e350426) is read at startup and injected into every query's tags on the new pods. "Traffic Control can then have a budget on feature='new_checkout_v2' in Warn mode from day one, so you see exactly how the new code behaves before it causes problems." Canonical deployment-gating workflow: tag new code → observe in Warn mode → decide to enforce or remove the budget after soak. Distinct from the runtime feature-flag pattern in the second half of the section (flags.Enabled(ctx, "new_order_flow") → tags["feature"] = "new_order_flow"), which is flag-gated, not deployment-gated. Both specialise the general [[concepts/warn-mode-vs-enforce-mode|Warn mode → Enforce mode]] lifecycle to the ship-new-code axis.
-
SaaS tier-isolation via authentication middleware — Pattern 4: the free-tier user's expensive dashboard must not degrade the enterprise customer's experience. Canonical new patterns/tier-tagged-query-isolation: after the auth middleware resolves the authenticated user, inject the user's subscription tier into the SQL tags:

```go
func AuthMiddleware(users *UserService, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, err := users.Authenticate(r)
		if err != nil {
			http.Error(w, "unauthorized", 401)
			return
		}
		ctx := WithUserTier(r.Context(), user.Tier)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```

Two budgets: tier='free' (conservative) + tier='pro' (moderate); enterprise unbudgeted or given a high budget ceiling. Composes with Pattern 2 — tier='free' AND route='api-export' is a stricter combination than tier='free' alone. Canonical wiki datum: tier + route is a two-axis budget, ANDed, giving enterprise-export more headroom than free-export.
-
Background-job isolation via dedicated connection pool + application_name — Pattern 5: long-running background workers use their own *sql.DB pool with application_name=background-jobs set in code (not via env var), and SetMaxOpenConns(4) to cap concurrency at the connection-pool layer (a second cap orthogonal to Traffic Control's concurrent-worker dial). One-off scripts set application_name=script-<scriptName> (e.g. script-backfill-order-totals) via the same helper. Operational workflow: run a Warn-mode budget before the job next runs, observe typical consumption, switch to Enforce at a level where a runaway job can't crowd out interactive traffic. Canonical new patterns/dedicated-application-name-per-workload: "Setting application_name on the connection string level in code ensures that it is always set for this service, no matter the query or connection string given."
-
Enforce-mode error-class handling via SQLSTATE 53000 — Pattern 6: when a query exceeds its budget in Enforce mode, Postgres returns SQLSTATE 53000 with a message prefixed [PGINSIGHTS] Traffic Control:. Canonical new concepts/sqlstate-53000-traffic-control-error. Verbatim pgx/v5 detection helper:

```go
const sqlstateTrafficControl = "53000"

func isTrafficControlError(err error) bool {
	var pgErr *pgconn.PgError
	return errors.As(err, &pgErr) && pgErr.Code == sqlstateTrafficControl
}
```

The right response is query-role-dependent: analytics / reporting → return 503 Service Unavailable or a cached result ("exactly the controlled failure mode Traffic Control is designed to create"); critical paths → short retry with exponential backoff. The queryWithBackoff helper shown: 100ms → 200ms → 400ms over 3 attempts, bailing out on non-Traffic-Control errors and on ctx.Done().
-
Warn-mode observability via pgx/v5 OnNotice handler — Pattern 7: in Warn mode, queries succeed but Postgres emits a notice (not an error) to the driver containing [PGINSIGHTS] Traffic Control: …. Canonical new concepts/postgres-notice-warning-channel. Canonical shape:

```go
config.OnNotice = func(c *pgconn.PgConn, notice *pgconn.Notice) {
	if strings.Contains(notice.Message, "[PGINSIGHTS] Traffic Control:") {
		log.Printf("traffic control warning: %s", notice.Message)
	}
}
```

Canonical wiki datum: the Postgres wire protocol's NoticeResponse frame is the piggyback channel that lets Traffic Control observe in-band in Warn mode without user-facing impact. "Collect these logs for a few hours of representative traffic before switching to Enforce. The pattern of which rules fire and how often tells you whether your limits need adjustment." Instrumentation altitude: the driver, not the application — every query's warn events flow through one callback.
-
Layered-composition framing — the full canonical wire-up composes all five tag axes in one middleware stack:

```go
var handler http.Handler = mux
handler = SQLTagMiddleware(handler)        // Pattern 2: route
handler = AuthMiddleware(s.users, handler) // Pattern 4: tier
jobDB, _ := newJobDB(dsn)                  // Pattern 5: jobs
// DEPLOYMENT_TAG env var set in deployment manifest (Pattern 3)
// dedicated Postgres role username per service      (Pattern 1)
```

Canonical new concepts/composable-tag-axes: five orthogonal axes (service / route / deployment / tier / workload), ANDed at enforcement time. "A budget on tier='free' covers all free-tier traffic regardless of route. A budget on route='api-export' AND tier='free' covers a specific combination. Multiple matching budgets all apply simultaneously and queries must satisfy every budget they match. You can build layered policies without complicated rule logic."
-
The load-bearing adoption discipline: "Start in Warn mode, observe which budgets would fire during normal load, tighten the limits until only pathological cases trigger violations, then switch to Enforce." Same four-step flow as the 2026-03-31 graceful-degradation post (comment → warn → monitor → enforce), but now with the Go-side telemetry-collection mechanism (OnNotice log → metric → dashboard) explicitly specified. "The difference between a database outage and a degraded experience often comes down to whether you've decided in advance which traffic to shed. Traffic Control makes that decision explicit and configurable instead of leaving it to whichever query happens to win a resource race."
Systems extracted¶
- PlanetScale Traffic Control — third-framing Go implementation guide. Same three dials (server share + burst, per-query limit, max concurrent workers) applied to five tag axes. Canonical datum: the Go-side of the driver + middleware wire-up is where the five axes compose.
- PlanetScale Insights — the extension that parses SQLCommenter tags and emits the SQLSTATE 53000 error / [PGINSIGHTS] notice. The post clarifies that both signals come from the same extension layer — Enforce as errors, Warn as notices — different wire-protocol frames.
- PostgreSQL — substrate; NoticeResponse (Warn channel) + ErrorResponse SQLSTATE 53000 (Enforce channel) + application_name + username as driver-set tag axes independent of application-level tagging.
- github.com/jackc/pgx/v5 — the canonical Go Postgres driver referenced by the post. pgx.ParseConfig + config.OnNotice + pgconn.PgError are the driver-specific integration points. (Not canonicalised as a new system page — narrow scope, no wiki-worthy architectural disclosure beyond the Traffic Control integration datum.)
Concepts extracted¶
- concepts/sqlcommenter-query-tagging — the standard that underpins every tag axis. This post is the canonical Go-specific implementation of the standard (previously the standard was canonicalised via Rails implementations).
- concepts/context-propagated-sql-tags — canonical new concept. Go's idiomatic alternative to ORM-middleware for threading per-request context through to SQL comment tags.
- concepts/sqlstate-53000-traffic-control-error — canonical new concept. Enforce-mode error class with the [PGINSIGHTS] Traffic Control: prefix, detected via errors.As on *pgconn.PgError.
- concepts/postgres-notice-warning-channel — canonical new concept. Warn-mode observability delivered in-band via Postgres NoticeResponse frames, caught by the driver's OnNotice hook, orthogonal to SQL result rows.
- concepts/composable-tag-axes — canonical new concept. Orthogonal tag dimensions AND-ed at enforcement time; the design principle that lets five separate patterns coexist on the same budget-rule engine.
- concepts/warn-mode-vs-enforce-mode — already canonical from the graceful-degradation post; this post adds the Go-specific implementation of the Warn-mode telemetry-collection loop via OnNotice.
- concepts/graceful-degradation — the user-facing outcome of the Enforce-mode shedding ("The difference between a database outage and a degraded experience...").
- concepts/query-priority-classification — the prior post's canonical three-tier scheme; this post extends with non-priority axes (service / route / deployment / workload) that are orthogonal to but compose with priority.
Patterns extracted¶
- patterns/workload-class-resource-budget — the overarching pattern, now with Go-side implementation depth.
- patterns/shed-low-priority-under-load — the spike-survival specialisation; this post adds caller-retry depth.
- patterns/context-threaded-sql-tag-propagation — canonical new pattern. Go context.Context as the per-request storage for tags, with wrapper QueryContext / ExecContext methods rendering at query time. Generalises beyond Go to any language lacking ORM-middleware idioms (Rust's request extensions, Python's contextvars, etc.).
- patterns/route-tagged-query-isolation — canonical new pattern. HTTP-middleware-injected route tag as the primary per-endpoint budget axis.
- patterns/tier-tagged-query-isolation — canonical new pattern. Auth-middleware-injected tier tag for SaaS plan-based isolation (free / pro / enterprise).
- patterns/dedicated-application-name-per-workload — canonical new pattern. Distinct Postgres-level application_name per connection pool (web / background-jobs / script-<name>) as a driver-side tag axis that works without application-level SQLCommenter.
- patterns/query-comment-tag-propagation-via-orm — the sibling pattern canonicalised via the Rails post; this post canonicalises the framework-less Go variant alongside it.
Operational numbers¶
- Retry-backoff budget: maxRetries = 3, initial 100ms, exponential doubling (100ms → 200ms → 400ms).
- SetMaxOpenConns(4) for the background-job pool — a cap of 4 simultaneous connections as an application-layer concurrency ceiling orthogonal to Traffic Control's dial.
- No adoption numbers, no production telemetry, no customer case study, no measured latency impact from tag rendering.
- Pattern 7's Warn-mode observation window: "a few hours of representative traffic" (qualitative; no specific hour count).
Caveats¶
- Pedagogical / how-to voice. Josh Brown is a developer-advocacy byline, not one of PlanetScale's canonical deep-internals authors. The post is genuinely architectural (five new patterns + two new mechanisms), but the voice is implementation-how-to rather than production-retrospective.
- No Go-driver-specific benchmarking of tag-rendering cost. The appendTags call path runs on every query; for very high-QPS call sites, the sort.Strings + fmt.Sprintf + url.QueryEscape loop could add measurable overhead. Not quantified.
- Plan-cache sensitivity to comment bytes not discussed deeply. sort.Strings gets deterministic ordering for the same tag set, but if a middleware adds / removes tags conditionally per-request, the tag set itself varies — still cache-miss-prone. Also depends on whether Postgres normalises over comments in pg_stat_statements (default: yes) or whether the extension in use preserves them.
- URL-encoding choice. url.QueryEscape is used for values — it handles values like route='/api/export' containing /, but means consumers reading the tags (e.g. SQL log analysers) must URL-decode. Not obvious from the raw SQL text.
- Retry logic doesn't address thundering-herd risk. A spike that trips Enforce on every client simultaneously leads to synchronised retries — 3 × doubling = 100ms + 200ms + 400ms = a 700ms window of retry traffic. No jitter in the canonical shape (the operator is presumably expected to add it).
- DEPLOYMENT_TAG pattern coverage is all-or-nothing. The env var is set at startup — pods started with it carry the tag, pods without it don't. A rolling deploy produces a mixed fleet where half the pods carry the tag. That's the point of the pattern (observe only new pods' behaviour), but it implies the rule matcher must use equality, not prefix.
- SaaS tier axis assumes a single-tenant subscription model. A workspace with multiple users at different tiers would complicate the tag assignment. Not addressed.
- Background-jobs SetMaxOpenConns(4) datum is example-specific. The right number for a different application depends on job profile, database size, and concurrent-job cadence. Not derived.
- No Warn-mode logging-volume discussion. If a budget is set too tight in Warn mode, every query emits a notice — log volume can explode. The post suggests aggregation ("log these notices to build an accurate picture") but doesn't specify the pipeline.
- pgx/v5-specific. The OnNotice hook is a pgx feature; the standard database/sql interface does not expose notice handlers directly — other Go drivers would need their own integration path.
Source¶
- Original: https://planetscale.com/blog/patterns-for-postgres-traffic-control
- Raw markdown:
raw/planetscale/2026-04-21-patterns-for-postgres-traffic-control-dc1835de.md
Related¶
- companies/planetscale
- systems/planetscale-traffic-control
- systems/planetscale-insights
- systems/planetscale-for-postgres
- systems/postgresql
- concepts/sqlcommenter-query-tagging
- concepts/context-propagated-sql-tags
- concepts/sqlstate-53000-traffic-control-error
- concepts/postgres-notice-warning-channel
- concepts/composable-tag-axes
- concepts/warn-mode-vs-enforce-mode
- concepts/graceful-degradation
- concepts/query-priority-classification
- patterns/workload-class-resource-budget
- patterns/shed-low-priority-under-load
- patterns/context-threaded-sql-tag-propagation
- patterns/route-tagged-query-isolation
- patterns/tier-tagged-query-isolation
- patterns/dedicated-application-name-per-workload
- patterns/query-comment-tag-propagation-via-orm