
Per-region read-replica routing

Pattern

When a globally-deployed application has regional read replicas in several regions, have the application choose which replica region to read from based on where the application pod itself is currently deployed — typically via an environment variable set at deploy time.

"In this example, we have our connection details stored in Rails credentials. Our application has a region environment variable. We check this variable and connect to the closest DB region. When in Frankfurt, we use our Frankfurt region. When in São Paulo, São Paulo region. If no specific region exists, we'll connect to the primary." (Source: sources/2026-04-21-planetscale-introducing-planetscale-portals-read-only-regions.)

This is the application-layer routing choice — each pod knows its own deployment region and uses a lookup table to pick the correct read-replica connection string. Writes go to the single writing-region endpoint regardless.

When to apply

  • Multi-region app deployment + per-region read replicas available. E.g. Rails on Fly.io in Frankfurt + São Paulo, with PlanetScale Portals regions in both.
  • 2022-era / pre-Global-Network PlanetScale, or equivalent multi-endpoint replica model. Each replica region exposes its own MySQL endpoint; routing is the app's job.
  • Framework supports multi-database config. Rails multiple databases, Django DATABASE_ROUTERS, Laravel multi-connection, etc. Plain-driver apps can implement the same shape directly.
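For a plain-driver app, the same shape is just a hash from deploy region to connection parameters, with the primary as the fallback. A minimal Ruby sketch — the region codes, hostnames, and the `read_connection_params` helper are illustrative assumptions, not from the source:

```ruby
# Map each deploy region to its replica's connection parameters.
# Region codes and hosts here are hypothetical examples.
REPLICA_PARAMS = {
  "fra" => { host: "fra.replica.example-db.com", username: "fra_user" },
  "gru" => { host: "gru.replica.example-db.com", username: "gru_user" },
}.freeze

PRIMARY_PARAMS = { host: "primary.example-db.com", username: "primary_user" }.freeze

# Pick the closest replica for reads; unknown or missing regions
# fall back to the primary (farther away, but correct).
def read_connection_params(region)
  REPLICA_PARAMS.fetch(region, PRIMARY_PARAMS)
end

# At boot, roughly:
#   client = Mysql2::Client.new(**read_connection_params(ENV["APP_REGION"]))
```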

When not to apply: if the DB vendor offers a single global credential that handles per-query latency-based routing itself — e.g. 2024+ PlanetScale Global Network, or AWS Aurora with RDS Proxy and global endpoints — prefer that over hand-rolled per-region routing. The platform's health-checking and failover are usually better than an env-var lookup.

Rails invocation

config/database.yml defines two connections per environment:

production:
  primary:
    <<: *default
    username: <%= Rails.application.credentials.planetscale&.fetch(:username) %>
    password: <%= Rails.application.credentials.planetscale&.fetch(:password) %>
    database: <%= Rails.application.credentials.planetscale&.fetch(:database) %>
    host:     <%= Rails.application.credentials.planetscale&.fetch(:host) %>
    ssl_mode: verify_identity
  primary_replica:
    <<: *default
    username: <%= db_replica_creds.fetch(:username) %>
    password: <%= db_replica_creds.fetch(:password) %>
    database: <%= db_replica_creds.fetch(:database) %>
    host:     <%= db_replica_creds.fetch(:host) %>
    ssl_mode: <%= Trilogy::SSL_VERIFY_IDENTITY %>
    replica:  true

db_replica_creds picks the right per-region credential:

region = ENV["APP_REGION"]
region_replica_mapping = {
  "fra" => Rails.application.credentials.planetscale_fra,
  "gru" => Rails.application.credentials.planetscale_gru,
}
db_replica_creds = region_replica_mapping[region] ||
                   Rails.application.credentials.planetscale

Falling back to the primary credential when APP_REGION doesn't match a known replica region is a safe default: that region's reads go to the primary (farther away, but still correct).

ApplicationRecord opts into automatic role switching:

class ApplicationRecord < ActiveRecord::Base
  primary_abstract_class
  connects_to database: { writing: :primary, reading: :primary_replica }
end

Reads default to the per-region replica, writes go to primary.
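With connects_to in place, Rails 6+ can also switch roles automatically per request: GET/HEAD requests use the reading role, and a session that just wrote is pinned to the primary for a configured window. A sketch of the standard middleware configuration (a fragment of config/environments/production.rb; the 2-second delay is an assumed value, not from the source):

```ruby
# config/environments/production.rb (fragment, inside Rails.application.configure)
# GET/HEAD requests go to the :reading role unless this session wrote
# within the last `delay` window — a basic read-your-writes guard.
config.active_record.database_selector = { delay: 2.seconds }
config.active_record.database_resolver =
  ActiveRecord::Middleware::DatabaseSelector::Resolver
config.active_record.database_resolver_context =
  ActiveRecord::Middleware::DatabaseSelector::Resolver::Session

# Manual override is always available, e.g.:
#   ActiveRecord::Base.connected_to(role: :writing) { Post.count }
```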

Architectural costs & constraints

  • N credentials to rotate. Every replica region is a separate credential. Secrets management (Rails credentials, HashiCorp Vault, AWS Secrets Manager) has to carry all N of them.
  • Deploy-time region binding. If the APP_REGION env var is wrong or missing, the pod either reads from primary (acceptable fallback) or from the wrong region's replica (higher latency — bad, silent). Monitor it.
  • No automatic failover between regions. If the local replica region is unhealthy, the app either keeps trying it and failing, or needs a circuit-breaker to fall back to another region / primary. The per-region-credential model doesn't solve this — the Global Network model's latency-sorted cluster list does.
  • Read-your-writes across regions. Routing reads to a distant replica amplifies the RYW problem: replication lag is larger cross-region. Pair with patterns/session-cookie-for-read-your-writes and size the window (Δ) to cover cross-region p99 lag.
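The last point can be sketched as a timestamp check: after a write, record when it happened (e.g. in a session cookie); route that session's reads to the primary until Δ has elapsed. Pure-Ruby illustration — the 5-second Δ and the helper name are assumptions, not from the source:

```ruby
# Choose the Active Record role for a read: use the replica (:reading)
# only once the session's last write is older than DELTA.
DELTA = 5.0 # seconds; size to cover cross-region p99 replication lag

def role_for_read(last_write_at, now = Time.now)
  return :reading if last_write_at.nil?       # session never wrote
  (now - last_write_at) > DELTA ? :reading : :writing
end
```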

Measured benefit

From the launch post: Frankfurt app → Frankfurt replica drops per-query latency from ~90 ms (cross-ocean to NoVa primary) to ~3 ms. 30× delta per query, multiplied by number of queries per request. See concepts/edge-to-origin-database-latency and concepts/regional-read-replica for the broader framing.
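The per-request impact compounds with query count. A back-of-the-envelope using the launch-post latencies (the 10-queries-per-request figure is an illustrative assumption):

```ruby
# Launch-post per-query latencies: cross-ocean read vs. local-replica read.
CROSS_REGION_MS  = 90
LOCAL_REPLICA_MS = 3

# Sequential queries: request-level DB time scales linearly with count.
def request_db_time_ms(queries, per_query_ms)
  queries * per_query_ms
end

request_db_time_ms(10, CROSS_REGION_MS)  # => 900 (ms)
request_db_time_ms(10, LOCAL_REPLICA_MS) # => 30 (ms)
```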

Evolution

The 2022 per-region-credential shape is the "v1" of multi-region read routing. Evolution paths:

  1. v2: single global credential + CDN-shaped edge. 2024+ PlanetScale Global Network + global replica credentials: one credential, edge terminates the MySQL connection, routes to the lowest-latency replica automatically. App no longer needs region_replica_mapping.
  2. v2 (competitor): network-layer latency routing. AWS Aurora Global Database + RDS Proxy + latency-based Route 53 records — the DNS/anycast layer picks the endpoint.
  3. Edge-side caching proxy: e.g. Cloudflare Hyperdrive — keep the central DB but cache query results at the edge. Complementary, not a replacement.
