
PATTERN

Database as data router

Problem

During a zero-downtime migration, the operator wants to validate the application against the destination database before committing to it as primary. The destination has to prove it works end-to-end — correct query results, correct latency, correct error behaviour — against real production traffic. But cutting the application over to the destination as authoritative writer is the irreversible step the operator is trying to de-risk. So validation must happen while the source is still authoritative.

The classic response is dual writes: the application writes to both databases. This has well-known failure modes: the application code has to change, consistency between the two stores is the application's problem, partial-failure semantics are hard, rollback requires another code change. "Fragile application-level migration mechanisms like dual writes." (Source: sources/2026-04-21-planetscale-bring-your-data-to-planetscale.)

Solution

Make the destination database a wire-protocol-level proxy that transparently forwards writes back to the source while serving reads locally. The application connects to the destination, but writes traverse the destination → source link over the existing replication channel and are then replicated back into the destination on the normal catch-up path. Reads are served from the destination's local state. "Your new PlanetScale database is also acting as a 'data router'. Queries will be served by PlanetScale with writes transparently routed to your existing database, and then replicated back to PlanetScale." (Source: sources/2026-04-21-planetscale-bring-your-data-to-planetscale.)
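The routing split can be sketched as a toy statement classifier at the proxy. This is an illustrative sketch with made-up names — the real implementations (VTGate, PlanetScale's edge) terminate the MySQL wire protocol and are far more involved:

```python
# Toy sketch of the "data router" split: classify each statement at the
# proxy, serve reads from the destination's local state, and forward
# writes to the still-authoritative source. All names are illustrative.

WRITE_PREFIXES = ("insert", "update", "delete", "replace",
                  "create", "alter", "drop", "truncate")

def classify(sql: str) -> str:
    """Return 'write' for mutating statements, 'read' otherwise."""
    head = sql.lstrip().split(None, 1)[0].lower()
    return "write" if head in WRITE_PREFIXES else "read"

def route(sql, local_backend, source_backend, primary_mode=False):
    """Validation phase (primary_mode=False): reads are served locally,
    writes are forwarded to the source. After cutover
    (primary_mode=True): everything is served locally."""
    if primary_mode or classify(sql) == "read":
        return local_backend(sql)
    return source_backend(sql)
```

A real router additionally has to handle transactions, prepared statements, and per-session state, which is why this logic lives at the wire-protocol layer rather than in application code.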

Cutover becomes a direction reversal: the routing rules flip so writes are now served locally on the destination, and the source is kept in sync via reverse replication in case rollback is needed.
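Cutover-as-direction-reversal can be modelled as a tiny reversible state machine. This is an illustrative sketch, not any product's API:

```python
from dataclasses import dataclass

# Illustrative model of cutover as a reversible direction flip: the
# authoritative write target and the replication direction swap, and
# rollback is the same flip applied the other way.

@dataclass
class MigrationState:
    write_target: str = "source"                    # authoritative writer
    replication: tuple = ("source", "destination")  # (from, to)

    def cutover(self) -> None:
        # Destination becomes authoritative; reverse replication keeps
        # the source in sync as a rollback target.
        self.write_target = "destination"
        self.replication = ("destination", "source")

    def rollback(self) -> None:
        # Symmetric flip back: the source never fell behind, so it is
        # immediately a valid primary again.
        self.write_target = "source"
        self.replication = ("source", "destination")
```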

Properties bought:

  • Application doesn't change. The connection string is the only thing that changes — the application's SQL, its transactions, and its error handling all stay the same. Contrast: dual writes require application code changes.
  • Consistency guarantees stay single-master. The source is the authoritative write target during the validation phase. There's no window where both databases accept independent writes.
  • Reads are validated against the destination directly. Every read query the application runs is served from the destination's local state, proving the destination's read path end-to-end.
  • Rollback is free. Because the source keeps receiving writes during validation, and reverse-replicated writes after cutover, it is always a valid fallback target.

Required substrate

  • Query-aware proxy at the destination that can terminate the database wire protocol, classify statements as read / write, and route them to different backends. Vitess's VTGate + routing rules is the canonical implementation.
  • Replication channel both ways between source and destination. During validation, source → destination carries normal CDC; during post-cutover, destination → source carries reverse-replication for rollback.
  • An unmanaged-tablet or equivalent primitive that lets the destination attach to the external source without requiring the source to be rehomed into the destination's control plane. See systems/vitess-unmanaged-tablet.
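In the Vitess-based implementation, the query-aware proxy's behaviour is driven by routing rules, whose documented JSON shape is roughly as follows (the keyspace and table names here are made up for illustration):

```json
{
  "rules": [
    {
      "from_table": "app_keyspace.orders",
      "to_tables": ["source_keyspace.orders"]
    }
  ]
}
```

During validation, rules like this keep traffic pointed at the source keyspace; cutover rewrites them to point at the destination keyspace.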

Distinct from routing-rule-swap cutover

The sister pattern patterns/routing-rule-swap-cutover — canonicalised by the 2026-02-16 Matt Lord post — describes what happens at the moment of cutover in the MoveTables SwitchTraffic design: the application has been connected to Vitess the whole time, writes have been going to the source keyspace via routing rules, and cutover is an atomic, sub-second flip to the target keyspace (with query buffering covering the flip). There is no explicit bidirectional validation phase; cutover is an instant.

The database-as-data-router pattern (this page) is what happens during the validation phase: the application is connected to the destination for some extended interval (minutes to days) while writes are still being forwarded back to the source. The operator drives a manual cutover ("Enable primary mode") when they're satisfied.

Both patterns are built on the same primitives (routing rules + VReplication + reverse replication); they differ in the cutover ceremony:

  • Routing-rule-swap: operator starts the migration, then SwitchTraffic performs the flip atomically. Validation is pre-cutover by other means (VDiff, read-only traffic on replicas, shadow traffic).
  • Database-as-data-router: operator starts the migration, then the application's real traffic validates the destination in bidirectional mode; operator clicks a button to reverse the direction; rollback is another click.

Both shapes coexist in the PlanetScale product line — the 2021 post describes the database-as-data-router shape; the 2026 post describes the routing-rule-swap shape with VDiff substituting for live-traffic validation.

Seen in

  • sources/2026-04-21-planetscale-bring-your-data-to-planetscale — canonical wiki instance. Phani Raju (2021) describes the full lifecycle: (1) validation phase with destination-as-data-router, writes transparently forwarded back to source over replication channel; (2) "Enable primary mode" reverses the routing so destination is now authoritative, source kept in sync via reverse-replication for rollback; (3) "Detach external database" tears down the connection. The post explicitly contrasts the pattern with dual writes: "This step of routing your application's reads and writes through PlanetScale allows you to safely validate that your application is fully compatible with PlanetScale without taking downtime and without fragile application-level migration mechanisms like dual writes."