PATTERN
Protocol-compatible drop-in proxy¶
Build the proxy tier so it speaks the native wire protocol of the backend it fronts (Redis RESP, MySQL, Postgres, HTTP, gRPC, etc.) — so applications migrate with a one-line endpoint configuration change and no code / client-library / protocol churn. The proxy imitates the backend closely enough (including cluster-topology semantics) that client-side cluster-awareness, TLS, and connection parameters all continue to work.
Core shape¶
- Proxy speaks the backend's wire protocol unmodified to inbound clients.
- Shim layers in the RPC frontend emulate as many backend topological affordances as needed — e.g. a cluster-mode emulation layer that exposes the proxy as a fake cluster to cluster-aware clients.
- First-party client wrappers (optional but recommended) thinly wrap existing OSS clients — interface-compatible — pinning observability + configuration defaults without demanding application rewrites.
- Client migration = endpoint DNS change or flag flip, not a code change. Reversible with a feature flag.
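The "speaks the native wire protocol unmodified" requirement can be made concrete with RESP, the Redis wire format named above. A minimal sketch of encoding a command and decoding the two simplest reply kinds (illustrative helper names; a real frontend such as the respc realization handles the full protocol):

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings,
    byte-for-byte what a native Redis client sends."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

def decode_simple_reply(raw: bytes) -> str:
    """Decode simple-string (+OK) and error (-ERR ...) replies,
    which a drop-in proxy must pass through unmodified."""
    kind, body = raw[:1], raw[1:].rstrip(b"\r\n").decode()
    if kind == b"+":
        return body
    if kind == b"-":
        raise RuntimeError(body)
    raise ValueError(f"unhandled RESP type: {kind!r}")
```

Because the proxy emits exactly these bytes, an unmodified client library cannot tell it apart from the backend: `encode_resp("SET", "k", "v")` produces `b"*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$1\r\nv\r\n"`, the same frame Redis itself expects.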
Why it matters¶
A protocol-incompatible proxy (or a brand-new proprietary client library) forces every application team to:
- Adopt a new client library.
- Learn a new protocol.
- Relearn the operational semantics their teams have already internalized.
- Rewrite connection-management, retry, and error-handling code.
- Requalify every test path.
The cost is enormous at scale and the rollout is fundamentally non-gradual (a service is either on the old path or the new one). Speaking the backend's native protocol collapses this to the lowest-friction change possible: "point here instead." Feature-flag-gated runtime reversal becomes trivial.
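Because migration is just "point here instead," the cutover can live behind an ordinary feature flag. A minimal sketch, with flag and endpoint names that are illustrative rather than from the source:

```python
# Illustrative endpoints and flag name -- not from the source.
LEGACY_ENDPOINT = "redis.internal:6379"
PROXY_ENDPOINT = "cache-proxy.internal:6379"

def cache_endpoint(flags: dict, service: str) -> str:
    """Pick the cache endpoint per service. Flipping the flag back
    reverts the service at runtime -- no code deploy either way."""
    if flags.get(f"use_cache_proxy:{service}", False):
        return PROXY_ENDPOINT
    return LEGACY_ENDPOINT
```

The reversal property falls out for free: the flag store is the only state, so rolling back a misbehaving service is one flag flip, not a binary deploy.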
Enablers¶
- Protocol-aware RPC framework with structured arguments. Opaque byte shoveling is insufficient — the proxy needs to understand every command's shape to multiplex safely, route by key, enforce guardrails, and intercept custom commands. systems/respc is one realization for RESP.
- Cluster-mode emulation shims. Where the backend exposes cluster-topology metadata to clients, the proxy must answer those queries with a self-describing cluster (its own fleet as the pretend-cluster). This handles the mix of cluster-aware and cluster-unaware clients uniformly.
- Parameter heterogeneity absorption. Different clients use different TLS configs, different connection-param sets, different command-pipelining styles. The proxy's RPC frontend tolerates all of them — no "configure your client specially to talk to the proxy" step.
- Transparent failure-mode handling. Backend topology changes (node failovers, cluster scaling) should not surface to the client as novel error codes; the proxy handles retries / connection rebalancing internally.
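The cluster-mode emulation shim above can be sketched concretely: when a cluster-aware client asks for topology (e.g. Redis's `CLUSTER SLOTS`), the proxy answers with its own fleet as the pretend-cluster. A hypothetical sketch, not the actual implementation:

```python
def fake_cluster_slots(proxy_nodes: list) -> list:
    """Answer a CLUSTER SLOTS-style topology query with the proxy's
    own fleet, splitting the 16384-hash-slot space evenly so every
    cluster-aware client routes all keys to some proxy node."""
    total_slots = 16384
    per_node = total_slots // len(proxy_nodes)
    table = []
    for i, (host, port) in enumerate(proxy_nodes):
        start = i * per_node
        end = total_slots - 1 if i == len(proxy_nodes) - 1 else start + per_node - 1
        # [start_slot, end_slot, [master_host, master_port]] mirrors
        # the reply shape cluster clients already know how to parse.
        table.append([start, end, [host, port]])
    return table
```

The point is that the client's existing slot-routing machinery keeps working unmodified; it simply routes every slot to a proxy node instead of a real shard.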
Trade-offs¶
- Protocol imitation is work. Every backend-specific quirk (cluster topology, error semantics, edge-case protocol responses) has to be handled faithfully; non-obvious divergence creates subtle production incidents.
- The proxy lags new protocol features shipped upstream until someone implements them in the proxy.
- Pipelining / transaction semantics must be preserved exactly — the proxy can't quietly serialize what clients expect to pipeline (or vice versa) without changing correctness.
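Preserving pipelining semantics concretely means replies must come back in request order even when the proxy dispatches commands concurrently. A minimal order-preserving sketch (hypothetical; real multiplexers track in-flight replies per backend connection):

```python
from concurrent.futures import ThreadPoolExecutor

def pipeline(commands, send):
    """Dispatch pipelined commands concurrently but return replies
    strictly in request order, so clients observe the same semantics
    the backend itself would give them."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(send, cmd) for cmd in commands]
        # Collect in submission order, not completion order.
        return [f.result() for f in futures]
```

A proxy that returned replies in completion order instead would silently change correctness for any client that pairs pipelined requests with replies positionally.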
Sequence in a rollout¶
- Productionize the proxy independently of application migration.
- Roll out first-party client wrappers while still pointing at the original backend endpoints — earns uniform observability + vetted config without touching the proxy path yet.
- Cut services over to the proxy endpoint per-service, per-workload-domain, feature-flag-gated, reversible at runtime.
- For large workloads (main API service), never cut all traffic in one step — shift incrementally across multiple independent domains.
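The "never all in one step" rule can be sketched as deterministic percentage-based routing per workload domain. Domain names and the rollout table below are illustrative, not from the source:

```python
import hashlib

def routes_to_proxy(domain: str, request_key: str, rollout: dict) -> bool:
    """Deterministically shift a configurable percentage of each
    workload domain to the proxy. A stable hash of the request key
    means any given key always takes the same path mid-rollout."""
    pct = rollout.get(domain, 0)
    bucket = int(hashlib.sha256(request_key.encode()).hexdigest(), 16) % 100
    return bucket < pct

# Illustrative rollout state: each domain shifts independently.
rollout = {"sessions": 100, "feature-flags": 25, "main-api": 5}
```

Ratcheting a domain's percentage up (or back down) is a flag change, which is what makes the per-domain, incremental shift reversible at runtime.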
Seen in¶
- sources/2026-04-21-figma-figcache-next-generation-data-caching-platform — FigCache is the canonical instance. Article names it explicitly: "We designed FigCache to be a drop-in Redis replacement for applications, transparently handling responsibilities of connection pooling, traffic routing, and observability. In the simplest case, migrating an application to FigCache was as trivial as a one-line endpoint configuration change." A Redis Cluster mode emulation layer exposes the proxy as a fake cluster to cluster-aware clients; first-party wrappers over existing OSS Redis clients in Go / Ruby / TypeScript preserve interface compatibility; feature flags gate the rollout for runtime reversibility without binary deploys.
Related¶
- patterns/caching-proxy-tier — the architectural pattern whose gradual, reversible rollout this integration pattern unlocks
- concepts/connection-multiplexing — the primary reason the proxy exists; the drop-in shape matters because it de-risks the rollout
- systems/figcache — canonical instance
- systems/respc — the structured-RPC substrate that makes a RESP proxy more than a byte shoveler
- systems/redis — the backend most commonly fronted this way