
PATTERN Cited by 1 source

Tagged storage routing

Shape

Tagged storage routing dispatches each request for a named piece of data (configuration, cache entry, blob) to the storage backend best suited to that data's access pattern — using a prefix embedded in the request key as the routing tag. One logical storage API; N physical backends; routing is O(1) on a static prefix → strategy map.

The AWS multi-tenant-config realization:

request.key = "tenant_config_acme-corp:payment-gateway"
ConfigStrategyFactory.resolve(key)
match key.prefix:
  "tenant_config_" → DynamoDBStrategy       (per-tenant, high-frequency)
  "param_config_"  → ParameterStoreStrategy (shared, hierarchical)
  "secret_config_" → SecretsManagerStrategy (sensitive, rotated)  [future]
  "blob_config_"   → S3Strategy             (large blobs)         [future]
strategy.get(key)

The factory examines the prefix and returns the appropriate ConfigStrategy implementation (common interface across backends). Adding a new backend requires a new class and a new entry in the keyStrategyMap — no changes to existing strategies or to calling code. (Source: sources/2026-04-08-aws-build-a-multi-tenant-configuration-system-with-tagged-storage-patterns §B)
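The factory shape above can be sketched in a few lines. This is a minimal illustration, not the source's implementation: the class names follow the pseudocode, but the prefix-extraction rule (first two underscore-delimited segments) and the method bodies are assumptions.

```python
class DynamoDBStrategy:
    def get(self, key):
        return f"dynamo:{key}"          # stand-in for a DynamoDB read

class ParameterStoreStrategy:
    def get(self, key):
        return f"ssm:{key}"             # stand-in for a Parameter Store read

class ConfigStrategyFactory:
    # Static prefix -> strategy map: O(1) lookup, one new entry per backend.
    _key_strategy_map = {
        "tenant_config_": DynamoDBStrategy(),
        "param_config_": ParameterStoreStrategy(),
    }

    @classmethod
    def resolve(cls, key):
        # Assumed convention: the routing tag is the first two
        # underscore-delimited segments, e.g. "tenant_config_".
        parts = key.split("_", 2)
        prefix = "_".join(parts[:2]) + "_"
        try:
            return cls._key_strategy_map[prefix]
        except KeyError:
            raise KeyError(f"no strategy registered for prefix {prefix!r}")

strategy = ConfigStrategyFactory.resolve("tenant_config_acme-corp:payment-gateway")
```

Because the map is static, adding a backend really is one class plus one dictionary entry; `resolve` and its callers are untouched.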

Why the key carries the tag

Putting the routing tag in the key itself, not as a separate argument, buys three properties:

  1. Self-describing requests. Anyone reading the key knows which backend owns that data — no side-table lookup, no convention doc to keep current. tenant_config_foo unambiguously routes to DynamoDB.
  2. Decoupled producers and consumers. Producers who write keys don't need to know the routing table; they just write keys with the right prefix. Consumers resolve at read time.
  3. Storage-migration affordance. To move a data class from Parameter Store to Secrets Manager, change one entry in the routing map and (optionally) rename affected keys — no business logic touched.
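The migration affordance in property 3 fits in a single line. A toy illustration, with backend names as plain strings and a hypothetical `secret_config_` class that starts life in Parameter Store:

```python
routing_map = {
    "tenant_config_": "DynamoDB",
    "param_config_": "ParameterStore",
    "secret_config_": "ParameterStore",   # before migration
}

# Migrating the secret_config_ data class is one map entry; no producer
# or consumer code changes, only (optionally) a key rename sweep.
routing_map["secret_config_"] = "SecretsManager"
```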

Why different backends for different access patterns

The forcing function is that single-backend configuration services lose on one side of the cost/performance curve:

  • DynamoDB for shared rarely-changing config → expensive (pay per read of data that rarely changes) and loses hierarchical organization.
  • Parameter Store for high-frequency per-tenant reads → throttles on account-level API limits and loses composite-key query shape.

The routing split lets each data class live on the backend whose cost model and access semantics match its real workload:

Prefix           Backend          Access shape                     Cost shape
tenant_config_*  DynamoDB         Single-key per-tenant, high RPS  Per-request
param_config_*   Parameter Store  Hierarchical, bulk init          Per-API-call, cheap

Sibling pattern: Strategy Pattern / polymorphic dispatch

This is the Strategy design pattern applied to storage: a shared ConfigStrategy interface (e.g., get(key) / set(key, value) / watch(key, callback)) with per-backend implementations. The factory picks the strategy; callers never branch on backend identity. (Source §B)
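The shared interface can be sketched as an abstract base class. The `get`/`set`/`watch` shape comes from the example signatures in the text; the in-memory implementation is a toy added here to show that every backend presents the same surface.

```python
from abc import ABC, abstractmethod
from typing import Callable

class ConfigStrategy(ABC):
    """Common interface across backends; callers never branch on backend."""

    @abstractmethod
    def get(self, key: str) -> str: ...

    @abstractmethod
    def set(self, key: str, value: str) -> None: ...

    @abstractmethod
    def watch(self, key: str, callback: Callable[[str], None]) -> None: ...

class InMemoryStrategy(ConfigStrategy):
    """Toy backend: real implementations would wrap DynamoDB, SSM, etc."""

    def __init__(self):
        self._data = {}
        self._watchers = {}

    def get(self, key):
        return self._data[key]

    def set(self, key, value):
        self._data[key] = value
        for cb in self._watchers.get(key, []):
            cb(value)                 # notify watchers on change

    def watch(self, key, callback):
        self._watchers.setdefault(key, []).append(callback)
```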

Without the pattern, naïve storage-routing code looks like:

if key.startswith("tenant_config_"):  dynamo.get(...)
elif key.startswith("param_config_"): ssm.get(...)
elif ...

which couples storage-dispatch to every call site, forces every new backend through a rewrite of the same conditional, and makes per-backend testing painful (mocking requires tearing through the conditional chain).
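With the factory in place, by contrast, per-backend testing is a map substitution: register a fake strategy under the prefix and exercise the calling code with no conditional chain to tear through. A sketch with hypothetical names:

```python
class FakeStrategy:
    """Test double: records every key it is asked for, returns a canned value."""

    def __init__(self):
        self.calls = []

    def get(self, key):
        self.calls.append(key)
        return "stub-value"

# Swap the real backend out under the same prefix:
key_strategy_map = {"tenant_config_": FakeStrategy()}

def resolve(key):
    for prefix, strategy in key_strategy_map.items():
        if key.startswith(prefix):
            return strategy
    raise KeyError(key)

key = "tenant_config_acme-corp:payment-gateway"
value = resolve(key).get(key)
```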

Not a new idea — what's new here

Prefix-based routing appears across the wiki as a composable primitive rather than a novel invention:

The AWS multi-tenant-config source's contribution is to consolidate this primitive specifically for configuration services, where the two-backend split (high-frequency per-tenant ↔ shared hierarchical) has a canonical pair (DynamoDB ↔ Parameter Store) and a canonical event-driven refresh story (patterns/event-driven-config-refresh).

Implementation checklist

  1. Define a common Strategy interface covering every operation the calling layer needs: get, set, watch, delete, bulk versions as needed.
  2. Implement one strategy per backend. Keep each implementation focused on its backend's shape — don't force a lowest-common-denominator API.
  3. Build a factory that maps prefix → strategy via a static map (or IoC container). Avoid dynamic dispatch cost on the hot path.
  4. Prefix discipline: pick short, stable prefixes; document them once; never reuse a prefix for a different backend. Prefixes become part of the public key contract.
  5. Per-strategy caching policy: each backend has a different freshness/latency/cost shape, so the cache TTL + refresh strategy should be per-strategy — high-frequency reads get short application-level TTLs; shared config uses event-driven invalidation (see patterns/event-driven-config-refresh).
  6. Observability per-strategy: emit a tag on metrics/traces identifying the strategy so you can see latency/error/cost breakdown per backend, which is the single most useful slice.
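Checklist item 5 can be sketched as a wrapper that gives each strategy its own freshness policy. The TTL values and class names below are illustrative assumptions, not from the source:

```python
import time

class TTLCachingStrategy:
    """Wraps any strategy with its own TTL; policy is per-strategy, not global."""

    def __init__(self, inner, ttl_seconds):
        self.inner = inner
        self.ttl = ttl_seconds
        self._cache = {}              # key -> (value, fetched_at)

    def get(self, key):
        hit = self._cache.get(key)
        if hit is not None and time.monotonic() - hit[1] < self.ttl:
            return hit[0]             # fresh: serve from cache
        value = self.inner.get(key)   # stale or missing: hit the backend
        self._cache[key] = (value, time.monotonic())
        return value

class CountingBackend:
    """Stand-in backend that counts physical reads."""

    def __init__(self):
        self.reads = 0

    def get(self, key):
        self.reads += 1
        return f"value-for-{key}"

# Different data classes, different freshness policies:
tenant = TTLCachingStrategy(CountingBackend(), ttl_seconds=5)    # high-frequency, short TTL
shared = TTLCachingStrategy(CountingBackend(), ttl_seconds=300)  # shared, longer-lived
```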

Caveats

  • Key naming becomes part of the public API. Changing a prefix is a breaking change for any code that constructed keys elsewhere.
  • Cross-backend transactions are not supported. A write that spans tenant_config_* and param_config_* keys can't be atomic across backends — the application layer has to handle partial failure.
  • Routing-map drift: the factory's map is the source of truth for "which backend owns what"; if it gets out of sync with operational reality (e.g., data actually lives in S3 but the factory routes to DynamoDB), calls succeed but return wrong data. Guard with integration tests.
  • Multi-tenant isolation is not automatic — the pattern is orthogonal to tenant isolation. Each strategy must still enforce tenant boundaries (DynamoDB composite keys, Parameter Store path-based IAM); see concepts/tenant-isolation.
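The cross-backend transaction caveat shows up concretely as two independent writes. One assumed way for the application layer to handle partial failure is best-effort compensation, sketched here with stand-in backends:

```python
class FlakyBackend:
    """Stand-in backend whose writes can be forced to fail."""

    def __init__(self, fail=False):
        self.data = {}
        self.fail = fail

    def set(self, key, value):
        if self.fail:
            raise IOError("backend unavailable")
        self.data[key] = value

    def delete(self, key):
        self.data.pop(key, None)

def write_pair(first, second, first_key, second_key, value):
    """Two independent backend writes; compensate the first if the second fails."""
    first.set(first_key, value)
    try:
        second.set(second_key, value)
    except IOError:
        first.delete(first_key)   # best-effort rollback, itself not guaranteed
        raise
```

Note the compensation itself can fail, which is exactly why the caveat puts partial-failure handling on the application layer rather than the pattern.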
