PATTERN
Tagged storage routing¶
Shape¶
Tagged storage routing dispatches each request for a named piece of
data (configuration, cache entry, blob) to the storage backend best
suited to that data's access pattern — using a prefix embedded in
the request key as the routing tag. One logical storage API; N
physical backends; routing is O(1) on a static prefix → strategy
map.
The AWS multi-tenant-config realization:
request.key = "tenant_config_acme-corp:payment-gateway"
↓
ConfigStrategyFactory.resolve(key)
↓
match key.prefix:
"tenant_config_" → DynamoDBStrategy (per-tenant, high-frequency)
"param_config_" → ParameterStoreStrategy (shared, hierarchical)
"secret_config_" → SecretsManagerStrategy (sensitive, rotated) [future]
"blob_config_" → S3Strategy (large blobs) [future]
↓
strategy.get(key)
The factory examines the prefix and returns the appropriate
ConfigStrategy implementation (common interface across backends).
Adding a new backend requires a new class and a new entry in the
keyStrategyMap — no changes to existing strategies or to calling
code. (Source:
sources/2026-04-08-aws-build-a-multi-tenant-configuration-system-with-tagged-storage-patterns
§B)
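A minimal sketch of the factory shape described above, reusing the source's names (`ConfigStrategy`, `ConfigStrategyFactory`, `keyStrategyMap`). The strategy bodies are illustrative stubs, not real AWS SDK calls:

```typescript
interface ConfigStrategy {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

class DynamoDBStrategy implements ConfigStrategy {
  async get(_key: string): Promise<string | undefined> {
    return undefined; // a real implementation would issue a DynamoDB read here
  }
  async set(_key: string, _value: string): Promise<void> {
    // DynamoDB write would go here
  }
}

class ParameterStoreStrategy implements ConfigStrategy {
  async get(_key: string): Promise<string | undefined> {
    return undefined; // SSM parameter read would go here
  }
  async set(_key: string, _value: string): Promise<void> {
    // SSM parameter write would go here
  }
}

class ConfigStrategyFactory {
  // Static prefix → strategy map: adding a backend is one new class
  // plus one new entry here; no call sites change.
  private readonly keyStrategyMap: Record<string, ConfigStrategy> = {
    "tenant_config_": new DynamoDBStrategy(),
    "param_config_": new ParameterStoreStrategy(),
  };

  resolve(key: string): ConfigStrategy {
    for (const prefix of Object.keys(this.keyStrategyMap)) {
      if (key.startsWith(prefix)) return this.keyStrategyMap[prefix];
    }
    throw new Error(`no strategy registered for key: ${key}`);
  }
}
```

Callers only ever see `ConfigStrategy`; the branch on backend identity lives in exactly one place.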
Why the key carries the tag¶
Putting the routing tag in the key itself, not as a separate argument, buys three properties:
- Self-describing requests. Anyone reading the key knows which
backend owns that data — no side-table lookup, no convention
doc to keep current. `tenant_config_foo` unambiguously routes to DynamoDB.
- Decoupled producers and consumers. Producers who write keys don't need to know the routing table; they just write keys with the right prefix. Consumers resolve at read time.
- Storage-migration affordance. To move a data class from Parameter Store to Secrets Manager, change one entry in the routing map and (optionally) rename affected keys — no business logic touched.
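The migration affordance can be sketched as a one-line edit to the routing map. This is a hypothetical illustration: the strategy classes are stubs, and `secret_config_` mirrors the [future] prefix from the source:

```typescript
interface ConfigStrategy {
  get(key: string): Promise<string | undefined>;
}

class ParameterStoreStrategy implements ConfigStrategy {
  async get(_key: string): Promise<string | undefined> {
    return "from-parameter-store"; // stub standing in for an SSM read
  }
}

class SecretsManagerStrategy implements ConfigStrategy {
  async get(_key: string): Promise<string | undefined> {
    return "from-secrets-manager"; // stub standing in for a secrets read
  }
}

const keyStrategyMap: Record<string, ConfigStrategy> = {
  // "secret_config_": new ParameterStoreStrategy(), // before migration
  "secret_config_": new SecretsManagerStrategy(),    // after: one entry changed
};
```

No producer or consumer code changes; only the map entry (and, optionally, the stored keys) move.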
Why different backends for different access patterns¶
The forcing function is that single-backend configuration services lose on one side of the cost/performance curve:
- DynamoDB for shared rarely-changing config → expensive (pay per read of data that rarely changes) and loses hierarchical organization.
- Parameter Store for high-frequency per-tenant reads → throttles on account-level API limits and loses composite-key query shape.
The routing split lets each data class live on the backend whose cost model and access semantics match its real workload:
| Prefix | Backend | Access shape | Cost shape |
|---|---|---|---|
| `tenant_config_*` | DynamoDB | Single-key per-tenant, high RPS | Per-request |
| `param_config_*` | Parameter Store | Hierarchical, bulk init | Per-API-call, cheap |
Sibling pattern: Strategy Pattern / polymorphic dispatch¶
This is the Strategy design pattern applied to storage: a shared
ConfigStrategy interface (e.g., get(key) / set(key, value) /
watch(key, callback)) with per-backend implementations. The factory
picks the strategy; callers never branch on backend identity. (Source §B)
Without the pattern, naïve storage-routing code looks like:
if key.startswith("tenant_config_"): dynamo.get(...)
elif key.startswith("param_config_"): ssm.get(...)
elif ...
which couples storage-dispatch to every call site, forces every new backend through a rewrite of the same conditional, and makes per-backend testing painful (mocking requires tearing through the conditional chain).
Not a new idea — what's new here¶
Prefix-based routing appears across the wiki as a composable primitive rather than a novel invention:
- Figma FigCache dispatches Redis commands by prefix → cache cluster — same routing primitive at the cluster selection layer rather than the storage backend layer — letting multiple logical caches share one infrastructure footprint.
- concepts/prefix-aware-routing (SageMaker HyperPod inference) uses prompt-prefix as a routing key into KV-cache-affine GPU instances — different axis (GPU affinity, not backend type), same structural idea.
- patterns/tool-decoupled-agent-framework and patterns/pluggable-component-architecture share the interface + factory pattern at the agent-tool and component layers.
The AWS multi-tenant-config source's contribution is to consolidate this primitive specifically for configuration services, where the two-backend split (high-frequency per-tenant ↔ shared hierarchical) has a canonical pair (DynamoDB ↔ Parameter Store) and a canonical event-driven refresh story (patterns/event-driven-config-refresh).
Implementation checklist¶
- Define a common `Strategy` interface covering every operation the calling layer needs: `get`, `set`, `watch`, `delete`, bulk versions as needed.
- Implement one strategy per backend. Keep each implementation focused on its backend's shape — don't force a lowest-common-denominator API.
- Build a factory that maps prefix → strategy via a static map (or IoC container). Avoid dynamic dispatch cost on the hot path.
- Prefix discipline: pick short, stable prefixes; document them once; never reuse a prefix for a different backend. Prefixes become part of the public key contract.
- Per-strategy caching policy: each backend has a different freshness/latency/cost shape, so the cache TTL + refresh strategy should be per-strategy — high-frequency reads get short application-level TTLs; shared config uses event-driven invalidation (see patterns/event-driven-config-refresh).
- Observability per-strategy: emit a tag on metrics/traces identifying the strategy so you can see latency/error/cost breakdown per backend, which is the single most useful slice.
Caveats¶
- Key naming becomes part of the public API. Changing a prefix is a breaking change for any code that constructed keys elsewhere.
- Cross-backend transactions are not supported. A write that spans `tenant_config_*` and `param_config_*` keys can't be atomic across backends — the application layer has to handle partial failure.
- Routing-map drift: the factory's map is the source of truth for "which backend owns what"; if it gets out of sync with operational reality (e.g., data actually lives in S3 but the factory routes to DynamoDB), calls succeed but return wrong data. Guard with integration tests.
- Multi-tenant isolation is not automatic — the pattern is orthogonal to tenant isolation. Each strategy must still enforce tenant boundaries (DynamoDB composite keys, Parameter Store path-based IAM); see concepts/tenant-isolation.
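The cross-backend caveat is worth making concrete. Below is a hypothetical sketch of application-level partial-failure handling for a write that spans two backends: `writeBoth` and its delete-based compensation are assumptions for illustration, and the result is still best-effort, not atomic:

```typescript
interface ConfigStrategy {
  set(key: string, value: string): Promise<void>;
  delete(key: string): Promise<void>;
}

async function writeBoth(
  resolve: (key: string) => ConfigStrategy,
  first: { key: string; value: string },
  second: { key: string; value: string },
): Promise<void> {
  await resolve(first.key).set(first.key, first.value);
  try {
    await resolve(second.key).set(second.key, second.value);
  } catch (err) {
    // Best-effort compensation: undo the first write, then re-throw.
    // Still not atomic — the compensation itself can fail and must be
    // absorbed (here: swallowed) or retried out of band.
    await resolve(first.key).delete(first.key).catch(() => {});
    throw err;
  }
}
```

Anything stronger (outbox, saga, reconciliation job) has to live above the strategy layer, because no single backend sees both keys.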
Seen in¶
- sources/2026-04-08-aws-build-a-multi-tenant-configuration-system-with-tagged-storage-patterns — canonical shape: NestJS gRPC Config Service, two strategies (DynamoDB + Parameter Store), Strategy-Pattern factory keyed on the `tenant_config_` / `param_config_` prefix; per-strategy caching (short-TTL application cache for DynamoDB; event-driven invalidation via EventBridge for Parameter Store); extensibility path explicitly called out for Secrets Manager and S3 additions.
- sources/2026-04-21-figma-figcache-next-generation-data-caching-platform — same primitive at the cache-cluster layer: Redis commands dispatched to one of N backend clusters by key-prefix, enabling multiple logical caches to share one infrastructure footprint. See systems/figcache + concepts/prefix-aware-routing.
Related¶
- concepts/prefix-aware-routing — same structural primitive applied to GPU-affinity routing.
- patterns/pluggable-component-architecture — broader interface + factory shape that tagged-storage-routing is a specialization of.
- systems/aws-parameter-store + systems/dynamodb — the canonical two-backend pair for configuration.
- patterns/event-driven-config-refresh — the invalidation-side pair pattern for the Parameter Store branch.
- concepts/tenant-isolation — orthogonal concern each strategy must still enforce.