PATTERN Cited by 1 source

Wrapper over heterogeneous stores as serving layer

Definition

Wrapper over heterogeneous stores as serving layer is the pattern of exposing a single SDK / API surface to callers while internally routing each request to one of several specialised backing stores chosen per-feature-type (or per-access-pattern) for shape fit. The wrapper absorbs store selection, consistency guarantees, fallback/caching policy, and metadata uniformity; callers see one API, one consistency story, one metadata schema.

This is a deliberate inversion of two simpler designs:

  • One store for everything — easy to operate; bad fit for any single access pattern (kNN, KV, aggregate, full-text all want different stores).
  • Direct-to-store access — each access pattern hits the right store directly; correctness is easy per-call; governance and metadata uniformity collapse across calls.

The core problem it solves

A feature store (or similar ML-serving layer) has to handle multiple access patterns in the same namespace:

  • Scalar / vector-of-floats features — KV access pattern → DynamoDB-shaped.
  • Hot metadata on frequently-accessed features — cache access pattern → Redis / ValKey-shaped.
  • Embedding features for kNN / similarity search → OpenSearch / vector-DB-shaped.

No single store is good at all three, and consumers should not be forced to know which store their feature lives in — the feature type is a platform concern, not a caller concern.
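The routing decision above can be sketched in Go. This is a minimal, hypothetical illustration, not the dsfeatures implementation: the type names (`FeatureType`, `routeFor`) and the exact mapping are assumptions; the point is only that the store choice keys off the feature's registered type, never off anything the caller supplies at call time.

```go
package main

import "fmt"

// FeatureType drives store selection; callers never see it at call time.
type FeatureType int

const (
	TypeScalar    FeatureType = iota // KV access pattern -> DynamoDB-shaped store
	TypeHotScalar                    // cache access pattern -> Redis/ValKey-shaped store
	TypeEmbedding                    // kNN access pattern -> OpenSearch/vector store
)

// Store is an opaque backend identifier; callers never name one directly.
type Store string

const (
	StoreDynamo     Store = "dynamodb"
	StoreValKey     Store = "valkey"
	StoreOpenSearch Store = "opensearch"
)

// routeFor maps a feature's registered type to its backing store.
// The mapping is platform configuration, not a caller decision.
func routeFor(t FeatureType) Store {
	switch t {
	case TypeEmbedding:
		return StoreOpenSearch
	case TypeHotScalar:
		return StoreValKey
	default:
		return StoreDynamo
	}
}

func main() {
	fmt.Println(routeFor(TypeEmbedding)) // prints "opensearch"
}
```

Because the mapping lives behind the wrapper, migrating a feature type to a different store is a one-line platform change rather than a caller-side rollout.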

Lyft dsfeatures instance

The 2026-01-06 Lyft Feature Store post describes dsfeatures as the canonical instance:

  • One SDK surface (go-lyft-features / lyft-dsp-features) with full CRUD; callers write Get / BatchGet / Put / Delete against feature names, not against stores.
  • Three backing stores routed internally:
      • DynamoDB — persistent backing for feature data, with a global secondary index (GSI) for efficient GDPR deletions.
      • ValKey — write-through LRU cache on top of DynamoDB for ultra-low-latency hot-path reads.
      • OpenSearch — embedding features only (kNN is native to OpenSearch, foreign to DynamoDB).
  • Uniform metadata + strongly consistent reads across all backing stores — the wrapper is the enforcement layer.
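The DynamoDB + ValKey pairing is a write-through cache orchestrated by the wrapper. The Go sketch below is an assumed shape, with in-memory maps standing in for both stores; the interface and type names (`KV`, `wrapper`) are illustrative, not Lyft's. It shows the ordering invariant that matters: persist first, then populate the cache, so a cache hit can never serve a value the persistent store does not yet hold.

```go
package main

import "fmt"

// KV is the minimal surface each backing store exposes to the wrapper.
type KV interface {
	Put(key string, val []byte) error
	Get(key string) ([]byte, bool)
}

// memStore stands in for DynamoDB or ValKey in this sketch.
type memStore struct{ m map[string][]byte }

func newMemStore() *memStore                     { return &memStore{m: map[string][]byte{}} }
func (s *memStore) Put(k string, v []byte) error { s.m[k] = v; return nil }
func (s *memStore) Get(k string) ([]byte, bool)  { v, ok := s.m[k]; return v, ok }

// wrapper owns the write-through policy; callers see one Put/Get.
type wrapper struct{ persistent, cache KV }

func (w *wrapper) Put(key string, val []byte) error {
	if err := w.persistent.Put(key, val); err != nil {
		return err // cache untouched; the next read falls through to persistent
	}
	return w.cache.Put(key, val)
}

func (w *wrapper) Get(key string) ([]byte, bool) {
	if v, ok := w.cache.Get(key); ok {
		return v, true
	}
	v, ok := w.persistent.Get(key)
	if ok {
		_ = w.cache.Put(key, v) // backfill the cache on a miss
	}
	return v, ok
}

func main() {
	w := &wrapper{persistent: newMemStore(), cache: newMemStore()}
	_ = w.Put("driver_rating:42", []byte("4.9"))
	v, _ := w.Get("driver_rating:42")
	fmt.Println(string(v)) // prints "4.9"
}
```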

Key design properties

  • Store selection is data-driven. The wrapper reads the feature's type / configuration and routes accordingly. Callers don't configure routing.
  • Consistency is a wrapper property, not a store property. The wrapper orchestrates write-through into DynamoDB + ValKey and routes embeddings to OpenSearch; it is the consistency-model implementation point.
  • Write-path choke point for other producers. In Lyft's case, the streaming spfeaturesingest app calls the dsfeatures write API, not the underlying stores. The wrapper is therefore also the enforcement point for the uniform-metadata invariant.
  • SDK, not raw REST. The SDK is the natural place to express batching, client-side typing, retry policies, and read/write consistency knobs — all of which are cross-store concerns.
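The batching point above can be made concrete: a cross-store BatchGet has to be split into one sub-batch per backend before fan-out. The sketch below assumes a per-feature registry; the names (`registry`, `groupByStore`) and the example feature names are hypothetical, not from the Lyft post.

```go
package main

import "fmt"

// Store is an opaque backend identifier.
type Store string

const (
	StoreDynamo     Store = "dynamodb"
	StoreOpenSearch Store = "opensearch"
)

// registry maps feature name -> backing store, populated from
// feature metadata at SDK init (hypothetical example entries).
var registry = map[string]Store{
	"eta_scalar":     StoreDynamo,
	"trip_embedding": StoreOpenSearch,
}

// groupByStore splits a caller's BatchGet into one sub-batch per
// backend, so the SDK can issue one bulk request per store instead
// of N point reads. This grouping is invisible to the caller.
func groupByStore(features []string) map[Store][]string {
	out := map[Store][]string{}
	for _, f := range features {
		out[registry[f]] = append(out[registry[f]], f)
	}
	return out
}

func main() {
	fmt.Println(groupByStore([]string{"eta_scalar", "trip_embedding"}))
}
```

Retry policy and consistency knobs hang off the same per-store sub-batch structure, which is why they belong in the SDK rather than in each caller.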

When to use it

  • Multiple access patterns in one namespace (KV + kNN + hot cache, or OLTP + full-text, or time-series + KV).
  • No single store is good at all patterns at the required scale / latency.
  • Callers should not be coupled to store choice, because the choice evolves (migrate the embedding store; swap the cache tier; change the PK strategy) independently of caller code.
  • Metadata or consistency uniformity is a first-class requirement — you need a place that owns that invariant.

When not to use it

  • Only one access pattern matters — one store is enough.
  • The wrapper adds its own latency / SPOF / operations cost that outweighs the abstraction benefit.
  • The access patterns are semantically different enough that forcing one API shape distorts them (e.g. full-text search + OLTP don't usefully share a CRUD surface).

Seen in
