SPICE in-memory caching

SPICE (Super-fast, Parallel, In-memory Calculation Engine) is Amazon QuickSight's columnar in-memory cache. Datasets loaded into SPICE are held in a compressed columnar representation across a pool of worker nodes, and dashboard queries over SPICE-backed datasets evaluate against the in-memory copy rather than round-tripping to the underlying database. The stated performance property is "subsecond response times on complex analytics across large datasets."
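To make the "compressed columnar" idea concrete, here is a minimal illustrative sketch (not QuickSight internals): a dataset held as dictionary-encoded flat columns, so an aggregate query scans compact in-memory arrays instead of querying the source database. All names (`ColumnarCache`, `sum_sales_by_region`) are hypothetical.

```python
# Illustrative sketch of a dictionary-encoded columnar in-memory dataset.
# Not QuickSight's actual storage format; names are invented for this example.
from array import array

class ColumnarCache:
    def __init__(self, rows):
        # rows: list of (region, sales) tuples from the source database
        self.dict_codes = {}       # region string -> small int code (dictionary encoding)
        self.region = array("i")   # encoded string column as a flat int array
        self.sales = array("d")    # numeric column as a flat float array
        for region, sales in rows:
            code = self.dict_codes.setdefault(region, len(self.dict_codes))
            self.region.append(code)
            self.sales.append(sales)

    def sum_sales_by_region(self):
        # Aggregate query = one pass over two flat columns, no source round-trip.
        totals = [0.0] * len(self.dict_codes)
        for code, s in zip(self.region, self.sales):
            totals[code] += s
        decode = {v: k for k, v in self.dict_codes.items()}
        return {decode[c]: t for c, t in enumerate(totals)}

cache = ColumnarCache([("east", 10.0), ("west", 5.0), ("east", 2.5)])
print(cache.sum_sales_by_region())  # {'east': 12.5, 'west': 5.0}
```

Dictionary encoding is one common columnar compression; real engines layer on run-length encoding, vectorized scans, and parallelism across worker nodes.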

The pattern

SPICE is an instance of the general BI pattern direct-query for fresh data, in-memory cache for hot aggregates:

  • Datasets that benefit from SPICE: aggregated or frequently accessed data, such as large joins, pre-computed rollups, and denormalized marts.
  • Datasets that skip SPICE: freshness-critical operational dashboards that must reflect the last write. Direct-query mode forwards each query to the source database (e.g. Aurora PostgreSQL).
  • Refresh strategy: SPICE is a cache, not a replica. Schedules (full or incremental) re-load data from the source. Staleness between refreshes is the main trade-off (concepts/cache-ttl-staleness-dilemma).
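The routing and refresh decisions above can be sketched in a few lines. This is an illustrative model under assumed names (`Dataset`, `run_query`, `query_source`), not the QuickSight API: freshness-critical datasets always hit the source, cached datasets serve the in-memory copy and re-load it on a schedule.

```python
# Sketch of "direct-query for fresh data, in-memory cache for hot aggregates".
# Hypothetical names; a stand-in function replaces the real Aurora query.
import time

class Dataset:
    def __init__(self, name, use_cache, refresh_interval_s):
        self.name = name
        self.use_cache = use_cache                  # SPICE-backed vs direct query
        self.refresh_interval_s = refresh_interval_s
        self.cached_rows = None
        self.last_refresh = 0.0

def query_source(dataset):
    # Stand-in for a query against the source database (e.g. Aurora PostgreSQL).
    return [("east", 10.0), ("west", 5.0)]

def run_query(dataset):
    if not dataset.use_cache:
        return query_source(dataset)                # direct query: always fresh
    now = time.time()
    if dataset.cached_rows is None or now - dataset.last_refresh > dataset.refresh_interval_s:
        dataset.cached_rows = query_source(dataset) # scheduled full re-load
        dataset.last_refresh = now
    return dataset.cached_rows                      # may be stale between refreshes
```

The final comment is the trade-off named above: between refreshes, the cached copy can lag the source, which is why freshness-critical dashboards skip the cache entirely.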

Why it appears on this wiki

SPICE is a canonical instance of pair a fast small in-memory cache with a slow large source of truth at the BI tier (see patterns/pair-fast-small-cache-with-slow-large-storage). The same shape appears elsewhere: Redis + relational DB; CDN edge caches + origin; QuickSight SPICE + Aurora.
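The shared shape across Redis + relational DB, CDN + origin, and SPICE + Aurora is cache-aside: check the fast small store first, fall back to the slow large source of truth, and populate on miss. A minimal sketch (a dict stands in for the fast cache, a function call for the slow source):

```python
# Cache-aside in miniature. The dict plays the fast small cache
# (Redis / SPICE / CDN edge); slow_source plays the large source of truth
# (relational DB / origin). Illustrative only.
cache = {}

def slow_source(key):
    # Stand-in for an expensive lookup against the source of truth.
    return f"value-for-{key}"

def get(key):
    if key in cache:
        return cache[key]        # fast path: in-memory hit
    value = slow_source(key)     # slow path: fetch from the source
    cache[key] = value           # populate so the next read is a hit
    return value
```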

What the ingested source discloses (and doesn't)

From sources/2026-04-21-aws-oldcastle-infor-aurora-quicksight-real-time-analytics:

  • Oldcastle "identified which datasets benefit from SPICE caching — typically aggregated or frequently accessed data — and configured incremental refresh schedules to keep them current."
  • Reported outcome: "subsecond response times on complex analytics across large datasets."
  • Not disclosed: refresh cadence, SPICE capacity provisioning (per-user GiB allocation), staleness tolerance thresholds, or p99 latency distribution.