CONCEPT
# Storage media tiering

## Definition
Storage media tiering is the architectural practice of deploying multiple distinct storage media types in a coordinated hierarchy, so that each data class lands on the media whose cost / performance / endurance profile best matches that data's workload characteristics.
Orthogonal distinctions on the wiki:
- Storage media tiering — this page, physical-media axis (HDD vs flash vs tape vs DRAM).
- patterns/tiered-storage-to-object-store — Kafka-era software pattern of offloading cold segment files from a broker's local disk to object storage. Different axis (local vs remote).
- Hot-cold tiering — data-access-frequency axis, can be implemented via media tiering or not.
These can compose.
## The hyperscale media tier ladder (2025)
| Tier | Media | Role | Density (drive) | BW/TB | Endurance | Cost/byte |
|---|---|---|---|---|---|---|
| 0 | DRAM | Hot cache | <1 TB | >1000 MB/s/TB | n/a | highest |
| 1 | SLC / cache flash | Write-buffer / index | 1-4 TB | >500 MB/s/TB | highest | very high |
| 2 | TLC | Primary serving | 4-30 TB | 50-100 MB/s/TB | high | high |
| 3 | QLC (new) | Batch IO / read-BW-dense | 32-600 TB | 10-20 MB/s/TB | moderate | middle |
| 4 | HDD | Bulk / cold | 20-30 TB | 5-10 MB/s/TB | high | low |
| 5 | Tape | Archive | 15-50 TB | <1 MB/s/TB | very high | very low |
Meta's 2025 QLC post is canonical: it adds tier 3 to the ladder, explicitly positioned between TLC (tier 2) and HDD (tier 4).
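The ladder can be made concrete as a tier-selection sketch. The tier numbers, media names, and BW/TB figures below are rough midpoints from the table; the selection rule itself (cheapest tier that meets a bandwidth-density floor) is an illustrative simplification — real placement also weighs endurance, latency, and migration cost.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    number: int
    media: str
    bw_per_tb: float   # sustained MB/s per TB, rough midpoint from the table
    cost_rank: int     # 1 = cheapest per byte

LADDER = [
    Tier(5, "Tape", 0.5, 1),
    Tier(4, "HDD", 7.5, 2),
    Tier(3, "QLC", 15.0, 3),
    Tier(2, "TLC", 75.0, 4),
    Tier(1, "SLC/cache flash", 500.0, 5),
    Tier(0, "DRAM", 1000.0, 6),
]

def cheapest_tier(required_bw_per_tb: float) -> Tier:
    """Return the lowest-cost tier that still meets the BW/TB floor."""
    for tier in sorted(LADDER, key=lambda t: t.cost_rank):
        if tier.bw_per_tb >= required_bw_per_tb:
            return tier
    raise ValueError("no tier meets the requirement")

# A batch-IO workload needing ~12 MB/s/TB lands on QLC, not TLC:
print(cheapest_tier(12.0).media)  # QLC
```

Note how QLC's insertion changes the answer: before tier 3 existed, the same 12 MB/s/TB workload would have been forced up to TLC, paying for ~5x the bandwidth density it needs.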
## Why tiers get added
Tiers get added when the gap between adjacent tiers grows larger than what workload placement at the tier edges can paper over. Three forcing functions:
- Upper-tier cost too high for a workload band that doesn't need its full capability.
- Lower-tier capability too low for a workload band whose requirements grew (or whose lower-tier BW/TB shrank).
- New media becomes economical for the gap band.
Meta's 2025 argument hits all three: (1) TLC is too expensive for batch IO; (2) HDD BW/TB is falling as drive densities climb; (3) QLC has become economical — 2 Tb dies and 32-die stacks are mainstream, and write endurance is sufficient for read-dominant workloads.
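The first two forcing functions can be phrased as a single "gap band" test: a workload is stranded when the tier below it can't meet its BW/TB need and the tier above charges for capability it doesn't use. The sketch below is illustrative; the `waste_cutoff` threshold is a hypothetical stand-in for a real business-case analysis.

```python
def is_stranded(required_bw_per_tb: float,
                lower_tier_bw: float,
                upper_tier_bw: float,
                waste_cutoff: float = 3.0) -> bool:
    """True if a workload falls in the gap band between two adjacent tiers.

    waste_cutoff is hypothetical: how much upper-tier overprovisioning
    counts as 'too expensive' (forcing function 1).
    """
    underserved = required_bw_per_tb > lower_tier_bw                      # forcing function 2
    overprovisioned = upper_tier_bw / required_bw_per_tb > waste_cutoff   # forcing function 1
    return underserved and overprovisioned

# Batch IO at ~15 MB/s/TB: HDD (~7.5) underserves it, TLC (~75) overshoots 5x.
print(is_stranded(15.0, lower_tier_bw=7.5, upper_tier_bw=75.0))  # True
```

Forcing function 3 (new media becoming economical) is what converts a population of stranded workloads into a tier insertion.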
## Architectural consequences
- Each tier has its own form factor optimization. See systems/u2-15mm-form-factor (QLC) vs systems/e1s-form-factor (TLC) at Meta — density / thermal / slot-count trade-offs differ per media.
- Each tier may have its own software stack. See patterns/userspace-ftl-via-io-uring for QLC — different arbitration needs mean different stacks.
- Workload migration tooling becomes load-bearing — when a tier is added, large amounts of existing data must move to the new tier, which takes years at hyperscale.
- Tier-boundary analysis is a recurring capacity-planning exercise — which workloads are stranded on the wrong tier? The answer drives the next tier's business case.
## Relationship to heat
concepts/heat-management is about distributing load within a tier (placement). Storage media tiering is about choosing which tier at all. The two compose: you tier first, then distribute heat within the chosen tier.
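The composition can be sketched as a two-step placement: tier choice first, then heat spreading within the tier. The round-robin spreader below is a deliberately minimal stand-in for concepts/heat-management (real systems use load-aware placement); all names are illustrative.

```python
from itertools import cycle

def place(objects, tier_devices):
    """Assign each (object, tier) pair to a device within its pre-chosen
    tier, round-robin. Tiering has already happened; this step only
    spreads heat inside each tier."""
    spreaders = {tier: cycle(devs) for tier, devs in tier_devices.items()}
    return [(obj, next(spreaders[tier])) for obj, tier in objects]

assignments = place(
    [("index-shard", "TLC"), ("batch-blob-1", "QLC"), ("batch-blob-2", "QLC")],
    {"TLC": ["tlc-0", "tlc-1"], "QLC": ["qlc-0", "qlc-1"]},
)
print(assignments)
# [('index-shard', 'tlc-0'), ('batch-blob-1', 'qlc-0'), ('batch-blob-2', 'qlc-1')]
```

The point of the separation: swapping the heat-spreading policy never changes which media a workload lands on, and inserting a new tier never changes how heat is spread within existing ones.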
## Seen in
- sources/2025-03-04-meta-a-case-for-qlc-ssds-in-the-data-center — canonical three-tier (HDD / QLC / TLC) framing for flash-era data-center storage.
## Related
- concepts/bandwidth-per-terabyte — the axis tiers are ordered on.
- concepts/hard-drive-physics — the HDD-side physics that makes the bottom tier's BW/TB decline.
- concepts/write-endurance-nand — the property that differentiates TLC vs QLC at the hyperscale level.
- concepts/qlc-read-write-asymmetry — the QLC-specific design consequence.
- systems/qlc-flash / systems/tlc-flash.
- patterns/middle-tier-storage-media — the operational pattern for tier insertion.
- patterns/tiered-storage-to-object-store — adjacent tiering concept on a different axis.
- companies/meta.