
QLC flash

Definition

QLC (Quad-Level Cell) flash is a NAND flash variant that stores 4 bits per cell (16 distinguishable voltage states). It offers higher storage density than TLC (3 bits per cell, 8 states) at the cost of slower writes, lower endurance, and asymmetric read-vs-write bandwidth.

Introduced ~2009. Consumer-market adoption was slow, and hyperscale data-center adoption lagged further because of low drive capacities (<32 TB), cost, and write-endurance limits. As of 2025, mainstream availability of 2 Tb QLC NAND dies and 32-die stacks has closed the density gap; Meta is pursuing QLC as a new middle storage tier between HDD and TLC.

Key properties

Property                  QLC                       TLC
Bits per cell             4                         3
States per cell           16                        8
Density                   highest                   2nd highest
Read throughput           high                      higher
Write throughput          moderate (~1/4 of read)   higher
Endurance (P/E cycles)    lower                     higher
Cost per byte             lower                     higher

Read-vs-write asymmetry

The canonical QLC constraint: read throughput on the same media is 4× or more that of write throughput. Software stacks serving QLC must arbitrate I/O so that latency-sensitive reads don't queue behind writes; see concepts/qlc-read-write-asymmetry and patterns/rate-controller-for-asymmetric-media.
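One way to picture that arbitration is a toy scheduler that always dispatches pending reads first and throttles writes with a token bucket so they can never monopolize the media's limited program bandwidth. This is an illustrative sketch, not Meta's or Pure Storage's actual rate controller; the class name, tick model, and byte budgets are all invented for the example.

```python
import heapq
import itertools


class AsymmetricMediaScheduler:
    """Toy I/O arbiter for read/write-asymmetric media such as QLC.

    Reads are always dispatched ahead of queued writes; writes are
    additionally capped per tick by a token bucket, so a burst of
    background writes cannot starve latency-sensitive reads.
    """

    def __init__(self, write_bytes_per_tick: int):
        self._seq = itertools.count()        # FIFO tie-breaker for heap order
        self._reads: list = []               # (seq, (kind, nbytes))
        self._writes: list = []
        self._write_budget = write_bytes_per_tick

    def submit(self, kind: str, nbytes: int) -> None:
        entry = (next(self._seq), (kind, nbytes))
        heapq.heappush(self._reads if kind == "read" else self._writes, entry)

    def tick(self) -> list:
        """Dispatch one tick's worth of I/O; returns the dispatched ops."""
        tokens = self._write_budget          # refill the write token bucket
        dispatched = []
        while self._reads:                   # reads never wait behind writes
            dispatched.append(heapq.heappop(self._reads)[1])
        while self._writes and self._writes[0][1][1] <= tokens:
            _, op = heapq.heappop(self._writes)
            tokens -= op[1]                  # spend write tokens
            dispatched.append(op)
        return dispatched
```

With a 100-byte write budget, two queued 80-byte writes and one 10-byte read, the first tick dispatches the read and only one write; the second write waits for the next refill.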

Write-endurance framing

QLC's lower P/E-cycle ceiling (see concepts/write-endurance-nand) is the historic blocker to data-center deployment. The 2025 reframing: match QLC to workloads with infrequent, low-bandwidth writes (read-bandwidth-intensive, with large batch IOs); under that profile the endurance floor is met with "sufficient headroom."
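The headroom claim reduces to simple arithmetic: the media can absorb roughly capacity × P/E cycles of program traffic, write amplification shrinks the host-visible share, and the workload's daily write volume sets the lifetime. A minimal sketch, where every number in the example (cycle count, WAF, write rate) is a made-up placeholder, not a vendor figure:

```python
def drive_lifetime_years(capacity_tb: float, pe_cycles: int,
                         waf: float, host_writes_tb_per_day: float) -> float:
    """Rough NAND endurance-headroom estimate.

    Total program traffic the media tolerates is capacity * P/E cycles;
    dividing by the write-amplification factor (WAF) gives host-visible
    terabytes written (TBW). Lifetime is TBW over daily host writes.
    """
    tbw = capacity_tb * pe_cycles / waf
    return tbw / (host_writes_tb_per_day * 365)


# Hypothetical 64 TB QLC drive, 1,000 P/E cycles, WAF of 2:
# a 2 TB/day read-dominant workload leaves decades of headroom,
# while a 200 TB/day write-heavy workload burns it out in months.
```

This is why the workload-matching framing works: the same P/E ceiling that disqualifies QLC for write-heavy tiers is a non-issue when daily writes are a tiny fraction of capacity.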

Power-efficiency argument

"The bulk of power consumption in any NAND flash media comes from writes." QLC deployed on read-dominant workloads consumes less power than TLC per byte served — which pairs with the density argument to make QLC attractive even when cost-per-byte is not yet parity with TLC.

Density trajectory

Meta's 2025 statement: "We expect QLC SSD density will scale much higher than TLC SSD density in the near-term and long-term." Factors: more bits per cell + taller die stacks + larger package footprint in U.2-15mm + Pure Storage's DFM custom form factor (600 TB). Rack-level byte density at Meta is projected at 6× that of the densest TLC server today.

Deployment at Meta

Two channels disclosed 2025-03-04:

  1. Pure Storage DFMs — custom form factor, userspace FTL via DirectFlash software, ublk + io_uring stack — see systems/pure-storage-directflash-module.
  2. Standard NVMe QLC SSDs from multiple NAND vendors — integrated via io_uring directly against the NVMe block device.

Both channels share the U.2-15mm slot, enabling vendor diversity in a single server design.
