CONCEPT
Rack-level power density¶
Definition¶
Rack-level power density is the amount of electrical power a single datacenter rack can deliver and dissipate, measured in kW per rack. Once concepts/performance-per-watt has been optimized at the chip level, it becomes the binding scarce resource for hyperscale hardware planning: a rack that can't be fed 16 kW can't run 16 kW worth of Genoa cores, no matter how efficient those cores are.
Two separate constraints conflated¶
The term covers two related but distinct capacities:
- Power delivery — can the PDUs, busways, and upstream feeds carry the electrical load to the rack?
- Thermal dissipation — can the cooling (hot aisle / cold aisle / in-row / liquid) remove the heat generated by that load?
A rack is constrained by whichever limit is tighter. In commodity air-cooled datacenters these are usually matched; in hyperscaler or co-designed facilities they can diverge and the tighter one wins.
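The min-of-two-limits rule above can be sketched directly (the numbers are illustrative, not actual facility specs):

```python
def rack_envelope_kw(delivery_kw: float, thermal_kw: float) -> float:
    """A rack's usable power is bounded by whichever constraint is tighter."""
    return min(delivery_kw, thermal_kw)

# Commodity air-cooled room: the two limits are typically matched.
print(rack_envelope_kw(delivery_kw=15.0, thermal_kw=15.0))  # 15.0

# Hypothetical facility where delivery was upgraded but cooling was not:
print(rack_envelope_kw(delivery_kw=30.0, thermal_kw=25.0))  # 25.0
```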
The modeling trap: nameplate vs real-world¶
Manufacturer-listed nameplate TDP routinely overestimates real workload draw: it is a theoretical maximum under a synthetic worst case. Provisioning against nameplate therefore:
- Over-provisions facility power upstream.
- Rejects workloads that would actually fit at real draw.
Dropbox's 7th-gen post names the pattern explicitly:
> Rather than taking worst-case "nameplate" power measurements — manufacturer-listed maximums that often overestimate real usage — we modeled real-world system usage.
Outcome of the modeling: real draw was ~16 kW per cabinet, exceeding the 15 kW budget. Provisioning against nameplate would have reported a higher figure still and rejected the move outright.
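A minimal sketch of that provisioning check. Only the 15 kW budget, the ~16 kW real-world figure, and the 46-server density come from the post; the per-server draws are hypothetical:

```python
def fits_budget(per_server_kw: float, servers: int, budget_kw: float) -> bool:
    """Does the summed per-server draw fit the rack's power budget?"""
    return per_server_kw * servers <= budget_kw

BUDGET_KW = 15.0
SERVERS = 46          # 1U "pizza box" density from the post

nameplate_kw = 0.40   # hypothetical worst-case label per server
measured_kw = 0.35    # hypothetical modeled real-world draw per server

# Nameplate overstates the shortfall; measured draw shows the true gap.
print(round(nameplate_kw * SERVERS, 1), fits_budget(nameplate_kw, SERVERS, BUDGET_KW))
print(round(measured_kw * SERVERS, 1), fits_budget(measured_kw, SERVERS, BUDGET_KW))
```

At these assumed draws both figures exceed 15 kW, which is why Dropbox raised the envelope (Option 3 below) rather than cutting density.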
Responding to the constraint¶
Option 1: rebuild the facility¶
Increase busway capacity, upgrade upstream feeds, add cooling. Expensive, slow, high disruption. Often not viable within a hardware-refresh timeline.
Option 2: architect software to fit the envelope¶
Pack fewer servers per rack; lower the CPU TDP cap; downsize GPUs. This wastes the chip-level perf/watt gains that justified the refresh in the first place.
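Option 2 in numbers, assuming a hypothetical per-server draw (only the 15 kW budget and 46-server density are from the post):

```python
import math

PER_SERVER_KW = 0.35  # hypothetical modeled draw per 1U server
BUDGET_KW = 15.0      # pre-gen-7 rack budget from the post

# How many servers fit if we shed density instead of raising the envelope?
max_servers = math.floor(BUDGET_KW / PER_SERVER_KW)
print(max_servers)  # 42: down from the 46 per rack the hardware supports
```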
Option 3: redistribute power at the rack level¶
This is what Dropbox did: go from 2 to 4 PDUs per rack, reusing the existing busways and adding receptacles. This effectively doubles the rack's deliverable power without rebuilding the facility upstream. Formalized as patterns/pdu-doubling-for-power-headroom.
The post does not fully enumerate the tradeoffs (cable management and in-rack space presumably both got tighter), but the outcome lifted the per-rack envelope from 15 kW to the 16+ kW real-world draw.
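A sketch of the PDU-doubling arithmetic. The post gives the 2 → 4 PDU change and the 15 → 16+ kW envelope; the per-PDU rating here is a hypothetical chosen so that 2 PDUs match the old budget:

```python
PER_PDU_KW = 7.5     # hypothetical rating: 2 PDUs match the old 15 kW budget
REAL_DRAW_KW = 16.1  # ~16+ kW modeled real-world draw per cabinet

for pdus in (2, 4):
    cap = pdus * PER_PDU_KW
    print(f"{pdus} PDUs: {cap:.1f} kW cap, fits real draw: {cap >= REAL_DRAW_KW}")
```

Doubling PDUs doubles the deliverable cap, turning a rack that rejects the gen-7 draw into one with headroom to spare.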
Option 4: co-design chassis cooling¶
Better fan curves, airflow redirection, improved heatsinks, acoustic damping (see systems/sonic). Works with the power side, not instead of it. Essential for storage racks where vibration also matters.
Future forcing functions¶
Dropbox names the next frontier:
- Liquid cooling — transitioning from niche to necessity as chip TDP climbs past ~600 W (the Gumby upper bound). Air-cooled racks struggle above ~25–30 kW sustained; liquid-cooled or immersion-cooled racks unlock 50–100 kW.
- HAMR drives — denser storage, same 3.5" form factor, more platters, tighter thermal tolerance.
Both push rack-level power density higher; both make concepts/hardware-software-codesign more load-bearing as the tolerances tighten.
Quantitative scale at Dropbox¶
- Pre-gen-7 budget: 15 kW per rack
- Real-world post-gen-7 draw: ~16+ kW per cabinet
- PDU topology: 2 per rack → 4 per rack (reusing existing busways, adding receptacles)
- Crush server density: 46 servers per rack (1U "pizza box")
- Net outcome: power per petabyte and per core decreased while total rack power increased — the composition Dropbox was optimizing for.
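The composition in the last bullet can be made concrete: total rack power can rise while power per petabyte falls, provided storage density rises faster. All numbers except the 15 / 16 kW rack figures are hypothetical:

```python
def kw_per_pb(rack_kw: float, rack_pb: float) -> float:
    """Power efficiency of a storage rack, in kW per petabyte."""
    return rack_kw / rack_pb

gen6 = kw_per_pb(rack_kw=15.0, rack_pb=5.0)  # hypothetical 5 PB per rack
gen7 = kw_per_pb(rack_kw=16.0, rack_pb=8.0)  # hypothetical 8 PB per rack
print(gen6, gen7)  # 3.0 2.0
assert gen7 < gen6  # per-PB power fell even as absolute rack power rose
```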
Seen in¶
- sources/2025-08-08-dropbox-seventh-generation-server-hardware — real-world-modeling over nameplate; PDU doubling response; framing of thermals + power as "the new bottlenecks."