PDU doubling for power headroom¶
Pattern¶
When per-rack power draw exceeds the rack's nominal power budget but the upstream power distribution has unused capacity, double (or otherwise increase) the number of PDUs per rack on the existing busways rather than rebuilding the facility. This effectively raises the rack's deliverable power without a facility-scale rebuild.
The problem shape¶
Modern hardware generations — higher-core-count CPUs, denser storage chassis, GPU tiers — routinely drive per-rack power draw past the rack's nominal budget. The conventional responses are:
- Deploy less hardware per rack — wastes the per-server density improvement that motivated the refresh.
- Rebuild the facility — bigger busways, upgraded feeds, more cooling. Expensive, slow, high disruption.
Neither is great. The PDU-doubling pattern is a third path that works when the rack cabinet is the bottleneck, not the upstream feed.
How it works¶
- Existing datacenters typically have busway capacity reserved — the electrical feed to the rack space carries more current than any single rack's PDUs draw.
- Each rack exposes only 2 PDUs, each handling ~half the rack's nameplate load.
- If the rack needs more power than 2 PDUs can deliver, add 2 more PDUs using the busway's existing capacity and spare receptacles.
- 4 PDUs can deliver roughly 2× the power of 2 PDUs on the same busway.
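The arithmetic above can be sketched in a few lines. The per-PDU and busway capacities below are illustrative assumptions chosen so that 2 PDUs match a 15 kW rack budget; real ratings come from the facility's electrical design.

```python
# Sketch of the rack-power arithmetic behind PDU doubling.
# Capacity figures are illustrative assumptions, not actual ratings.

PDU_CAPACITY_KW = 7.5      # assumed deliverable power per PDU
BUSWAY_CAPACITY_KW = 40.0  # assumed upstream busway capacity per rack position

def deliverable_kw(num_pdus: int) -> float:
    """Rack power envelope: limited by PDU count or by the upstream busway."""
    return min(num_pdus * PDU_CAPACITY_KW, BUSWAY_CAPACITY_KW)

two_pdu = deliverable_kw(2)   # matches a 15 kW rack budget under these assumptions
four_pdu = deliverable_kw(4)  # roughly 2x, as long as the busway has headroom
print(two_pdu, four_pdu)
```

Note that `min()` is what makes the pattern's precondition explicit: once PDU count × per-PDU capacity reaches the busway's limit, adding PDUs stops helping.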
Dropbox's 7th-gen execution¶
Dropbox modeled 7th-gen real-world draw at ~16 kW per cabinet, exceeding the prior 15 kW per-rack budget. Working with the datacenter engineering team:
We switched from two PDUs to four PDUs per rack, using existing busways and adding more receptacles.
This move effectively doubled our available rack power, giving us the breathing room to support current loads — and even future accelerator cards.
No new busways. No facility rebuild. Just more PDUs and more receptacles on the rack side.
Preconditions¶
- Upstream busway headroom — the electrical feed to the rack space has to carry the increased load. If it's already at capacity, this pattern doesn't work.
- Physical receptacle space on the busway — enough spare taps to add the PDU drops.
- Cooling headroom — more power delivered = more heat to remove. If the cooling is already at the edge, doubling delivered power shifts the bottleneck rather than removing it.
- Rack physical space — 4 PDUs take more rack space than 2; some rack form factors may require cable-management rework.
- Electrical-code compliance — regional electrical codes may require additional permits or inspections for higher rack current draw.
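The first four preconditions can be expressed as a single go/no-go check. This is a hedged sketch: the field names and thresholds are illustrative, and the real decision belongs with facility engineering, electrical-code review, and vendor ratings.

```python
# Illustrative go/no-go check over the preconditions listed above.
from dataclasses import dataclass

@dataclass
class RackSite:
    busway_headroom_kw: float   # unused upstream capacity at this rack position
    spare_receptacles: int      # free busway taps for new PDU drops
    cooling_headroom_kw: float  # additional heat the hall can reject
    spare_rack_units: int       # physical space for the extra PDUs

def can_double_pdus(site: RackSite, added_kw: float, pdus_to_add: int = 2) -> bool:
    """True only if every precondition holds for the proposed extra load."""
    return (
        site.busway_headroom_kw >= added_kw
        and site.spare_receptacles >= pdus_to_add
        and site.cooling_headroom_kw >= added_kw  # every delivered watt becomes heat
        and site.spare_rack_units >= pdus_to_add
    )
```

The cooling term mirrors the power term deliberately: if either headroom is smaller than the added load, the pattern just moves the bottleneck.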
What this pattern does not do¶
- Doesn't fix the cooling side. If the facility cooling can't remove the additional heat, the servers will throttle or shut down regardless of power delivery. In Dropbox's case the pattern was paired with chassis-level cooling improvements (Sonic's fan-curve tuning plus airflow redirection).
- Doesn't help in cloud-rented capacity. The pattern requires control of both the rack and the facility busway. Cloud operators already run this optimization internally; tenants can't.
- Doesn't generalize beyond operator-controlled facilities. Bespoke datacenters typically have per-cabinet metering; co-located racks may not support retrofitting PDUs. The pattern fits operators with their own facilities or long-term-reserved hall space.
When to use¶
- A new server generation materially increases per-rack draw (Dropbox: +10–20%).
- The upstream power distribution has been sized with headroom that the current 2-PDU-per-rack topology doesn't use.
- A facility rebuild is not on the roadmap for 1–2 years.
- Cooling capacity is either already sufficient or can be upgraded more cheaply than power delivery.
When not to use¶
- Facility is already at peak power draw upstream.
- Cooling is already the binding constraint.
- Rack cabinets don't have the physical space for additional PDUs.
- The operator doesn't own/control the facility wiring.
Relationship to co-design¶
PDU doubling is an instance of concepts/hardware-software-codesign at the facility layer: software workload drove CPU choice, CPU choice drove per-server power, per-server × per-rack density drove per-rack power, per-rack power drove PDU topology. Each layer constrains the next; the co-design loop surfaces the constraint in time to address it via the cheapest available response — here, more PDUs rather than more facility.
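The layer-by-layer chain can be walked numerically. The per-server draw and density below are assumptions picked to land near Dropbox's ~16 kW modeled figure; the pairing step reflects the common A/B-feed convention of adding PDUs two at a time, which is an assumption about the rack topology rather than a stated Dropbox detail.

```python
# Illustrative walk down the co-design chain:
# workload -> server power -> rack power -> PDU topology.
import math

WATTS_PER_SERVER = 800   # assumed draw for the new server generation
SERVERS_PER_RACK = 20    # assumed rack density
PDU_CAPACITY_KW = 7.5    # assumed deliverable power per PDU

rack_kw = WATTS_PER_SERVER * SERVERS_PER_RACK / 1000   # per-rack power
pdus_needed = math.ceil(rack_kw / PDU_CAPACITY_KW)     # raw PDU count
pdu_pairs = max(1, math.ceil(pdus_needed / 2)) * 2     # round up to A/B pairs
print(rack_kw, pdus_needed, pdu_pairs)
```

Under these assumptions the chain lands exactly where the pattern does: a ~16 kW rack no longer fits 2 PDUs, and rounding up to the next pair yields 4.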
Seen in¶
- sources/2025-08-08-dropbox-seventh-generation-server-hardware — concrete 2 → 4 PDU move at Dropbox; 15 kW → ~16 kW+ per-rack envelope; reuse of existing busways + added receptacles.