SYSTEM
DCPerf¶
DCPerf is Meta's open-source benchmark suite for hyperscale compute workloads. Each benchmark is designed by referencing a large Meta production application and validated against that application at the microarchitectural level (IPC, core frequency). Open-sourced on 2024-08-05 at github.com/facebookresearch/DCPerf.
Design stance¶
- Per-benchmark production anchor. "Each benchmark within DCPerf is designed by referencing a large application within Meta's production server fleet." Not synthetic, not HPC-derived: the workload shape must come from a real internet-scale service.
- Representativeness validated microarchitecturally. Meta publishes IPC and core-frequency comparison graphs vs production applications and vs SPEC CPU; DCPerf tracks production values more closely.
- Multi-ISA. x86 and ARM supported since early development; Meta runs both, so a benchmark suite that targets only one ISA is not useful at Meta.
- Emerging-trend coverage. Extended over the past two years to cover chiplet-based architectures and multi-tenancy (core-count scaling).
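The "validated microarchitecturally" stance above can be sketched concretely. This is an illustrative sketch with made-up counter values, not Meta's actual validation code: given hardware-counter deltas (instructions retired, unhalted cycles) from a benchmark run and from the production application that anchors it, compute IPC for each and report how closely the benchmark tracks production.

```python
def ipc(instructions: int, cycles: int) -> float:
    """Instructions per cycle from raw hardware-counter deltas."""
    return instructions / cycles

def representativeness_gap(bench: tuple, prod: tuple) -> float:
    """Relative IPC error of a benchmark vs its production anchor.

    Each argument is a (instructions, cycles) pair of counter deltas.
    """
    bench_ipc = ipc(*bench)
    prod_ipc = ipc(*prod)
    return abs(bench_ipc - prod_ipc) / prod_ipc

# Hypothetical counter deltas (instructions, cycles) for illustration only:
prod_counters = (1_800_000_000, 1_200_000_000)    # production app:  IPC 1.50
dcperf_counters = (1_740_000_000, 1_200_000_000)  # benchmark run:   IPC 1.45

gap = representativeness_gap(dcperf_counters, prod_counters)
print(f"IPC gap vs production: {gap:.1%}")
```

The same comparison extends naturally to core frequency or any other counter-derived metric; the point is that representativeness is checked against measured production values, not against a spec sheet.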
Internal use at Meta (five named)¶
- Data-center deployment configuration choices.
- Early performance projections for capacity planning.
- Identifying performance bugs in hardware + system software.
- Joint platform co-optimization with CPU vendors.
- Deciding which platforms to deploy in Meta data centers.
Pre-silicon / early-silicon partnership¶
Meta collaborates with leading CPU vendors to run DCPerf on pre-silicon / early-silicon setups — a two-year program yielding "performance optimizations in areas such as CPU core microarchitecture settings and SOC power management optimizations." Canonical instance of patterns/pre-silicon-validation-partnership.
Positioning vs SPEC CPU¶
DCPerf does not replace SPEC CPU at Meta; it supplements it. SPEC CPU remains useful; DCPerf adds the hyperscale-application signal that SPEC CPU's integer/floating-point synthetic workloads don't carry. The published IPC + frequency comparison is Meta's evidence that SPEC CPU exhibits concepts/benchmark-methodology-bias relative to production hyperscale workloads.
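The decision the published IPC + frequency graphs support can be sketched as a ranking. This is an illustrative sketch with invented numbers (Meta published only visual graphs, no figures): given microarchitectural profiles (IPC, average core frequency in GHz) for a production workload and for candidate benchmark suites, rank the suites by how closely they track production on both axes.

```python
def relative_distance(candidate: tuple, production: tuple) -> float:
    """Mean relative error across (IPC, frequency) vs the production profile."""
    errs = [abs(c - p) / p for c, p in zip(candidate, production)]
    return sum(errs) / len(errs)

# Hypothetical profiles (IPC, GHz under load) for illustration only:
production = (1.50, 3.0)
suites = {
    "dcperf": (1.45, 2.9),    # tracks production closely on both axes
    "spec_cpu": (2.10, 3.4),  # higher IPC and frequency than production
}

ranked = sorted(suites, key=lambda name: relative_distance(suites[name], production))
for name in ranked:
    print(name, f"{relative_distance(suites[name], production):.1%}")
```

Under these assumed numbers the ranking puts DCPerf first, which is the shape of the argument Meta's graphs make: closeness to measured production behavior, not absolute scores, is the selection criterion.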
Ambition¶
Meta frames DCPerf as aspiring to become "an industry standard method to capture important workload characteristics of compute workloads that run in hyperscale datacenter deployments." Academia, hardware-industry, and internet-company adoption are invited explicitly.
Seen in¶
- sources/2024-08-05-meta-dcperf-open-source-benchmark-suite — canonical open-sourcing post; Meta's statement of design process, use cases, industry ambition.
Caveats¶
- Announcement post does not enumerate the per-benchmark list (i.e., which production applications anchor which DCPerf components); that content lives in the GitHub repo.
- No quantified IPC / frequency deltas published in the post — only visual comparison graphs.
- Multi-tenancy support is stated but the topology (tenants per benchmark, resource isolation) is not specified.
Related¶
- systems/spec-cpu — the industry-standard comparison point.
- systems/arm64-isa — named target ISA.
- concepts/hyperscale-compute-workload — the workload shape DCPerf targets.
- concepts/benchmark-representativeness — the property DCPerf optimises for.
- concepts/benchmark-methodology-bias — the sibling concept (Cloudflare Workers frame) that DCPerf's IPC+frequency comparison is evidence for (against SPEC CPU for hyperscale).
- patterns/workload-representative-benchmark-from-production — DCPerf's load-bearing design rule.
- patterns/pre-silicon-validation-partnership — the vendor-collaboration pattern DCPerf enables.
- patterns/custom-benchmarking-harness — the Figma sibling (application-config axis vs DCPerf's hardware-evaluation axis).
- companies/meta.