
PATTERN Cited by 2 sources

Co-design with OCP partners

Context

Building hyperscale hardware alone means paying the full design/validation/standardization cost of every subsystem. Building hyperscale hardware via bilateral or multilateral co-design with OCP partners lets two or more hyperscalers share design effort, agree on open standards, and amplify the reach of the resulting artefact.

The pattern

Identify a hardware subsystem whose design cost exceeds single-hyperscaler ROI but whose standardization benefit is cross-industry. Co-design the subsystem with one or more OCP partners. Contribute the design as an OCP standard. Consume the standard when building production systems.

Meta × Microsoft canonical lineage

Meta's 2024-10 post explicitly traces the lineage of Meta × Microsoft OCP co-design:

Year  | Artefact                           | Layer
2018  | SAI (Switch Abstraction Interface) | Network switch ASIC API
~2019 | OAM (Open Accelerator Module)      | Accelerator-module form factor
~2020 | SSD standardization                | Storage silicon integration
2024  | Mount Diablo                       | 400 VDC disaggregated power rack

"Meta and Microsoft have a long-standing partnership within OCP, beginning with the development of the Switch Abstraction Interface (SAI) for data centers in 2018. Over the years together, we've contributed to key initiatives such as the Open Accelerator Module (OAM) standard and SSD standardization, showcasing our shared commitment to advancing open innovation." (Source: sources/2024-10-15-meta-metas-open-ai-hardware-vision)

Meta × Pure Storage — flash-media co-design (2025)

The pattern extends laterally to a flash-media partner in 2025. Meta's QLC post discloses co-design with Pure Storage: "our storage teams have started working closely with partners like Pure Storage, utilizing their DirectFlash Module (DFM) and DirectFlash software solution to bring reliable QLC storage to Meta. We are also working with other NAND vendors to integrate standard NVMe QLC SSDs into our data centers." (Source: sources/2025-03-04-meta-a-case-for-qlc-ssds-in-the-data-center)

Shape variations from the Microsoft pattern:

  • Bilateral, not industry-wide. DFM is Pure Storage proprietary; the DFM form factor isn't an OCP standard. Meta co-designs the slot to accept both DFMs and standard U.2 drives, preserving vendor substitutability at the interface level without requiring OCP standardization.
  • Custom form factor + custom FTL stack — the partnership's technical surface is the full vertical from the drive module up through the userspace FTL (DirectFlash software) on the host.
  • NAND-vendor parallel track — while Pure Storage co-design is the DFM lane, standard NVMe QLC SSD integration with other NAND vendors is the parallel substitutable-path lane. This hedges against single-partner risk.
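The substitutable-slot hedge described above can be sketched as a host-side interface with two interchangeable backends. This is a minimal illustration, not Meta's actual software: all class and function names (`QLCBackend`, `DFMBackend`, `NVMeBackend`, `provision_slot`) are hypothetical, and the in-memory dicts stand in for real media.

```python
from abc import ABC, abstractmethod


class QLCBackend(ABC):
    """Hypothetical interface the fleet storage software consumes.
    Either lane (DFM or standard NVMe QLC SSD) must satisfy it."""

    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...


class DFMBackend(QLCBackend):
    """Pure Storage lane: raw flash module; the FTL (DirectFlash
    software) runs in host userspace. Media is faked with a dict."""

    def __init__(self) -> None:
        self._media: dict[int, bytes] = {}

    def write_block(self, lba: int, data: bytes) -> None:
        # A real host-side FTL would handle mapping and wear-leveling here.
        self._media[lba] = data

    def read_block(self, lba: int) -> bytes:
        return self._media.get(lba, b"\x00")


class NVMeBackend(QLCBackend):
    """Parallel lane: standard NVMe QLC SSD with an on-drive FTL."""

    def __init__(self) -> None:
        self._drive: dict[int, bytes] = {}

    def write_block(self, lba: int, data: bytes) -> None:
        self._drive[lba] = data

    def read_block(self, lba: int) -> bytes:
        return self._drive.get(lba, b"\x00")


def provision_slot(dfm_available: bool) -> QLCBackend:
    """Because both backends satisfy the same interface, a
    single-vendor outage degrades to the substitutable path
    instead of blocking deployment."""
    return DFMBackend() if dfm_available else NVMeBackend()
```

The design choice being illustrated: the hedge lives entirely at the interface boundary, so neither lane needs to know the other exists.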

This extends the Meta co-design portfolio: Microsoft (power + networking ASIC APIs), NVIDIA + AMD (GPU compute platforms via OAM/Grand Teton/Catalina), Pure Storage (flash media).

Why the pattern holds

  • Design cost amortised across hyperscalers — each partner pays a fraction of the total design effort.
  • Standardization benefit: vendors can build against one spec rather than N bespoke hyperscaler specs, which lowers vendor integration cost and increases multi-sourcing options for the hyperscalers.
  • Validation depth — each hyperscaler stress-tests the design in its own deployment shape, surfacing bugs and generalising the spec.
  • Ecosystem alignment — the pattern builds credibility for OCP as the locus for data-center-hardware standardization (vs NIH variants at each hyperscaler).

When to apply

  • Subsystem with broad applicability — fabric ASIC APIs, accelerator modules, power racks all qualify. Meta's in-house FBNIC ASIC is arguably in this zone (contributed to OCP rather than via bilateral co-design — a slightly different pattern shape).
  • Partner with complementary-enough infrastructure to stress the design in different shapes — Meta's and Microsoft's data-center populations cover different regions and workloads, which broadens the design's validation.

When NOT to apply

  • Subsystem with a proprietary moat — the stack-rank algorithm, the hot-path network observability tech, the ML model architecture. The pattern does not apply here.
  • Partner with adversarial interests in the artefact — the pattern presumes aligned incentives.