AMD Instinct MI300X
The AMD Instinct MI300X is AMD's flagship data-center GPU for AI training and inference workloads, competitive with NVIDIA's H100 class. In 2024-10, Meta announced that it had extended its Grand Teton AI platform to support the MI300X and would contribute the new variant to OCP.
Seen in (wiki)
- Meta Grand Teton AMD variant (2024-10). "Now, we have expanded the Grand Teton platform to support the AMD Instinct MI300X and will be contributing this new version to OCP. Like its predecessors, this new version of Grand Teton features a single monolithic system design with fully integrated power, control, compute, and fabric interfaces. This high level of integration simplifies system deployment, enabling rapid scaling with increased reliability for large-scale AI inference workloads." (Source: sources/2024-10-15-meta-metas-open-ai-hardware-vision)
Why it matters
- Multi-vendor GPU support at Meta. Meta's original Grand Teton (2022) was NVIDIA-only. The MI300X variant is the first publicly disclosed second-vendor accelerator in the Grand Teton lineage, a necessary precondition for DSF's vendor-agnostic fabric thesis.
- Inference-focused framing. Meta's announcement explicitly cites "large-scale AI inference workloads": Meta is publicly positioning AMD silicon for inference, distinct from the NVIDIA-H100-for-training posture established in 2024-06.
- OCP-contributed. The design is being contributed to OCP, consistent with Meta's broader open-hardware thesis.
Related
- systems/grand-teton — the Meta AI platform now multi-accelerator.
- systems/nvidia-h100 — NVIDIA counterpart in the Grand Teton chassis.
- systems/nvidia-gb200-grace-blackwell — the next-generation NVIDIA silicon, hosted on Catalina rather than Grand Teton.
- patterns/modular-rack-for-multi-accelerator — the pattern Grand Teton instantiates.
- companies/meta