Google TPU¶
Google TPU (Tensor Processing Unit) is Google's custom ASIC family for AI workloads, offered commercially via cloud.google.com/tpu and used internally as the primary training + serving substrate for Google's own large-scale AI products.
This is a minimal stub: the wiki currently contains only one ingested post that touches TPU (the 2025-11-04 Project Suncatcher announcement), and that post names TPU as the compute substrate without going into the accelerator's architecture. Future Google / Google Cloud posts will populate this page with architectural detail (generation history, perf/watt, interconnect topology, compiler / XLA integration, pod sizing, availability-zone footprint, etc.).
Known from the current corpus¶
- TPU is the compute substrate chosen for Project Suncatcher's orbital constellation. The choice is load-bearing: it pins the space-based AI-infrastructure programme to commercial commodity silicon rather than a purpose-built radiation-hardened space ASIC, which in turn pushes radiation tolerance into architectural and software mitigation rather than silicon mitigation (Source: sources/2025-11-04-google-exploring-space-based-scalable-ai-infrastructure).
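The source post does not describe what such software mitigation looks like; as a purely illustrative sketch, one classic architectural technique for masking transient faults on commodity hardware is triple modular redundancy (TMR): run the computation three times and majority-vote the results. The function names below are hypothetical, not from the Suncatcher announcement, and the sketch uses JAX only because it is the usual front end for TPU workloads.

```python
# Hypothetical sketch of software-level triple modular redundancy (TMR),
# one style of architectural mitigation the page alludes to. Nothing here
# is confirmed as Google's actual approach.
import jax.numpy as jnp

def tmr(f):
    """Run `f` three times and majority-vote element-wise.

    An element-wise median of three numeric results equals the majority
    value, so a transient bit-flip corrupting any single run is masked.
    """
    def wrapped(x):
        a, b, c = f(x), f(x), f(x)  # three independent executions
        return jnp.median(jnp.stack([a, b, c]), axis=0)
    return wrapped

@tmr
def dense_layer(x):
    # Toy stand-in for an accelerator workload.
    w = jnp.ones((4, 4)) * 0.5
    return x @ w

out = dense_layer(jnp.ones((2, 4)))  # each element: sum of 4 * 0.5 = 2.0
```

In practice the three executions would run on separate replicas so a fault in one device cannot corrupt all copies; a single-device version like this only masks transient faults that hit one of the three passes.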
Seen in¶
- sources/2025-11-04-google-exploring-space-based-scalable-ai-infrastructure — named as the compute element inside each Project Suncatcher satellite.
Related¶
- systems/project-suncatcher — orbital-constellation moonshot that integrates TPUs as the per-satellite AI accelerator.
- companies/google — Google Research / Google Cloud, Tier 1 source.