
PATTERN

Primitive mapping — Kubernetes API to cloud primitives

Pattern. When building a managed Kubernetes offering on top of an existing cloud compute platform, map each Kubernetes primitive to a pre-existing cloud primitive 1:1 rather than reimplementing the Kubernetes reference stack (containerd, CNI plugin graph, kube-proxy, PVC-provisioning drivers). The Kubernetes API becomes a compatibility surface over the cloud's native substrate.

Context

You've built a cloud with its own compute, networking, secrets, storage, DNS, and load-balancing primitives. Customers want to use the Kubernetes API. You want to:

  • Ship a K8s-API-compatible product quickly.
  • Avoid running a second, parallel implementation of everything K8s assumes on a Node.
  • Focus engineering effort on improving the underlying primitives rather than on conformance-driven reimplementation.

Solution

Draw a table. On the left: every K8s object / runtime dependency a workload can hit. On the right: the existing cloud primitive that already does the same job. Then build a small compatibility layer (often a Virtual Kubelet provider + a handful of reconcilers) that keeps the two sides in sync.
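A minimal sketch of what such a compatibility layer looks like, assuming simplified stand-in types throughout (`PodSpec`, `Machine`, `CloudAPI`, and `Provider` are all illustrative names, not the real Virtual Kubelet provider interface, which is considerably larger):

```go
package main

import (
	"errors"
	"fmt"
)

// PodSpec is a simplified stand-in for the Kubernetes Pod object.
type PodSpec struct {
	Name  string
	Image string
}

// Machine is a simplified stand-in for a cloud micro-VM primitive.
type Machine struct {
	ID    string
	Image string
}

// CloudAPI is the pre-existing cloud substrate the provider delegates to.
type CloudAPI struct {
	machines map[string]Machine
}

func NewCloudAPI() *CloudAPI {
	return &CloudAPI{machines: map[string]Machine{}}
}

func (c *CloudAPI) LaunchMachine(id, image string) Machine {
	m := Machine{ID: id, Image: image}
	c.machines[id] = m
	return m
}

// Provider translates Kubernetes-shaped requests into cloud calls.
// It owns no runtime of its own: CreatePod is pure translation,
// not a second containerd/CNI stack.
type Provider struct {
	cloud *CloudAPI
}

func (p *Provider) CreatePod(pod PodSpec) (Machine, error) {
	if pod.Image == "" {
		return Machine{}, errors.New("pod has no image")
	}
	// Pod -> Machine is the 1:1 mapping from the table.
	return p.cloud.LaunchMachine("pod-"+pod.Name, pod.Image), nil
}

func main() {
	p := &Provider{cloud: NewCloudAPI()}
	m, err := p.CreatePod(PodSpec{Name: "web", Image: "nginx:1.25"})
	if err != nil {
		panic(err)
	}
	fmt.Println(m.ID) // pod-web
}
```

The point of the shape is that the provider holds no state the cloud doesn't already hold; each reconciler just keeps one row of the table in sync.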

The canonical instance is Fly Kubernetes (FKS) and its explicit mapping:

| Kubernetes primitive | Fly.io primitive |
| --- | --- |
| containerd / CRI | flyd + Firecracker + Fly init |
| CNI | internal IPv6 WireGuard mesh |
| Pods | Fly Machines (micro-VMs) |
| Secrets | Fly Secrets |
| Services | Fly Proxy |
| CoreDNS | CoreDNS (to be replaced with custom internal DNS) |
| Persistent Volumes | Fly Volumes |

Fly's framing:

"We looked at what people need to get started — the API — and then started peeling away all the noise, filling in the gaps to connect things together to provide the power."

Consequences

  • Small, focused compatibility layer — the provider becomes a translation table, not a runtime reimplementation.
  • Customers benefit from the underlying primitive. Fly's IPv6 WireGuard mesh is a better fit than iptables + ClusterIP routing for some workloads, and that benefit carries through into FKS for free.
  • Scope is forced. The mapping makes gaps visible: anything that doesn't have an obvious cloud-primitive analogue (e.g. NetworkPolicy, multi-container Pods, StatefulSets, HPA) is deferred rather than half-implemented.
  • Conformance tests probably fail — the mapping is lossy. Fly acknowledges this directly ("this isn't Kubernetes! — we agree!").
  • Workloads that bake in assumptions about Kubernetes defaults (sidecars, init containers, NetworkPolicy-based isolation, HPA) may not port.
  • Ongoing maintenance cost — upstream Kubernetes continues to add API surface; each new object requires a mapping decision.
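The "scope is forced" and "each new object requires a mapping decision" points can be made concrete: the mapping lives as an explicit table, and anything outside it returns an explicit unsupported error rather than a half implementation. A minimal sketch (the kind names on the right are hypothetical, not Fly's internal identifiers):

```go
package main

import "fmt"

// ErrUnsupported marks K8s API surface with no cloud-primitive
// analogue; it is returned verbatim instead of being half-implemented.
var ErrUnsupported = fmt.Errorf("not supported by this provider")

// mapped is the translation table: K8s kind -> backing cloud primitive.
var mapped = map[string]string{
	"Pod":              "Machine",
	"Secret":           "CloudSecret",
	"Service":          "CloudProxy",
	"PersistentVolume": "CloudVolume",
}

// Translate returns the backing primitive for a K8s kind, or an
// explicit error so every gap stays visible and deliberately deferred.
func Translate(kind string) (string, error) {
	if prim, ok := mapped[kind]; ok {
		return prim, nil
	}
	return "", fmt.Errorf("%s: %w", kind, ErrUnsupported)
}

func main() {
	fmt.Println(Translate("Pod")) // Machine <nil>
	_, err := Translate("NetworkPolicy")
	fmt.Println(err) // wrapped ErrUnsupported
}
```

Each upstream Kubernetes release then turns into a review of this table: map the new object, or add it to the explicit unsupported set.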

