Virtual Kubelet provider¶
Pattern. Implement a cloud's managed-Kubernetes offering by registering a Virtual Kubelet provider that turns each Pod-create request into a call against the cloud's own compute API — not by running real Kubelet processes on real Nodes. Kubernetes' control plane sees a fully-functional Kubelet; the provider runs as a small process alongside the API server and is the only thing aware of the cloud substrate underneath.
Context¶
You own or operate a compute primitive (micro-VMs, serverless functions, containers-as-a-service). You want to expose it through the Kubernetes API because that's the interface customers ask for, but you do not want to:
- Reimplement containerd / runc / CRI on every host.
- Build a CNI plugin chain.
- Give customers a Node object they can configure.
Solution¶
- Run a Virtual Kubelet process next to a lightweight K8s control plane (FKS uses K3s).
- Implement the Virtual Kubelet provider interface against your cloud's compute API. `CreatePod` on the provider issues a VM-create / container-create / function-create on your primitive; `DeletePod` deletes it; status reconcilers keep the K8s API in sync with the primitive's status.
- Map other K8s objects (Services, Secrets, Volumes) to pre-existing cloud primitives using a reconciler pattern — see the companion patterns/primitive-mapping-k8s-to-cloud.
- Accept that some Kubelet-adjacent APIs (`kubectl exec`, `kubectl port-forward`) will need provider-specific support or an explicit fallback to your cloud's native tooling.
Canonical production instance¶
Fly Kubernetes (FKS) — a small Go Virtual-Kubelet provider alongside K3s translates Pod creates into Fly Machines (Firecracker micro-VMs) via the Machines API. flyd handles scheduling; the Fly Proxy + CoreDNS handle Service routing; the internal IPv6 WireGuard mesh replaces CNI. Fly is explicit about the pattern:
> What we have is Kubernetes calling out to our Virtual Kubelet provider, a small Golang program we run alongside K3s, to create and run your pod. It creates your pod as a Fly Machine, via the Fly Machines API, deploying it to any underlying host within that region. This shifts the burden of managing hardware capacity from you to us.
Same pattern in:
- AWS Fargate on EKS — Pods backed by Firecracker on Fargate.
- Azure AKS Virtual Nodes — Pods backed by Azure Container Instances.
Consequences¶
- ✅ The K8s API surface becomes a thin compatibility shell — customers get familiar `kubectl` ergonomics.
- ✅ The cloud provider owns placement, isolation, and capacity — it can densely pack micro-VMs / functions under its native scheduler.
- ✅ Pricing can follow per-Pod units (per-VM, per-request) not per-Node hours.
- ❌ Support matrix starts narrow — multi-container Pods, StatefulSets, NetworkPolicies, HPA, emptyDir tend to be missing at launch (FKS confirms this list at beta).
- ❌ Conformance tests may not pass; the resulting product is "K8s-API-compatible" rather than CNCF-conformant.
- ❌ Kubelet-adjacent UX (exec, port-forward, probes, logs-via-kubectl) needs provider support; cloud-native tooling (`flyctl`, `aws` CLI) often fills the gap short-term.
Seen in¶
- sources/2024-03-07-flyio-fly-kubernetes-does-more-now — FKS as an instance; small Go VK provider + K3s + Fly Machines.