vCluster¶
vCluster (by Loft Labs) is an open-source Kubernetes
virtualisation layer that lets you run multiple independent
Kubernetes clusters inside one host Kubernetes cluster. Each
virtual cluster gets its own Kubernetes API
server, controller manager, and data store running as pods in the
host — so from a user's perspective the virtual cluster behaves like
a real independent K8s cluster (kubectl connects to the virtual
API server, namespaces / RBAC / CRDs are virtual-cluster-scoped) —
while workload pods scheduled by the virtual cluster actually run on
the host cluster's nodes via the vCluster syncer.
It is the reference implementation of the [[concepts/virtual-kubernetes-cluster]] primitive: control-plane-per-tenant, shared data plane.
Editions¶
- Open-source vCluster — the core virtual-cluster primitive. CLI-driven (`vcluster create`, `vcluster connect`), Helm-chart installable, sufficient for most single-team use. Apache 2.0.
- vCluster Platform (vcluster-pro) — commercial product from Loft Labs adding a management UI, SSO, per-cluster RBAC, audit logging, and the "self-service portal" that QA engineers use to provision their own vclusters without platform-team involvement. Deployed via the `vcluster/vcluster-platform` Helm chart. Used by Deloitte in the 2026-04-27 case study.
Core design primitives¶
- Virtual control plane — per-vcluster K8s API server, controller manager, and data store running as pods in the host cluster. Default data store is SQLite embedded in the control-plane pod; can be swapped for an external etcd.
- Syncer — the component that translates virtual-cluster objects to host-cluster objects. Workload pods created in the virtual cluster are synced down to the host as real pods in a dedicated host namespace. Bidirectional sync is configurable per resource type: `sync.fromHost` surfaces host resources (IngressClasses, StorageClasses) up to the virtual cluster; `sync.toHost` pushes virtual resources (Ingresses, PVCs) down to the host.
- Embedded CoreDNS — each virtual cluster runs its own CoreDNS pod so that DNS resolution inside the virtual cluster is isolated (service names in vcluster-A don't collide with service names in vcluster-B).
- Dedicated host namespace per vcluster — the host cluster allocates a namespace for each vcluster; all of that vcluster's pods, services, etc. live under that host namespace (prefixed with the vcluster name).
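The syncer's down-sync can be pictured as a pure name translation: a pod's virtual identity (name + virtual namespace + vcluster) is flattened into a unique name inside the single host namespace. A minimal sketch — the `-x-` separator scheme shown here is an assumption for illustration, not a guaranteed vCluster contract:

```python
def host_pod_name(pod: str, virtual_ns: str, vcluster: str) -> str:
    """Translate a virtual-cluster pod identity into a unique name in the
    host namespace that backs the vcluster.

    Assumed scheme (illustrative): join the virtual pod name, its virtual
    namespace, and the vcluster name so that identically named pods in
    different virtual namespaces (or different vclusters) never collide
    on the host.
    """
    return f"{pod}-x-{virtual_ns}-x-{vcluster}"


# Two vclusters can each run nginx in their own "default" namespace
# without colliding on the host:
a = host_pod_name("nginx", "default", "qa-team-a")
b = host_pod_name("nginx", "default", "qa-team-b")
assert a != b
print(a)  # nginx-x-default-x-qa-team-a
```

This is why the host operator sees one flat namespace per vcluster rather than the tenant's own namespace layout.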
Config shape¶
Virtual-cluster config is a YAML file specified at creation time that controls sync behaviour. Deloitte's config:

```yaml
sync:
  fromHost:
    ingressClasses:
      enabled: true
    storageClasses:
      enabled: true
  toHost:
    ingresses:
      enabled: true
controlPlane:
  coredns:
    enabled: true
    embedded: true
```
The fromHost block makes host IngressClass and StorageClass objects visible in the virtual cluster (so users can reference them in their own Ingress and PVC objects). The toHost block synchronises virtual-cluster Ingress objects down to the host, where the host's AWS Load Balancer Controller materialises them as real ALB rules. The embedded CoreDNS is the virtual cluster's own DNS resolver.
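The effect of the sync block can be modelled as a lookup from resource type to sync direction. A toy sketch of that reading of the config (the `sync_directions` helper is illustrative, not part of the vCluster API):

```python
# Illustrative model of the vcluster.yaml sync block above.
config = {
    "sync": {
        "fromHost": {
            "ingressClasses": {"enabled": True},
            "storageClasses": {"enabled": True},
        },
        "toHost": {
            "ingresses": {"enabled": True},
        },
    },
}


def sync_directions(cfg: dict) -> dict:
    """Return {resource: direction} for every enabled sync rule."""
    out = {}
    for direction in ("fromHost", "toHost"):
        for resource, opts in cfg.get("sync", {}).get(direction, {}).items():
            if opts.get("enabled"):
                out[resource] = direction
    return out


print(sync_directions(config))
# {'ingressClasses': 'fromHost', 'storageClasses': 'fromHost', 'ingresses': 'toHost'}
```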
When to use vCluster¶
- QA / pre-production testing environments — the Deloitte case study is the canonical wiki example: 50+ virtual clusters on one host cluster, sub-5-minute provisioning, QA engineers self-service. Pre-prod workloads tolerate shared host-kernel isolation.
- Per-developer ephemeral clusters — each PR gets its own vcluster; sibling concept to per-PR ephemeral environments at the pod-namespace altitude.
- Multi-tenant CI/CD runners — CI jobs that need a "fresh cluster" semantically can get a vcluster in seconds rather than a dedicated cluster in minutes.
- Kubernetes feature testing — exercise different K8s versions (virtual cluster can run a different K8s minor version from the host) without standing up a separate physical cluster.
When not to use vCluster¶
- Hard-multi-tenant production — vcluster shares the host kernel across all virtual clusters. A pod in vcluster-A and a pod in vcluster-B running on the same host node share a kernel; a kernel escape or resource-exhaustion attack in one vcluster can impact siblings. Production-grade hard isolation still requires dedicated clusters (or Firecracker-style microVM pod sandboxing).
- Very high-scale clusters — the host cluster accumulates the overhead of N virtual control planes + N CoreDNS pods + all workload pods. At some density the host runs out of node capacity / etcd throughput, and the multiplexing stops winning over separate clusters. Deloitte's 50+ vclusters on one host is a reasonable upper-bound reference point for QA use.
- When virtual clusters need different network configurations — vcluster inherits the host cluster's CNI and network policy; it cannot express "this vcluster uses Calico and that one uses Cilium".
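The density trade-off in the high-scale point can be made concrete with back-of-envelope arithmetic: every vcluster adds a fixed control-plane and CoreDNS cost before any workload pod runs. A sketch with illustrative numbers (the per-component costs below are assumptions, not measured vCluster figures):

```python
def host_overhead(n_vclusters: int,
                  cpu_per_cp: float = 0.2,    # assumed vCPU per virtual control plane
                  mem_per_cp: float = 0.5,    # assumed GiB per virtual control plane
                  cpu_per_dns: float = 0.05,  # assumed vCPU per embedded CoreDNS pod
                  mem_per_dns: float = 0.1):  # assumed GiB per embedded CoreDNS pod
    """Fixed host-cluster overhead of N vclusters, before any workload pods."""
    cpu = n_vclusters * (cpu_per_cp + cpu_per_dns)
    mem = n_vclusters * (mem_per_cp + mem_per_dns)
    return cpu, mem


cpu, mem = host_overhead(50)
print(f"50 vclusters: ~{cpu:.1f} vCPU, ~{mem:.1f} GiB before workloads")
```

Under these assumed numbers, 50 vclusters cost roughly a dozen vCPU and a few tens of GiB in pure control-plane overhead — tolerable for QA density, but it grows linearly and competes with workload capacity and host etcd throughput.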
Seen in¶
- sources/2026-04-27-aws-deloitte-optimizes-eks-environment-provisioning-with-vcluster
— Deloitte's 50+-vcluster deployment on a single EKS host cluster
with EKS Auto Mode as the node
provisioner. 89% provisioning-time reduction (45 min → <5 min),
~500 engineer-hours / year reclaimed, >50 vCPU + >200 GB RAM saved
at peak from shared controllers. The first canonical wiki instance
of vcluster at scale, with hands-on config fragments
(IngressClassParams + per-vcluster sync YAML + shared-ALB Ingress)
captured on the source page. Uses the commercial `vcluster/vcluster-platform` (vcluster-pro) Helm chart 4.0.1 for the self-service UI that QA engineers exercise directly.
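The 89% headline figure follows directly from the before/after provisioning times. A one-line arithmetic check (treating "<5 min" as its 5-minute upper bound):

```python
before_min, after_min = 45, 5  # provisioning time before vs after (source figures)
reduction = 1 - after_min / before_min
print(f"{reduction:.0%}")  # 89%
```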
Related¶
- systems/kubernetes — the underlying orchestrator
- systems/aws-eks — the managed-Kubernetes service used as the host cluster in the Deloitte case study
- systems/eks-auto-mode — the managed-data-plane variant used as the host
- systems/helm — how vcluster-platform is installed
- concepts/virtual-kubernetes-cluster — the primitive vCluster implements
- concepts/platform-team-bottleneck — the organisational anti-pattern vCluster dissolves in QA
- concepts/self-service-infrastructure — the platform-engineering property delivered
- concepts/tenant-isolation — relevant for understanding vcluster's isolation boundaries (weaker than per-cluster)
- concepts/control-plane-data-plane-separation — vcluster inverts the classic split: control-plane-per-tenant, shared data plane
- patterns/shared-host-cluster-with-virtual-clusters — the topology
- patterns/vcluster-fast-test-environment-provisioning — the operational pattern