Kubernetes label length limit (63 characters)¶
Kubernetes metadata label values are limited to 63 characters. This comes from the RFC 1123 DNS-label syntax that Kubernetes adopted for its label grammar. Label keys have their own limits (an optional prefix of up to 253 characters, plus a name segment of up to 63 characters); label values are strictly 63 characters max.
Any label value that exceeds 63 characters is rejected by the API server with:
error: metadata.labels: Invalid value: "analytics-bigdata-spark-executor-pool-m6a-32xlarge-az-us-east-1a":
must be no more than 63 characters
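The check the API server applies is simple to reproduce in a pre-flight script. A minimal sketch in Python, using the label-value regex from the Kubernetes validation code (the `valid_label_value` helper name is hypothetical):

```python
import re

# Kubernetes label-value grammar: at most 63 chars, empty allowed,
# otherwise alphanumeric at both ends with -, _, . permitted inside.
LABEL_VALUE_RE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def valid_label_value(value: str) -> bool:
    """Return True if the API server would accept this label value."""
    return len(value) <= 63 and LABEL_VALUE_RE.fullmatch(value) is not None

# The 64-char legacy name from the error above is rejected:
print(valid_label_value(
    "analytics-bigdata-spark-executor-pool-m6a-32xlarge-az-us-east-1a"))  # False
print(valid_label_value("m6a-32xlarge"))  # True
```

Note that both conditions matter: a value can be short enough yet still fail on a leading or trailing non-alphanumeric character.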
Why it turns into a migration blocker¶
The limit is innocuous while you're writing a new system. It turns into a migration blocker when:
- A team has accumulated human-friendly, self-documenting naming conventions that embed several concept tokens (workload domain, instance family/size, AZ, purpose) into one label. Names like `analytics-bigdata-spark-executor-pool-m6a-32xlarge-az-us-east-1a` (64 chars) become common.
- A new system relies on those labels for scheduling decisions (not just informational tagging). Karpenter reads labels in its `NodePool` + `EC2NodeClass` matching logic, so bad labels break placement, not just observability.
Result: the migration stalls on hundreds or thousands of cluster configurations that need naming-convention refactors before Karpenter can even start managing nodes.
Salesforce example (canonical wiki instance)¶
Salesforce hit this during their 1,000-cluster Karpenter migration:
"During the migration, the team discovered that Salesforce's human-friendly legacy naming conventions often exceeded Kubernetes's 63-character label length limit, creating challenges with Karpenter's label-dependent operations. The team resolved this by refactoring naming conventions across node pools to comply with Kubernetes standards." (Source: sources/2026-01-12-aws-salesforce-karpenter-migration-1000-eks-clusters)
The lesson the post draws: "seemingly minor technical constraints can become significant blockers in automated infrastructure management if not properly addressed early in the migration process." This is a generalisable principle — not just Karpenter.
Mitigation patterns¶
- Audit labels before the migration, not during: `kubectl get nodes --show-labels` plus a length check as a pre-flight gate.
- Shorten with stable abbreviations: `m6a` stays; spelling out `analytics-bigdata-spark-executor` doesn't.
- Split into multiple labels: one per dimension (workload, instance, AZ, purpose), each short enough to pass 63.
- Use annotations for long human-readable names: annotations aren't bound by the 63-character value limit (only by a 256 KiB total-size cap across all annotations on an object); labels are for scheduling.
- Fail admission on long labels via OPA / Kyverno to prevent regressions.
Related¶
- systems/kubernetes — the source of the limit.
- systems/karpenter — the scheduler whose label dependence made the limit migration-blocking at Salesforce.
- concepts/tight-migration-scope — label refactors are a classic old-behavior-match expense that a tight-scope migration tries to avoid but sometimes must eat.
Seen in¶
- sources/2026-01-12-aws-salesforce-karpenter-migration-1000-eks-clusters — Salesforce's fleet-wide naming-convention refactor driven by Karpenter's label-length requirements.