
Kubernetes label length limit (63 characters)

Kubernetes metadata label values are limited to 63 characters, a limit inherited from the RFC 1123 DNS-label syntax that Kubernetes adopted for its label grammar. Label keys have their own limits: an optional DNS-subdomain prefix of up to 253 characters, plus a name segment of up to 63 characters. Label values are strictly capped at 63 characters.

Any label value that exceeds 63 characters is rejected by the API server with:

error: metadata.labels: Invalid value: "analytics-bigdata-spark-executor-node-pool-m6a-32xlarge-az-a-b-c":
  must be no more than 63 characters
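As a rough illustration, the value check the API server applies can be mirrored in a few lines. The regex below follows the documented label-value syntax (alphanumeric at both ends, with `-`, `_`, `.` allowed in between); the function name is my own, not a Kubernetes API:

```python
import re

# Mirrors Kubernetes label-value validation: at most 63 characters;
# the empty string is allowed; otherwise must begin and end with an
# alphanumeric, with '-', '_', '.' permitted in between.
LABEL_VALUE_RE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")
MAX_LABEL_VALUE_LEN = 63

def is_valid_label_value(value: str) -> bool:
    return len(value) <= MAX_LABEL_VALUE_LEN and bool(LABEL_VALUE_RE.match(value))

# A 64-character legacy-style name fails the length check even though
# every individual character is legal:
is_valid_label_value("analytics-bigdata-spark-executor-node-pool-m6a-32xlarge-az-a-b-c")
```

Running this check client-side (in CI, or in a pre-flight audit) surfaces bad values before the API server rejects them.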

Why it turns into a migration blocker

The limit is innocuous while you're writing a new system. It turns into a migration blocker when:

  • A team has accumulated human-friendly, self-documenting naming conventions that pack several concept tokens (workload domain, instance family/size, AZ, purpose) into one label. Names like analytics-bigdata-spark-executor-node-pool-m6a-32xlarge-az-a-b-c (64 chars) become common.
  • A new system relies on those labels for scheduling decisions (not just informational tagging). Karpenter reads labels in its NodePool + EC2NodeClass matching logic — bad labels break placement, not just observability.

Result: the migration stalls on hundreds or thousands of cluster configurations that need naming-convention refactors before Karpenter can even start managing nodes.

Salesforce example (canonical wiki instance)

Salesforce hit this during their 1,000-cluster Karpenter migration:

"During the migration, the team discovered that Salesforce's human-friendly legacy naming conventions often exceeded Kubernetes's 63-character label length limit, creating challenges with Karpenter's label-dependent operations. The team resolved this by refactoring naming conventions across node pools to comply with Kubernetes standards." (Source: sources/2026-01-12-aws-salesforce-karpenter-migration-1000-eks-clusters)

The lesson the post draws: "seemingly minor technical constraints can become significant blockers in automated infrastructure management if not properly addressed early in the migration process." This is a generalisable principle — not just Karpenter.

Mitigation patterns

  • Audit labels before the migration, not during. kubectl get nodes --show-labels + length check as a pre-flight gate.
  • Shorten with stable abbreviations: m6a stays; spelling out analytics-bigdata-spark-executor doesn't.
  • Split into multiple labels — one per dimension (workload, instance, AZ, purpose) each short enough to pass 63.
  • Use annotations for long human-readable names — annotation values are exempt from the 63-character limit (only a total-size cap on all annotations per object applies); labels are for scheduling and selection.
  • Fail admission on long labels via OPA / Kyverno to prevent regressions.
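The pre-flight audit in the first bullet can be sketched as a small script. This is a minimal sketch under stated assumptions: the function names are my own, and the live-cluster path assumes working kubectl access; the core check only needs the JSON that `kubectl get nodes -o json` emits:

```python
import json
import subprocess

MAX_LABEL_VALUE_LEN = 63

def overlong_labels(nodes_json):
    """Yield (node, label_key, label_value) for every value over 63 chars."""
    for item in nodes_json.get("items", []):
        node = item["metadata"]["name"]
        for key, value in item["metadata"].get("labels", {}).items():
            if len(value) > MAX_LABEL_VALUE_LEN:
                yield node, key, value

def audit_cluster():
    """Pre-flight gate: scan the live cluster (requires kubectl access)."""
    out = subprocess.run(
        ["kubectl", "get", "nodes", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return list(overlong_labels(json.loads(out)))
```

Wiring `audit_cluster()` into CI as a failing check turns the audit into a gate, complementing an OPA/Kyverno admission policy that blocks regressions at the cluster boundary.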
