
ConfigMap hash rollout

Definition

ConfigMap hash rollout is the mechanism by which Kustomize forces a rolling restart of pods that mount a ConfigMap: it appends a content-hash suffix to the ConfigMap name, so whenever the underlying file content changes, the ConfigMap name changes, and Kubernetes treats the renamed ConfigMap as a new object. The resulting Deployment update triggers the rolling restart that would otherwise be silently skipped on a plain kubectl apply to an in-place-modified ConfigMap.

Why it exists — the naive-apply failure mode

Kubernetes does not restart pods when a mounted ConfigMap is updated in place. If you edit a ConfigMap with kubectl edit configmap my-config or push a new version with kubectl apply -f my-config.yaml, the ConfigMap's contents change but the pods continue running with the old mounted content until they happen to be recreated for some other reason. For a pipeline / application that reloads config only at startup, this means the new config never takes effect unless the operator manually restarts the deployment.

Workarounds:

  • Manual restart — kubectl rollout restart deployment/foo after every ConfigMap update. Imperative, easy to forget, breaks GitOps discipline.
  • Annotation-based hash — hand-compute the ConfigMap content hash, stuff it into a pod template annotation. Requires custom tooling.
  • Reloader controller — install a Kubernetes controller that watches ConfigMaps and restarts referencing deployments. An extra runtime component to operate.
  • ConfigMap hash rollout via Kustomize — the canonical mechanism discussed here.
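
The annotation-based workaround is usually wired into the pod template, in the style popularised by Helm's checksum/config pattern. A minimal sketch, with illustrative names and a placeholder hash value:

```yaml
# Sketch of the annotation-based workaround (names and hash are illustrative).
# Custom tooling recomputes the hash at render time; a changed annotation
# changes the pod template, so the Deployment controller rolls the pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  template:
    metadata:
      annotations:
        checksum/config: "3f8a9c1d..."   # sha256 over the config files
    spec:
      containers:
        - name: app
          image: example/app:1.0
          volumeMounts:
            - name: config
              mountPath: /etc/app
      volumes:
        - name: config
          configMap:
            name: my-config
```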

The mechanism

Kustomize's configMapGenerator produces a ConfigMap with a name suffix computed from the hashed content:

configMapGenerator:
  - name: connect-streams
    files:
      - config/first-names.yaml
      - config/last-names.yaml
generatorOptions:
  disableNameSuffixHash: false  # default — hash is enabled

  • Initial build produces ConfigMap connect-streams-abc123
  • Edit first-names.yaml → rebuild → produces connect-streams-def456
  • Kustomize rewrites all references to the ConfigMap inside the rest of the manifest set (Deployment spec, Helm chart values, etc.) so they point at the new hashed name
  • Kubernetes sees a new ConfigMap + an updated Deployment referencing it → rolling restart

The hash is deterministic — same content → same hash → idempotent applies. disableNameSuffixHash: true opts out, returning to naive in-place mutation.
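
The determinism can be sketched with a content hash over the source files. This is a simplified stand-in, not Kustomize's actual algorithm (which hashes the generated ConfigMap object and encodes the suffix differently); it only illustrates the property "same content, same name; changed content, new name":

```shell
# Simplified sketch of a content-addressed name suffix.
suffix() { cat "$@" | sha256sum | cut -c1-10; }

printf 'first: alice\n' > first-names.yaml
echo "connect-streams-$(suffix first-names.yaml)"   # stable across rebuilds

printf 'first: bob\n' > first-names.yaml
echo "connect-streams-$(suffix first-names.yaml)"   # new name -> new object
```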

Why this is a GitOps-native primitive

Under an Argo CD-driven GitOps loop, the full chain is:

  1. Operator edits config/first-names.yaml in the Git repo
  2. git commit + git push
  3. Argo CD syncs the repo, runs kustomize build
  4. Kustomize computes new hash, rewrites references, emits new Deployment manifest with the new ConfigMap name
  5. K8s sees new object, executes rolling restart with graceful shutdown

The whole thing is triggered by a Git commit, not by a kubectl invocation — preserving GitOps discipline. The commit message becomes the audit trail for the rollout.
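
The Argo CD side of this loop is a standard Application pointing at the Kustomize directory. A minimal sketch, with placeholder repo URL, paths, and namespaces:

```yaml
# Hypothetical Argo CD Application driving the kustomize build on each commit.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: connect-streams
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/repo.git   # placeholder
    targetRevision: main
    path: deploy/overlays/prod                  # directory holding kustomization.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: streams
  syncPolicy:
    automated: {}   # sync on every new commit, no kubectl involved
```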

Rollout semantics

Canonical wiki disclosure from the Redpanda 2025-12-02 tutorial: "ArgoCD then executed a rolling restart with graceful component shutdown to minimize the data loss."

For streaming pipelines specifically:

  • Graceful shutdown — in-flight messages drain before pod termination
  • Rolling update — one pod at a time, respecting maxUnavailable / maxSurge on the Deployment
  • Per-pipeline data loss — bounded by broker offset commits + idempotent producer configuration on the consuming side

For Redpanda Connect Streams mode specifically, this means pipeline updates do not require operator intervention — the Git commit is the full operation. Contrast with Standalone mode, where every config change rebuilds the pod anyway because the config is baked into the Helm values.

Limitations

  • Hash computation is content-sensitive — whitespace changes, line-ending changes, YAML key reordering all produce new hashes and new rollouts. Churn-prone unless the ConfigMap source files are linted + canonicalised.
  • Multiple pipelines share one ConfigMap — in Redpanda Connect Streams mode, editing one pipeline file rolls all pods running all pipelines in the shared ConfigMap. Fine-grained per-pipeline rollout requires per-pipeline ConfigMaps.
  • Pod startup cost amortises against rollout frequency — for heavyweight pipeline stacks with long warm-up, high-churn config editing can create an effective availability ceiling.
  • Not all workloads tolerate rolling restart — stateful consumers with in-memory state accumulated over time may lose state on restart. Budget + replay against durable offset storage.
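
The shared-ConfigMap limitation can be mitigated by generating one ConfigMap per pipeline, so an edit re-hashes only the ConfigMap it touches and only the pods mounting it roll. A sketch with illustrative names:

```yaml
# One configMapGenerator per pipeline (names are illustrative).
# Editing first-names.yaml re-hashes only connect-first-names-<hash>;
# workloads referencing connect-last-names are untouched.
configMapGenerator:
  - name: connect-first-names
    files:
      - config/first-names.yaml
  - name: connect-last-names
    files:
      - config/last-names.yaml
```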
