PATTERN

Automated configuration mapping

What it is

Automated configuration mapping is the migration pattern where you encode the translation rules between old and new system configurations as code — not as a runbook or a manual translation spreadsheet — and apply them mechanically across every affected configuration in the fleet.

The translation is:

  • Deterministic. Given the same old config, you always get the same new config.
  • Auditable. The mapping rules are a reviewable artifact.
  • Reversible. If the mapping is well-defined, translating back from new to old is tractable (pairs with rollback-capable migration tools).
  • Idempotent. Re-running the mapping produces no change on already-migrated configs.
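The four properties above can be made concrete with a toy translation function. Everything here is illustrative: the field names on both sides are hypothetical, not Salesforce's actual schemas.

```python
def translate(old: dict) -> dict:
    """Deterministic: the same old config always yields the same new config.
    Field names ("instance_types", "instanceTypes", ...) are made up for
    illustration."""
    return {
        "instanceTypes": old["instance_types"],
        "volumeSizeGiB": old["root_volume_gb"],
    }

old = {"instance_types": ["m5.xlarge"], "root_volume_gb": 100}
new = translate(old)

# Idempotence in practice: re-translating an already-migrated source config
# produces an identical target config, so the diff against the applied
# version is empty.
assert translate(old) == new
```

Auditability falls out of this structure for free: `translate` is a reviewable artifact in version control, unlike a spreadsheet of hand-done translations.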

When it becomes necessary

  • Fleet size × config diversity too large for human translation. Salesforce's canonical instance: 1,180+ node pools, each with different instance types, volume sizes, IOPS settings, node labels, min/max counts, AZ policies. Hand-translating each at migration time produces O(configs) opportunities for human error.
  • High cost of misconfiguration. A botched node-pool config can leave workloads unable to schedule; compounded across a fleet, the damage is very hard to recover from.
  • The old and new schemas are both structured. Automation is possible only when the old-to-new mapping can be expressed as rules; if the old configs encode meaning in ad-hoc naming conventions rather than structured fields, you are back to manual translation.

Salesforce canonical instance

The Salesforce Karpenter migration post describes the mapping in concrete detail:

*"To convert existing Auto Scaling group configurations to Karpenter-based definitions, the team automated the mapping logic between legacy and modern configurations. For example:

  • Auto Scaling group instance types → EC2NodeClass instance types
  • Root volume sizes → Storage parameters in Karpenter config
  • Node labels → Applied in both NodePool and EC2NodeClass"*

"With over 1,180 node pools containing highly diverse configurations, automation was essential to minimize errors and reduce manual toil." (Source: sources/2026-01-12-aws-salesforce-karpenter-migration-1000-eks-clusters)

The post surfaces the input shape of one ASG config — instance type, root volume size/IOPS/type/throughput, min/max node counts, multi-AZ flag, launch-template method, GPU-enabled flag. Each of these is a field with a known target in the new Karpenter EC2NodeClass / NodePool schema.
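A sketch of what those per-field rules might look like as code. The input keys mirror the ASG fields listed above; the output loosely follows the EC2NodeClass / NodePool split the post describes, but the exact field names on both sides are simplified, not the real Karpenter schema.

```python
def map_asg_to_karpenter(asg: dict) -> dict:
    """Translate one ASG config into simplified EC2NodeClass / NodePool
    documents. Field names on both sides are illustrative only."""
    ec2_node_class = {
        "instanceTypes": asg["instance_types"],        # ASG instance types -> EC2NodeClass
        "rootVolume": {                                # root volume -> storage parameters
            "sizeGiB": asg["root_volume_size_gb"],
            "type": asg.get("root_volume_type", "gp3"),
            "iops": asg.get("root_volume_iops"),
            "throughput": asg.get("root_volume_throughput"),
        },
        "labels": asg.get("node_labels", {}),          # labels applied in both documents
    }
    node_pool = {
        "limits": {"minNodes": asg["min_size"], "maxNodes": asg["max_size"]},
        "labels": asg.get("node_labels", {}),
        "multiAz": asg.get("multi_az", False),
        "gpu": asg.get("gpu_enabled", False),
    }
    return {"EC2NodeClass": ec2_node_class, "NodePool": node_pool}
```

Note that node labels land in both output documents, matching the post's "Applied in both NodePool and EC2NodeClass" rule: a single source field can fan out to multiple targets.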

Shape

  1. Formalise the source schema. Enumerate every field in the old config, its data type, and its legal values.
  2. Formalise the target schema. Same, for the new system.
  3. Write the mapping. Per-field translation rules, enriched with any implicit-defaults translation where the old format relied on platform conventions.
  4. Run mapping as a code-gen pass over the fleet — typically producing a PR / diff per source config so the translation is reviewable.
  5. Gate on validation — ensure the generated configs lint cleanly against the new schema before they get applied.
  6. Apply. Usually in a phased rollout with soak times, so any systematic mapping error surfaces on a small subset first.
  7. Idempotence check. Re-running the mapping produces empty diffs for already-migrated configs.

Anti-patterns

  • Translating by hand — for any N > ~20 configs, human error rate dominates the cost.
  • Ignoring implicit defaults — the old config's platform often has implicit defaults (ASG's instance-type fallback, default volume IOPS); if the new platform doesn't, a spec-verbatim mapping fails silently.
  • One-shot irreversible mapping — no rollback path if the target schema has undiscovered edge cases.
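The implicit-defaults anti-pattern has a simple structural fix: materialise the old platform's defaults before mapping, so the target config is explicit even where the source was not. The default values below are hypothetical placeholders, not real ASG defaults.

```python
# Hypothetical defaults the old platform applied implicitly; a verbatim
# field-for-field mapping would silently drop them.
ASG_IMPLICIT_DEFAULTS = {
    "root_volume_type": "gp2",
    "root_volume_iops": 3000,
}

def with_defaults(asg: dict) -> dict:
    """Overlay the source config on the platform defaults, so every field
    the old platform filled in implicitly is explicit before translation."""
    return {**ASG_IMPLICIT_DEFAULTS, **asg}
```

Explicit fields in the source config win over the defaults, so only genuinely unset fields are filled in.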

Seen in
