CAB approval gate (anti-pattern)¶
CAB approval gate is the traditional change-management pattern in which every significant production change must be reviewed and approved by a Change Advisory Board — a cross-functional committee of operations / security / platform / compliance representatives — before it can be deployed. It is the operational implementation of ITIL-style governance.
This page documents the pattern as an anti-pattern: the research evidence against its risk-reduction efficacy (concepts/external-approval-ineffectiveness) is strong enough that new systems should not adopt it, and existing systems should plan migrations away from it.
The pattern¶
- Team writes a change request (CR) describing the intended change, risk assessment, rollback plan, proposed window.
- CR submitted to a ticketing system (ServiceNow, Jira-SM, etc.).
- CAB meeting (weekly / daily) reviews queued CRs.
- CAB either approves, rejects, or requests more information.
- On approval, the CR is scheduled for the designated change window.
- The team executes the change in the window; post-change review documents success/failure.
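Mechanically, the workflow above is a small state machine over the CR. A minimal sketch — the state names and transitions are illustrative, not taken from any particular ticketing product:

```python
# Illustrative CR lifecycle; state names are assumptions, not a
# ServiceNow/Jira-SM schema.
ALLOWED = {
    "draft":      {"submitted"},
    "submitted":  {"approved", "rejected", "needs_info"},
    "needs_info": {"submitted"},   # team amends the CR and resubmits
    "approved":   {"scheduled"},   # assigned to a change window
    "scheduled":  {"executed"},
    "executed":   {"reviewed"},    # post-change review closes the loop
}

def advance(state: str, target: str) -> str:
    """Move a change request to `target`, enforcing the CAB workflow."""
    if target not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Note that every transition before "executed" is administrative: nothing in the machine inspects the change itself, which is the core of the critique below.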
Why it looks correct¶
- Separation of duties — the person requesting is not the person approving. Satisfies SOX / PCI-DSS / ISO 27001 surface requirements.
- Audit trail — there is a paper record of every approved change.
- Cross-functional visibility — ops / security / compliance all see what the delivery team is doing.
- Regulator-legible — when a regulator asks "how do you manage change?" the CAB is an answer they recognize.
Why it fails¶
From the 2023-08-16 Swedbank post:
- Near-universal approval rate — the UK FCA's multi-firm review of ~1M changes found that "CABs approved over 90% of the major changes they reviewed, and in some firms the CAB had not rejected a single change during 2019." A filter that passes everything is not a filter.
- Catches documentation, not risk — "Change management gathers documentation of process conformance, but it doesn't reduce risk in the way that you'd think. It reduces the risk of undocumented changes, but risks in changes that are fully documented can sail through the approval process unnoticed."
- Negatively correlates with DORA stability metrics — per Accelerate, external approvals make lead time, deployment frequency, and restore time worse, and are uncorrelated with change-fail rate. "Worse than having no change approval process at all."
- Incentivizes dangerous batching — because CAB approval is expensive, teams batch multiple changes per CR. Bigger batches = bigger blast radius.
- Extends MTTR when applied to rollbacks — a CAB that gates emergency deploys prolongs incidents. Organizations commonly carve out an "emergency CAB" path that ends up handling most high-pressure changes, making the normal-CAB gate an administrative detour rather than a control.
- Bypasses accumulate — the Swedbank case: "none of the bank's control mechanisms were able to capture the deviation and ensure that the process was followed." Because nobody is monitoring production state for drift (patterns/runtime-change-detection), changes that bypass the CAB are invisible until they cause an outage.
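The batching incentive has simple arithmetic behind it: if each independent change fails with probability p, a CR that batches n of them fails with probability 1 − (1 − p)^n, and on failure all n changes are in the blast radius and must be untangled together. A quick sketch (p = 0.05 is an assumed illustrative value, not a measured rate):

```python
def batch_failure_probability(p: float, n: int) -> float:
    """Probability that a CR batching n independent changes,
    each failing with probability p, fails as a whole."""
    return 1 - (1 - p) ** n

# Five 5%-risk changes batched into one CR: ~23% chance the CR
# fails, versus 5% per change if shipped individually.
risk = batch_failure_probability(0.05, 5)
```

The expensive approval gate thus pushes teams toward exactly the batch sizes that make failures both likelier and harder to diagnose.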
When it is unavoidable¶
Some constraints force it:
- Explicit regulator requirement: some regulators prescribe CAB-shaped governance by name.
- Legacy deployment surface: monolithic quarterly releases have genuinely elevated per-change risk; CAB review at that cadence is proportional. See concepts/legacy-system-risk.
- Highly-integrated multi-vendor environments: when the delivery team cannot unilaterally trigger a deploy (e.g. mainframe + outsourced ops), some approval/coordination ritual is required.
Even in these cases, the CAB's work should be supplemented by the DORA-endorsed capabilities, not trusted on its own.
Preferred alternatives¶
- Small, frequent releases (the FCA-corroborated positive finding)
- Peer review within the delivery team
- Automated test gates + staged rollout
- Fast rollback to minimize MTTR
- Runtime change detection to catch bypasses
- Automated compliance evidence collection (GitOps history, deployment logs, feature-flag audit trails — see concepts/audit-trail)
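The last alternative can be made concrete: instead of a hand-written CR form, the deploy pipeline emits the evidence record itself. A minimal sketch — the field names and event shape are assumptions for illustration, not a standard schema:

```python
import json
from datetime import datetime, timezone

def evidence_record(commit_sha: str, approver: str, pipeline_url: str,
                    tests_passed: bool, rollback_ref: str) -> str:
    """Serialize the audit evidence a CAB form used to capture,
    sourced from the pipeline instead of a hand-written CR."""
    record = {
        "change": commit_sha,          # immutable change identifier
        "approver": approver,          # PR reviewer, from the merge event
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tests_passed": tests_passed,  # automated gate result
        "rollback": rollback_ref,      # previous known-good ref
        "evidence": pipeline_url,      # link to full pipeline logs
    }
    return json.dumps(record, sort_keys=True)
```

Because every field is machine-sourced, the record is produced for 100% of deploys, not just the ones someone remembered to file a CR for.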
Migration path¶
- Replace the CAB meeting with an automated pipeline that emits the evidence the CAB was producing (change record, approver, timestamp, rollback plan, test results).
- Move "approval" to peer review in PRs / CRs by members of the delivery team, with any required functional stakeholder (security, data-privacy) added as a named reviewer on risky changesets.
- Invest in runtime monitoring so that production state (the "lake") is continuously diffed against the stream of approved changes.
- Negotiate with regulators / auditors to accept the pipeline-as-evidence model; this is the slow step.
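The monitoring step above reduces to a continuous set difference: anything observed running in production that never appeared in the approved-change stream is a bypass. A toy sketch, assuming changes are identified by content hashes or version tags (the identifiers below are hypothetical):

```python
def detect_bypasses(observed: set[str], approved: set[str]) -> set[str]:
    """Return change identifiers seen in production that were never
    approved -- the drift that Swedbank's controls failed to capture."""
    return observed - approved

# Hypothetical identifiers for illustration.
approved = {"cfg-v41", "cfg-v42"}            # from the pipeline's record
observed = {"cfg-v42", "cfg-v43"}            # scraped from running systems
bypasses = detect_bypasses(observed, approved)  # {"cfg-v43"}
```

Run continuously, this turns the paper-only control into a detective one: a bypass surfaces when it lands, not when it causes an outage.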
Seen in¶
- sources/2023-08-16-highscalability-the-swedbank-outage-shows-that-change-controls-dont-work — Swedbank + UK FCA + Accelerate triangulation.