External approval ineffectiveness¶
External approval ineffectiveness names the empirical finding — established by the DORA / Accelerate research program and corroborated by regulator-level studies — that approval of production changes by an external body (change manager, Change Advisory Board, committee not part of the delivery team) is not only unhelpful but net-negative for production stability.
The strongest single statement of the finding comes from Forsgren, Humble, and Kim's 2018 book Accelerate (quoted verbatim in the 2023-08-16 High Scalability post):
"We found that external approvals were negatively correlated with lead time, deployment frequency, and restore time, and had no correlation with change fail rate. In short, approval by an external body (such as a change manager or CAB) simply doesn't work to increase the stability of production systems, measured by the time to restore service and change fail rate. However, it certainly slows things down. It is, in fact, worse than having no change approval process at all."
What the claim is (and isn't)¶
It is: external approval is negatively correlated with three of the four DORA metrics (concepts/dora-metrics) and uncorrelated with the fourth.
| DORA metric | Correlation with external approval |
|---|---|
| Lead time | negatively correlated |
| Deployment frequency | negatively correlated |
| Restore time (MTTR) | negatively correlated |
| Change fail rate | uncorrelated |
It isn't: a claim that any form of review fails. The positive patterns DORA research endorses — peer review / pair programming / pre-commit review by the delivery team itself — are different social objects: reviewers who built the change can actually evaluate its risk surface. External reviewers in a 30-minute CAB meeting cannot.
Why the finding is counter-intuitive¶
Traditional ITIL-shaped IT governance treats external approval as the load-bearing control — the idea being that a cross-functional group with org-wide perspective will "catch what the delivery team missed." In practice, per the UK FCA's multi-firm review, CABs approve >90% of major changes and some firms had a 0% rejection rate across all of 2019. A 100% approval rate cannot be acting as a filter; the CAB is a documentation ritual, not a risk review.
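The "not a filter" point can be made precise with a toy information-theoretic calculation (an illustration, not part of the FCA review): a gate that approves every change carries zero information about whether the change will fail, so it cannot be screening out risky changes. The numbers below are assumed for illustration.

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) between gate decision G and change
    outcome O, given joint[g][o] = P(G=g, O=o)."""
    pg = [sum(row) for row in joint]
    po = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for g, row in enumerate(joint):
        for o, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pg[g] * po[o]))
    return mi

# A gate that approves 100% of changes, where (say) 10% of changes fail:
# its decision is statistically independent of the outcome.
always_approve = [[0.9, 0.1],   # approve: (ok, fail)
                  [0.0, 0.0]]   # reject
print(mutual_information(always_approve))  # → 0.0
```

Whatever the reviewers discuss in the meeting, a 100%-approval gate transmits zero bits about change risk; the filtering, if any, happens elsewhere.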
The proposed mechanism¶
External approval fails for structural reasons, not malice:
- Depth asymmetry — the CAB has less context than the delivery team. Reviewers can sanity-check "is a deploy happening?" but cannot evaluate "does this schema migration's lock-acquire path interact with the replica's hot-keys correctly?"
- Information asymmetry — the change request is the delivery team's summary; anything omitted or misrepresented is invisible to the CAB.
- Batching incentives — because approvals are expensive and slow, teams batch multiple changes into one approval. Bigger batches = bigger blast radius per deploy = higher per-deploy failure probability.
- Approval fatigue — reviewers see so many approvals they stop reading substantively; the 90%+ approval rate is the evidence.
- Negative externality on MTTR — if a CAB approval is needed to deploy a rollback or hotfix, restore time balloons. This is the hardest of the DORA effects to dispute, and the strongest: the process that nominally reduces risk actively prolongs incidents.
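The batching incentive can be quantified with a simple probability model (my illustration, not a DORA result): if each change independently fails with probability p, a batch of n changes fails as a unit with probability 1 − (1 − p)^n, so batching multiplies per-deploy failure risk while also widening the blast radius to the whole batch.

```python
def batch_failure_probability(p_change: float, batch_size: int) -> float:
    """P(at least one change in the batch causes a failure), assuming
    independent failures with per-change probability p_change.
    An illustrative model with assumed numbers, not empirical data."""
    return 1 - (1 - p_change) ** batch_size

# With a 2% per-change failure rate:
for n in (1, 5, 20):
    print(n, round(batch_failure_probability(0.02, n), 3))
# → 1 0.02
# → 5 0.096
# → 20 0.332
```

A quarterly "big bang" release of 20 batched changes is, under these assumptions, roughly 16 times more likely to break production per deploy than a single-change release, and the on-call engineer must bisect 20 changes instead of rolling back one.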
What does work, by contrast¶
- Smaller, more frequent releases — UK FCA: "firms that deployed smaller, more frequent releases had higher change success rates than those with longer release cycles." See patterns/small-frequent-releases-for-risk-reduction.
- Peer review within the delivery team — reviewers with full context, during implementation, not as a gate.
- Automated tests + staged rollout + fast rollback — the DORA-endorsed stack.
- Runtime change detection to catch changes that bypass governance entirely.
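The stack above can be sketched as a single control loop: ship to a small slice of traffic, let live data (not a committee) decide whether to proceed, and roll back automatically on regression. Everything here — the function names, stage fractions, and error budget — is a hypothetical sketch, not a real API.

```python
# Hypothetical staged-rollout gate. The "approval" is an automated check
# against live error rates at each stage; rollback needs no meeting.
STAGES = [0.01, 0.10, 0.50, 1.00]   # fraction of traffic per stage
ERROR_BUDGET = 0.005                # max acceptable error rate (assumed)

def staged_rollout(deploy, error_rate, rollback):
    """deploy(fraction) ships the change to that slice of traffic;
    error_rate() samples the current live error rate;
    rollback() restores the previous version immediately."""
    for fraction in STAGES:
        deploy(fraction)
        if error_rate() > ERROR_BUDGET:
            rollback()
            return False        # rejected by production data
    return True                 # fully rolled out
```

The contrast with a CAB is structural: the decision is made per stage, with full runtime context, in seconds — exactly the depth and information asymmetries an external board cannot close.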
Regulatory trap¶
External approval remains near-universal in regulated industries because the cost of dropping it is a fine, not an incident. Swedbank was fined partly for non-compliance with its own declared process (concepts/change-management); a bank that removes its CAB faces regulator escalation even if DORA data predicts fewer incidents. Dismantling the ineffective process requires regulator cooperation, which is slow — so the empirical finding and operational practice persist in misalignment.