CONCEPT Cited by 1 source
Severity-filtered violation reporting¶
Definition¶
Severity-filtered violation reporting is the practice of surfacing only the highest-severity bucket of violations when a new automated-check class is introduced, deferring lower-severity buckets to explicit future work. The lever exists because automated checks typically produce a long tail of violations at first launch, and reporting everything collapses the signal into noise — teams triage the flood, give up, and the check loses operational trust.
The mechanism¶
A severity-aware check engine (e.g. Axe) tags each violation with an impact level. Axe's taxonomy:
- critical — directly prevents access.
- serious — likely to block access.
- moderate — partial impairment.
- minor — a nuisance.
A severity filter keeps only a named subset:
At launch: report only critical. Over time, add severities as the team's triage capacity grows.
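The filter itself is a one-liner over the engine's output. A minimal sketch in Python, assuming axe-core-style violation records with an `impact` field (the function name and sample data are illustrative, not part of any library):

```python
# Severity filter over axe-core-style results: keep only the
# severities the team has opted into reporting.

AXE_SEVERITIES = ("critical", "serious", "moderate", "minor")

def filter_violations(violations, reported=("critical",)):
    """Keep only violations whose impact is in the reported set."""
    return [v for v in violations if v.get("impact") in reported]

violations = [
    {"id": "image-alt", "impact": "critical"},
    {"id": "color-contrast", "impact": "serious"},
    {"id": "region", "impact": "moderate"},
]

# At launch: report only critical.
launch_report = filter_violations(violations)

# Later: widen the set as triage capacity grows.
wider_report = filter_violations(violations, reported=("critical", "serious"))
```

The growth path is then a one-word config change: add a severity name to the `reported` tuple.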
Why this is the canonical launch posture¶
Three compounding reasons to start narrow:
- Alert fatigue is path-dependent. A suite that fires 500 violations on day one is a suite that gets an email filter applied to it on day two. You can't un-noise an over-reporting system without paying a cost in operator trust.
- Signal calibration is empirical. Teams discover which violations are real and which are false-positives only by triaging them. Starting with the smallest credible set lets the team calibrate the exclusion list and the rule-fit with the codebase incrementally — see patterns/exclusion-list-for-known-issues-and-out-of-scope-rules.
- Blocking-vs-non-blocking is a separate axis. Starting non-blocking removes urgency; starting narrow limits volume. The two levers compose.
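The independence of the two levers can be made concrete. A sketch, assuming a hypothetical `report` helper where the severity set limits volume and a `blocking` flag controls whether the run fails the build:

```python
def report(violations, reported=("critical",), blocking=False):
    """Severity filter limits volume; the blocking flag controls urgency.
    The two levers are independent and compose."""
    kept = [v for v in violations if v["impact"] in reported]
    for v in kept:
        print(f"[{v['impact']}] {v['id']}")
    # Non-blocking launch posture: exit 0 even when violations exist.
    return 1 if (blocking and kept) else 0

# Canonical launch posture: narrow AND non-blocking.
exit_code = report(
    [{"id": "image-alt", "impact": "critical"}],
    reported=("critical",),
    blocking=False,
)
```

Each lever can then be tightened on its own schedule: widen `reported` as triage capacity grows, and flip `blocking` once the team trusts the signal.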
Growth path¶
The canonical future-work declaration:
"We chose to report only the violations deemed Critical according to the WCAG. Serious, Moderate, and Mild are other possible severity levels that we may add in the future."
This phrasing canonicalises the intention: the severity filter is a rollout-stage lever, not a permanent scope limitation. The team commits publicly to expanding as operational capacity grows, which is itself an alignment mechanism with the triage stakeholders.
Generalisation¶
The same lever applies to any new automated-check class:
- Security scanners — surface critical CVEs first; add medium / low as triage capacity scales.
- Performance budget alerts — alert on >10% regression first; tighten the threshold later.
- Lint rules — start with errors; add warnings once error count is zero and stable.
- SLO alerting — page on burn-rate >10x at launch; tighten to 5x, 2x as the service matures.
The anti-pattern is flipping every switch at launch — which produces noise and erodes the operational trust that automated checks depend on.
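Because each of these check classes has an ordered severity taxonomy, the lever generalises to a single threshold comparison. A sketch with illustrative taxonomies (the orderings and sample data below are examples, not any scanner's actual output):

```python
# Generic severity gate: keep findings at or above a threshold
# severity in an ordered taxonomy (most severe first).

def at_or_above(severity, threshold, order):
    """True if `severity` ranks at or above `threshold` in `order`."""
    return order.index(severity) <= order.index(threshold)

CVE_ORDER = ("critical", "high", "medium", "low")
LINT_ORDER = ("error", "warning", "info")

findings = [("CVE-2024-0001", "critical"), ("CVE-2024-0002", "medium")]

# Launch posture: surface only critical CVEs; widen the threshold later.
surfaced = [f for f in findings if at_or_above(f[1], "critical", CVE_ORDER)]
```

Loosening the threshold from `"critical"` to `"high"` or `"medium"` is the same rollout-stage move as adding Axe severities.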
Seen in¶
- sources/2025-01-07-slack-automated-accessibility-testing-at-slack — Slack filtered Axe violations to critical impact only at initial launch, explicitly deferring serious / moderate / mild as named future work.
Related¶
- patterns/severity-gated-violation-reporting — the pattern that applies this concept.
- systems/axe-core — the engine whose impact taxonomy enables the filter.
- concepts/automated-vs-manual-testing-complementarity — the layering context within which severity-gating is one scope lever among several.