PATTERN

Severity-gated violation reporting

Context

A team is launching a new automated-check class (accessibility, security scanning, perf budget enforcement) that produces severity-tagged violations (critical / serious / moderate / minor, or equivalent taxonomy). At initial launch:

  • The team lacks triage capacity for all severities.
  • The false-positive rate is uncharacterised.
  • Operator trust hasn't been earned yet.
  • Flooding operators with low-severity noise on day one will erode the check's credibility before it has a chance to prove signal value.

The pattern

At launch, filter violations to only the highest-severity bucket. Explicitly defer lower severities as named future work. Expand scope incrementally as triage capacity and calibration grow.

// Launch config:
const reportedImpacts = ['critical'];
// Later, when operationally ready:
// const reportedImpacts = ['critical', 'serious'];

const reportableViolations = violations.filter(
  v => reportedImpacts.includes(v.impact)
);

The filter is code-level and visible in PRs, not a hidden runtime config — making the severity widening a deliberate team decision.
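One way to keep that PR-visible config honest is to validate it against the full taxonomy at startup, so a widening PR cannot introduce a silent typo. A minimal sketch, assuming an axe-style impact taxonomy; `ALL_IMPACTS` and the error wording are illustrative, not from the source:

```javascript
// Hypothetical full taxonomy for this check class (axe-style impacts).
const ALL_IMPACTS = ['critical', 'serious', 'moderate', 'minor'];

// Launch config: only the highest-severity bucket is reported.
const reportedImpacts = ['critical'];

// Fail fast on unknown severities, so a typo like 'critcal' in a
// widening PR breaks the build instead of silently dropping a bucket.
for (const impact of reportedImpacts) {
  if (!ALL_IMPACTS.includes(impact)) {
    throw new Error(`Unknown severity in reportedImpacts: ${impact}`);
  }
}
```

Because the guard lives next to the config, the review diff for a widening is a single line plus a guaranteed-valid value.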

Commitment to growth as alignment mechanism

The canonical public phrasing:

"We chose to report only the violations deemed Critical according to the WCAG. Serious, Moderate, and Mild are other possible severity levels that we may add in the future."

This matters:

  • It aligns the team's future work with the triage-stakeholders' expectations (accessibility team knows more will come).
  • It signals the severity filter is a rollout-stage lever, not a permanent scope decision.
  • It creates a natural next-milestone ("enable serious") that the team can plan toward.

Composes with other rollout levers

Severity-gating composes naturally with:

  • Non-blocking at launch — violations surface but don't block merges. Composed: narrow severity + non-blocking = minimum-operator-cost launch.
  • patterns/exclusion-list-for-known-issues-and-out-of-scope-rules — pre-audit exclusion handles the "this rule doesn't apply here" axis; severity-gating handles the "this violation is too low-priority right now" axis. Orthogonal filters.
  • patterns/tri-mode-opt-in-test-execution — environment-flag-gated execution reduces where the checks run; severity-gating reduces what is reported from each run. Orthogonal.
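The first composition (narrow severity + non-blocking) can be sketched as a single check entry point. `runCheck`, `blocking`, and the violation shape are illustrative names, not from the source:

```javascript
// Launch stage: report only critical, and surface rather than block.
const reportedImpacts = ['critical'];
const blocking = false; // flip to true once trust and triage capacity exist

function runCheck(violations) {
  // Severity gate: filter down to the reported buckets.
  const reportable = violations.filter(v => reportedImpacts.includes(v.impact));
  // Surface findings without failing the build.
  for (const v of reportable) {
    console.warn(`[check] ${v.impact}: ${v.id}`);
  }
  // Exit code only goes non-zero once `blocking` is enabled.
  return blocking && reportable.length > 0 ? 1 : 0;
}
```

The two levers stay independent: widening `reportedImpacts` and flipping `blocking` are separate, individually reviewable PRs.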

The widening playbook

Over time, the team will want to widen the severity filter. The canonical steps:

  1. Calibrate at the current filter. Confirm critical is close to fixed (low open-ticket count in the triage queue) and the false-positive rate is acceptable.
  2. Audit the deferred severities. Run a one-off report with serious included, triage the output, estimate ticket volume.
  3. Update the exclusion list for serious-specific out-of-scope entries before widening.
  4. Widen. Add serious to the reported list. Watch ticket-creation rate.
  5. Re-calibrate before the next widen. Don't go from critical to critical + serious + moderate + minor in one step.
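Step 2's one-off audit can be run without changing what is reported, by counting the deferred bucket alongside the live one. A minimal sketch; `auditDeferred` and the violation shape are illustrative:

```javascript
// Live config stays untouched; the audit adds the candidate bucket.
const reportedImpacts = ['critical'];
const auditImpacts = [...reportedImpacts, 'serious'];

// Count violations per severity to estimate ticket volume before widening.
function auditDeferred(violations) {
  const counts = {};
  for (const v of violations) {
    if (!auditImpacts.includes(v.impact)) continue;
    counts[v.impact] = (counts[v.impact] || 0) + 1;
  }
  return counts; // e.g. a large 'serious' count means: triage before widening
}
```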

Anti-pattern

Flipping every switch at launch, i.e. reporting all severities on day one. This produces:

  • Hundreds or thousands of low-signal violations.
  • Triage-queue explosion.
  • Operators build email filters, Slack mutes.
  • The check loses credibility before it has produced any high-signal finding.
  • Unpicking later is costly — operator-trust decay is path-dependent.

Generalisation

Applies to any new automated-check class at launch:

  • SAST / security scanners — start with critical CVEs, add high / medium / low over time.
  • Performance budget alerts — start with >20% regression, tighten to 10%, 5%.
  • Lint rules — start with errors, add warnings as errors stay at zero.
  • SLO alerting — start with 10x burn-rate, tighten to 5x / 2x.

Canonical rule: launch a new signal at the tightest actionable threshold, widen as calibration justifies.
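For threshold-based checks like the performance-budget example, the same rule is a single tunable number rather than a severity list. A sketch with illustrative names and numbers:

```javascript
// Launch at the loosest actionable threshold: alert on >20% regressions.
// Later tighten: 0.10, then 0.05, as calibration justifies.
const regressionThreshold = 0.20;

function shouldAlert(baselineMs, currentMs) {
  return (currentMs - baselineMs) / baselineMs > regressionThreshold;
}
```

As with the severity list, the threshold lives in code, so each tightening is a reviewable one-line diff.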
