# Threat modeling

## Definition
Threat modeling is the discipline, originating in security engineering, of enumerating threats against a system before deciding on countermeasures. A canonical threat model artifact contains:
- A summary of the system (or change) under review.
- A comprehensive list of threats — "all the nasty things that an adversary might try."
- A description of how the system is resilient to each threat (or an explicit acknowledgment that it isn't).
Writing the threats down forces the author to think like an adversary and to enumerate exhaustively before filtering, which catches more than designing countermeasures reactively.
## Generalization: from security to durability
S3 adopts threat modeling as the structure of its durability reviews (see patterns/durability-review):
> The process borrows an idea from security research: the threat model. The goal is to provide a summary of the change, a comprehensive list of threats, then describe how the change is resilient to those threats. In security, writing down a threat model encourages you to think like an adversary and imagine all the nasty things that they might try to do to your system. In a durability review, we encourage the same "what are all the things that might go wrong" thinking, and really encourage engineers to be creatively critical of their own code.
The same shape transfers: "adversary" becomes "failure mode," "attack" becomes "corruption / data-loss path." The structural benefits — comprehensive enumeration, explicit coupling between risks and countermeasures — remain.
## Two specific properties the S3 team values
- It encourages authors and reviewers to really think critically about the risks we should be protecting against.
- It separates risk from countermeasures, and lets us have separate discussions about the two sides.
The second is the less-obvious payoff. In normal code review, "the risk" and "the mitigation" get argued in the same breath; you can win an argument about a specific fix while missing that the risk itself was mis-scoped. Threat modeling forces an explicit split.
## Why the separation matters
Once risk and countermeasure are separate artifacts, the team can:
- Prefer coarse-grained guardrails (simple mechanisms that kill whole classes of risks) over per-risk mitigations.
- Notice when several risks would all be addressed by one structural change — and refactor rather than patch.
- Hold the risk catalog constant across reviews, and spot when a new change should have triggered an entry in that catalog but didn't.
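With the risk catalog and the mitigations held as separate artifacts, guardrail coverage becomes a checkable property. A minimal sketch, with an entirely hypothetical catalog and guardrail map (not an actual S3 artifact):

```python
# Hypothetical risk catalog: enumerated independently of any mitigation.
risk_catalog = {
    "torn-write", "partial-flush", "stale-read-after-crash",
    "bitrot-on-disk", "replay-of-old-request",
}

# Each coarse-grained guardrail names the whole class of risks it kills.
guardrails = {
    "executable-spec-of-disk-layer": {
        "torn-write", "partial-flush",
        "stale-read-after-crash", "bitrot-on-disk",
    },
    "idempotency-tokens": {"replay-of-old-request"},
}

# Risks no guardrail covers need either a new mitigation or explicit acceptance.
covered = set().union(*guardrails.values())
uncovered = risk_catalog - covered
print(sorted(uncovered))  # → []

# Coverage counts make the guardrail preference visible: one structural
# mechanism that spans several risks beats a pile of per-risk patches.
best = max(guardrails, key=lambda g: len(guardrails[g]))
print(best)  # → 'executable-spec-of-disk-layer'
```

Because the catalog is its own artifact, the `uncovered` computation can be rerun against every new change, which is exactly what makes "should have triggered an entry but didn't" detectable.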
Warfield states the guardrail preference explicitly:
> When we are identifying those protections, we really focus on identifying coarse-grained "guardrails". These are simple mechanisms that protect you from a large class of risks. Rather than nitpicking through each risk and identifying individual mitigations, we like simple and broad strategies that protect against a lot of stuff.
ShardStore's executable specification (see systems/shardstore) is the canonical example of such a guardrail: one mechanism that defeats many classes of disk-layer durability bugs.
## Third generalization: to agentic-AI behavior envelopes
Byron Cook's 2026-02 interview (Source: sources/2026-02-17-allthingsdistributed-byron-cook-automated-reasoning-trust-ai) extends the shape one layer further — from security to durability to agentic-AI behavior correctness:
- Adversary becomes out-of-envelope agent trajectory (anything the agent might do that the system should not permit).
- Countermeasure becomes capability envelope + automated reasoning over composition (patterns/envelope-and-verify).
- Coarse-grained guardrail becomes the envelope itself: one spec that kills entire classes of agent misbehavior, rather than per-action filters.
The structural benefits translate intact: comprehensive enumeration of "what could go wrong" before designing countermeasures; explicit separation of risk catalog from mitigation; preference for coarse-grained guardrails. AgentCore (see systems/bedrock-agentcore) is the runtime that enforces the envelope; concepts/automated-reasoning is what reasons about whether the envelope is tight enough.
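The envelope-as-guardrail idea can be sketched as one declarative spec checked against a whole agent trajectory, rather than a pile of per-action filters. Everything below is a hypothetical illustration of the shape (the action names, glob-based resource patterns, and `ENVELOPE` structure are invented for this sketch, not AgentCore's API):

```python
import fnmatch

# Hypothetical capability envelope: one spec that rules out entire
# classes of agent misbehavior, instead of filtering action by action.
ENVELOPE = {
    "read":  {"s3://team-bucket/*"},
    "write": {"s3://team-bucket/scratch/*"},
}


def in_envelope(action: str, resource: str) -> bool:
    """An agent step is permitted iff some envelope pattern allows it."""
    patterns = ENVELOPE.get(action, set())
    return any(fnmatch.fnmatch(resource, p) for p in patterns)


# A trajectory is in-envelope only if every step is.
trajectory = [
    ("read",  "s3://team-bucket/data.csv"),
    ("write", "s3://team-bucket/scratch/out.csv"),
    ("write", "s3://other-bucket/exfil.csv"),  # out-of-envelope step
]
violations = [(a, r) for a, r in trajectory if not in_envelope(a, r)]
print(violations)  # → [('write', 's3://other-bucket/exfil.csv')]
```

A runtime check like this is the trivial end of the spectrum; the point of the automated-reasoning layer is to prove properties of the envelope (and of composed envelopes) before any trajectory runs.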
This is the third consecutive domain generalization of the same discipline: security threat models → durability reviews (S3) → agent-behavior envelopes (Bedrock). Each time "adversary" gets reinterpreted; the rest of the methodology stays put.
## Seen in
- sources/2025-02-25-allthingsdistributed-building-and-operating-s3 — original security framing and the durability-review generalization.
- sources/2026-02-17-allthingsdistributed-byron-cook-automated-reasoning-trust-ai — further generalization to agentic-AI safety via patterns/envelope-and-verify and concepts/automated-reasoning over agent composition.