

Agent D&D alignment framing

Definition

The D&D alignment framing is Tyler Akidau's rhetorical device for locating human workers vs. AI agents on a two-axis risk chart adapted from Dungeons & Dragons character alignments:

  • Horizontal axis: lawful (follows established rules) vs. chaotic (doesn't follow a consistent rule set).
  • Vertical axis: good (selfless / intends benefit) vs. evil (selfish / intends harm).

Four quadrants: lawful-good, lawful-evil, chaotic-good, chaotic-evil (with neutral variants omitted for compactness).

The framing's load-bearing claim: enterprises hire humans into the lawful-good quadrant (background checks + job descriptions + policies + performance reviews = evidence of lawful behaviour with good intent). AI agents default to the chaotic column — because, verbatim, "you don't know what you don't know. Without the ability to govern or audit an agent, you can't confirm the agent is doing exactly what it's supposed to do (and only what it's supposed to do)."

Canonical source: Akidau's "How to safely deploy agentic AI in the enterprise" talk recap:

"Agentic AI, on the other hand, mostly falls into the right column of the chart. Despite the guardrails and training that companies attempt to put AI through, the best outcome at this point is that of 'chaotic good' — because you don't know what you don't know."

Rhetorical move: governance as leftward pull

The frame's payoff: governance + auditing infrastructure is the mechanism that moves agents leftward from chaotic toward lawful. Specifically:

  • Auditing turns unknowable behaviour into re-inspectable behaviour — you can now tell what the agent did.
  • Governance / access control constrains what the agent is allowed to do — you can stop it from doing things outside its remit.
  • Replay lets you validate the agent is doing the right thing against known-good inputs.

Without these three mechanisms, the agent stays chaotic-by-construction. Deploying a chaotic-unknown worker on your private data + internet access is Akidau's rhetorical "what could possibly go wrong?" setup.
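The three mechanisms above can be sketched as a single wrapper around an agent's tool calls. This is a minimal illustration, not anything from Akidau's talk; names like `GovernedAgent` are invented for the sketch. Governance is an allow-list, auditing is an append-only event log, and replay re-runs logged calls against known-good functions:

```python
class GovernedAgent:
    """Illustrative sketch: governance + auditing + replay around tool calls."""

    def __init__(self, allowed_tools):
        self.allowed_tools = allowed_tools  # governance: the agent's explicit remit
        self.audit_log = []                 # auditing: re-inspectable event record

    def call(self, tool, fn, *args):
        # Governance: refuse anything outside the remit (and log the refusal).
        if tool not in self.allowed_tools:
            self.audit_log.append({"tool": tool, "args": args, "denied": True})
            raise PermissionError(f"{tool} is outside this agent's remit")
        result = fn(*args)
        # Auditing: record exactly what was done, with what inputs, and the result.
        self.audit_log.append(
            {"tool": tool, "args": args, "result": result, "denied": False}
        )
        return result

    def replay(self, fn_registry):
        # Replay: re-run each permitted call against the recorded inputs and
        # verify the outputs still match (validating against known-good inputs).
        for event in self.audit_log:
            if event["denied"]:
                continue
            fresh = fn_registry[event["tool"]](*event["args"])
            assert fresh == event["result"], f"drift detected on {event['tool']}"


agent = GovernedAgent(allowed_tools={"lookup"})
agent.call("lookup", lambda q: q.upper(), "revenue")
agent.replay({"lookup": lambda q: q.upper()})  # passes: behaviour unchanged
```

Each mechanism maps to one leftward pull: the allow-list stops out-of-remit actions, the log makes behaviour re-inspectable after the fact, and replay catches drift between what the agent did and what it does now.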

Why the frame is useful (and why it's limited)

Useful for:

  • Compressing the risk argument for non-technical audiences. Boards and CIOs understand "we're hiring lawful-good workers and plugging in chaotic-good agents" faster than "per-call-policy-check + durable-event-log audit envelope with replay semantics".
  • Naming the default risk posture. Agents are chaotic by construction; you have to build infrastructure to make them behave otherwise. This is the framing concepts/governed-agent-data-access operationalises.
  • Mapping onto the hiring-is-the-model analogy. Akidau's frame is structurally parallel to hiring-process-as-risk-mitigation; enterprises recognise the shape.

Limited for:

  • Diagnostic precision. The chaotic-vs-lawful axis doesn't map cleanly onto actual infrastructure axes: access control, auditability, replay determinism, and policy enforcement are orthogonal, not a single spectrum. A sophisticated AAC implementation can deliver lawful behaviour without predictability (policy-enforced but non-deterministic).
  • The good-vs-evil vertical axis. In practice, the enterprise fear isn't malicious agents (chaotic-evil); it's misaligned agents (chaotic-good that does the wrong thing). The good-evil axis collapses to "does this align with what we asked for?"
  • Static framing. An agent's alignment isn't a fixed attribute; it shifts with prompts, context, tool surfaces, and model updates. The chart is a point-in-time snapshot, not a behaviour model.
