
Explainable AI decision

Definition

An explainable AI decision is one whose output is accompanied by a first-class, queryable justification — not a post-hoc reconstructed one — that downstream consumers (human reviewers, auditors, regulators) can inspect, cite, and if necessary rebut. Explainability is a system property, not a model property: the runtime must emit a justification object, store it durably, link it to its evidence sources, and expose it via the same interface the decision itself is delivered through.
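The "first-class justification object" can be made concrete as a small record type. This is an illustrative shape only — the KYC post discloses no schema, so every field name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class JustificationObject:
    """Justification emitted alongside a decision, linked to its evidence.
    Illustrative shape; the source architecture publishes no schema."""
    decision_id: str
    outcome: str                   # e.g. "approved", "escalated"
    evidence_chunk_ids: list[str]  # Knowledge Base chunks that grounded it
    tool_outputs: dict[str, str]   # tool name -> durable output reference
    rationale: str                 # human-readable summary, not the proof
    emitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical decision; ids and paths are made up for illustration.
j = JustificationObject(
    decision_id="kyc-2026-0001",
    outcome="approved",
    evidence_chunk_ids=["kb-chunk-17", "kb-chunk-44"],
    tool_outputs={"sanctions_check": "s3://audit-bucket/sanctions/0001.json"},
    rationale="No sanctions hits; documents consistent with application.",
)
```

The point of the structure is that `evidence_chunk_ids` and `tool_outputs` are machine-checkable links, while `rationale` is only a narrative convenience on top of them.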

(Source: sources/2026-04-23-aws-modernizing-kyc-with-aws-serverless-solutions-and-agentic-ai.)

Canonical framing from the KYC architecture

"Explainable AI decisions with comprehensive audit trails support regulatory compliance and enable rapid audit responses."

"[Fraud Detection] maintains dynamic risk scores with explainable fraud assessments."

"[Compliance & Risk] generates compliance attestations with audit trails for regulatory examinations."

"Low confidence (<75%) escalates to human review with comprehensive context."
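The quoted threshold implies a band-based routing rule. A minimal sketch: the <75% escalation rule is quoted from the post, and the 75-95 "additional verification" band appears later on this page; the >95% automatic path is an assumption added to complete the example:

```python
def route_decision(confidence: float) -> str:
    """Route a decision by confidence band (bands partly assumed)."""
    if confidence < 0.75:
        # Quoted rule: low confidence escalates with comprehensive context.
        return "escalate_to_human_review"
    if confidence <= 0.95:
        # The 75-95 band triggers additional verification.
        return "additional_verification"
    # Assumed: above 95%, the decision proceeds automatically.
    return "auto_decision"
```

Whatever the exact bands, each route should carry its own explanation payload — the escalation path in particular must forward the sub-agent context the human needs to start from.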

Four distinct "explain" tasks inside one architecture:

  1. Per-decision justification — why was this customer approved?
  2. Per-sub-agent explanation — why did Fraud Detection assign risk = 0.42?
  3. Regulatory attestation — which rules were applied, what evidence supports them?
  4. Escalation context — what did the sub-agents see that they couldn't resolve, so the human has somewhere to start?

These are all part of the same "explainable" surface but have different consumers (customer-ops, regulator, auditor, compliance specialist).

Why this is a system concern, not a model concern

A single LLM call can produce a text explanation, but that explanation is:

  • Not linked to the evidence (which Knowledge Base chunks were retrieved? which tool outputs were used?).
  • Not durable (it lives in the response payload; if not logged, lost).
  • Not verifiable (the model can hallucinate an explanation that doesn't reflect the actual decision path).

A system-level explainability design has to:

  • Pin provenance. Which Knowledge Base documents grounded the decision? systems/amazon-bedrock-knowledge-bases supplies citations for this.
  • Serialise the reasoning chain. Supervisor → sub-agent dispatch log → sub-agent tool calls → sub-agent confidence → aggregation rule → final decision. systems/agentcore-memory is a natural substrate.
  • Emit audit events. systems/aws-cloudtrail for API-level events; application-level audit trail for business-semantic events.
  • Be queryable. Auditors need answers to questions like "which customers were approved in the 75-95 band last quarter, and what was the additional verification?" — which implies an analytical store, not just logs.
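The "serialise the reasoning chain" requirement above can be sketched as nested records mirroring the supervisor → sub-agent → tool-call hierarchy. All names here are illustrative assumptions, not the architecture's actual types:

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str
    output_ref: str          # pointer to a durable output, not the payload

@dataclass
class SubAgentTrace:
    agent: str               # e.g. "fraud-detection"
    tool_calls: list[ToolCall]
    confidence: float
    finding: str

@dataclass
class DecisionTrace:
    """Supervisor-level reasoning chain, serialised step by step."""
    dispatched_to: list[str]
    sub_traces: list[SubAgentTrace]
    aggregation_rule: str    # e.g. "min(sub-agent confidences)"
    final_decision: str

# Hypothetical trace for one decision.
trace = DecisionTrace(
    dispatched_to=["document-analysis", "fraud-detection"],
    sub_traces=[
        SubAgentTrace(
            agent="fraud-detection",
            tool_calls=[ToolCall("similar_case_lookup", "audit://case/0001")],
            confidence=0.88,
            finding="risk=0.42",
        ),
    ],
    aggregation_rule="min(sub-agent confidences)",
    final_decision="additional_verification",
)
```

Persisted per decision (e.g. in an agent-memory substrate), a trace like this is what makes the per-decision, per-sub-agent, and escalation-context explanations answerable from one record.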

Hazards

  • Explainability theatre. A system can emit a plausible-looking rationale that doesn't actually reflect the decision path. This is worse than no explanation — it creates the illusion of accountability. Mitigation: bind the explanation to the literal tool-call + Knowledge-Base-chunk-id graph, not to a generated narrative.
  • Volume. Per-decision explainability at millions of customers per year implies substantial storage + indexing cost. DynamoDB for hot + S3 for cold is the canonical AWS shape (and is what the KYC architecture uses).
  • Regulatory drift. What counts as "sufficient explanation" varies per jurisdiction and per regulator. A system that satisfies MAS may not satisfy AMLD today or BSA in five years.
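The explainability-theatre mitigation — binding the explanation to the literal tool-call and chunk-id graph — admits a simple mechanical check: reject any explanation citing evidence absent from the actual decision path. A minimal sketch with an assumed interface:

```python
def explanation_is_bound(cited_chunk_ids: set[str],
                         retrieved_chunk_ids: set[str],
                         cited_tool_calls: set[str],
                         logged_tool_calls: set[str]) -> bool:
    """True iff every citation in the explanation appears in the
    actual retrieval/tool-call record for this decision."""
    return (cited_chunk_ids <= retrieved_chunk_ids
            and cited_tool_calls <= logged_tool_calls)
```

A narrative that cites `kb-chunk-9` when only `kb-chunk-1` was retrieved fails this check — the generated story is rejected rather than stored as if it were the decision path.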

Relation to audit trail

concepts/audit-trail is a sibling concept: the audit trail is the durable record of what happened; the explanation is why. A production-grade compliance system needs both. The KYC architecture treats explanation + audit trail as a compound output of every sub-agent, rather than bolting on an audit trail at the end.

Caveats

  • Claimed, not demonstrated. The KYC post describes explainability as a commitment ("explainable fraud assessments", "compliance attestations") but discloses no sample artefact, no attestation schema, no citation format. This page will thicken as explainability-first architecture retrospectives land.
  • Not all sub-agents explain the same way. Document Analysis can cite OCR confidence + a document region; Fraud Detection can cite similar-past-cases retrieval; Compliance can cite specific regulation-document chunks. The taxonomy of "what counts as an explanation" differs by domain.
