
PATTERN

Three-channel context architecture

Intent

In a long-running multi-agent loop (planner + N experts + critic), provide each agent with a tailored view of the investigation state via three complementary context channels, rather than passing raw message history forward:

  • Director's Journal — the planner's structured working memory, accumulating typed entries (decisions, observations, findings, questions, actions, hypotheses) over rounds.
  • Critic's Review — the per-round annotated findings report with credibility scores against a 5-level rubric; consumed by the Director for decisions and by the Critic's Timeline task for narrative assembly.
  • Critic's Timeline — the consolidated chronological narrative built from credible findings; the Director's primary summary input.

Each channel serves a different purpose, is consumed by different agents, and is produced with a different model tier. Together they provide online context summarisation (see concepts/online-context-summarisation) that replaces raw message history entirely.

Canonicalised by Slack's Security Engineering team in the Spear security-investigation service (Source: sources/2026-04-13-slack-managing-context-in-long-run-agentic-applications).

Diagram

          ┌─────────────────────────────────────────────┐
          │       DIRECTOR'S JOURNAL (typed, grows)     │
          │   decision / observation / finding /        │
          │   question / action / hypothesis            │
          │   + phase + round + timestamp               │
          └─────────────────────────────────────────────┘
                 │                  ▲          ▲
                 │reads to all      │writes    │reads for
                 │agents in prompt  │(Director │Timeline
                 │                  │only)     │
                 │                  │          │
   ┌─────────────┴────┐    ┌────────┴──┐   ┌───┴────────────┐
   │     EXPERTS      │    │  DIRECTOR │   │     CRITIC     │
   │  (domain agents) │    │  (planner)│   │  (meta-review) │
   └──────────────────┘    └───────────┘   └────────────────┘
         │findings                ▲              │ scores
         │                        │              │ + consolidates
         ▼                        │              ▼
   ┌──────────────────┐           │      ┌─────────────────────┐
   │ CRITIC'S REVIEW  │───────────┘      │  CRITIC'S TIMELINE  │
   │ (scored findings)│           ▲      │ (chronological      │
   │  0.0-1.0 rubric  │───────────┼──────┤  narrative + top-3  │
   │  per finding     │           │      │  gaps + coherence   │
   └──────────────────┘           │      │  score)             │
        │                         │      └─────────────────────┘
        │              merge(prev_timeline, latest_review,
        │                    journal) → new_timeline
        └──────────────────────────┘

Why three channels, not one

One big summary would over-share

A single merged summary containing decisions + findings + timeline would force every agent to consume everything. Slack's explicit architectural principle (Source: sources/2026-04-13-slack-managing-context-in-long-run-agentic-applications):

"For each agent to optimally execute its role, it requires a tailored view of the investigation state. Each view must be carefully balanced. If agents are not anchored to the wider team, the investigation will be disconnected and incoherent. Conversely, sharing too much information stifles creativity and encourages confirmation bias."

Over-sharing is a cost, not just a non-benefit. This is a novel framing: more context is not strictly better in a multi-agent system. Channel separation is the mechanism that controls per-agent context shape.

One big summary would conflate altitudes

The Journal is planning state (what the Director thinks). The Review is findings audit (what evidence supports Expert claims). The Timeline is narrative state (what story the evidence tells). Merging these into one document would force the consumer to re-sort altitude on every read — wasted effort in every LLM call.

Three channels enable differential consumption

Agent                    | Journal          | Review                 | Timeline
Director                 | Reads (+ writes) | Reads (for decisions)  | Reads (primary summary)
Expert                   | Reads            | —                      | —
Critic (Review task)     | Reads            | Writes                 | —
Critic (Timeline task)   | Reads            | Reads (latest)         | Reads (prev) + Writes

Each agent's prompt includes exactly what it needs. The Expert doesn't need the Timeline (it's working on a specific question). The Director doesn't need raw Expert findings (the Review already scored them).
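The consumption matrix above can be encoded directly as a routing table. A minimal sketch, assuming a simple string-based prompt assembler (all names here are illustrative, not from the source):

```python
# Hypothetical routing table: which channels each agent's prompt includes.
# Derived from the consumption matrix above.
CHANNEL_VIEWS = {
    "director":        ["journal", "review", "timeline"],
    "expert":          ["journal"],
    "critic_review":   ["journal"],
    "critic_timeline": ["journal", "review", "timeline_prev"],
}

def assemble_prompt(agent: str, channels: dict) -> str:
    """Build an agent's context from only the channels its role is entitled to."""
    sections = [f"## {name}\n{channels[name]}" for name in CHANNEL_VIEWS[agent]]
    return "\n\n".join(sections)
```

With this shape, an Expert's prompt physically cannot contain the Timeline or Review — the over-sharing control is structural, not a prompt-writing convention.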

Channel-by-channel mechanism

Channel 1: Director's Journal

See concepts/structured-journaling-tool.

  • Schema. Six typed entries; priority; follow-up actions; citation refs. Auto-annotated with phase + round + timestamp.
  • Mutation. Append-only (strongly implied by "accumulate entries").
  • Update cadence. Multiple entries per Director turn; Director can journal often.
  • Audience. All agents receive current content in their prompt as chronology.
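Under the schema above, a journal entry could be modeled as an append-only typed record. A sketch, assuming the six entry types and auto-annotation described; field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The six entry types named in the schema above.
ENTRY_TYPES = {"decision", "observation", "finding", "question", "action", "hypothesis"}

@dataclass(frozen=True)  # frozen: entries are append-only, never mutated
class JournalEntry:
    entry_type: str
    text: str
    phase: str             # auto-annotated investigation phase
    round: int             # auto-annotated round number
    priority: int = 0
    citations: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        if self.entry_type not in ENTRY_TYPES:
            raise ValueError(f"unknown entry type: {self.entry_type}")

class Journal:
    """Append-only working memory; rendered chronologically into prompts."""
    def __init__(self):
        self._entries: list[JournalEntry] = []

    def append(self, entry: JournalEntry) -> None:
        self._entries.append(entry)

    def as_chronology(self) -> list[JournalEntry]:
        return list(self._entries)
```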

Channel 2: Critic's Review

  • Schema. Annotated findings with credibility scores (0.0-1.0) against the 5-level rubric (Trustworthy / Highly-plausible / Plausible / Speculative / Misguided) + overall summary. See concepts/credibility-scoring-rubric.
  • Mutation. Replaced each round (not carried forward beyond being merged into Timeline).
  • Update cadence. Once per round, after the Experts produce findings, before the Director's next decision.
  • Audience. Director consumes for decisions; Critic's Timeline task consumes as input.
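A sketch of the per-round Review artifact. The source specifies only the 0.0-1.0 range and the five level names, so the score bands below are illustrative assumptions, not the real cut points:

```python
from dataclasses import dataclass

# Illustrative mapping of rubric levels to score bands.
RUBRIC = [
    (0.8, "Trustworthy"),
    (0.6, "Highly-plausible"),
    (0.4, "Plausible"),
    (0.2, "Speculative"),
    (0.0, "Misguided"),
]

def rubric_level(score: float) -> str:
    for threshold, label in RUBRIC:
        if score >= threshold:
            return label
    return "Misguided"

@dataclass
class ScoredFinding:
    claim: str
    credibility: float  # 0.0-1.0

    @property
    def level(self) -> str:
        return rubric_level(self.credibility)

@dataclass
class Review:
    """Replaced each round; consumed by the Director and the Timeline task."""
    round: int
    findings: list
    summary: str
```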

Channel 3: Critic's Timeline

See patterns/timeline-assembly-from-scored-findings and concepts/narrative-coherence-as-hallucination-filter.

  • Schema. Chronological events from credible citations + top-3 gaps (evidential / temporal / logical) + narrative-coherence score against a second 5-level rubric.
  • Mutation. Rewritten each round as merge(prev_timeline, latest_review, journal).
  • Update cadence. Once per round, after the Review, before the Director.
  • Audience. Director consumes as primary summary; becomes prev_timeline for the next round's Critic Timeline task.
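The per-round fold can be sketched as a pure function over the three inputs. The credibility cut-off, the dict field names, and the two placeholder helpers are assumptions; in the real system, gap analysis and coherence scoring are in-prompt model reasoning, not code:

```python
def identify_gaps(events, journal):
    # Placeholder: real gap analysis (evidential / temporal / logical)
    # is an in-prompt model task, not deterministic code.
    return []

def score_coherence(events):
    # Placeholder: real scoring uses the second 5-level rubric.
    return "Plausible"

def merge_timeline(prev_timeline, latest_review, journal, min_credibility=0.6):
    """merge(prev_timeline, latest_review, journal) -> new_timeline.

    Carries forward prior events, admits only credible new findings,
    and re-derives the top-3 gaps. The threshold is illustrative.
    """
    credible = [f for f in latest_review if f["credibility"] >= min_credibility]
    events = sorted(
        prev_timeline + [{"when": f["when"], "what": f["claim"]} for f in credible],
        key=lambda e: e["when"],
    )
    gaps = identify_gaps(events, journal)[:3]  # top-3 cap keeps the artifact bounded
    return {"events": events, "gaps": gaps, "coherence": score_coherence(events)}
```

The returned dict becomes `prev_timeline` on the next round, which is what makes the Timeline a fold rather than a from-scratch summary.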

Implementation mechanics

1. Each channel is its own model invocation

Journal writes, Review scoring, and Timeline assembly are three separate invocations with three separate structured-output schemas. This is patterns/one-model-invocation-per-task applied to the context architecture — each channel's production is a bounded, schema-validated model call.
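One way to realise "one bounded, schema-validated call per channel" is to pair each channel with its own output schema and validate before accepting the result. A sketch under stated assumptions — `call_model` is an injected placeholder client (prompt in, JSON string out), not a real API, and the schema shapes are illustrative:

```python
import json

# One output schema per channel; required-field lists are illustrative.
CHANNEL_SCHEMAS = {
    "journal":  {"required": ["entry_type", "text"]},
    "review":   {"required": ["findings", "summary"]},
    "timeline": {"required": ["events", "gaps", "coherence"]},
}

def produce_channel(channel: str, prompt: str, call_model) -> dict:
    """Run one bounded model call and validate against the channel's schema.

    Any real structured-output API would slot in behind `call_model`;
    the point is that each channel is produced by exactly one validated call.
    """
    raw = call_model(prompt)
    out = json.loads(raw)
    missing = [k for k in CHANNEL_SCHEMAS[channel]["required"] if k not in out]
    if missing:
        raise ValueError(f"{channel} output missing fields: {missing}")
    return out
```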

2. Each channel runs on a different model tier

  • Journal writes — run on Director's apex-tier model (strategic reasoning, expensive but low-token).
  • Review scoring — runs on Critic's mid-tier model (reasoning-dense audit; tokens manageable via narrow scope).
  • Timeline assembly — also Critic's mid-tier model, but with narrower scope (pure in-prompt reasoning, no tool calls).

This matches the knowledge-pyramid tier structure: tier-by-tier cognitive load dictates tier-by-tier model choice.
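Independent tiering falls out naturally when the assignment is plain configuration rather than logic. A trivial sketch (tier names are illustrative labels, not real model identifiers):

```python
# Channel -> model tier; retuning one channel is a one-line change.
CHANNEL_TIERS = {
    "journal":  "apex-tier",  # Director: strategic reasoning, low-token
    "review":   "mid-tier",   # Critic: reasoning-dense audit, narrow scope
    "timeline": "mid-tier",   # Critic: pure in-prompt reasoning, no tool calls
}

def model_for(channel: str) -> str:
    return CHANNEL_TIERS[channel]
```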

3. Channels compose cleanly with phase progression

Each channel entry is annotated with phase + round, so the Director can ask "what did we know in the discovery phase?" or "how has the Timeline evolved across the trace phase?" without re-computing. See concepts/investigation-phase-progression.
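Because every entry carries phase + round, point-in-phase questions reduce to simple filters over stored artifacts rather than re-computation. A sketch, continuing the hypothetical dict-shaped entries used above:

```python
def entries_in_phase(journal: list, phase: str) -> list:
    """'What did we know in the discovery phase?' — a filter, not a re-run."""
    return [e for e in journal if e["phase"] == phase]

def timeline_across_rounds(snapshots: dict, rounds: list) -> list:
    """'How has the Timeline evolved?' — replay stored per-round snapshots."""
    return [snapshots[r] for r in rounds if r in snapshots]
```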

Operational properties

  • Replay-friendly. Point-in-time snapshots of all three channels reconstruct the investigation's state at that round. Useful for debugging, regression testing, and supervisor review.
  • Each channel is bounded. Journal grows linearly with rounds (each entry is small); Review is bounded per round (just the latest findings); Timeline is bounded by a maximum length constraint + the top-3 gap cap.
  • Independent tiering. Each channel can evolve its model tier independently — e.g. moving the Critic from Opus to Sonnet only affects Review + Timeline, not the Journal.
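Replay-friendliness follows from snapshotting all three channels at each round boundary. A minimal sketch, assuming in-memory storage (a real system would persist these):

```python
import copy

class ChannelStore:
    """Per-round snapshots of all three channels, for replay and debugging."""
    def __init__(self):
        self.snapshots = {}

    def snapshot(self, round_no: int, journal, review, timeline) -> None:
        # Deep-copy so later mutation of live channels can't rewrite history.
        self.snapshots[round_no] = copy.deepcopy(
            {"journal": journal, "review": review, "timeline": timeline}
        )

    def state_at(self, round_no: int) -> dict:
        """Reconstruct the investigation's state as of a given round."""
        return self.snapshots[round_no]
```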

When to reach for it

  • Long-running multi-agent loops where no single agent's view of state is sufficient for all participants.
  • Planner-executor-reviewer shapes where the three roles consume different parts of the state.
  • Hallucination-sensitive tasks where per-finding credibility scoring (Review) and narrative consistency (Timeline) are both load-bearing.
  • Supervisory contexts where a human (or higher-altitude agent) needs a clean, readable summary artifact at any point.

When not to reach for it

  • Single-agent loops. The three-channel overhead is not earned; one working-memory artifact is enough.
  • Short tasks. If the task fits in one context window, raw message history is simpler and works.
  • Tasks without natural altitude separation. If planning, execution, and review collapse into one concern, channel separation creates artificial complexity.

Composes with

Contrasts

  • vs. single message-history-carry-forward — default agent-framework behaviour. Simple but scales poorly and conflates altitudes.
  • vs. single running summary — one compacted artifact shared across agents. Over-shares (confirmation bias risk) and conflates altitudes.
  • vs. memory-backed RAG (e.g. MemGPT, Letta) — retrieval-driven context assembly. Retrieves relevant past information per-prompt. Orthogonal to this pattern — you can combine them (use retrieval to populate channel content) but retrieval alone does not produce the three tailored views.
  • vs. scratchpad + shared state — two-artifact shape. Closer to Slack's pattern but missing the Timeline's narrative-coherence filter, which is the canonical second hallucination-filter stage.

Tradeoffs

  • Three prompts to maintain. Schema drift across Journal / Review / Timeline requires coordinated updates.
  • Three model tiers to tune. Each channel has its own tier, so cross-channel quality depends on tier coordination.
  • Timeline assembly is the fold's stability point. If the Timeline task hallucinates the narrative, subsequent Director decisions cascade badly. The cost of the Timeline rubric + narrative-coherence filter is the mitigation.
  • Message-history replay still needs the event stream. If a human supervisor wants raw replay of what happened, the three-channel artifacts don't carry that — the Hub/Worker/Dashboard event stream does.

Seen in

  • systems/slack-spear — canonical first wiki instance. Three channels: Director's Journal (typed entries, grows), Critic's Review (scored findings, replaced each round), Critic's Timeline (fold over prev_timeline + review + journal). Tailored per-agent views; explicit claim that over-sharing "stifles creativity and encourages confirmation bias." No raw message history between invocations. (Source: sources/2026-04-13-slack-managing-context-in-long-run-agentic-applications)