
CONCEPT

Agentic AI infrastructure challenges (eight-axis checklist)

Definition

Tyler Akidau's eight-axis checklist of what an enterprise has to solve to deploy agentic AI safely. Each axis names a concrete infrastructure problem that agent deployment surfaces. Together they form a canonical shopping list for evaluating "what does my agent platform need?" — and for classifying which vendor / substrate / pattern answers which axis.

Canonical source: Akidau's How to safely deploy agentic AI in the enterprise talk recap.

The eight axes

  1. Context building and maintenance: keep the agent's knowledge base / vector store / analytical tables fresh with business data.
  2. Context querying: retrieve the right context at the right time (RAG, semantic search, SQL).
  3. Authentication: the agent can reach only the data it is supposed to see, and cannot escalate privileges.
  4. Governance: enforce policies (PII, regulatory) uniformly across the agent fleet.
  5. Auditing: know what every agent did, full inputs and outputs, not just metadata.
  6. Replay and validation: re-run past interactions to confirm agents behave correctly.
  7. Routing: send the right subset of data to the right agent (or decide not to use an agent at all).
  8. Multi-agent coordination: synchronise multiple agents in complex workflows.

Verbatim:

"If you want to deploy agentic AI safely and effectively, you need to prioritize several non-trivial moving pieces: context building and maintenance, context querying, authentication, governance, auditing, replay and validation, routing, multi-agent coordination."

Akidau's load-bearing claim: six of eight are data-streaming problems (the other two — context querying + authentication — fall outside streaming's scope).
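The six-of-eight split can be written down as a tiny vocabulary object. This sketch is illustrative, not part of the wiki: the enum names follow the table above, and the streaming_reducible flag encodes Akidau's claim.

```python
from enum import Enum

class Axis(Enum):
    """Akidau's eight axes. The second tuple element marks whether
    the axis is one of the six Akidau claims reduces to a
    data-streaming problem (names/flags mirror the checklist above)."""
    CONTEXT_BUILDING = (1, True)
    CONTEXT_QUERYING = (2, False)       # answered by vector DB / SQL, not streaming
    AUTHENTICATION = (3, False)         # answered by IdP / OIDC / OBO, not streaming
    GOVERNANCE = (4, True)
    AUDITING = (5, True)
    REPLAY_VALIDATION = (6, True)
    ROUTING = (7, True)
    MULTI_AGENT_COORDINATION = (8, True)

    def __init__(self, number, streaming_reducible):
        self.number = number
        self.streaming_reducible = streaming_reducible

# The load-bearing claim: six of eight are streaming problems.
streaming_axes = [a for a in Axis if a.streaming_reducible]
assert len(streaming_axes) == 6
```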

The streaming reduction

The six streaming-reducible axes map to existing wiki primitives:

  1. Context building/maintenance: CDC + ETL into vector / OLAP stores; patterns/cdc-fanout-single-stream-to-many-consumers
  4. Governance: enforcement at interconnection points — the agentic data plane / MCP proxy
  5. Auditing: patterns/durable-event-log-as-agent-audit-envelope — full inputs + outputs captured as first-class events
  6. Replay + validation: patterns/snapshot-replay-agent-evaluation — replay the audit log to validate agent behaviour
  7. Routing: patterns/dynamic-routing-llm-selective-use — use AI where it wins, route to ML/heuristics otherwise
  8. Multi-agent coordination: patterns/multi-agent-streaming-coordination — streaming broker as decoupled coordination substrate
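To make the audit-envelope idea behind axes 5 and 6 concrete, here is a minimal sketch of a full-payload event. The schema is our own invention (field names like event_id and tool_calls are hypothetical), and the durable log is stubbed as a list where a Kafka/Redpanda topic would actually sit.

```python
import json
import time
import uuid

def audit_envelope(agent_id, prompt, tool_calls, response):
    """Wrap one full agent interaction (axis 5) as a single event.

    The point of the pattern: capture full inputs and outputs as
    first-class payload, not just metadata, so the same record can
    later drive replay and validation (axis 6)."""
    return {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent_id": agent_id,
        "input": prompt,          # full input, verbatim
        "tool_calls": tool_calls, # every tool invocation in between
        "output": response,       # full output, replayable later
    }

# Appending each envelope to a durable log gives auditing and
# replay/validation the same substrate (here: an in-memory list).
log = []
log.append(audit_envelope("support-bot", "refund order 42", [], "done"))

# Events must round-trip as JSON to be consumable downstream.
record = json.loads(json.dumps(log[0]))
```

Note that governance checks can hang off the same log: a policy engine consuming these events sees exactly what the auditor sees.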

The remaining two:

  • Context querying — vector DB + semantic search + SQL are the answer shapes. RAG covers the inference-time retrieval axis; a multi-modal catalog covers the discovery axis. Streaming doesn't serve this.
  • Authentication — IdP / OIDC / OBO substrates serve this; see concepts/short-lived-credential-auth + OBO authorization. Streaming doesn't serve this either.
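A minimal sketch of the inference-time retrieval shape behind context querying: nearest-neighbour lookup by cosine similarity, with hand-written 3-d vectors standing in for a real embedding model and vector DB.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus of (document, embedding) pairs. A production system
# would embed with a learned model and store in a vector database;
# these 3-d vectors are illustrative stand-ins.
corpus = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping rates", [0.1, 0.9, 0.0]),
    ("privacy notice", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

assert retrieve([0.8, 0.2, 0.1]) == ["refund policy"]
```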

Why this checklist matters

Evaluation rubric for agent platforms. When evaluating a candidate agent platform (Databricks Unity AI Gateway, AWS Bedrock Agents, Anthropic MCP, Redpanda ADP, Snowflake Cortex), the eight-axis checklist lets you ask "which of these eight does this product solve, and which does it outsource?" A platform claiming to be the "complete agentic data plane" should answer all eight credibly; platforms that solve 2-3 axes well and require partners for the others should be evaluated on the integration seams.
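The rubric use can be sketched as a coverage check over the eight names. The claims dict below is hypothetical, not real vendor data; the useful output is the split between natively solved axes, outsourced axes, and outright gaps.

```python
AXES = [
    "context building", "context querying", "authentication",
    "governance", "auditing", "replay/validation", "routing",
    "multi-agent coordination",
]

def coverage(platform_claims):
    """Classify each axis for a candidate platform as natively
    solved, outsourced to a partner, or an unaddressed gap."""
    solved = [a for a in AXES if platform_claims.get(a) == "native"]
    outsourced = [a for a in AXES if platform_claims.get(a) == "partner"]
    gaps = [a for a in AXES if a not in solved and a not in outsourced]
    return solved, outsourced, gaps

# Hypothetical platform claiming two axes natively, one via partner.
claims = {"auditing": "native", "governance": "native",
          "authentication": "partner"}
solved, outsourced, gaps = coverage(claims)
assert len(solved) + len(outsourced) + len(gaps) == 8
```

A "complete agentic data plane" claim should leave the gaps list empty; for anything else, the interesting questions live in the outsourced list's integration seams.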

Gap enumeration for in-house builds. For teams building their own agent infrastructure, the checklist enumerates what must be solved before production deployment. Each unaddressed axis is a risk vector: unaudited agents can't be debugged; ungoverned agents can leak PII; unreplayable agents can't be validated against regressions.

Shared vocabulary for cross-team discussions. The checklist gives platform teams, security teams, compliance teams, and application teams the same eight names for the same eight problems — reducing the coordination overhead of multi-team agent deployments.

Relationship to other framings on the wiki

  • concepts/governed-agent-data-access — Gallego's 2025-10-28 two-axis framing (access controls + observability) is a compression of axes 3 (authentication), 4 (governance), 5 (auditing) on this checklist. Gallego's framing is more mechanism-load-bearing; Akidau's eight-axis is the full-shopping-list altitude above it.
  • concepts/autonomy-enterprise-agents — the capability-side framing; the eight-axis checklist is what's required to deploy autonomy safely.
  • concepts/agent-dnd-alignment-framing — the eight axes together are the governance + auditing infrastructure that moves agents leftward from chaotic-good default toward lawful-good operable.
  • concepts/streaming-as-agile-data-platform-backbone — the structural claim this checklist rides on; six of eight axes reduce to streaming problems.

Caveats

  • Editorial decomposition, not mechanism spec. Each axis has deep mechanism complexity (policy engines for governance, replay determinism for LLMs, consensus semantics for multi-agent coordination) that the checklist doesn't engage.
  • Vendor-position framing. Akidau is Redpanda's CTO; the "six of eight are streaming problems" framing is structurally aligned with Redpanda's product positioning. Other vendors would decompose differently (Databricks would route more axes through the Catalog + Gateway; AWS would route more through Bedrock + IAM; Snowflake through Horizon + Cortex).
  • Axis coupling ignored. The eight axes are presented as independent; in practice they compose — e.g. governance + auditing share the same durable log substrate; replay + validation require both.
  • No axis priority. The list is unordered; in production, authentication + governance are prerequisites for everything else (an unauthenticated, ungoverned agent shouldn't reach context building).
  • No capability-axis. The checklist covers safety infrastructure; orthogonal capability infrastructure (tool surface breadth, model selection, context window management, agent-loop orchestration) isn't engaged.
