CONCEPT Cited by 2 sources

Machine-readable documentation

Definition

Machine-readable documentation = repository docs structured for AI agents first, humans second: short, factual, consistently named files with a predictable layout — AGENT.md, RUNBOOK.md, CONTRIBUTING.md, .kiro/steering/*.md — plus structured config formats (YAML, JSON, TOML) preferred over prose wherever a fact can be expressed as data rather than sentences.

Design principle from AWS 2026-03-26

"A monorepo allows the agent to navigate across services, understand shared patterns, and evaluate the impact of changes system-wide. Within that repository, concise and structured documentation is essential. Files such as AGENT.md can explain architectural principles and constraints, while RUNBOOK.md and CONTRIBUTING.md describe operational and development workflows. Machine-readable formats, such as YAML or configuration files, are more straightforward for agents to interpret than lengthy prose."

"Kiro can use foundational steering documents — summaries of structure, technology, and product guidelines — to help the agent maintain situational awareness as the project evolves."

What "machine-readable" means concretely

  • Short. One page ≈ one topic; no multi-page narratives the agent has to summarize.
  • Structured. Predictable sections (## Purpose, ## Interfaces, ## Failure modes, ## Related) rather than free-form prose.
  • Factual. Declarative statements the agent can match / cite, not opinion pieces.
  • Data-where-possible. A YAML list of endpoints + owners beats a paragraph describing the same.
  • Cross-linked. Agents follow [[links]] as aggressively as humans do — the wiki format is itself a machine-readable-docs realization.
  • Version-controlled alongside code. Agent reads the same-commit docs as the code it's editing.
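
The "data-where-possible" point can be sketched as a hypothetical endpoint registry — the same facts a paragraph would bury, expressed as a list an agent can match and cite directly. Service and team names are illustrative, not from the source:

```yaml
# endpoints.yaml — hypothetical example; a YAML list of endpoints + owners
# beats a prose paragraph describing the same mapping.
endpoints:
  - path: /v1/orders
    handler: orders.create_order
    owner: team-checkout
    auth: required
  - path: /v1/orders/{id}
    handler: orders.get_order
    owner: team-checkout
    auth: required
  - path: /healthz
    handler: ops.health
    owner: team-platform
    auth: none
```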

Canonical filenames (convention, not standard)

  • AGENT.md / AGENTS.md — agent-facing project intro: tech stack, conventions, anti-patterns, gotchas.
  • RUNBOOK.md — operational workflows: how to deploy, roll back, on-call procedures.
  • CONTRIBUTING.md — development workflow: how to run tests, branch naming, PR template.
  • .kiro/steering/*.md — Kiro-specific; see concepts/project-rules-steering.
  • README.md — human-facing but still read by agents.
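
A minimal sketch of what an AGENT.md following the short/structured/factual points above might look like — all project details are hypothetical placeholders, not from the source:

```markdown
# AGENT.md — hypothetical skeleton

## Purpose
Monorepo for the checkout platform. Services live under services/.

## Conventions
- TypeScript everywhere; shared types in packages/types.
- No direct DB access outside a service's own repository layer.

## Anti-patterns
- Do not import across service boundaries; go through the published client.

## Related
- RUNBOOK.md — deploy and rollback procedures
- CONTRIBUTING.md — tests, branch naming, PR template
```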

Prose vs structured

The AWS post's claim: "Machine-readable formats, such as YAML or configuration files, are more straightforward for agents to interpret than lengthy prose." Evidence-free but directionally correct — LLM context windows favor token-efficient structured data over long explanations, and structured data is less prone to misparsing.

Concrete migrations:

  • Long "how endpoints map to handlers" → OpenAPI spec + a one-line pointer.
  • Long runbook prose → numbered checklist.
  • Long architecture narrative → layer diagram + per-layer README.
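
The first migration above, sketched as a minimal hypothetical OpenAPI fragment (API name, path, and operationId are illustrative): the operationId field carries the endpoint-to-handler mapping that the prose page used to describe, and the doc shrinks to a one-line pointer like "Endpoint reference: see openapi.yaml (operationId = handler)."

```yaml
# openapi.yaml — hypothetical fragment replacing a prose
# "how endpoints map to handlers" page.
openapi: 3.0.3
info:
  title: Orders API
  version: "1.0"
paths:
  /v1/orders:
    post:
      operationId: orders.create_order
      responses:
        "201":
          description: Order created
```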

Caveats

  • No precision study. The claim that agents parse structured formats better than prose is intuitive, but the 2026-03-26 source cites no measurement.
  • Drift risk. Machine-readable docs have the same failure mode as any docs — they rot unless reviewed on every PR and enforced by CI (patterns/repo-health-monitoring).

Two audiences of machine-readable docs

The 2026-03-26 AWS post frames machine-readable docs as an internal-monorepo discipline — AGENT.md, RUNBOOK.md, Kiro steering files consumed by coding agents working on the codebase.

The 2026-04-17 Cloudflare Agent Readiness Score post extends the same posture to public-facing agent-audience documentation:

  • llms.txt = public-audience equivalent of AGENT.md (agent's structured reading list).
  • Markdown content negotiation = public-audience version of the YAML-over-prose guidance (serve the representation the agent can actually consume efficiently).
  • Split llms.txt per top-level directory = public-audience version of "short, one-topic-per-file" (chunk the index to fit an agent's context window).
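
The three bullets above can be sketched as a single hypothetical llms.txt (format per llmstxt.org: H1 title, blockquote summary, H2 sections of markdown links); the names and URLs are illustrative, not Cloudflare's actual files:

```markdown
# Example Docs (hypothetical llms.txt)

> Structured reading list for agents. One llms.txt per top-level
> directory keeps each index small enough for an agent's context window.

## Workers
- [Get started](https://example.com/workers/get-started/index.md): first deploy
- [Bindings](https://example.com/workers/bindings/index.md): KV, R2, queues

## Optional
- [Changelog](https://example.com/changelog/index.md)
```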

Cloudflare's developer docs are the 2026 reference implementation of the public-facing instance — measured at 31% fewer tokens and 66% faster to a correct answer vs. the average unrefined technical-docs site on a Kimi-k2.5 / OpenCode benchmark.
