
PATTERN

Markdown profile output for agents

Problem

Profiling tools (flame graphs, trace viewers) emit their native output in formats optimised for UI consumption — JSON for Perfetto or Chrome's chrome://tracing to render flame graphs, protobuf for denser storage, SVG for static flame charts. These formats encode span data in deeply nested shapes that are fine for a visualiser's parser but materially bad for LLM-based coding agents reasoning about hotspots (see concepts/chrome-trace-event-format for the specific pathologies).

An agent asked "which function is the biggest hot spot in this profile?" has to grep across line-split function names, filter out irrelevant metadata, and piece together timing data — work the underlying data format actively resists. The agent's output quality on optimisation suggestions suffers even with the same model, the same harness, and the same underlying data.

The pattern

Emit a companion Markdown file alongside every machine-readable profile format. The Markdown file carries the same underlying span data, reshaped into agent-friendly tables + call trees:

  • Hot Functions (Self Time) table — top-N functions ranked by self-time, one per line, column-aligned with self-percent / self-ms / total-percent / total-ms / function-name / source-location.
  • Call Tree (Total Time) table — top-level down, sorted by total-time, showing caller/callee relationships.
  • Top 10 summary line at the top for quick agent consumption.

Every row is a single line, everything is grep-friendly, and the file is heading-structured so an agent can jump straight to a section. The UI-facing JSON is preserved unchanged so Perfetto and flame-graph tools keep working.

Canonical wiki instance: turborepo-profile-md

From Anthony Shew's 2026-04-21 Turborepo performance post: PR #11880 added a turborepo-profile-md crate that generates a companion .md alongside every Chrome Trace Event Format JSON file that Turborepo's --profile flag produces.

Canonical abbreviated sample:

**Top 10:** `visit_recv_wait` 69.8%, `put` 30.6%,
`build_http_client` 0.6%, `capture_scm_state` 0.5%,
`find_untracked_files` 0.2%, `repo_index_untracked_await`
0.2%, `walk_glob` 0.2%, `cache_save` 0.1%, `parse_lockfile`
0.1%, `hash_scope` 0.1%

## Hot Functions (Self Time)

| Self%  | Self     | Total% | Total    | Function            | Location                                                 |
| ------ | -------- | ------ | -------- | ------------------- | -------------------------------------------------------- |
| 69.8%  | 15.1s    | 69.8%  | 15.1s    | `visit_recv_wait`   | `crates/turborepo-lib/src/task_graph/visitor/mod.rs:358` |
| 30.6%  | 6.6s     | 30.6%  | 6.6s     | `put`               | `crates/turborepo-cache/src/fs.rs:196`                   |
| 0.6%   | 127.0ms  | 0.6%   | 127.0ms  | `build_http_client` | `crates/turborepo-api-client/src/lib.rs:623`             |

The load-bearing datum from the post is that the format change alone — same model, same harness, same data — produced "radically better optimization suggestions". Canonical verbatim: "Same model, same codebase, same data, same agent harness. Different format, radically better optimization suggestions. The profile data was finally in a format that both I and the agent could read at a glance."

Precedent: Bun --cpu-prof-md

Bun shipped a --cpu-prof-md flag in 2026-04 that emits a Markdown version of Bun's CPU profile (documentation). Jarred Sumner's tweet announcing the flag is explicitly credited in the Turborepo post as the motivating precedent: "A week prior, I saw a tweet from Jarred Sumner about how Bun shipped a new flag: --cpu-prof-md. It outputs profiles as Markdown, which easily fits into my view of how agents work best."

Bun and Turborepo are the two canonical implementations of this pattern, making Markdown profile output an emerging norm in Rust / Zig / TypeScript runtime tooling as of 2026-04.

Composition with the supervised agent loop

The pattern composes tightly with Plan-Mode-then-implement:

  1. Collect a profile end-to-end (turbo run build --profile).
  2. Hand the Markdown profile to the agent in Plan Mode.
  3. Agent produces hotspot analysis and proposed optimisations.
  4. Human reviews proposals.
  5. Agent implements approved changes.
  6. End-to-end hyperfine validation inside sandbox.
  7. PR.

Step 2 is where this pattern earns its keep — the JSON version of the same profile would produce materially worse Plan-Mode analysis.

Implementation sketch

For a trace format like Chrome Trace Event:

Parse JSON → aggregate per-function timing →
  emit .md with:
    - Top-N summary line
    - Hot Functions (Self Time) table sorted desc
    - Call Tree (Total Time) table sorted desc
    - Same filename as the .json but .md extension
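
A minimal sketch of that pipeline in Python — a hypothetical `trace_to_md` helper, not the `turborepo-profile-md` implementation — under the assumption that the input trace contains only complete (`"X"`) events with microsecond `ts`/`dur` fields:

```python
import json
from collections import defaultdict

def trace_to_md(trace_path, md_path, top_n=10):
    """Hypothetical sketch: read a Chrome Trace Event JSON file and write
    a companion Markdown hot-functions table. Only complete ("X") events
    are handled; "B"/"E" begin/end pairs are ignored for brevity."""
    with open(trace_path) as f:
        data = json.load(f)
    events = data["traceEvents"] if isinstance(data, dict) else data
    spans = [e for e in events if e.get("ph") == "X" and "dur" in e]

    total = defaultdict(float)   # name -> summed wall time (us)
    self_t = defaultdict(float)  # name -> wall time minus child spans (us)

    # Per-thread interval walk: sort by start time (parents before their
    # children), keep a stack of open spans, and charge each span's
    # duration against its immediate parent's self time.
    by_tid = defaultdict(list)
    for e in spans:
        by_tid[(e.get("pid"), e.get("tid"))].append(e)
    for tid_spans in by_tid.values():
        tid_spans.sort(key=lambda e: (e["ts"], -e["dur"]))
        stack = []  # (end_ts, name) of currently open spans
        for e in tid_spans:
            total[e["name"]] += e["dur"]
            self_t[e["name"]] += e["dur"]
            while stack and e["ts"] >= stack[-1][0]:
                stack.pop()
            if stack:
                # Child time belongs to the child's self time, not the parent's.
                self_t[stack[-1][1]] -= e["dur"]
            stack.append((e["ts"] + e["dur"], e["name"]))

    wall = sum(self_t.values()) or 1.0
    rows = sorted(self_t.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    lines = ["**Top %d:** " % len(rows) + ", ".join(
        "`%s` %.1f%%" % (n, 100 * t / wall) for n, t in rows), ""]
    lines += ["## Hot Functions (Self Time)",
              "",
              "| Self% | Self | Total% | Total | Function |",
              "| ----- | ---- | ------ | ----- | -------- |"]
    for name, t in rows:
        lines.append("| %.1f%% | %.1fms | %.1f%% | %.1fms | `%s` |" % (
            100 * t / wall, t / 1000,
            100 * total[name] / wall, total[name] / 1000, name))
    with open(md_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

A production version would also carry source locations (which the Turborepo tables include) and emit the Call Tree section; the core move is the same aggregation-then-flatten step.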

Total implementation cost is modest (one crate in Turborepo's case); the agent-output-quality benefit is material.

Why it works (mechanism)

  • Line-per-record means grep recovers complete rows of timing data.
  • Column alignment means the agent's reasoning about "which is bigger" works visually in the tokens.
  • Table headings + section headings let the agent jump to the right section without parsing outer structure.
  • No nesting means the agent doesn't have to maintain parent/child context across many tokens.
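
As a toy illustration of the first point (the rows and event values here are illustrative, not taken from a real profile): a single substring match on the Markdown recovers a complete timing record, while the same match on pretty-printed JSON lands on a line carrying only the name, with the timing fields scattered across sibling lines.

```python
import json

md = """| Self%  | Self   | Function          |
| ------ | ------ | ----------------- |
| 69.8%  | 15.1s  | `visit_recv_wait` |
| 30.6%  | 6.6s   | `put`             |"""

# Markdown: the matching line already carries the timing columns.
md_hit = [l for l in md.splitlines() if "visit_recv_wait" in l][0]

# Pretty-printed JSON: the matching line has the name and nothing else;
# the agent must re-associate it with ts/dur on other lines.
event = {"name": "visit_recv_wait", "ph": "X", "ts": 0, "dur": 15_100_000}
json_hit = [l for l in json.dumps(event, indent=2).splitlines()
            if "visit_recv_wait" in l][0]

print(md_hit)    # | 69.8%  | 15.1s  | `visit_recv_wait` |
print(json_hit)  #   "name": "visit_recv_wait",
```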

See concepts/markdown-as-agent-friendly-format for the broader framework.

Anti-patterns

  • Replace rather than supplement. Don't drop the JSON — it's needed for UI consumption. The .md is a companion format, not a replacement.
  • Summarise too aggressively. Top-10 summaries are fine for quick reference, but agents need the full table to reason about less-obvious wins. Emit both.
  • Embed large JSON blobs inside the Markdown. Keep the .md grep-friendly — embedding multi-line JSON blocks re-introduces the very problem the pattern solves.

Anti-alternative: tune the JSON format

One alternative is to restructure the JSON itself for agent-friendliness (one event per line, flatter structure). This breaks UI compatibility — Perfetto / Chrome Tracing expects the existing format. The companion-Markdown pattern avoids this tradeoff by producing both.

Seen in

  • Making Turborepo 96 % faster (Vercel, 2026-04-21) — canonical wiki instance; definitional source for this pattern; turborepo-profile-md crate + supervised Plan-Mode loop producing 20+ PRs.
  • Bun --cpu-prof-md (Jarred Sumner, 2026-04) — direct precedent explicitly credited in the Turborepo post as the motivating example.