Dynamic UI generation¶
Pattern¶
Instead of returning text-only responses, an agentic interface generates UI components alongside text, selected from a library of visual blocks (charts, tables, architecture maps, etc.) based on the shape of the user's question. The user interacts with the agent through an adaptive grid they can carve up and populate with blocks described in natural language. The chat history becomes "a living dashboard" rather than a transcript.
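One way to make the block-vocabulary idea concrete is a closed, typed set of block kinds the agent must choose from, with text as the fallback kind. This is a minimal TypeScript sketch under assumed names (Agent Lee's actual schema is not published), and the keyword heuristics in `selectBlockKind` merely stand in for whatever classification the real agent performs:

```typescript
// Hypothetical block vocabulary; illustrative, not Agent Lee's real schema.
type Block =
  | { kind: "chart"; title: string; series: { t: string; v: number }[] }
  | { kind: "table"; title: string; columns: string[]; rows: string[][] }
  | { kind: "text"; body: string };

// The agent answers with prose plus zero or more blocks, never raw markup.
interface AgentResponse {
  text: string;
  blocks: Block[];
}

// Toy selector: map the "shape" of the question to a block kind.
function selectBlockKind(question: string): Block["kind"] {
  if (/over the last|trend|rate/i.test(question)) return "chart";
  if (/list|breakdown|top \d+/i.test(question)) return "table";
  return "text"; // prose fallback when no structured shape is detected
}
```

The point of the union type is that the renderer only ever sees one of three known shapes, so the model's freedom is over *content*, not markup.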
Why it works¶
Many questions users ask an agent have answers that are inherently structured — trends over time, tabular breakdowns, resource topologies. Forcing the answer into prose discards most of the information value. Rendering the structure inline, backed by the same data sources the agent just queried, makes the answer actionable without a context switch out of the conversation.
From Cloudflare's Agent Lee launch post (Source: sources/2026-04-15-cloudflare-introducing-agent-lee):
"Ask what your error rate looks like over the last 24 hours, and it renders a chart inline, pulling from your actual traffic, not sending you to a separate Analytics page."
"By blending the flexibility of natural language with the clarity of structured UI, Agent Lee transforms your chat history into a living dashboard."
Canonical instance: Agent Lee¶
- Block library at launch: dynamic tables, interactive charts, architecture maps — and extensible over time.
- Adaptive grid: user clicks and drags to carve out a tile, describes what should go in it, the agent populates it with real data.
- Data source: the agent's retrieval already pulled the numbers (from user traffic, DNS records, Worker error logs, etc.) — the UI layer is a second renderer over the same data, not a separate query.
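The "second renderer over the same data" point above can be sketched as a single retrieval result projected two ways, once into prose and once into a chart block. The `ErrorPoint` telemetry shape is assumed for illustration; the real data-access layer is Cloudflare-internal:

```typescript
// Assumed telemetry shape; the actual data-access layer is not public.
interface ErrorPoint { hour: string; errors: number; requests: number }

// One retrieval call feeds both renderers: the prose answer and the chart
// block are projections of the same result, not separate queries.
function renderBoth(points: ErrorPoint[]) {
  const total = points.reduce((s, p) => s + p.errors, 0);
  const text = `You had ${total} errors across ${points.length} hours.`;
  const block = {
    kind: "chart" as const,
    title: "Error rate, last 24h",
    series: points.map(p => ({
      t: p.hour,
      v: p.errors / Math.max(p.requests, 1), // guard against empty hours
    })),
  };
  return { text, blocks: [block] };
}
```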
Prerequisites¶
- A component library the agent can pick from deterministically (not LLM-generated arbitrary HTML/JS — too risky, too slow).
- A layout system that accommodates variable-shape output without breaking the conversation affordance (adaptive grid, not linear transcript).
- A data-access layer the agent can call to populate blocks from live account / telemetry data.
- Sandboxing and/or the same credentialed-proxy boundary used for write actions (patterns/credentialed-proxy-sandbox) — UI blocks that render real data are still consuming tool calls.
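The first prerequisite, deterministic selection, amounts to a deny-by-default registry: the model emits a block *name* plus JSON props, the client looks the name up, and anything outside the library is rejected, so model output never reaches the DOM as markup. A minimal sketch with illustrative block names:

```typescript
// Each renderer is client-owned code; the model only supplies name + props.
type Renderer = (props: Record<string, unknown>) => string;

const registry = new Map<string, Renderer>([
  ["dynamic-table", p => `<table data-title="${p.title}"></table>`],
  ["interactive-chart", p => `<canvas data-title="${p.title}"></canvas>`],
]);

function renderBlock(name: string, props: Record<string, unknown>): string {
  const renderer = registry.get(name);
  if (!renderer) throw new Error(`Unknown block: ${name}`); // deny by default
  return renderer(props);
}
```

Extending the library over time is then an ordinary client release (add an entry to the registry), not a change to what the model is allowed to emit at runtime.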
When it fits¶
- Dashboard-like products where users routinely go hunting for specific views. Replacing the hunt with a natural-language request collapses a 5-tab workflow into one prompt.
- Platforms with rich structured data — observability, analytics, infrastructure, CRM, finance.
- Agents that already have retrieval paths to live data; the UI generation is largely free once retrieval works.
When it doesn't¶
- Text-dominant interactions (document drafting, chat assistants, coding help) where imposed structure adds friction rather than clarity.
- Products without a well-defined block library — letting the model generate arbitrary UI markup is a security + consistency liability.
- Environments that can't render rich UI (pure-terminal agents, voice-only) — fall back to prose or tabular text.
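The last fallback can be mechanical rather than lossy: the same table block that renders as rich UI elsewhere degrades to aligned plain text. A small sketch of that degradation path:

```typescript
// Degradation sketch: a table block collapses to column-aligned plain text
// for terminal clients instead of being dropped from the answer.
function tableToText(columns: string[], rows: string[][]): string {
  const widths = columns.map((c, i) =>
    Math.max(c.length, ...rows.map(r => r[i].length)));
  const line = (cells: string[]) =>
    cells.map((c, i) => c.padEnd(widths[i])).join("  ");
  return [line(columns), ...rows.map(line)].join("\n");
}
```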
Canonical wiki instance¶
- systems/agent-lee — first production deployment on the wiki; block library includes dynamic tables, interactive charts, architecture maps; adaptive grid; 18K DAU at launch.