Vercel — Chat SDK brings agents to your users¶
Summary¶
Vercel's launch post for Chat SDK, a TypeScript library for building
chat bots that run on Slack, Microsoft Teams, Google Chat, Discord,
Telegram, GitHub, Linear, and WhatsApp from a single codebase. The
architectural thesis is the AI SDK analogue for messaging
platforms: just as AI SDK abstracts model-provider quirks into one streaming
interface, Chat SDK abstracts messaging-platform quirks — streaming
semantics, markdown dialects, UI primitives, event formats, name resolution
— into a single adapter-based framework. The core chat package handles
event routing and application logic; platform-specific behaviour lives in
adapter packages (@chat-adapter/slack, @chat-adapter/discord,
@chat-adapter/whatsapp, etc.), so "your handlers don't change when your
deployment target does." The post's load-bearing engineering content is
the streaming-fallback pipeline (native Slack streaming vs
markdown-to-native conversion at each intermediate edit on other
platforms), cross-platform component rendering (Tables rendered as Block
Kit on Slack, GFM markdown on Teams/Discord, monospace widgets on Google
Chat, code blocks on Telegram), pluggable state adapters (Redis,
ioredis, and now production-ready PostgreSQL), and the WhatsApp adapter's
24-hour messaging window platform constraint.
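The adapter factoring described above can be pictured with a minimal sketch. The interface and names here are illustrative assumptions, not the Chat SDK's actual API; they only show the shape in which platform quirks are pushed behind one boundary:

```typescript
// Hypothetical sketch of the adapter factoring -- not the Chat SDK's real
// API. Platform quirks live behind one interface; the handler is written
// once against it.
interface PlatformAdapter {
  name: string;
  // Convert standard markdown to the platform's native formatting.
  toNative(markdown: string): string;
}

const slackLike: PlatformAdapter = {
  name: "slack",
  // Slack mrkdwn uses *bold*, not GFM's **bold**.
  toNative: (md) => md.replace(/\*\*(.+?)\*\*/g, "*$1*"),
};

const discordLike: PlatformAdapter = {
  name: "discord",
  // Discord renders GFM-style markdown as-is.
  toNative: (md) => md,
};

// One handler, any adapter: "your handlers don't change when your
// deployment target does."
function handleMessage(adapter: PlatformAdapter, reply: string): string {
  return adapter.toNative(reply);
}
```

Swapping `slackLike` for `discordLike` changes only the adapter argument, which is the whole point of the single-codebase claim.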
Key takeaways¶
- Adapter-pattern refactoring inside one company drove the product. In January, Vercel issued a company-wide "multiply your output" challenge. Teams built agents on AI SDK + AI Elements; then each wanted Slack integration; then Discord, GitHub, Linear. "Each of those introduced a new integration adventure for every agent." The Chat SDK's origin story is the internal-adapter factoring — "instead of asking people to come to agents, we needed to deliver agents to the places they were already working." (Source: body intro.)
- Streaming has a two-path design: a native path and a fallback conversion path. "Slack has a native streaming path that renders bold, italic, lists, and other formatting in real time as the response arrives. Other platforms use a fallback streaming path, passing streamed text through each adapter's markdown-to-native conversion pipeline at each intermediate edit." Before Chat SDK, fallback adapters received raw markdown, so Discord / Teams users "would see literal `**bold**` syntax until the final message resolved." Now conversion happens per intermediate edit. See patterns/streaming-markdown-to-native-conversion.
- Write-once UI primitives render natively on each platform. The `Table()` JSX component takes headers + rows once. "Slack renders Block Kit table blocks. Teams and Discord use GFM markdown tables. Google Chat uses monospace text widgets. Telegram converts tables to code blocks. GitHub and Linear continue to use their existing markdown pipelines." Cards, modals, and buttons follow the same pattern — "each adapter renders them in whatever format the platform supports natively. If a platform doesn't support a given element, it will fall back gracefully." See patterns/platform-adaptive-component-rendering.
- State backend is a pluggable adapter, not a baked-in dependency. "Thread subscriptions, distributed locks, and key-value cache state are handled through pluggable state adapters." Redis and ioredis from launch; PostgreSQL newly production-ready. The Postgres adapter uses `pg` (node-postgres), "raw SQL and automatically creates the required tables on first connect. It supports TTL-based caching, distributed locking across multiple instances, and namespaced state via a configurable key prefix." Community PR #154 by @bai. See patterns/pluggable-state-backend.
- The AI SDK text stream pipes directly into `thread.post()`. One line wires an AI-SDK streaming LLM response into any platform-rendered thread. "The adapter layer handles the platform-specific rendering of that stream, including live formatting where the platform supports it." This is the shape the Knowledge Agent Template demoed earlier; here it's confirmed as the canonical call shape: `const result = await streamText({ model: "anthropic/claude-sonnet-4", ... }); await thread.post(result.textStream);`
- Bidirectional name resolution is automatic, even on single-platform deployments. "Channel and user names are automatically converted to clear text so your agent understands the context of the conversation. This translation works in both directions. When the agent at-mentions somebody using clear text, Chat SDK ensures the notification actually triggers in Slack." This is the minimum-viable justification for using Chat SDK even if you only target Slack — agents need clear-text context for prompting, but platforms emit raw IDs in events and require raw IDs in outbound messages for notifications to fire. See concepts/clear-text-name-resolution.
- Agents auto-receive preview/reference/image context. "Chat SDK automatically includes link preview content, referenced posts, and images directly in agent prompts." Plus "while models generate standard markdown, Slack does not natively support it. Chat SDK converts standard markdown to the Slack variant automatically. This conversion happens in real time, even when using Slack's native append-only streaming API." The conversion running under append-only streaming is the hard case — intermediate token boundaries can split markdown constructs, so the converter must handle a partial `**`, partial list markers, and partial code fences.
- WhatsApp adapter ships with platform-level product constraints as first-class API shape. "WhatsApp enforces a 24-hour messaging window, so bots can only respond within that period. The adapter does not support message history, editing, or deletion." Cards render as "interactive reply buttons with up to three options, falling back to formatted text where needed." Supported: messages, reactions, auto-chunking, read receipts, multimedia downloads (images, voice messages, stickers), and location sharing with Google Maps URLs. Community PR #102 by @ghellach. See concepts/messaging-platform-24-hour-response-window.
- Credential auto-detection zero-configures platforms. "Each adapter auto-detects credentials from environment variables, so you can get started without any additional configuration." A development-experience move that complements the adapter-swap story — "switching from Slack to Discord means swapping the adapter, not rewriting the bot."
- Coding-agent onboarding via Skills. "To augment your coding agents, install the Chat skill: `npx skills add vercel/chat`." Plus a canned migration prompt for consolidating scattered platform-specific bot logic "into a single unified implementation where core behavior is defined once and adapters handle platform differences." This treats the SDK's mental model as itself an artefact to be co-trained into customer agents — parallel to the v0 + AI SDK co-maintained read-only filesystem shape.
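The fallback streaming path's per-edit conversion, including the partial-construct hard case flagged above, can be sketched roughly as follows. The buffering heuristic and names are illustrative assumptions, not the SDK's actual converter:

```typescript
// Sketch of the fallback path's hard case: at each intermediate edit,
// convert only the prefix that ends on a complete markdown construct,
// holding back an unclosed ** span so users never see literal markup.
// Not the Chat SDK's actual converter; only ** is handled here.
function safePrefix(buffer: string): string {
  // Count ** delimiters; an odd count means an unclosed bold span.
  const delims = buffer.match(/\*\*/g)?.length ?? 0;
  if (delims % 2 === 1) {
    // Hold back everything from the last unmatched ** onward.
    return buffer.slice(0, buffer.lastIndexOf("**"));
  }
  return buffer;
}

// One intermediate edit: take the safe prefix and render it in a
// Slack-mrkdwn-like native form (*bold* instead of **bold**).
function intermediateEdit(buffer: string): string {
  return safePrefix(buffer).replace(/\*\*(.+?)\*\*/g, "*$1*");
}
```

A full converter would need the same hold-back treatment for partial list markers and partial code fences, which is exactly the undisclosed part the caveats section notes.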
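The bidirectional name-resolution behaviour can be illustrated with a toy Slack-style mapping. The user table, mention syntax, and function names are invented for illustration; the SDK performs this translation automatically:

```typescript
// Toy illustration of why name resolution must run in both directions.
// The ID map and <@...> mention syntax mimic Slack; names are invented.
const users = new Map<string, string>([["U024BE7LH", "alice"]]);

// Inbound: platform events carry raw IDs like <@U024BE7LH>, but the
// agent prompt needs human-readable clear text.
function toClearText(eventText: string): string {
  return eventText.replace(/<@(\w+)>/g, (_, id) => `@${users.get(id) ?? id}`);
}

// Outbound: the agent writes @alice, but the platform only fires a
// notification if the outbound message carries the raw <@U...> form.
function toPlatformIds(reply: string): string {
  let out = reply;
  for (const [id, name] of users) {
    out = out.split(`@${name}`).join(`<@${id}>`);
  }
  return out;
}
```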
Systems / concepts / patterns extracted¶
Systems¶
- systems/vercel-chat-sdk — canonical launch post; extends prior stub with streaming fallback, component-rendering matrix, Postgres adapter, WhatsApp adapter, name-resolution behaviour.
- systems/ai-sdk — `streamText` + `textStream` referenced as the upstream for the `thread.post()` pipe.
- systems/vercel-knowledge-agent-template — prior wiki appearance of Chat SDK; this post is the SDK's own launch context.
Platforms named¶
- Slack, Microsoft Teams, Google Chat, Discord, Telegram, GitHub, Linear, WhatsApp.
Concepts¶
- concepts/messaging-platform-24-hour-response-window — WhatsApp Business API platform constraint, surfaced as an explicit SDK-level caveat.
- concepts/clear-text-name-resolution — bidirectional channel/user-name translation between platform IDs and human-readable names, needed for both prompt context and outbound-notification firing.
Patterns¶
- patterns/multi-platform-chat-adapter-single-agent — write-once, deploy-everywhere bot factoring; one agent pipeline, multiple platform adapters, state-layer abstraction.
- patterns/platform-adaptive-component-rendering — JSX-once UI primitives (Table, Card, Modal, Button) that each adapter renders in the platform's native shape, with graceful fallback when a platform doesn't support the primitive.
- patterns/streaming-markdown-to-native-conversion — adapter-layer transformation of a markdown token stream to the platform's native formatting, applied at each intermediate edit so users never see literal markup.
- patterns/pluggable-state-backend — Redis / ioredis / Postgres as swappable state adapters exposing TTL caching, distributed locking, and namespaced prefixes.
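The pluggable-state-backend pattern reduces to an interface like the following sketch. Method names are invented, and the in-memory implementation is synchronous for brevity; the real Redis/ioredis/Postgres adapters would be Promise-based and back the same surface with Redis commands or raw SQL:

```typescript
// Hypothetical state-adapter surface: TTL caching, distributed-style
// locking, and namespaced keys. In-memory reference implementation.
interface StateAdapter {
  get(key: string): string | null;
  set(key: string, value: string, ttlSeconds?: number): void;
  acquireLock(key: string, ttlSeconds: number): boolean;
  releaseLock(key: string): void;
}

class MemoryStateAdapter implements StateAdapter {
  private store = new Map<string, { value: string; expiresAt: number }>();
  constructor(private prefix = "chat:") {} // configurable key prefix

  private k(key: string): string {
    return this.prefix + key;
  }

  get(key: string): string | null {
    const entry = this.store.get(this.k(key));
    if (!entry) return null;
    if (entry.expiresAt !== 0 && entry.expiresAt < Date.now()) {
      this.store.delete(this.k(key)); // lazy TTL expiry
      return null;
    }
    return entry.value;
  }

  set(key: string, value: string, ttlSeconds = 0): void {
    const expiresAt = ttlSeconds > 0 ? Date.now() + ttlSeconds * 1000 : 0;
    this.store.set(this.k(key), { value, expiresAt });
  }

  // A lock is a key only one caller can create at a time; the TTL bounds
  // how long a crashed holder can keep it wedged.
  acquireLock(key: string, ttlSeconds: number): boolean {
    if (this.get(`lock:${key}`) !== null) return false;
    this.set(`lock:${key}`, "held", ttlSeconds);
    return true;
  }

  releaseLock(key: string): void {
    this.store.delete(this.k(`lock:${key}`));
  }
}
```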
Operational / design numbers¶
- 8 platforms supported: Slack, Microsoft Teams, Google Chat, Discord, Telegram, GitHub, Linear, WhatsApp. (Multiple distinct adapter packages.)
- 2 community PRs named: @bai / PR #154 (Postgres adapter), @ghellach / PR #102 (WhatsApp adapter).
- 3-option card fallback on WhatsApp: "interactive reply buttons with up to three options, falling back to formatted text where needed."
- 24-hour WhatsApp messaging window: hard platform constraint encoded in the adapter.
- `npx skills add vercel/chat`: coding-agent augmentation command.
- chat-sdk.dev/docs: launch-time docs host.
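The 24-hour window amounts to a simple deliverability check, sketched here with invented names; the adapter encodes the constraint internally rather than exposing it like this:

```typescript
// WhatsApp only lets a bot reply within 24 hours of the user's last
// inbound message. Sketch of that check; names are illustrative.
const WHATSAPP_WINDOW_MS = 24 * 60 * 60 * 1000;

function withinMessagingWindow(
  lastInboundAt: Date,
  now: Date = new Date(),
): boolean {
  return now.getTime() - lastInboundAt.getTime() < WHATSAPP_WINDOW_MS;
}
```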
Caveats / what the post does NOT disclose¶
- No quantitative performance numbers. No latency measurements for the intermediate-edit conversion pipeline, no throughput numbers for state-adapter locking, no p50/p99 for end-to-end streaming under the fallback path.
- No architecture diagram for the state-adapter distributed-lock implementation. The post says Postgres supports distributed locking "across multiple instances" but doesn't disclose whether this uses advisory locks, row locks with `SELECT FOR UPDATE`, or application-level leases with a heartbeat.
- No discussion of adapter-level rate-limit handling. Every platform has a different rate-limit shape (Slack Tier 1–4, Discord bucket-based, WhatsApp per-phone-number); the post doesn't explain how, or whether, the adapter layer normalises these.
- The markdown-to-native converter's intermediate-edit semantics are only described at a high level; the hard case (a token boundary midway through a `**bold**` construct) is not worked through.
- Product-voice framing. Heavy marketing structure (challenge, code snippets, CTA), minimal production-scale detail. In scope, narrowly, on adapter-architecture grounds — cross-platform messaging adapter design is a legitimate distributed-systems-at-the-edge topic (platform quirks, streaming semantics, at-least-once notification firing, platform-level constraints like 24-hour windows). The architectural content exceeds the 20% threshold in the AGENTS.md borderline-cases rule, though it's the thinnest of the three Vercel ingests so far on that measure.
- "Open source, public beta" — API surface likely to churn; any code-shape claims on this page should be treated as 2026-04-21 snapshot.
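On the undisclosed distributed-lock mechanism: one plausible implementation, labelled here as an assumption the post does not confirm, is Postgres session advisory locks. Those take a signed 64-bit key, so a namespaced string key would first be hashed to a bigint. A sketch of that derivation:

```typescript
// Hypothetical: derive a signed 64-bit advisory-lock key from a
// namespaced string key using FNV-1a, for use with e.g.
//   SELECT pg_try_advisory_lock($1)
// The post does not say the Chat SDK Postgres adapter works this way.
function advisoryLockKey(prefix: string, key: string): bigint {
  const input = `${prefix}:${key}`;
  let hash = 0xcbf29ce484222325n; // FNV-1a 64-bit offset basis
  for (const ch of input) {
    hash ^= BigInt(ch.codePointAt(0)!);
    hash = (hash * 0x100000001b3n) & 0xffffffffffffffffn; // FNV prime, wrap to 64 bits
  }
  return BigInt.asIntN(64, hash); // pg advisory locks expect a signed bigint
}
```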
Scope-decision rationale¶
Marginal include, not skip. Weighed against the AGENTS.md product-launch filter:
- Skip signals present: title framing ("brings agents to your users"), CTA-heavy body, public-beta announcement, the `npx skills` coding-agent prompt, community-contributor acknowledgements with PR numbers.
- Include signals present: (a) explicit streaming-fallback design with intermediate-edit semantics disclosed, (b) cross-platform UI-primitive rendering matrix, (c) pluggable state-adapter design with distributed-lock capability named, (d) AI-SDK text-stream composition pattern documented, (e) WhatsApp 24-hour messaging window surfaced as an SDK-level constraint.
- Cross-wiki value: the post resolves a dangling-link debt — patterns/multi-platform-chat-adapter-single-agent was referenced by the systems/vercel-knowledge-agent-template page from the 2026-04-21 knowledge-agents-without-embeddings ingest but never created. This post is its canonical source.
Source¶
- Original: https://vercel.com/blog/chat-sdk-brings-agents-to-your-users
- Raw markdown: raw/vercel/2026-04-21-chat-sdk-brings-agents-to-your-users-3e77963a.md
Related¶
- companies/vercel
- systems/vercel-chat-sdk
- systems/ai-sdk
- systems/vercel-knowledge-agent-template
- patterns/multi-platform-chat-adapter-single-agent
- patterns/platform-adaptive-component-rendering
- patterns/streaming-markdown-to-native-conversion
- patterns/pluggable-state-backend
- concepts/messaging-platform-24-hour-response-window
- concepts/clear-text-name-resolution