Remote MCP server via platform launcher¶
Pattern¶
A platform CLI (flyctl, a Cloudflare analog, a hypothetical Render/Railway/Fly competitor) ships a single subcommand that takes an existing local, stdio-style MCP server command and one-shots it into a remote HTTP MCP server running as a platform-native compute primitive, with:

- Container / image built from the stdio command (so the server runs inside the platform, not on the operator's workstation).
- Transport flipped from stdio to HTTP / Streamable-HTTP so remote clients can reach it.
- Auth surface inverted — from operator-workstation credential inheritance (the local-MCP-server risk shape) to bearer-token-in-client-config (or OAuth 2.1 if the client needs it).
- Secrets injected as platform secrets via repeated `--secret KEY=value` flags, not shell env or client-config JSON.
- Client-config file(s) rewritten in place via selector flags (`--claude`, `--cursor`, `--zed`, …), eliminating the client-config fragmentation tax.
- Platform knobs passed through — auto-stop, private networking (e.g. Flycast), volumes, region pinning, VM size — so the resulting server is a first-class platform resource, not a black box.
The canonical wiki instance is Fly.io's `fly mcp launch` (2025-05-19), which implements all six properties in one command (Source: sources/2025-05-19-flyio-launching-mcp-servers-on-flyio):
```shell
fly mcp launch \
  "npx -y @modelcontextprotocol/server-slack" \
  --claude --server slack \
  --secret SLACK_BOT_TOKEN=xoxb-your-bot-token \
  --secret SLACK_TEAM_ID=T01234567
```
Why it's distinct from "deploy a Docker image"¶
On the surface this is just "run this process on a platform." But the pattern does five things generic container deployment does not:
- Transport translation — the operator's input is a stdio command (e.g. `npx -y @modelcontextprotocol/server-slack`); the output is an HTTP endpoint. Something has to wrap the subprocess in an HTTP transport.
- Bearer-token issuance and distribution — the platform mints the token, writes it into the server's env, and writes the matching token into the client config. The operator doesn't see the token or have to copy-paste it.
- Client-config rewrite in place — the launcher edits JSON config files on the operator's workstation (the `~/Library/Application Support/Claude/claude_desktop_config.json` and `~/.config/zed/settings.json` family). This is unusual for a deploy tool and is what makes the UX one command instead of three.
- Multi-client fan-out — one invocation can select multiple clients (`--claude --zed`) and rewrite all their config files from the same deploy.
- Beta-aware ergonomics — the command accepts `--secret` without requiring a prior `fly secrets set` step; the whole flow is optimised for "try a new MCP server in one command," not "operationalise an MCP server for a team."
When to reach for it¶
- Operator wants to try a third-party MCP server (published on NPM, PyPI, or GitHub) without running arbitrary code on their workstation — eliminates the local-MCP-server risk by relocating execution to an isolated VM.
- The MCP server needs secrets (API keys, OAuth tokens, database credentials) that shouldn't live in the operator's client-config JSON or shell env.
- The operator uses multiple MCP clients (common: Claude Desktop + Cursor + an editor) and doesn't want to maintain N parallel config entries.
- The server should be shared across devices / teammates — a remote server with bearer-token or OAuth auth is shareable; a stdio server running on the operator's laptop is not.
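The multi-client fan-out that makes the last two bullets work — one deploy, N config files rewritten — reduces to a JSON upsert per selected client. A minimal sketch, assuming a flat `{"mcpServers": {...}}` schema and temp-dir paths as stand-ins (real clients each have their own file locations and formats, which is the fragmentation being papered over):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the operator's home directory; real paths are the
# claude_desktop_config.json / zed settings.json family.
BASE = Path(tempfile.mkdtemp())
CLIENT_CONFIGS = {
    "claude": BASE / "claude_desktop_config.json",
    "zed": BASE / "zed_settings.json",
}

def rewrite_configs(selected, server_name, url, token):
    """Upsert one remote-MCP-server entry into each selected client's config."""
    for client in selected:
        path = CLIENT_CONFIGS[client]
        # Preserve whatever else is already in the file; touch one key only.
        config = json.loads(path.read_text()) if path.exists() else {}
        servers = config.setdefault("mcpServers", {})
        servers[server_name] = {"url": url, "bearerToken": token}
        path.write_text(json.dumps(config, indent=2))

# One invocation (--claude --zed), two config files rewritten from one deploy.
rewrite_configs(["claude", "zed"], "slack",
                "https://slack-mcp.example.net/", "tok_demo123")
```

The upsert (rather than overwrite) is what lets repeated launches and multiple servers coexist in one client config file.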
When NOT to reach for it¶
- The MCP server is trusted and local resources are the point. A `flymcp`-style wrapper around local `flyctl` credentials, `kubectl` context, or `git` is deliberately scoped to the operator's workstation — deploying it remotely defeats the purpose. See patterns/wrap-cli-as-mcp-server.
- Latency matters more than auth. stdio is zero-RTT by design; HTTP adds at least one round trip per tool call. Chatty sequences of short-lived calls may feel slower over HTTP.
- Enterprise governance already in place. If the org already has a centralised MCP proxy (patterns/central-proxy-choke-point), bolting on a per-operator launcher fragments the governance surface.
Inverse of wrap-CLI-as-MCP-server¶
patterns/wrap-cli-as-mcp-server takes a CLI and makes it an MCP server (local stdio). This pattern takes a local stdio MCP server and makes it remote. The two patterns are complementary moves on the same axis, and Fly.io has shipped both: `flymcp` (2025-04-10) wraps `flyctl` as a local stdio MCP server; `fly mcp launch` (2025-05-19) takes any stdio MCP server and remotes it. A wiki reader tracing the MCP deployment decision tree should consult both.
Seen in¶
- sources/2025-05-19-flyio-launching-mcp-servers-on-flyio — canonical wiki instance (Fly.io's `fly mcp launch`).
Related¶
- systems/fly-mcp-launch — canonical implementation.
- systems/fly-flyctl — CLI that hosts the subcommand.
- systems/fly-machines — compute primitive the server becomes.
- systems/model-context-protocol — target protocol.
- concepts/mcp-client-config-fragmentation — the editorial problem solved by client-config rewrite.
- concepts/local-mcp-server-risk — the security problem solved by relocating execution to a platform VM.
- concepts/mcp-long-lived-sse — the routing constraint the remote server inherits.
- patterns/wrap-cli-as-mcp-server — the inverse / paired pattern.
- patterns/session-affinity-for-mcp-sse — the routing pattern the launched server's front-end proxy must enforce.
- patterns/central-proxy-choke-point — the alternative enterprise deployment shape this pattern is complementary with, not a replacement for.