PATTERN Cited by 3 sources
MCP as centralized integration proxy¶
Pattern¶
Deploy a single MCP server tier
in front of the enterprise's internal systems (databases, queues,
SaaS APIs, code repos, docs) and make it the mandatory choke-point
through which every agent in the organisation accesses every
internal tool. The MCP server speaks intent ("create a Redpanda
cluster in us-east-1") and implements the mechanism ("the five
API calls"), hiding binary-protocol complexity, connection-pool
management, retries/backoffs, TLS, certificates, and authentication
from every agent that would otherwise re-implement them.
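The intent-to-mechanism split can be sketched as follows. This is a hedged illustration, not Redpanda's actual implementation: every name (the endpoint paths, the `FakeAPI` stand-in, the five calls) is hypothetical, chosen only to show an agent sending one intent while the proxy performs several mechanism-level API calls behind it.

```python
# Hypothetical sketch of an intent-based proxy tool: the agent states one
# intent; the proxy implements the mechanism (several downstream calls).
# All endpoint names are illustrative, not Redpanda's actual API.

def create_cluster(name: str, region: str, api) -> dict:
    """One intent ("create a cluster in us-east-1") -> five mechanism calls.
    Credentials, retries, and TLS live behind `api`, never in the agent."""
    network = api.call("POST", "/networks", {"region": region})
    cluster = api.call("POST", "/clusters",
                       {"name": name, "network": network["id"]})
    api.call("POST", f"/clusters/{cluster['id']}/acls", {"default": "deny"})
    api.call("POST", f"/clusters/{cluster['id']}/users", {"user": "agent"})
    status = api.call("GET", f"/clusters/{cluster['id']}")
    return {"cluster_id": cluster["id"], "state": status["state"]}


class FakeAPI:
    """Stand-in downstream API so the sketch is self-contained."""
    def __init__(self):
        self.calls = []

    def call(self, method, path, body=None):
        self.calls.append((method, path))
        return {"id": "abc123", "state": "ready"}
```

The agent-visible surface is the single `create_cluster` intent; the five-call choreography, and everything wrapped inside `api`, is the complexity the proxy tier absorbs.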
Canonical statement on the wiki¶
Alex Gallego's 2025-04-03 founder-voice reframing of MCP from tool-description format to infrastructure-layer proxy (Source: Gallego 2025-04-03):
"MCP is about intent, 'create a Redpanda Cluster in us-east-1', while the MCP server worries about implementing the five API calls. While MCP is often nothing more than the wrapping of custom protocols for databases, queues, caches, docs, GitHub, Salesforce; the additional layer of abstraction pushes these complexities away from the application that is focused on the linear distribution of the data, sampling, chunking, etc., — not on validating credentials or the correct SASL handshake."
"Most critically, centralization offers composability, understandability, and debugging that would have to be replicated by every agent otherwise. Many products' APIs are brittle and evolution is riddled with conditional logic. An intent-based proxy pushes that complexity to a central location."
Why centralization wins¶
Gallego names four capabilities that only a proxy tier can provide, and that every agent would otherwise have to re-implement:
- Auditing — every tool call flows through one place, so the audit log is deterministically complete.
- Tracing — end-to-end traces (prompt → tool call → result) can be materialised without cross-agent coordination.
- Cost accounting — per-tool / per-agent cost attribution converges to one aggregation point.
- Authentication + authorization + API-token management — credential handling lives in the proxy, not in every agent.
Plus two architectural capabilities:
- Composability — tools compose uniformly across agents.
- Brittle-API insulation — vendor API churn is absorbed by the MCP server layer; agents see a stable intent surface.
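All four choke-point capabilities fall out of a single dispatch point, which can be shown in a short sketch. This is a hedged illustration only: `ProxyTier` and its fields are hypothetical names, not an MCP SDK API.

```python
# Hypothetical sketch: one dispatch point yields audit, tracing, cost
# attribution, and credential isolation without per-agent replication.
import time
import uuid

class ProxyTier:
    def __init__(self, tools, credentials):
        self._tools = tools              # name -> callable(args)
        self._credentials = credentials  # held here; agents never see them
        self.audit_log = []              # complete by construction: one path in
        self.cost = {}                   # per (agent, tool) attribution

    def call(self, agent: str, tool: str, args: dict):
        trace_id = uuid.uuid4().hex      # end-to-end trace id, no agent coordination
        start = time.monotonic()
        result = self._tools[tool](args)
        self.audit_log.append({"trace": trace_id, "agent": agent, "tool": tool,
                               "args": args, "secs": time.monotonic() - start})
        self.cost[(agent, tool)] = self.cost.get((agent, tool), 0) + 1
        return result
```

Because every call passes through `ProxyTier.call`, the audit log and cost table are complete without any agent opting in, which is precisely the property that per-agent integration cannot guarantee.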
Mechanism at Redpanda¶
rpk connect mcp-server exposes
Redpanda Connect pipelines, resources,
and processors as MCP tools. Because Redpanda Connect ships ~300
pre-built connectors (databases, queues, caches, SaaS APIs) as
open-source YAML-configurable pipelines, any of them becomes an
MCP tool via a simple config — "allows you to expose any redpanda
connect source and destination as a tool with a simple
configuration."
The proxy layer handles connection pooling, retries, exponential backoffs, TLS, certificates, and authentication declaratively — "a simple HTTP endpoint with a YAML config that manages all of the connection pooling, retries, exponential backoffs, TLS, certificates, authentication, etc."
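As a minimal sketch of one of the mechanics the proxy absorbs, here is a generic retry-with-exponential-backoff helper. It is illustrative, not Redpanda Connect's implementation; pooling, TLS, and authentication would sit at the same layer.

```python
# Hypothetical sketch of retry/backoff logic the proxy layer absorbs so
# agents never re-implement it. Names are illustrative.
import time

def call_with_backoff(fn, retries=4, base_delay=0.01, sleep=time.sleep):
    """Retry fn() up to `retries` times, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise                      # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))   # 0.01, 0.02, 0.04, ...
```

The `sleep` parameter is injected only so the sketch is testable without real delays; in the declarative YAML framing of the post, this whole function collapses to a config stanza the agent never sees.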
Orthogonal patterns¶
- patterns/central-proxy-choke-point — the generic wiki-canon pattern; MCP-as-proxy is a content-aware instantiation targeting LLM agents specifically.
- patterns/dynamic-content-filtering-in-mcp-pipeline — the per-call content-level policy enforcement the proxy enables.
- patterns/wrap-cli-as-mcp-server — the Fly.io-canonicalised local-wrapper pattern; different altitude (individual CLI vs full integration surface).
- patterns/hosted-mcp-ecosystem — the Pinterest-canonicalised centralisation pattern at a platform-org cardinality.
Contrast with "one MCP server per tool"¶
The non-centralized alternative — which is what most early MCP deployments look like — is "one MCP server per tool": each service team ships its own MCP server, each agent configures N servers (concepts/mcp-client-config-fragmentation), and the choke-point capabilities above have to be replicated. The centralised-integration-proxy pattern consolidates them into a single tier that owns the integration contract.
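The configuration contrast can be made concrete with a small sketch. The shapes below are hypothetical (not a real MCP client config format): the point is only the cardinality, N entries per agent versus one.

```python
# Illustrative contrast of client-side config cardinality; the dict shapes
# and URLs are hypothetical, not a real MCP client configuration.

per_tool = {           # "one MCP server per tool": every agent carries N entries
    "github":   {"url": "http://github-mcp.internal"},
    "jira":     {"url": "http://jira-mcp.internal"},
    "postgres": {"url": "http://pg-mcp.internal"},
}

centralised = {        # centralised integration proxy: one entry per agent
    "proxy": {"url": "http://mcp-proxy.internal"},
}
```

Under the per-tool topology, adding a tool means touching every agent's config; under the proxy, it means registering one tool behind the single entry.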
Caveats¶
- Single-point-of-failure risk. A centralised proxy becomes a hot-path dependency: its availability requirements compound across every downstream tool it fronts, and a proxy outage blocks every agent. This is the classic choke-point trade-off (see patterns/central-proxy-choke-point caveats).
- Ownership complexity. "One MCP tier" is a platform-org commitment; in practice, different MCP servers with different ownership end up being registered in one registry (see concepts/mcp-registry). The proxy-ness is policy-enforced, not topology-forced.
- Content-filtering aspiration. Gallego's vision of per-call dynamic content filtering — see patterns/dynamic-content-filtering-in-mcp-pipeline — is a direction of travel, not a fully-specified MCP primitive in the source post.
Seen in¶
- sources/2025-04-03-redpanda-autonomy-is-the-future-of-infrastructure — Gallego's founder-voice reframing of MCP as infrastructure rather than tool protocol, with Redpanda Connect's ~300 connectors exposed via rpk connect mcp-server as the canonical instantiation.
- sources/2025-10-28-redpanda-governed-autonomy-the-path-to-enterprise-agentic-ai — the 2025-10-28 ADP companion post upgrades the framing from "centralised integration proxy" to "agentic governance layer". Verbatim: adding MCP servers to Redpanda Connect "transforms it into an agentic governance layer between all the data systems and agents connecting through it." The proxy becomes the enforcement point for AAC's pre-and-post-I/O policy checks and the producer boundary for the durable event log audit envelope.
Related¶
- systems/model-context-protocol
- systems/redpanda-connect
- systems/redpanda-agents-sdk
- concepts/autonomy-enterprise-agents
- concepts/mcp-client-config-fragmentation
- patterns/central-proxy-choke-point
- patterns/dynamic-content-filtering-in-mcp-pipeline
- patterns/wrap-cli-as-mcp-server
- patterns/hosted-mcp-ecosystem