
PATTERN Cited by 3 sources

MCP as centralized integration proxy


Deploy a single MCP server tier in front of the enterprise's internal systems (databases, queues, SaaS APIs, code repos, docs) and make it the mandatory choke point through which every agent in the organization accesses every internal tool. The MCP server speaks intent ("create a Redpanda cluster in us-east-1") and implements the mechanism (the five underlying API calls), hiding binary-protocol complexity, connection-pool management, retries/backoffs, TLS, certificates, and authentication from every agent that would otherwise re-implement them.
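A minimal sketch of the intent/mechanism split, assuming a hypothetical provisioning API. The endpoints, the five-call sequence, and all class names below are illustrative, not Redpanda's actual control-plane API:

```python
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    """Every mechanism-level call is recorded in one place."""
    calls: list = field(default_factory=list)

    def record(self, call: str) -> None:
        self.calls.append(call)

class ClusterProxy:
    """MCP-server-style proxy: accepts an intent, runs the mechanism."""

    def __init__(self, audit: AuditLog):
        self.audit = audit

    def _call(self, endpoint: str) -> None:
        # In a real proxy, auth, TLS, retries/backoff, and connection
        # pooling would live here, invisible to the calling agent.
        self.audit.record(endpoint)

    def create_cluster(self, name: str, region: str) -> dict:
        # One intent fans out into the underlying API calls.
        self._call(f"POST /networks?region={region}")
        self._call(f"POST /clusters?name={name}")
        self._call(f"GET /clusters/{name}/status")
        self._call(f"POST /clusters/{name}/acls")
        self._call(f"GET /clusters/{name}/endpoints")
        return {"cluster": name, "region": region, "status": "ready"}

audit = AuditLog()
proxy = ClusterProxy(audit)
result = proxy.create_cluster("demo", "us-east-1")
print(len(audit.calls))  # the agent issued one intent; the proxy made 5 calls
```

The agent's surface is a single verb; the brittle multi-call choreography stays behind the proxy.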

Canonical statement on the wiki

Alex Gallego's 2025-04-03 founder-voice reframing of MCP from a tool-description format to an infrastructure-layer proxy (Source: Gallego 2025-04-03):

"MCP is about intent, 'create a Redpanda Cluster in us-east-1', while the MCP server worries about implementing the five API calls. While MCP is often nothing more than the wrapping of custom protocols for databases, queues, caches, docs, GitHub, Salesforce; the additional layer of abstraction pushes these complexities away from the application that is focused on the linear distribution of the data, sampling, chunking, etc., — not on validating credentials or the correct SASL handshake."

"Most critically, centralization offers composability, understandability, and debugging that would have to be replicated by every agent otherwise. Many products' APIs are brittle and evolution is riddled with conditional logic. An intent-based proxy pushes that complexity to a central location."

Why centralization wins

Gallego names four capabilities that only a proxy tier can provide and that every agent would otherwise have to re-implement:

  1. Auditing — every tool call flows through one place, so the audit log is deterministically complete.
  2. Tracing — end-to-end prompt → tool call → result traces can be materialized without cross-agent coordination.
  3. Cost accounting — per-tool / per-agent cost attribution converges to one aggregation point.
  4. Authentication + authorization + API-token management — credential handling lives in the proxy, not in every agent.
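The four capabilities above share one structural property: they all converge on the single call path every tool invocation traverses. A toy sketch, in which all names (ChokePoint, search_docs, the token values) are hypothetical and no real MCP protocol code is involved:

```python
class ChokePoint:
    """Every tool call flows through call(), so auditing, tracing,
    cost accounting, and auth happen here by construction."""

    def __init__(self, tools: dict, prices: dict, tokens: set):
        self.tools = tools      # tool name -> callable
        self.prices = prices    # tool name -> cost per call
        self.tokens = tokens    # valid API tokens, managed centrally
        self.audit = []         # complete log; also the trace backbone
        self.costs = {}         # agent -> accumulated cost

    def call(self, agent: str, token: str, tool: str, *args):
        if token not in self.tokens:            # auth in one place
            raise PermissionError(f"{agent}: invalid token")
        self.audit.append((agent, tool, args))  # audit + trace span
        self.costs[agent] = self.costs.get(agent, 0.0) + self.prices[tool]
        return self.tools[tool](*args)

cp = ChokePoint(
    tools={"search_docs": lambda q: f"results for {q}"},
    prices={"search_docs": 0.01},
    tokens={"tok-alpha"},
)
cp.call("agent-a", "tok-alpha", "search_docs", "redpanda")
cp.call("agent-a", "tok-alpha", "search_docs", "mcp")
print(len(cp.audit), round(cp.costs["agent-a"], 2))  # 2 0.02
```

Without the choke point, each of the four concerns would need per-agent re-implementation and cross-agent reconciliation.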

Plus two architectural capabilities:

  1. Composability — tools compose uniformly across agents.
  2. Brittle-API insulation — vendor API churn is absorbed by the MCP server layer; agents see a stable intent surface.
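Brittle-API insulation in a minimal sketch: the vendor functions, field names, and version branch below are invented for illustration. The point is that agents only ever see the stable send_message intent while version churn is absorbed in the tier:

```python
# Hypothetical vendor churn: v1 expects "msg"; v2 renames it to "body"
# and adds a required "channel_id".
def vendor_api_v1(payload: dict) -> str:
    return f"sent:{payload['msg']}"

def vendor_api_v2(payload: dict) -> str:
    return f"sent:{payload['body']}@{payload['channel_id']}"

class MessagingTool:
    """The MCP tier owns the conditional logic; agents see one intent."""

    def __init__(self, vendor_version: int):
        self.vendor_version = vendor_version

    def send_message(self, text: str, channel: str = "general") -> str:
        # The intent surface never changes; branching is absorbed here.
        if self.vendor_version == 1:
            return vendor_api_v1({"msg": text})
        return vendor_api_v2({"body": text, "channel_id": channel})

print(MessagingTool(1).send_message("hi"))  # sent:hi
print(MessagingTool(2).send_message("hi"))  # sent:hi@general
```

When the vendor ships v2, only the proxy's branch changes; no agent is redeployed.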

Mechanism at Redpanda

rpk connect mcp-server exposes Redpanda Connect pipelines, resources, and processors as MCP tools. Because Redpanda Connect ships ~300 pre-built connectors (databases, queues, caches, SaaS APIs) as open-source YAML-configurable pipelines, any of them becomes an MCP tool via a simple config — "allows you to expose any redpanda connect source and destination as a tool with a simple configuration."

The proxy layer handles connection pooling, retries, exponential backoffs, TLS, certificates, and authentication declaratively — "a simple HTTP endpoint with a YAML config that manages all of the connection pooling, retries, exponential backoffs, TLS, certificates, authentication, etc."
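A rough sketch of the retry-with-exponential-backoff behavior such a declarative config drives. The config keys and helper below are hypothetical, not Redpanda Connect's actual schema:

```python
import time

# Keys mirror the kinds of knobs the quote lists ("retries,
# exponential backoffs"); zero base delay keeps the demo instant.
config = {"retries": 3, "backoff_base_s": 0.0}

def with_retries(cfg: dict, op):
    """Run op(), retrying transient failures with exponential backoff."""
    delay = cfg["backoff_base_s"]
    for attempt in range(cfg["retries"] + 1):
        try:
            return op()
        except ConnectionError:
            if attempt == cfg["retries"]:
                raise  # retries exhausted; surface the failure
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts

attempts = {"n": 0}
def flaky():
    """Simulated downstream that fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(config, flaky))  # ok, after two transparent retries
```

The agent sees only the final "ok"; the flakiness is absorbed by the proxy layer, exactly the complexity the pattern pushes out of every agent.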

Orthogonal patterns

Contrast with "one MCP server per tool"

The non-centralized alternative — which is what most early MCP deployments look like — is "one MCP server per tool": each service team ships its own MCP server, each agent configures N servers (concepts/mcp-client-config-fragmentation), and the choke-point capabilities above have to be replicated in each one. The centralized-integration-proxy pattern consolidates them into a single tier that owns the integration contract.
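The scaling difference can be made concrete with a toy count. The server names and the flat dict shape below are invented; real MCP client configs differ:

```python
# "One MCP server per tool": every agent repeats N server entries.
per_tool = {
    "github-mcp": "https://github-team.internal/mcp",
    "salesforce-mcp": "https://crm-team.internal/mcp",
    "postgres-mcp": "https://data-team.internal/mcp",
}
# Centralized proxy: one entry, one integration contract.
centralized = {
    "enterprise-mcp": "https://mcp.internal/proxy",
}
# With A agents and N tools, per-tool config scales as A * N entries;
# the proxy tier collapses that to A entries.
agents = 50
print(agents * len(per_tool), agents * len(centralized))  # 150 50
```

Credential distribution scales the same way: N tool credentials per agent versus one proxy credential per agent.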

Caveats

  • Single-point-of-failure risk. A centralized proxy becomes a hot-path dependency; its availability must match or exceed that of every downstream tool it fronts, and a proxy outage blocks every agent. This is the classic choke-point trade-off (see patterns/central-proxy-choke-point caveats).
  • Ownership complexity. "One MCP tier" is a platform-org commitment; in practice, different MCP servers with different ownership end up being registered in one registry (see concepts/mcp-registry). The proxy-ness is policy-enforced, not topology-forced.
  • Content-filtering aspiration. Gallego's vision of per-call dynamic content filtering — see patterns/dynamic-content-filtering-in-mcp-pipeline — is a direction of travel, not a fully-specified MCP primitive in the source post.

Seen in
