PLANETSCALE 2026-01-29 Tier 3

PlanetScale — Introducing the PlanetScale MCP server

Summary

Mike Coutermarsh (PlanetScale, 2026-01-29) announces the PlanetScale MCP server — a hosted Model Context Protocol server that exposes PlanetScale organizations, databases, branches, schema, and Insights data to any MCP-compatible client (Claude, Cursor, Open Code, …). Authentication is OAuth-based, with read-only vs full-access permissions settable separately for production and development branches — i.e. the permission surface is at the branch level, not the database level. The launch envelope is short, but five concrete production-oriented safeguards travel with it: automatic replica routing for read-only queries, ephemeral per-query credentials, SQL-comment query tagging (source=planetscale-mcp) so Insights can attribute MCP-originated traffic, destructive-query protection (blocks UPDATE/DELETE without a WHERE, blocks TRUNCATE), and DDL human-in-the-loop confirmation (any CREATE/DROP/ALTER prompts the LLM to request human confirmation before executing). The post also catalogues the initial tool surface — read-only introspection tools (list_*/get_* for organizations, databases, branches, schema, Insights), plus permission-gated execute_read_query / execute_write_query / list_invoices.

Scope disposition — narrow Tier-3 batch-skip override

The raw-file batch processor flagged this with skip_reason: batch-skip — marketing/tutorial slug pattern (url: /introducing-planetscale-mcp-server), 0 arch signals in body — a correct application of the operational introducing skip signal.

The article nonetheless passes on architecture-density override: the safeguards block — automatic-replica-routing, ephemeral-per-query credentials, tagged-queries-for-Insights-attribution, destructive-query protection, DDL human confirmation — is ~35% of body content and describes real production-system mechanisms that recur across the wiki's hosted-MCP corpus (Pinterest, Datadog, Dropbox Dash). Ingest is intentionally narrow-scope: one new system page (planetscale-mcp-server), one new concept page (destructive-query-protection), one new pattern page (mcp-safeguards-over-raw-db-access), plus cross-reference updates to systems/model-context-protocol, concepts/ephemeral-credentials, systems/planetscale-insights, and concepts/hosted-vs-local-mcp-server.

Key takeaways

  • Hosted MCP server over a managed database — OAuth, scoped per branch. "The PlanetScale MCP server is a hosted MCP server that exposes your PlanetScale organizations, databases, branches, schema, and Insights data to MCP-compatible tools. It's authenticated via OAuth for configurable access to permissions and scopes, and accessible from any client that supports MCP servers." Permissions are read-only or full-access, settable separately for production and development branches — the authorisation boundary follows PlanetScale's deploy-request-shaped branch model, not the database-or-org shape that the initial 2023 PlanetScale API shipped with. Canonical Tier-3-clearing example of a hosted-over-local MCP server for a stateful production system — the Pinterest paved-path argument ("central execution is the only place internal routing + security logic can be applied") applied to a third-party SaaS.

  • source=planetscale-mcp SQL-comment tag on every query. "All queries include source=planetscale-mcp SQL comments, making them easy to identify by tags in PlanetScale Insights." This is the SQLCommenter convention — the same technique documented in Identifying slow Rails queries with sqlcommenter and actor-tagged query observability — applied to agent-originated traffic. The tag lets Insights / Traffic Control attribute and rate-limit MCP-originated traffic as its own workload class separately from human / app-tier queries. A recurring wiki observation: once you have query tagging as a primitive, every new query source gets a tag — agents are just another tag value.
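
The tagging itself is mechanically simple. A minimal sketch of the SQLCommenter-style convention, assuming a plain append-a-comment implementation (the function name and exact comment placement are illustrative, not PlanetScale's code):

```python
def tag_query(sql: str, source: str = "planetscale-mcp") -> str:
    """Append a SQLCommenter-style attribution comment to a query
    so the observability layer can group this traffic by tag."""
    return f"{sql.rstrip().rstrip(';')} /* source={source} */"

print(tag_query("SELECT id FROM users WHERE active = 1"))
# → SELECT id FROM users WHERE active = 1 /* source=planetscale-mcp */
```

Because the tag rides inside the SQL text itself, no client, driver, or proxy cooperation is needed downstream; anything that sees the query string sees the attribution.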

  • Ephemeral per-query credentials. "Each query uses short-lived credentials that are created on demand and deleted immediately after execution." This is the database-access analogue of ephemeral credentials: the MCP server never holds a long-lived MySQL / Postgres password; it mints one per query, uses it, and discards it. Contrast with the naive shape — an MCP server that would sit there with a long-lived admin password for the database — which is the local-MCP-server risk shape (concepts/local-mcp-server-risk) translated to managed SaaS. Ephemeral-per-query is a stricter shape than AWS STS session-token-per-hour; it's "token per tool call".
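
A hedged sketch of that mint-use-discard lifecycle, with random tokens standing in for the real credential-minting API calls (all names here are illustrative; the actual mechanism is a PlanetScale-internal API):

```python
import contextlib
import secrets

@contextlib.contextmanager
def ephemeral_credential():
    """Mint a database credential on demand, scope it to one query,
    and destroy it immediately after execution."""
    cred = {"user": f"mcp-{secrets.token_hex(4)}",
            "password": secrets.token_hex(16)}
    try:
        yield cred  # run exactly one query under this credential
    finally:
        cred.clear()  # "deleted immediately after execution"

with ephemeral_credential() as cred:
    pass  # execute the single query here; afterwards the credential is gone
```

The point of the context-manager shape is that the credential's lifetime is lexically bound to one tool call: there is no code path where it outlives the query.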

  • Automatic replica routing for read-only queries. "Read-only queries are automatically run against a replica if your database has replicas configured." The MCP server inspects the query shape (read-only via execute_read_query) and routes to a read replica when one exists. Writes go to primary (semantically required) via execute_write_query. This is the read-write splitting logic that Rails apps normally implement via role: :reading / role: :writing — here it's pushed down into the MCP server as a property of the tool boundary, not a property of the client. Agent doesn't know / doesn't need to know.
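
As a sketch, the routing decision reduces to a property of which tool was invoked (tool names match the post; the return values and function are illustrative):

```python
def route_query(tool: str, has_replica: bool) -> str:
    """Pick a target host class for an MCP query.

    Read-tool queries go to a replica when one is configured;
    writes always go to primary. The agent never sees this decision."""
    if tool == "execute_read_query" and has_replica:
        return "replica"
    return "primary"
```

Encoding read-vs-write in the tool name (rather than parsing intent out of the SQL) is what makes this check trivial and unambiguous at the server boundary.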

  • Destructive-query protection — UPDATE/DELETE without WHERE blocked; TRUNCATE blocked. "UPDATE or DELETE statements without a WHERE clause are blocked, and TRUNCATE is not allowed." This is a server-side static check on the SQL string before execution — the same pattern that sql_safe_updates = ON ships in MySQL (and Rails' destroy_all emitting a warning) but applied structurally at the MCP tool boundary. The motivation is that LLMs occasionally emit exactly this shape — DELETE FROM users; — and the consequences on a production database are catastrophic. Canonical new wiki concept: concepts/destructive-query-protection.

  • DDL operations prompt human confirmation. "Any schema-changing operations (CREATE, DROP, ALTER, etc.) prompt the LLM to request human confirmation before proceeding." This is the MCP elicitation primitive used as a structural authz gate on DDL: the MCP server doesn't execute the DDL; it first emits a tool response that tells the client "ask the human to confirm this change", and the human's affirmative response is required before the server actually runs the schema change. A variant on the human-in-the-loop theme — and specifically the schema changes are the highest-risk class narrative that runs through every PlanetScale source on deploys, from how PlanetScale makes schema changes to gated deployments.
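
A minimal sketch of that gate, assuming a two-phase flow in which the first call returns an elicitation-style response and only a human-confirmed retry executes (function and response shapes are hypothetical, not PlanetScale's wire format):

```python
import re

DDL = re.compile(r"(?i)^\s*(create|drop|alter)\b")

def execute_write_query(sql: str, human_confirmed: bool = False) -> dict:
    """DDL is never executed on the first call: the server responds with a
    confirmation request, and only a confirmed retry runs the statement."""
    if DDL.match(sql) and not human_confirmed:
        return {"status": "confirmation_required", "statement": sql}
    return {"status": "executed"}
```

The key property is that confirmation is enforced server-side: a client (or a misbehaving model) that skips the prompt still cannot get the DDL executed without the confirmed retry.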

  • Initial tool surface mixes metadata introspection with permission-gated query execution. Introspection (always available): get_insights, list_organizations/get_organization, list_databases/get_database, list_branches/get_branch/get_branch_schema, list_regions_for_organization, list_cluster_size_skus, search_documentation. Permission-gated: execute_read_query, execute_write_query, list_invoices/get_invoice_line_items. Note the absence of execute_query — the read/write split is encoded in the tool surface itself, not an argument to a single tool. An agent that only has read-only permission literally does not see execute_write_query — this is tool-surface minimization used as an authz mechanism. Compare the Datadog MCP server pattern where toolsets are the minimization primitive (sources/2026-03-04-datadog-mcp-server-agent-tools) — here the toolset is implicit in the OAuth scope.

  • Explicit "caution on write access to production" framing. "We advise caution when giving LLMs write access to any production database. Always carefully review queries before execution." Acknowledges the underlying tension: even with automatic replica routing + ephemeral credentials + destructive-query protection + DDL confirmation, the human is still the final safety gate on LLM-originated writes. The vendor's position is "we ship safeguards so the default blast radius is bounded, but human review is not optional for production writes."

Tool surface catalogue

Always available (no special permission needed beyond OAuth):

  • get_insights — read query-performance data and patterns from Insights
  • list_organizations / get_organization — inspect the PlanetScale organisations the OAuth token has access to
  • list_databases / get_database — databases within an org
  • list_branches / get_branch / get_branch_schema — branches and their schema
  • list_regions_for_organization — available regions for an org
  • list_cluster_size_skus — cluster sizes available (filter by engine, include rates, per-region)
  • search_documentation — search PlanetScale docs, code examples, and API references

Permission-gated:

  • execute_read_query — requires read access to the branch; auto-routed to a replica when available
  • execute_write_query — requires full access to the branch; destructive-query and DDL checks apply
  • list_invoices / get_invoice_line_items — requires billing scope; invoice and line-item detail

Architectural observations

Safeguards are per-tool, not per-session

The five safeguards (replica routing, ephemeral creds, SQL tagging, destructive-query block, DDL confirmation) are properties of the MCP server's tool implementations, not policies applied at a gateway / reverse-proxy layer. This differs from e.g. Pinterest's "Envoy + JWT + business-group gating" shape (sources/2026-03-19-pinterest-building-an-mcp-ecosystem-at-pinterest), where safeguards live in the mesh. PlanetScale's model is simpler because:

  • OAuth is the auth boundary (no mesh needed for a hosted SaaS)
  • The safeguards care about SQL content, not traffic shape — so they need to live where the SQL string is visible
  • The tool surface itself is the authz primitive (read-only agents never see write tools)

Query tagging + Insights = closed-loop observability for agent traffic

Because every MCP query carries source=planetscale-mcp, the existing Insights pipeline — already designed around actor-tagged query observability — attributes agent-originated queries as their own actor/workload class automatically. No new observability plumbing needed; the agent becomes just another workload tag in an ecosystem designed for many workload tags.

Ephemeral-per-query is stricter than per-session credentials

"Each query uses short-lived credentials that are created on demand and deleted immediately after execution" — this is per-tool-call credential rotation, not per-session. Stricter than AWS STS (per-hour session tokens), stricter than the usual OAuth-access-token-per-client-session pattern. The security claim is that at any instant, the number of valid database credentials equals roughly the number of in-flight queries — i.e. near-zero when the system is idle.

Numbers / specs

  • Tool count at launch: 11 always-available introspection tools + 2 permission-gated query-execution tools + 2 billing tools = 15 tools.
  • Safeguard count: 5 (replica routing, ephemeral creds, SQL tagging, destructive-query block, DDL human confirmation).
  • Permission granularity: per-branch (production vs development), not per-database or per-org.
  • SQL-comment tag: source=planetscale-mcp (follows SQLCommenter convention).

Caveats

  • The post is a launch announcement. Architecture density is ~35% of body. No performance numbers, no production incident narratives, no discussion of how the OAuth scope model was negotiated against PlanetScale's existing API auth model, no discussion of how destructive-query detection handles SQL parsing edge cases (is it regex-based? AST-based via Vitess evalengine?). The narrow ingest captures the documented production mechanisms; further details would require a deeper followup post.
  • No claimed production usage numbers (how many queries / day, how many organisations have enabled it, etc.) — this is a launch post, not a retrospective post.
  • No discussion of rate-limiting / quota. Presumably the existing PlanetScale API quota applies, but the post does not document whether MCP-originated traffic counts against the same pool or has its own quota class. This is an open operational question for heavy-agent workloads.
  • "Built-in query tracking" is unclaimed as write-path protection. The source=planetscale-mcp tag makes attribution possible after the fact — it does not prevent an adversarial agent from spoofing its own tag to hide as a different workload (the tag is just a SQL comment). Authorisation remains the OAuth + destructive-check + DDL-confirm layer; tagging is for Insights visibility.
