
VTTablet

What it is

VTTablet is the per-MySQL-instance middleware process in a Vitess cluster. It sits between VTGate and MySQL and owns two responsibilities:

  1. Connection pooling to its local MySQL instance.
  2. Health checking of that MySQL instance, and publishing its health and role state to the topo-server.

From PlanetScale's own description:

"VTTablet behaves as a middleware between VTGate and MySQL. … The VTTablet will manage connection pooling and perform health checks for MySQL instances, updating its status in a topo-server." (Source: sources/2026-04-21-planetscale-planetscale-vs-amazon-rds)

One VTTablet runs alongside each MySQL instance (primary or replica). In PlanetScale's default topology, a production branch has 1 primary + 2 replicas = 3 MySQL instances, so 3 VTTablets.

Role in the Vitess data path

VTTablet sits at hop 4 of the 5-hop PlanetScale MySQL data path:

application  →  edge LB  →  VTGate  →  VTTablet  →  MySQL
                                       (hop 4)    (hop 5)

VTGate picks a tablet (by role and health, as published in the topo-server) and sends the query to the corresponding VTTablet. VTTablet then:

  • Multiplexes inbound VTGate queries onto a fixed-size back-end MySQL connection pool (see patterns/two-tier-connection-pooling).
  • Queues excess requests when the pool is saturated — "PlanetScale offers connection pooling, which scales with your cluster and enables connection requests to queue" (sources/2026-04-21-planetscale-planetscale-vs-amazon-rds).
  • Enforces per-query policies configured in VSchema / tablet config (transaction throttler, row-limit protection, etc. — see systems/vitess-transaction-throttler).
  • Publishes health + role state (primary / replica / spare, serving / not-serving) to the topo-server so VTGate can route accordingly.

Why the split with VTGate

The architectural division of labor (see patterns/query-routing-proxy-with-health-aware-pool):

Responsibility                   VTGate            VTTablet
Routing decision                 ✅                ❌
Query planning / cross-shard     ✅                ❌
Back-end MySQL connection pool   ❌                ✅
MySQL health checks              ❌                ✅
Topo-server writes (health)      ❌                ✅
Topo-server reads (routing)      ✅                ❌
Statelessness                    ✅ (effectively)  ❌ (owns pool + local MySQL)

VTTablet's co-location with MySQL is what makes cheap, fast health checks possible: a VTTablet probes 127.0.0.1:3306 on the same host, with no network round-trip. The same co-location makes the connection pool a local resource that doesn't compete with other tablets' pools.

Connection pooling specifics

VTTablet's connection pool is the back-end half of Vitess's two-tier connection pool:

  • Front-end pool: VTGate ↔ VTTablet connections (gRPC, not MySQL protocol).
  • Back-end pool: VTTablet ↔ MySQL connections (MySQL protocol, local).

The back-end pool is bounded; when it's full, incoming requests queue. This is the mechanism that lets a PlanetScale MySQL database claim "technically unlimited" client connections at VTGate: the client-facing ceiling is far higher than the real back-end MySQL max_connections ceiling, because requests multiplex and queue at the VTTablet layer. The setup was benchmarked at 1M client connections in sources/2026-04-21-planetscale-one-million-connections.
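The multiplexing invariant can be demonstrated with a counting semaphore and assumed numbers (1,000 clients over a pool of 10): however many client-side connections exist, MySQL never sees more than pool-size concurrent connections. A sketch, not Vitess code:

```go
package main

import (
	"fmt"
	"sync"
)

// run fans `clients` goroutines over a `backend`-sized connection pool
// and reports the peak number of simultaneously held connections.
func run(clients, backend int) int {
	pool := make(chan struct{}, backend) // one slot per real MySQL connection

	var wg sync.WaitGroup
	var mu sync.Mutex
	inFlight, peak := 0, 0

	for i := 0; i < clients; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			pool <- struct{}{} // when the pool is saturated, this send queues
			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()
			// ...query executes on the borrowed connection here...
			mu.Lock()
			inFlight--
			mu.Unlock()
			<-pool // return the connection to the pool
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	// 1,000 "client connections", but at most 10 MySQL connections ever open.
	fmt.Println("peak MySQL connections:", run(1000, 10))
}
```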

See sources/2026-04-21-planetscale-connection-pooling-in-vitess for the set-var / reserved-connection / tainted-connection mechanics that dictate when a pooled connection can be returned to the pool vs held open for the session.

Role transitions and failover

VTTablet roles are managed by the Vitess control plane (VTOrc / Orchestrator — see systems/vtorc, systems/orchestrator). When a tablet transitions role (e.g., replica → primary during a reparenting event):

  1. The promoted VTTablet updates its role state in the topo-server.
  2. VTGate, reading the topo-server, re-routes writes to the new primary VTTablet.

This is how VTGate + VTTablet + topo-server collectively implement automatic failover without the client needing to reconnect or re-resolve — VTGate simply re-reads the topo-server and starts sending writes to the new primary tablet.
