PlanetScale — Introducing sharding on PlanetScale with workflows¶
Summary¶
Ben Dicken's 2024-11-07 product-launch post introduces
PlanetScale Workflows
as a new dashboard-level operator surface over Vitess's
data-motion primitives, with the first shipped workflow
being an
unsharded-to-sharded table move on the same
PlanetScale database. The post is the canonical wiki
disclosure of two structurally distinct things: (1) that
PlanetScale now exposes Vitess's
MoveTables /
Reshard /
Materialize /
LookupVindex /
Migrate
workflow family directly to customers in the
PlanetScale dashboard, rather than gating it behind
Enterprise-plan human-driven runbooks; and (2) that the
table-migration-to-sharded-keyspace variant of
MoveTables is now a self-service operation for any
customer. The post pairs with the recently-released
Cluster configuration
UI — the already-available substrate for creating
sharded keyspaces, custom VSchema, and routing rules —
as the two halves of PlanetScale's self-service sharding
story: Cluster configuration builds the target
topology; Workflows moves live data into it.
Key takeaways¶
- Workflows product surface. PlanetScale Workflows are "recipes that run a series of predefined steps to perform actions on your database" — a dashboard-UI abstraction over Vitess's workflow primitives. The first shipped workflow is the table-move-to-sharded-keyspace variant of `MoveTables` — canonicalised on the wiki as the UI-wrapped workflow primitive pattern: expose a multi-step CLI/SDK primitive (`MoveTables Create` → `VDiff` → `SwitchTraffic` → `Complete`, with optional `ReverseTraffic`) as a monotonic UI with named phases and buttons per phase. Named verbatim: "If you're familiar with Vitess, this first workflow is similar to MoveTables, a critical component to managing your data distribution in Vitess." (Source: the post body.)
- Workflows is available on all PlanetScale plans. "Workflows functionality is available on all PlanetScale plans." This is the load-bearing business property of the post: sharding, previously Enterprise-only, is now a self-service feature on every plan. "Up until recently, much of the functionality for creating and configuring sharded databases was only accessible via our Enterprise plan." The move shifts sharding from a consulting-engagement-grade operation to a dashboard-click operation. (Source: the post body.)
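The UI-wrapped workflow primitive pattern named above can be sketched as a small state machine: phases may only be entered in order, with a single rollback transition for the "switch traffic back" escape hatch. This is an illustrative model, not PlanetScale's API; the phase names are assumptions derived from the post's walkthrough.

```python
# Toy model of a monotonic workflow UI: named phases entered strictly in
# order, plus one rollback transition (the ReverseTraffic escape hatch).
# Phase names are assumptions modelled on the post, not PlanetScale's API.

PHASES = ["validated", "copying", "verified", "switched", "completed"]

class MoveTablesWorkflow:
    def __init__(self):
        self.phase = None  # workflow not yet started

    def advance(self, target):
        """Move to the next phase; phases may only be entered in order."""
        expected = PHASES[0] if self.phase is None else PHASES[PHASES.index(self.phase) + 1]
        if target != expected:
            raise ValueError(f"cannot jump to {target!r}; next phase is {expected!r}")
        self.phase = target

    def switch_back(self):
        """The 'switch traffic back' button: only valid after cutover."""
        if self.phase != "switched":
            raise ValueError("can only reverse traffic after switching it")
        self.phase = "verified"  # traffic is served from the unsharded source again

wf = MoveTablesWorkflow()
for p in ["validated", "copying", "verified", "switched"]:
    wf.advance(p)
wf.switch_back()   # rollback: traffic back on the source
print(wf.phase)    # verified
```

The point of the monotonic shape is that each dashboard button is enabled only when its phase is next, which is how a multi-step CLI primitive becomes safe to drive by clicking.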
- Cluster configuration is the substrate. Creating a sharded keyspace is a prerequisite step handled via the separately-released Cluster configuration UI — "the Cluster configuration functionality allowed users to create their own sharded keyspaces as well as configure custom VSchema and routing rules." Workflows picks up from there: "with the addition of workflows, you can also easily migrate existing data to sharded keyspaces and smoothly switch production traffic between them with no downtime." Canonical wiki split: cluster configuration = topology-building substrate; workflows = data-motion substrate. Together they give self-service ownership of the full sharding lifecycle. (Source: the post body.)
- Worked UI walkthrough. The post walks the operator through the full `MoveTables`-to-sharded lifecycle via dashboard screenshots:
  - Verify two keyspaces in Cluster configuration (e.g. `gymtracker` unsharded + `gymtracker-sharded` target).
  - Navigate to Workflows, click New workflow.
  - Name the workflow, select source + target keyspaces, select tables to move.
  - Click Validate — "PlanetScale will not allow you to start the workflow until all validation checks pass."
  - Click Create workflow to enter the copying phase — "Initially, PlanetScale will migrate your data from the source keyspace to the target. After the initial bulk migration completes, it will continue to replicate any new rows that come in to the target."
  - Click Verify data to run a VDiff-equivalent pre-cutover check.
  - Click Switch traffic to cut over — the moment of cutover is a single click.
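The "Verify data" step above is a VDiff-equivalent check; the core idea is a row-level comparison of source and target by primary key. A minimal conceptual sketch (real VDiff streams and checksums rows shard-by-shard against a consistent replication position; this toy version just diffs two in-memory tables):

```python
# Toy VDiff-style check: compare source and target tables keyed by primary
# key, reporting rows that are missing from, extra in, or mismatched on the
# target. Table contents here are invented for illustration.
import hashlib

def row_digest(row: dict) -> str:
    # Deterministic digest of a row's column/value pairs.
    return hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()

def vdiff(source: dict, target: dict) -> dict:
    """source/target map primary key -> row dict; returns the discrepancies."""
    missing = [pk for pk in source if pk not in target]
    extra = [pk for pk in target if pk not in source]
    mismatched = [pk for pk in source.keys() & target.keys()
                  if row_digest(source[pk]) != row_digest(target[pk])]
    return {"missing": missing, "extra": extra, "mismatched": mismatched}

src = {1: {"user": "a", "reps": 10}, 2: {"user": "b", "reps": 12}}
tgt = {1: {"user": "a", "reps": 10}, 2: {"user": "b", "reps": 99}}
print(vdiff(src, tgt))  # {'missing': [], 'extra': [], 'mismatched': [2]}
```

An empty report is the precondition the operator wants before clicking Switch traffic.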
- "We also give you an option to switch traffic back to the unsharded database, providing an escape hatch in case any unexpected problems arise from the sharded configuration." This is the reverse-replication rollback primitive surfaced as a dashboard button. Canonical wiki mapping of UI verbs to Vitess CLI verbs: Validate = pre-check; Create workflow = `MoveTables Create`; Verify data = `VDiff`; Switch traffic = `MoveTables SwitchTraffic`; Switch back = `MoveTables ReverseTraffic`; (implied) Complete = `MoveTables Complete`. (Source: the post body.)
- Enumeration of Vitess workflow family. First canonical wiki enumeration of the full workflow primitive set verbatim from the post:
  - `MoveTables` — move tables between keyspaces (between logical databases within your cluster).
  - `Reshard` — modify the way data is sharded; spread across more or fewer shards as demand changes.
  - `Materialize` — create copies, aggregations, or views of tables in a Vitess cluster.
  - `LookupVindex` — create and populate Lookup Vindexes (lookup index tables that help queries execute faster).
  - `Migrate` — move tables between distinct Vitess clusters.
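The UI-verb-to-CLI-verb mapping can be written down as command templates. The `vtctldclient` flag spelling below follows recent Vitess releases and is an assumption here, not taken from the post; the workflow, keyspace, and table names are invented for illustration.

```python
# Hedged sketch: what each dashboard button roughly corresponds to at the
# Vitess CLI level. Names (move_gym, gymtracker, workouts) are invented;
# flag spelling follows recent vtctldclient releases, not the post.
WF, SRC, TGT = "move_gym", "gymtracker", "gymtracker-sharded"

UI_TO_VITESS = {
    "Create workflow": f"vtctldclient MoveTables --workflow {WF} --target-keyspace {TGT} "
                       f"create --source-keyspace {SRC} --tables 'workouts'",
    "Verify data":     f"vtctldclient VDiff --workflow {WF} --target-keyspace {TGT} create",
    "Switch traffic":  f"vtctldclient MoveTables --workflow {WF} --target-keyspace {TGT} switchtraffic",
    "Switch back":     f"vtctldclient MoveTables --workflow {WF} --target-keyspace {TGT} reversetraffic",
    "Complete":        f"vtctldclient MoveTables --workflow {WF} --target-keyspace {TGT} complete",
}

for ui_verb, cmd in UI_TO_VITESS.items():
    print(f"{ui_verb:15s} -> {cmd}")
```

Note that "Validate" has no single CLI counterpart; it is a PlanetScale-side pre-check bundle, which is why the wiki mapping labels it simply "pre-check".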
"For this first release, we focused on supporting
MoveTables, specifically when migrating a table
from an unsharded keyspace to a sharded one. We
believe that this is one of the most important
workflows for our users, as it unlocks the ability to
horizontally scale existing unsharded databases with
minimal friction." Planned future workflows: Reshard
("important to allow users to self-manage their
growing database systems on PlanetScale") and
Migrate ("to help with migrations from outside
sources").
(Source: the post body.)
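The planned Reshard workflow changes how keyspace-id ranges map onto shards. A toy sketch of Vitess-style keyranges under the assumption of one-byte keyspace ids: shards own half-open hex ranges, and a split (say two shards into four) narrows the range a given id lands in without changing its position in the ordering. This is a conceptual illustration, not Vitess code.

```python
# Toy Vitess-style keyrange routing: each shard owns a half-open hex range
# of the keyspace-id space ("" means open-ended). Resharding replaces the
# shard list; a given keyspace id simply lands in a narrower range.
def shard_for(keyspace_id: int, shards: list) -> str:
    """Return the shard whose [start, end) keyrange contains keyspace_id (one byte)."""
    for shard in shards:
        start_hex, end_hex = shard.split("-")
        start = int(start_hex, 16) if start_hex else 0
        end = int(end_hex, 16) if end_hex else 0x100
        if start <= keyspace_id < end:
            return shard
    raise ValueError("keyranges do not cover the keyspace")

before = ["-80", "80-"]                     # 2 shards
after = ["-40", "40-80", "80-c0", "c0-"]    # split into 4

print(shard_for(0x9f, before))  # 80-
print(shard_for(0x9f, after))   # 80-c0
```

Because row placement is a pure function of keyspace id and the shard list, a Reshard is "just" another copy-and-cutover data-motion job against a new shard list, which is why it fits the same workflow surface.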
- The business argument for self-service sharding. "Over the past few decades, many companies have faced significant challenges scaling their databases, especially at the point where a single server cannot handle all traffic. Being able to easily and safely transition to a sharded environment shifts this phase of a company's existence from an extreme pain point to a smooth, well-tuned process." Canonical wiki statement of the "sharding as a regime-shift milestone in a company's life" framing already canonicalised elsewhere in the corpus — this post promotes the milestone from a multi-week engineering project to a day-of-click operation. Third-party precedent cited: Square, Etsy, and Slack use Vitess at millions of QPS. (Source: the post body, citing Abstracting Sharding with Vitess and Distributed Deadlocks, Scaling Etsy Payments with Vitess, and Scaling Datastores at Slack with Vitess.)
Architectural numbers¶
This post is a product-launch-altitude post without
production telemetry. No numbers on typical copy
duration for real customer tables, cutover latency,
validation-check coverage, or adoption metrics. The
already-canonical
Matt
Lord petabyte-scale post provides the quantitative
backing ("typically take less than 1 second" for the
SwitchTraffic step on petabyte-scale customer
databases) that this post elides.
Caveats¶
- Product-launch altitude. No mechanism detail beyond what's already canonicalised via the 2021 Raju import-launch and 2026-02 Lord petabyte-scale posts. The `MoveTables`/`VDiff`/`SwitchTraffic`/`ReverseTraffic`/`Complete` stack is the same stack, just with a new dashboard surface.
- No production numbers. No customer case studies, no latency distributions, no adoption rates, no typical shard count, no per-shard copy throughput.
- Validation-check coverage not disclosed. The post says "PlanetScale will not allow you to start the workflow until all validation checks pass" but does not enumerate what checks run. By analogy with the 2021 Raju import launch, they likely include schema compatibility, VSchema validity, target-shard health, and source-binlog configuration.
- `Complete` button not shown. The post stops at Switch traffic + Switch back; the `MoveTables Complete` step that tears down the reverse workflow and commits the migration is not surfaced in the walkthrough. Presumably it's a follow-up button the customer clicks once confident.
- Sharding key selection elided. The post assumes the target sharded keyspace already exists with a reasonable VSchema. The much harder upstream decision — which column to shard on, how to handle cross-shard queries, what VSchema primary vindex to use — is handled in Cluster configuration and is not addressed here. The worked example uses an off-the-shelf `gymtracker-sharded` keyspace pre-configured with an implicit vindex choice.
- Reshard and Migrate are roadmap, not shipped. The `Reshard` and `Migrate` workflows are named as planned future releases; at publication only the unsharded-to-sharded `MoveTables` variant is shipped. Current customers wanting to split or merge shards on an already-sharded keyspace still need the Enterprise path.
- Promotional voice. "With just a few clicks, you can create a sharded keyspace with however many shards you'd like, move existing tables to that keyspace, and switch production traffic to be served from the sharded keyspace" — the operational reality is that sharding is never a few clicks. The claim is true for the data-motion mechanics (genuinely automated); it is not true for the upstream shard-key design, hotspot mitigation, cross-shard query audit, and application-side query-plan verification. The post doesn't misstate anything, but the framing under-emphasises the non-data-motion work.
- Video reference. The post references a video walkthrough that is not captured in the scraped markdown.
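For reference, the snapshot-plus-catchup replication pattern underlying the whole `MoveTables` stack (bulk-copy a snapshot, then apply writes that arrived during the copy from a change log until the target converges) can be sketched as follows. Toy model only; real VReplication tracks GTID positions per shard and keeps the catch-up stream running continuously until cutover.

```python
# Toy snapshot-plus-catchup migration: phase 1 bulk-copies the source table,
# phase 2 drains a binlog-like queue of changes that arrived during the copy.
# Row contents and the changelog are invented for illustration.
from collections import deque

def migrate(source: dict, changelog: deque) -> dict:
    target = dict(source)            # phase 1: bulk snapshot copy
    while changelog:                 # phase 2: catch-up replication
        op, pk, row = changelog.popleft()
        if op == "upsert":
            target[pk] = row
        elif op == "delete":
            target.pop(pk, None)
    return target                    # converged: safe to verify, then cut over

src = {1: {"reps": 10}, 2: {"reps": 12}}
log = deque([("upsert", 3, {"reps": 7}), ("delete", 1, None)])
print(migrate(src, log))  # {2: {'reps': 12}, 3: {'reps': 7}}
```

This is the reason the dashboard's copying phase can run against a live source with no downtime: the cutover only happens after the changelog has drained.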
Source¶
- Original: https://planetscale.com/blog/introducing-workflows-on-planetscale
- Raw markdown:
raw/planetscale/2026-04-21-introducing-sharding-on-planetscale-with-workflows-3dd7ed17.md
Related¶
- systems/planetscale
- systems/planetscale-workflows
- systems/vitess
- systems/vitess-cluster-configuration
- systems/vitess-movetables
- systems/vitess-vreplication
- systems/vitess-vdiff
- systems/mysql
- concepts/unsharded-to-sharded-migration
- concepts/horizontal-sharding
- concepts/schema-routing-rules
- concepts/reverse-replication-workflow
- concepts/query-buffering-cutover
- concepts/online-database-import
- patterns/ui-wrapped-workflow-primitive
- patterns/routing-rule-swap-cutover
- patterns/reverse-replication-for-rollback
- patterns/snapshot-plus-catchup-replication
- patterns/vdiff-verify-before-cutover
- companies/planetscale