
enzyme-to-rtl-codemod

What it is

Slack's AI-powered codemod that converts Enzyme test code to React Testing Library (RTL) test code. Packaged on npm as @slack/enzyme-to-rtl-codemod; open-sourced in October 2024 in response to external developer demand.

It implements the AST + LLM hybrid conversion pattern: a rule-based AST pass handles deterministic cases and writes in-code annotation comments flagging harder cases; an LLM (Anthropic Claude 2.1 at the time of the original post) consumes the annotated file plus per-test-case DOM context plus a structured prompt to finish the conversion.
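The deterministic half of the hybrid can be sketched as a rule table plus an annotation fallback. A toy stand-in, assuming line-level regex rules for brevity (the real codemod works on a jscodeshift AST, and the rule names and annotation text here are illustrative):

```typescript
// Toy stand-in for the rule-based pass: known Enzyme patterns are
// rewritten deterministically; anything unrecognized gets an in-code
// annotation comment for the LLM (and the human) to act on.
// The real tool operates on a jscodeshift AST, not line regexes.

const RULES: Array<[RegExp, string]> = [
  // wrapper.text() -> RTL reads text off the rendered container
  [/wrapper\.text\(\)/g, 'container.textContent'],
  // wrapper.exists() -> presence check via the container
  [/wrapper\.exists\(\)/g, 'container.firstChild !== null'],
];

function astPass(source: string): string {
  return source
    .split('\n')
    .map((line) => {
      let out = line;
      let matched = false;
      for (const [pattern, replacement] of RULES) {
        const next = out.replace(pattern, replacement);
        if (next !== out) matched = true;
        out = next;
      }
      // Unhandled Enzyme call: annotate instead of converting.
      if (!matched && /wrapper\./.test(out)) {
        out += ' // enzyme-to-rtl: could not convert automatically, see RTL query docs';
      }
      return out;
    })
    .join('\n');
}
```

The key design choice survives even in the toy version: the pass never guesses on unknown patterns, it only annotates, so the LLM stage receives an explicit worklist instead of silent misconversions.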

Pipeline shape

  1. AST codemod transforms well-understood Enzyme patterns (top-10 methods: find, prop, simulate, text, update, instance, props, hostNodes, exists, first — plus Jest-matcher rewrites). For patterns it cannot handle, it leaves in-code annotation comments with suggestions and doc links.
  2. DOM collection runs the Enzyme tests after patching enzyme.mount / enzyme.shallow to append each test case's rendered HTML (via wrapper.html()) to a file keyed by currentTestName (captured from expect.getState() in beforeEach). See concepts/dom-context-injection-for-llm.
  3. LLM conversion sends a three-part structured prompt + the original test code (in <code></code> tags) + AST-partial conversion (in <codemod></codemod> tags) + per-test-case DOM tree (in <component><test_case_title>...</test_case_title> and <dom_tree>...</dom_tree></component> tags).
  4. Output is bucketed by pass-rate: fully-converted / partially 50-99% passing / partially 20-49% / partially <20%. Humans verify before merge.
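The DOM-collection step (2) amounts to wrapping the render entry points so each call logs its HTML under the currently running test's name. A minimal sketch, assuming stand-in names (`withDomCollection`, `beginTest`, `domByTest` are illustrative; the real tool patches enzyme.mount/enzyme.shallow and reads expect.getState().currentTestName in a Jest beforeEach):

```typescript
// Map from test-case name to the HTML snapshots rendered during it,
// mirroring the file keyed by currentTestName described above.
type DomByTest = Record<string, string[]>;
const domByTest: DomByTest = {};

let currentTestName = '';

// Stand-in for capturing expect.getState().currentTestName in beforeEach.
function beginTest(name: string): void {
  currentTestName = name;
}

// Wraps any render function whose result exposes html(), so each call
// appends the rendered HTML under the current test's name — the same
// shape as the patched enzyme.mount/enzyme.shallow.
function withDomCollection<A extends unknown[], W extends { html(): string }>(
  render: (...args: A) => W,
): (...args: A) => W {
  return (...args: A): W => {
    const wrapper = render(...args);
    (domByTest[currentTestName] ??= []).push(wrapper.html());
    return wrapper;
  };
}
```

Keying by test name matters because a single file's tests can render very different DOM trees; the LLM stage gets the tree for the specific test case it is converting, not one file-level snapshot.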

Prompt structure

The post discloses the prompt in full, structured in three parts:

  1. Context setting — names the three input envelopes and their tags (<code></code>, <codemod></codemod>, <component>).
  2. Main request — 10 explicit required tasks (complete the conversion inside <codemod>, preserve test count, swap Enzyme methods for RTL equivalents, update imports, adjust Jest matchers, return the full file in <code> tags, preserve non-test code and imports verbatim, preserve describe/it naming, wrap component rendering in <Provider store={createTestStore()}>) + 7 optional instructions (data-qa → screen.getByTestId mapping, augmented matchers with DOM suffix, userEvent for interactions, query priority order getByRole → getByText → getByTestId, query* only for non-existence, lowercase regex for text matchers, toBeEmptyDOMElement substitution).
  3. Self-evaluation — "evaluate your output and make sure your converted code is between <code></code> tags. If there are any deviations from the specified conditions, list them explicitly. If the output adheres to all conditions and uses instructions section, you can simply state 'The output meets all specified conditions.'"
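The three input envelopes can be assembled mechanically before being prepended with the context/request/self-evaluation text. A sketch (the tag names come from the post; the function and field names are illustrative):

```typescript
// Builds the prompt's input envelopes as described above: original
// test code in <code>, the AST partial conversion in <codemod>, and
// one <component> block per test case with its title and DOM tree.
// The surrounding three-part prompt text is elided.

interface DomContext {
  testCaseTitle: string;
  domTree: string;
}

function buildPromptInputs(
  originalTest: string,
  astPartial: string,
  domContexts: DomContext[],
): string {
  const components = domContexts
    .map(
      (c) =>
        `<component><test_case_title>${c.testCaseTitle}</test_case_title>` +
        `<dom_tree>${c.domTree}</dom_tree></component>`,
    )
    .join('\n');
  return [
    `<code>${originalTest}</code>`,
    `<codemod>${astPartial}</codemod>`,
    components,
  ].join('\n');
}
```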

Operational envelope

  • On-demand: 2-5 minutes per file (local iteration friendly).
  • CI nightly: hundreds of files per run, output bucketed for developer triage.
  • Adoption: ~64% of files being migrated to RTL at Slack passed through this tool.
  • Quality on selected files: ~80% auto-converted, 20% manual.
  • At-scale pass-rate: ~500 of ~2,300 test cases auto-passing across 338 files (~22% developer time saved, lower-bound).
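The triage bucketing used in the CI nightly runs is a simple threshold map over per-file pass rates; a sketch with hypothetical names:

```typescript
// Buckets a converted file by the fraction of its test cases that
// pass, using the thresholds from the post: fully converted,
// 50-99% passing, 20-49%, and <20%.
type Bucket = 'fully-converted' | '50-99%' | '20-49%' | '<20%';

function bucketByPassRate(passing: number, total: number): Bucket {
  const rate = total === 0 ? 0 : (passing / total) * 100;
  if (rate === 100) return 'fully-converted';
  if (rate >= 50) return '50-99%';
  if (rate >= 20) return '20-49%';
  return '<20%';
}
```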

See sources/2024-06-19-slack-ai-powered-conversion-from-enzyme-to-react-testing-library for the full retrospective.

Why it matters on the wiki

  • Canonical production instantiation of patterns/ast-plus-llm-hybrid-conversion at 15,000-test scale.
  • Reusable artifact (open-source, npm-installable) beyond test-migration: the pipeline shape applies to any code-transformation task where deterministic rules cover part of the work and LLM-alone hits a quality ceiling.
  • Documented prompt structure is itself a reference implementation of the three-part (context / mandatory + optional instructions / self-evaluate) prompt template for code-transformation tasks.
