PATTERN Cited by 1 source

Opportunity-to-PR AI pipeline

Intent

Turn a proposed performance-optimization opportunity into a review-ready candidate fix, delivered directly into the engineer's editor, via the same three-phase AI-agent pipeline the defense side uses for regression mitigation (Source: sources/2026-04-16-meta-capacity-efficiency-at-meta-how-unified-ai-agents-optimize-performance-at-hyperscale).

Canonical instance

Meta's Opportunity Resolver on the Capacity Efficiency Platform (2026-04-16). "We built a system where engineers can view an opportunity and request an AI-generated pull request that implements it. What used to require hours of investigation now takes minutes to review and deploy."

The three phases

Meta frames the pipeline as "mirror[ing] the defensive AI Regression Solver" (patterns/ai-generated-fix-forward-pr):

  1. Gather context with tools — AI agent looks up:

    • Opportunity metadata (which function, which service, what regressed, what's the target improvement).
    • Documentation explaining the optimization pattern.
    • Examples of similar opportunities previously resolved.
    • Specific files + functions involved.
    • Validation criteria for confirming the fix works.
  2. Apply domain expertise with skills — the matching skill encodes the senior engineer's playbook for this optimization type. Worked example: "memoizing a given function to reduce CPU usage."

  3. Create resolution — produce a candidate fix with guardrails:

    • Verify syntax + style.
    • Confirm it addresses the right issue.
    • Surface the generated code in the engineer's editor, ready to apply with one click.
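The three phases above can be sketched as a small pipeline. This is a minimal illustration, not Meta's implementation: every function, field, and skill name here is invented, and the phase-1 lookups are stubbed with literals.

```python
def gather_context(opportunity: dict) -> dict:
    # Phase 1: tool lookups for metadata, docs, examples, files,
    # and validation criteria (all stubbed with literals here).
    return {
        "metadata": opportunity,
        "docs": [f"how-to: {opportunity['pattern']}"],
        "examples": ["diff of a prior resolved opportunity"],
        "files": [opportunity["file"]],
        "validation": ["CPU profile of target function must drop"],
    }

def apply_skill(context: dict) -> str:
    # Phase 2: the skill matching the pattern class encodes the playbook.
    skill = {"memoization": lambda m: f"wrap {m['function']} in a cache"}[
        context["metadata"]["pattern"]
    ]
    return skill(context["metadata"])

def create_resolution(context: dict) -> dict:
    # Phase 3: candidate fix plus guardrail results, ready to surface
    # in the engineer's editor.
    patch = apply_skill(context)
    return {"patch": patch, "syntax_ok": True, "addresses_target": True}

opp = {"function": "score_items", "file": "ranker/score.py", "pattern": "memoization"}
resolution = create_resolution(gather_context(opp))
```

The point of the sketch is the shared shape: context gathering, skill dispatch, and guarded generation are separable stages, which is what lets defense and offense reuse the same platform.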

Why it matters

Meta's framing: offensive work was previously rate-limited by engineer investigation time: "hours of investigation" per candidate optimization, multiplied across a fleet with orders of magnitude more candidate optimizations than engineer-hours available. The opportunity-to-PR pipeline compresses per-candidate investigation into minutes of review, which raises the effective per-engineer throughput by the same ratio — "handling a growing volume of wins that engineers would never get to manually."
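The throughput claim is back-of-envelope arithmetic. The post says "hours" and "minutes" without exact figures, so the numbers below are illustrative only:

```python
# Illustrative numbers, not Meta's; the post does not quantify either side.
hours_per_candidate_before = 3.0      # manual investigation per candidate
minutes_per_candidate_after = 15.0    # review of an AI-generated candidate

# Per-engineer throughput scales by the compression ratio:
speedup = (hours_per_candidate_before * 60) / minutes_per_candidate_after
print(speedup)  # 12.0: one engineer-hour now covers ~12x as many candidates
```

Whatever the true constants, the structural claim is that review time, not investigation time, becomes the binding constraint.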

When to reach for it

  • You have a surface enumerating candidate optimizations — Meta's efficiency-opportunity library, produced by profiling + pattern matching + prior human analysis.
  • Each optimization belongs to a recognized pattern class that a skill can encode (memoization, cache placement, algorithmic swap, hot-path rewrite, allocation reduction, loop vectorization, …).
  • You have validation criteria — ways to check the fix worked (unit tests, benchmarks, production telemetry).
  • You have an engineer-facing review surface (IDE plugin, code-review UI) where the candidate lands.

Mechanism

Opportunity surface

Each opportunity is a structured record:

  • Target function / file / service.
  • Measured inefficiency (profile signal).
  • Proposed optimization pattern.
  • Validation criteria.
  • Prior-resolution examples.
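One way to model that record is a small dataclass. The post lists the record's contents but no schema, so these field names and the example values are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    """Hypothetical shape of one efficiency-opportunity record."""
    target: str                   # function / file / service
    inefficiency: str             # measured profile signal
    pattern: str                  # proposed optimization pattern class
    validation: list[str] = field(default_factory=list)      # how to confirm the fix
    prior_examples: list[str] = field(default_factory=list)  # resolved look-alikes

opp = Opportunity(
    target="adserver::rank_candidates",
    inefficiency="12% of service CPU in repeated identical calls",
    pattern="memoization",
    validation=["CPU share of rank_candidates drops in prod telemetry"],
)
```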

Meta's post implies this surface predates the AI pipeline — engineers already "use our efficiency tools to work on these problems every day."

Skill selection

Per opportunity class, the matching skill tells the agent:

  • Which tools to invoke for context (code search, docs, examples).
  • How to interpret the opportunity metadata.
  • What the resolution shape looks like (add-memoization-decorator, inline-hot-callee, replace-algorithm-X-with-Y, …).
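Skill selection can be sketched as a registry keyed by pattern class. The catalogue entries below are invented; the sketch also shows the failure mode the Tradeoffs section calls out, where an uncovered pattern class yields no candidate:

```python
# Hypothetical skill catalogue: pattern class -> playbook metadata.
SKILLS = {
    "memoization": {
        "tools": ["code_search", "docs", "prior_examples"],
        "resolution_shape": "add-memoization-decorator",
    },
    "algorithm_swap": {
        "tools": ["code_search", "benchmarks"],
        "resolution_shape": "replace-algorithm-X-with-Y",
    },
}

def select_skill(pattern: str) -> dict:
    try:
        return SKILLS[pattern]
    except KeyError:
        # Coverage gap: opportunities outside the catalogue get no candidate.
        raise LookupError(f"no skill for pattern class {pattern!r}")
```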

Guardrails on generation

The post names three verification layers:

  • Syntax + style check — the generated code compiles and conforms to project style.
  • Right-issue confirmation — the fix addresses the opportunity's actual target, not something adjacent.
  • Editor-surface delivery — the fix lands in the engineer's IDE, not an autoland PR. The engineer clicks to apply; a human is the final gate.
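The layers compose as a gate chain: a candidate reaches the editor only if every check passes. The check implementations below are stand-ins (a real syntax check would run the project toolchain, and right-issue confirmation would be far richer than a substring match):

```python
import ast

def syntax_and_style_ok(code: str) -> bool:
    # Stand-in for compile + style verification.
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def addresses_target(code: str, target_function: str) -> bool:
    # Stand-in for confirming the fix touches the opportunity's target.
    return target_function in code

def surface_in_editor(code: str, target_function: str):
    # Only a candidate passing both checks is surfaced; applying it
    # remains a human click, never an autoland.
    if syntax_and_style_ok(code) and addresses_target(code, target_function):
        return code
    return None
```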

Not disclosed in the post: unit-test execution, benchmark replay, static analysis, ML judge, test-coverage check. Some subset is likely present, but Meta doesn't enumerate it.

Relationship to the defense sibling

Meta makes the symmetry explicit: "the pipeline mirrors the defensive AI Regression Solver."

| Axis | Defense (patterns/ai-generated-fix-forward-pr) | Offense (this pattern) |
| --- | --- | --- |
| Trigger | FBDetect regression event | Engineer-requested from opportunity library |
| Phase 1 context | Regression symptoms + root-cause PR + changed files | Opportunity metadata + pattern docs + examples + files + validation criteria |
| Phase 2 skill | Mitigation skill (e.g. logging → sampling) | Optimization skill (e.g. memoization) |
| Phase 3 resolution | PR sent to root-cause author for review | Candidate code in engineer's editor for one-click apply |
| Review target | PR author | Requesting engineer |

The shared three-phase shape is what makes a unified platform (patterns/mcp-tools-plus-skills-unified-platform) economical.

Tradeoffs

  • Opportunity-library quality bounds the pipeline quality. A weak opportunity ("optimize this function somehow") yields a weak candidate. The pipeline is only as good as the opportunity-enumeration step upstream.
  • Skill-coverage gaps produce silent misses. Opportunities outside the skill catalogue get no candidate.
  • One-click apply risks shipping generated code with incomplete review. Review ergonomics matter — editor-surface delivery is more skimmable than a separate PR but also easier to merge superficially.
  • Benchmark regression risk. Memoization / cache placement are load-dependent; a candidate that wins on a micro-benchmark may lose on production traffic shape. Validation-criteria enforcement is the mitigation, but the post doesn't quantify its strictness.

Relationship to pre-AI offense pipelines

  • patterns/feedback-directed-optimization-fleet-pipeline (the Meta FDO pipeline via Strobelight + BOLT) — the fleet-level, compiler-driven offense pipeline. Produces binary-level optimizations automatically via FDO profiles. Orthogonal to the opportunity-to-PR pipeline: FDO picks low-level optimizations a compiler can make safely; opportunity-to-PR picks higher-level optimizations requiring source-code changes.
  • Both contribute to the program-level "hundreds of megawatts" figure; Meta doesn't split the attribution in this post.

Seen in
