PATTERN Cited by 1 source
# Self-maintaining context layer

## Intent
Keep precomputed AI-agent context files in sync with the code they describe without routine human intervention, by running an automated refresh loop on a bounded cadence that validates file paths, detects coverage gaps, re-runs critic agents, and auto-fixes stale references.
The load-bearing rationale is context-file freshness: "context that goes stale causes more harm than no context." The pattern is the operational answer to the freshness concern.
## Mechanism
On a cadence (Meta: "every few weeks"), automated jobs execute four steps:
### 1. Validate file paths
Walk every file-path reference in every context file. Any path that no longer exists in the target repos is flagged. Meta's invariant: zero hallucinated file paths maintained continuously.
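A minimal sketch of the existence check, in Python. The backtick convention for path references and the `PATH_REF` regex are illustrative assumptions; the source does not specify how references are extracted from context files.

```python
import re
from pathlib import Path

# Hypothetical extraction rule: treat any backtick-quoted string containing
# a slash as a file-path reference. The real extraction rules are not
# specified in the source.
PATH_REF = re.compile(r"`([\w./-]+/[\w./-]+)`")

def flag_stale_paths(context_text: str, repo_root: Path) -> list[str]:
    """Return referenced paths that no longer exist under repo_root."""
    return [
        ref
        for ref in PATH_REF.findall(context_text)
        if not (repo_root / ref).exists()
    ]
```

Because this is only a filesystem existence check, it stays cheap enough to run against every reference in every context file on each cycle.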
### 2. Detect coverage gaps
Enumerate modules in the target repos; compare against the modules for which context files exist. New modules → new context files needed.
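The gap check reduces to a set difference. This sketch assumes one context markdown file per top-level module directory, a naming convention invented for illustration:

```python
from pathlib import Path

def find_coverage_gaps(repo_root: Path, context_dir: Path) -> set[str]:
    """Modules present in the repo but lacking a context file.

    Assumes one context .md file per top-level module directory; the
    naming scheme is illustrative, not Meta's actual convention.
    """
    modules = {p.name for p in repo_root.iterdir() if p.is_dir()}
    covered = {p.stem for p in context_dir.glob("*.md")}
    return modules - covered
```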
### 3. Re-run critic agents
Score all changed context files (or all files on a longer cadence) against the multi-round critic gate. Scores below threshold → feed into fixer agents.
### 4. Auto-fix stale references
Apply fixer-agent corrections for flagged paths, coverage gaps, and low-scoring sections. Re-run the critic gate post-fix.
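Steps 3 and 4 together can be sketched as a score-fix-rescore loop. Here `score` stands in for the multi-round critic gate and `fix` for a fixer agent; the threshold, round cap, and interfaces are assumptions for illustration, not Meta's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RefreshResult:
    fixed: list[str]          # files that pass the gate after this cycle
    still_failing: list[str]  # files needing human attention

def critic_and_fix(
    changed_files: dict[str, str],   # file name -> content
    score: Callable[[str], float],   # critic gate, returns 0..1
    fix: Callable[[str], str],       # fixer agent
    threshold: float = 0.8,          # assumed pass bar
    max_rounds: int = 3,             # bounded rounds, per the pattern
) -> RefreshResult:
    fixed, failing = [], []
    for name, content in changed_files.items():
        rounds = 0
        while score(content) < threshold and rounds < max_rounds:
            content = fix(content)   # apply fixer-agent correction
            rounds += 1
        # re-run the critic gate post-fix before accepting the file
        (fixed if score(content) >= threshold else failing).append(name)
    return RefreshResult(fixed, failing)
```

The post-fix rescoring is the key safety property: a fixer write never lands without the same gate that flagged the file confirming the result.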
## Why it works
- Bounded cadence makes cost predictable. Meta's "every few weeks" bounds the automation budget while matching typical refactor rhythms.
- Step 1 (path validation) is cheap + high-leverage — just filesystem existence checks. Catches the bulk of compile-passing-but-wrong failure modes.
- Step 2 (coverage gaps) prevents blind spots — new modules would otherwise be invisible to agents until humans noticed.
- Steps 3 + 4 (critic + fixer) handle content drift — renamed functions, moved invariants, evolved conventions.
- Automation eliminates human bottleneck — freshness at Meta's scale would need more engineer-hours than the original authors had.
## Tradeoffs
- Cadence calibration — too frequent = cost burn; too infrequent = staleness window widens past tolerance. Meta's "every few weeks" is empirical, not principled.
- Auto-fix trust — auto-fixing is a write operation on a surface agents rely on. Requires critic-gate validation post-fix to avoid introducing new staleness. Humans should review the first few cycles.
- Silent drift in unchanged files — if a file's paths are still valid and no modules were added, step 3 may be skipped. Conventions can drift without path-level signals.
- Critic-score drift — if rubrics evolve between refresh cycles, score comparability breaks.
- Compute cost — less than the original extraction pass (most files unchanged), but non-trivial at Meta's scale: 59 files × 10+ critics × up to 3 rounds is at least ~1,770 critic invocations in a worst-case cycle.
## Contrast with sibling patterns
| Pattern | What it maintains | Trigger |
|---|---|---|
| Self-maintaining context layer (this) | Context files for AI agents | Cadence (weeks) |
| CI lint on docs | Doc syntax + link validity | Per-PR |
| LTX compaction | Object-store compaction | Watermark-driven |
| Quilt patching | OS-package freshness | CVE + calendar |
| Schema-evolution gates | Backward-compat schemas | Per-schema-change |
## Applied to non-context-file artifacts
The pattern generalises:
- Runbooks — same 4-step loop (validate alert IDs, detect uncovered alerts, re-run critics, auto-fix) applies to automated operator guides.
- Migration guides — as the target codebase evolves, the migration mapping goes stale; auto-validation catches rename / removal.
- API changelogs — validate referenced endpoints exist, detect new endpoints not yet documented, critic for clarity.
The shape is: durable derived artifacts + automated freshness loop.
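One way to express that shape is a small interface that any derived artifact (context file, runbook, migration guide, changelog) can implement and plug into the same four-step loop. The method names and threshold here are invented for illustration:

```python
from typing import Protocol

class FreshArtifact(Protocol):
    """A durable derived artifact that can report and repair its own staleness."""

    def validate_references(self) -> list[str]: ...    # step 1: stale refs
    def detect_gaps(self) -> list[str]: ...            # step 2: uncovered items
    def critic_score(self) -> float: ...               # step 3: quality gate
    def auto_fix(self, issues: list[str]) -> None: ... # step 4: fixer pass

def refresh(artifact: FreshArtifact, threshold: float = 0.8) -> bool:
    """Run one refresh cycle; return True if the artifact ends up fresh."""
    issues = artifact.validate_references() + artifact.detect_gaps()
    if issues or artifact.critic_score() < threshold:
        artifact.auto_fix(issues)
    # accept only if the gate passes and no stale references remain
    return (
        artifact.critic_score() >= threshold
        and not artifact.validate_references()
    )
```

A runbook, for example, would implement `validate_references` as an alert-ID check and `detect_gaps` as an enumeration of alerts with no operator guide.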
## Seen in
- Meta AI Pre-Compute Engine (2026-04-06) — canonical wiki instance. Automated refresh runs "every few weeks", executing all four steps against 59 context files. Invariant: zero hallucinated file paths maintained continuously. "The AI isn't a consumer of this infrastructure, it's the engine that runs it." (Source: sources/2026-04-06-meta-how-meta-used-ai-to-map-tribal-knowledge-in-large-scale-data-pipelines.)
## Related
- concepts/context-file-freshness — the concept this pattern operationalises
- concepts/tribal-knowledge — what the context files contain
- concepts/compass-not-encyclopedia — the file format that makes automated freshness tractable
- patterns/precomputed-agent-context-files — the containing pattern
- patterns/multi-round-critic-quality-gate — the critic gate reused in step 3
- systems/meta-ai-precompute-engine — the canonical instance