
CONCEPT

Encoded domain expertise

Definition

Encoded domain expertise — what Meta calls "skills" on the Capacity Efficiency Platform — is the reusable, composable artifact form of a senior engineer's reasoning playbook for a class of task, expressed so an LLM can apply it uniformly whenever the task recurs (Source: sources/2026-04-16-meta-capacity-efficiency-at-meta-how-unified-ai-agents-optimize-performance-at-hyperscale).

Meta's definition: "Skills: these encode domain expertise about performance efficiency. A skill can tell an LLM which tools to use and how to interpret results. It captures reasoning patterns that experienced engineers developed over years, such as 'consult the top GraphQL endpoints for endpoint latency regressions' or 'look for recent schema changes if the affected function handles serialization'."

Anatomy of a skill

From the post's two worked examples, a skill contains:

  1. A trigger condition — when this skill applies (e.g. "endpoint latency regression"; "regression from logging").
  2. A tool-invocation playbook — which MCP tools to call + in what order + how to route the output (e.g. "query top GraphQL endpoints"; "search for recent schema changes").
  3. A resolution pattern — what the output should look like (e.g. "memoize this function"; "increase sampling in this logger").
  4. Interpretation heuristics — how to read the tool output in the context of this class of problem.
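The four parts above can be sketched as a data structure. This is a hypothetical shape, not Meta's schema; all field names and values are illustrative, drawn from the post's two worked examples:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """Hypothetical shape of one skill (illustrative, not Meta's schema).
    The four fields mirror the anatomy listed above."""
    trigger: str              # 1. when this skill applies
    tool_playbook: list[str]  # 2. which MCP tools to call, in order
    resolution_pattern: str   # 3. what the output should look like
    heuristics: list[str]     # 4. how to read tool output in this context

endpoint_latency = Skill(
    trigger="endpoint latency regression",
    tool_playbook=["query_top_graphql_endpoints"],
    resolution_pattern="memoize this function",
    heuristics=["consult the top GraphQL endpoints for endpoint latency regressions"],
)
```

Note that nothing here is model-specific: the skill is plain data that any LLM runtime could consume.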

Skills are decoupled from the tool layer: the same tools serve every skill; only the skill-specific combination + interpretation differs. That decoupling is the lever behind patterns/mcp-tools-plus-skills-unified-platform — adding a skill is cheap, adding a tool is expensive, and Meta runs hundreds of the former atop a curated few of the latter.
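A minimal sketch of that decoupling, with illustrative names throughout: tools are a small curated registry of callables shared by every skill, while a skill is just data naming which tools to combine, so adding a skill is one more dictionary entry.

```python
# Curated, expensive layer: a few shared tools (names are invented).
TOOLS = {
    "query_top_endpoints": lambda n: f"top {n} GraphQL endpoints",
    "search_schema_changes": lambda fn: f"schema diffs touching {fn}",
}

# Cheap layer: each skill only names a combination of existing tools.
SKILLS = {
    "endpoint latency regression": ["query_top_endpoints"],
    "regression from logging": ["search_schema_changes"],
}

def run_skill(skill_name: str, arg):
    """Invoke the skill's tool playbook, in order, against one argument."""
    return [TOOLS[tool](arg) for tool in SKILLS[skill_name]]
```

Adding a hundredth skill touches only `SKILLS`; the tool layer stays fixed, which is the economics the pattern relies on.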

Why it's a structural primitive

Before skills, "experienced engineers developed [reasoning patterns] over years" and those patterns lived in heads + scattered team wikis + the occasional internal talk. The knowledge was bottlenecked by:

  • Author-present consumption — you needed the senior engineer to actually look at your problem.
  • Search-recall consumption — you needed to know the wiki page existed to find it.
  • Manual context transfer — even once you found the wiki page, you still had to carry its contents into your actual work session by hand.

Skills collapse all three: the LLM looks up the right skill and applies it in context, so the senior engineer's playbook runs on every problem of its class, even when the senior engineer is asleep.
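The collapse can be sketched as a lookup step: the playbook runs without the author present, without knowing a wiki page exists, and without hand-copied context. Naive substring matching stands in here for whatever selection mechanism Meta actually uses, which the post does not disclose:

```python
from typing import Optional

# Trigger -> playbook summary, taken from the post's two examples.
SKILL_PLAYBOOKS = {
    "endpoint latency regression": "consult the top GraphQL endpoints",
    "regression from logging": "look for recent schema changes",
}

def select_playbook(problem: str) -> Optional[str]:
    """Return the matching skill's playbook, or None if no trigger fires.
    Substring matching is a placeholder for the real selection mechanism."""
    text = problem.lower()
    for trigger, playbook in SKILL_PLAYBOOKS.items():
        if trigger in text:
            return playbook
    return None
```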

Relationship to other knowledge-encoding forms on the wiki

| Form | Consumption | Canonical instance | Freshness model |
|---|---|---|---|
| Compass-shape context files | LLM loads on demand before task | Meta Pre-Compute Engine | Self-maintenance loop every few weeks |
| Skills (this page) | LLM invokes per class of task | systems/meta-capacity-efficiency-platform | Update in place; markdown-not-embeddings |
| Tribal knowledge (concepts/tribal-knowledge) | Extracted offline via five-questions framework | Pre-Compute Engine output | Re-extracted on pipeline change |
| Agent Skills (.well-known) (systems/agent-skills) | External agents discover over the internet | Cloudflare RFC | Site owner updates |

Meta Pre-Compute Engine's compass-shape files and Meta Capacity Efficiency Platform's skills are the two Meta 2026 bets on markdown-as-the-model-agnostic-substrate. Both decouple encoded knowledge from any specific LLM vendor. The difference:

  • Compass-shape files are descriptive ("this module does X, here are the invariants, here are the gotchas").
  • Skills are prescriptive ("when X happens, use tools A+B+C and apply playbook P").

A mature platform tends to carry both.

Model-agnostic investment

Because skills are expressed as markdown / structured text rather than as fine-tuned model weights or prompt-embedding overlays, the investment compounds across model upgrades. When the underlying model is replaced (larger, faster, cheaper), the skill catalogue continues to apply. This is the same bet Meta makes on the compass-shape files: "works with most leading models because the knowledge layer is model-agnostic."
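A sketch of why plain-text skills survive model swaps: the skill text is ordinary markdown, so upgrading the model changes only the call site, never the catalogue. The function name and prompt layout below are assumptions for illustration, not Meta's API:

```python
def build_prompt(skill_markdown: str, task: str, model_preamble: str = "") -> str:
    """Compose a prompt from a markdown skill file's contents.

    The skill is plain text, so the same catalogue feeds any model;
    only `model_preamble` (or the surrounding call site) is vendor-specific.
    """
    parts = [model_preamble, "## Skill\n" + skill_markdown, "## Task\n" + task]
    return "\n\n".join(p for p in parts if p)
```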

What's NOT disclosed

  • Skill catalogue size on the Capacity Efficiency Platform (two worked examples named, total count not specified).
  • Skill authoring workflow — who writes them, how they're reviewed, how they're tested.
  • Skill-lifecycle governance — deprecation, versioning, conflict resolution when two skills match the same trigger.
  • Runtime skill-selection mechanism — is it classifier-routed (like Dash's sub-agent classifier)? Or LLM-prompted?
