PATTERN Cited by 2 sources

System prompt role + output format constraint

Pattern

A system prompt that asserts the model's role (expert developer, domain specialist, etc.) and fixes the output format contract (opaque fence, structured schema, token constraints) as the last clauses of the system message. Two load-bearing clauses; the rest of the system prompt (task description, style requirements) sits between them.

Canonical shape

## System prompt

You are an expert frontend software developer with deep
knowledge of frontend development, component libraries, and
design systems.
You MUST follow the instructions provided exactly as they
are given.
Your task is to help migrate UI components from one library
to another while maintaining visual and functional
equivalence.

[... style requirements, best practices, error handling ...]

You MUST return just the transformed file inside the
<updatedContent> tag like:
<updatedContent>transformed-file</updatedContent>
without any additional data.

(Zalando's system prompt verbatim, Source: sources/2025-02-19-zalando-llm-powered-migration-of-ui-component-libraries)
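Concretely, this implies a request shape like the following — a sketch, not Zalando's code; the message-dict format follows the common chat-completions convention, and the system prompt here is abridged from the example above:

```python
# The static system prompt (abridged from the Zalando example above);
# identical across requests, so it can sit in a provider's cached prefix.
SYSTEM_PROMPT = (
    "You are an expert frontend software developer.\n"
    "You MUST follow the instructions provided exactly as they are given.\n"
    "[... task framing, style requirements ...]\n"
    "You MUST return just the transformed file inside the\n"
    "<updatedContent> tag like:\n"
    "<updatedContent>transformed-file</updatedContent>\n"
    "without any additional data."
)

def build_messages(source_file: str) -> list[dict]:
    """Static instructions go in the system turn; only the user turn
    varies per file, so every request shares the same prefix."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": source_file},
    ]
```

Only the user turn changes from file to file; the system turn stays byte-identical, which is what makes the cache argument below work.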

Forces

  • Role priming consistently improves accuracy. Zalando: "using system prompts enhanced the accuracy of the transformations. By instructing the LLM to operate as an experienced developer and clearly defining the task objectives, we achieved more consistent results."
  • Without an output-format contract, downstream parsing is fragile. The model emits preamble, apology, explanation — the parser can't find the payload reliably.
  • System prompts are cached at the start of the request. Putting role + format-contract in the system message (as opposed to the user message) means they sit in the cacheable prefix (see patterns/prompt-cache-aware-static-dynamic-ordering).
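The parsing-fragility force is what the fence contract resolves: the parser can demand exactly one fenced payload and fail fast otherwise. A minimal sketch (the tag name comes from Zalando's prompt; the fail-fast behavior is an assumption, not documented practice):

```python
import re

# Match a <updatedContent>...</updatedContent> block; DOTALL so the
# payload may span multiple lines.
FENCE = re.compile(r"<updatedContent>(.*?)</updatedContent>", re.DOTALL)

def extract_payload(response: str) -> str:
    """Pull the transformed file out of the model response, ignoring any
    preamble or trailing commentary emitted around the fence."""
    matches = FENCE.findall(response)
    if len(matches) != 1:
        # Fail fast instead of guessing which block is the payload.
        raise ValueError(f"expected exactly 1 fenced payload, got {len(matches)}")
    return matches[0].strip()
```

Without the contract, this function degenerates into heuristics over markdown fences, apologies, and explanations; with it, a missing fence is an unambiguous retry signal.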

Mechanism

  1. Role assertion opens the system prompt. "You are an expert X with deep knowledge of Y." Short, direct, grounds the model's decoding posture.
  2. Compliance instruction: "You MUST follow the instructions provided exactly as they are given." — nudges against creative deviation.
  3. Task framing — the high-level description of what the model is doing. Covers the "why" so subsequent turn-content makes sense.
  4. Style / best-practices body — whatever domain-specific constraints apply (idiomatic code, error handling, best practices).
  5. Output-format contract — the last clause, the extraction fence (see concepts/opaque-output-format-fencing) or JSON schema. Putting it last makes it the most recent instruction in the model's context — typically the most strongly-attended-to.
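The five-clause ordering can be sketched as a template builder — a hypothetical helper, with clause text paraphrased from the pattern, not any source's actual code:

```python
def build_system_prompt(role: str, task: str,
                        style_rules: list[str], fence_tag: str) -> str:
    """Assemble the five clauses in order, with the role assertion first
    and the output-format contract last (most recent instruction)."""
    clauses = [
        f"You are an expert {role}.",                        # 1. role assertion
        "You MUST follow the instructions provided "
        "exactly as they are given.",                        # 2. compliance
        f"Your task is to {task}.",                          # 3. task framing
        *style_rules,                                        # 4. style body
        f"You MUST return just the result inside the "
        f"<{fence_tag}> tag like: <{fence_tag}>result</{fence_tag}> "
        f"without any additional data.",                     # 5. format contract, last
    ]
    return "\n".join(clauses)
```

The builder makes the ordering invariant explicit: however long the style body grows, the format contract stays the final clause.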

Slack's Enzyme→RTL codemod uses the same shape, with <code></code> as the fence. Two independent sources converge on the pattern.

When to fall back to plain chat

  • Single-call prototyping — the system-prompt shape overhead isn't worth it for a one-off query.
  • Interactive tasks where the user is iteratively refining the output in a chat — too much constraint narrows useful exploration.
  • Outputs that aren't machine-consumed — if a human reads the response directly, fencing is unnecessary.

Consequences

Positive:

  • Consistent output shape. The fence contract lets pipelines parse output without LLM-shape-specific heuristics.
  • Role priming improves quality. Small-but-consistent accuracy uplift from persona framing.
  • Cache-friendly. System prompts are in the cacheable prefix.
  • Debuggable. When output is wrong, the role + format clauses are the first things to check.

Negative:

  • System prompt length trades cost for structure. Every token in the system message is paid on every request (modulo caching).
  • Over-constraint can suppress useful variation. If the task needs some creativity (e.g. choosing among several equally-valid migrations), a highly-constrained prompt forces a single arbitrary choice.
  • Model-version-sensitive. How strongly a model respects the system prompt varies by model and version; the contract is best-effort.

Seen in
