
CONCEPT · Cited by 1 source

Generated knowledge prompting

Definition

Generated knowledge prompting is the LLM technique of eliciting intermediate factual or analytical content from the model first, then feeding that content back in as context for the downstream reasoning step.

The canonical external reference is the Prompt Engineering Guide's page on generated-knowledge prompting, cited by the Expedia STAR post as one of three named prompting techniques (alongside role prompting and prompt chaining).

The shape is a two-step (or N-step) chain:

  1. Elicit: "What are the possible causes of a livenessProbe failure on a JVM container?" → model enumerates likely causes.
  2. Reason: "Given these causes and the metric data provided, which are most likely here?" — the step-1 output is part of the step-2 prompt.
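The two-step chain above can be sketched as a small Python function. This is a minimal illustration, not a reference implementation: `llm` stands in for whatever completion call you actually use, and `fake_llm` is a canned stub so the sketch runs without an API key.

```python
def generate_then_reason(elicit_prompt: str, reason_template: str, llm) -> str:
    """Generated knowledge prompting: elicit first, then reason over the output."""
    # Step 1 (elicit): ask the model to enumerate its own domain knowledge.
    knowledge = llm(elicit_prompt)
    # Step 2 (reason): feed the step-1 output back in as context for the
    # downstream reasoning prompt.
    return llm(reason_template.format(knowledge=knowledge))


# Stub "LLM" purely for illustration -- returns canned answers.
def fake_llm(prompt: str) -> str:
    if "possible causes" in prompt:
        return "1. OOM kill  2. slow JVM startup  3. probe timeout set too low"
    return "Most likely: probe timeout set too low, given stable heap metrics."


answer = generate_then_reason(
    "What are the possible causes of a livenessProbe failure on a JVM container?",
    "Given these causes:\n{knowledge}\n"
    "and the metric data provided, which are most likely here?",
    fake_llm,
)
```

The only structural commitment is that step 1's raw text lands inside step 2's prompt; in practice you would also log `knowledge` so the intermediate artifact stays auditable.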

It is an instance of prompt chaining specialised to the case where the intermediate artifact is the model's own domain knowledge, not a tool output or an external retrieval.

Why it helps

  • Makes latent knowledge explicit. The model may "know" something in its weights but not emit it unprompted during the final reasoning step; eliciting it first puts it in the chain's context window.
  • Reduces hallucination per-step. Asking "what are the possible causes?" has a lower hallucination surface than "what is the cause?" — the model enumerates, the downstream step discriminates.
  • Inspection + review. The intermediate artifact is human-auditable, which is load-bearing when the final reasoning step is opaque.
  • Cheaper than RAG for well-pretrained domains. If the model already has good pretraining coverage, generated knowledge avoids the retrieval-index investment.

When it isn't enough

  • Proprietary / post-training-cutoff knowledge. Generated knowledge fails when the required facts aren't in the model's weights — you need RAG or an explicit context file. See concepts/context-engineering for the retrieval-vs-pretraining trade-off.
  • Numeric precision. The model's generated enumeration may be qualitatively right but quantitatively hallucinatory; downstream steps that depend on numbers still need a grounded source.

Seen in

  • Expedia STAR (2026-04-28) — STAR's prompt engineering explicitly names generated knowledge prompting (linking to the Prompting Guide article) as one of three core techniques. Applied at STAR's aggregate RCA step: per-metric analyses from earlier steps serve as generated knowledge for the final root-cause reasoning step. Canonical wiki instance of generated-knowledge prompting in a production RCA pipeline.