PATTERN Cited by 1 source
Content-derived suggested questions¶
Intent¶
Generate user-facing "you could ask..." prompt suggestions from the specific entity's own content (reviews, owner description, pre-generated summary) rather than from a generic category-level template. The suggestions then match what users can actually get answered, producing both higher engagement and a lower "unable to answer" rate.
The pattern applies wherever an LLM Q&A UI faces a blank chatbox problem and needs to educate users about what they can ask. It's a UX pattern with a concrete measurable impact on engagement and answerability.
When to apply¶
- Users see a blank chat interface and don't know what the system is good at. A pure placeholder "Ask anything..." creates friction and produces off-topic traffic.
- Each entity (business, product, service provider) has rich, varied content — two restaurants of the same category have meaningfully different answerable questions.
- Suggestion generation must stay cheap. You can't run the full answering pipeline for every suggestion — you need a short path that uses a small subset of the entity's content as the generator input.
- The alternative (category-level generic questions) is producing a measurable unanswerability problem — users click a suggestion and the system says it can't answer.
Mechanism¶
Input: compact per-entity content bundle¶
For cost control, do not pass the entity's full content into the suggestion generator. Use:
- A handful of recent reviews (not all reviews).
- The pre-generated business summary (already cached per-entity for other purposes).
- The owner's description (structured metadata, cheap).
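A minimal sketch of the bundle assembly. The `entity` dict and its `reviews`, `summary`, and `owner_description` fields are illustrative assumptions, not Yelp's actual schema:

```python
def build_suggestion_bundle(entity, max_reviews=5):
    """Assemble the compact per-entity input for the suggestion generator.

    `entity` is a hypothetical dict with keys `reviews` (newest first),
    `summary` (the pre-generated, cached summary), and `owner_description`.
    Only a small slice of content is passed on, to keep generation cheap.
    """
    return {
        "recent_reviews": entity.get("reviews", [])[:max_reviews],
        "summary": entity.get("summary", ""),
        "owner_description": entity.get("owner_description", ""),
    }
```

Note the graceful cold-start behavior: a new entity with few reviews simply produces a thinner bundle, with no special-case code.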
LLM prompt → suggested-question list¶
The LLM gets the bundle + the request: "Generate N questions a visitor might ask, grounded in this content." Output is a small ordered list of suggestion strings.
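A sketch of that generation step, with `llm` as any prompt-in/text-out callable. The prompt wording and the line-based list parsing are assumptions for illustration, not the production prompt:

```python
def suggest_questions(bundle, llm, n=4):
    """Ask the LLM for N visitor questions grounded in the bundle.

    `llm` is any callable str -> str (a stub, an API client wrapper, ...).
    Output lines are stripped of list markers and returned as an ordered
    list of at most `n` suggestion strings.
    """
    prompt = (
        f"Generate {n} questions a visitor might ask about this business, "
        "grounded ONLY in the content below. One question per line.\n\n"
        "Owner description: " + bundle["owner_description"] + "\n"
        "Summary: " + bundle["summary"] + "\n"
        "Recent reviews:\n" + "\n".join(bundle["recent_reviews"])
    )
    lines = [line.strip("-*0123456789. ") for line in llm(prompt).splitlines()]
    return [q for q in lines if q][:n]
```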
Optional: engagement-signal-driven ranking¶
Once the suggestions are live, track which suggestions users click on and which produce satisfying answers. Use those signals to re-rank future suggestions for similar entities.
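One simple way to use those signals is a smoothed click-through re-rank. The `stats` shape and the Laplace-style prior are illustrative assumptions, not Yelp's ranking function:

```python
def rerank_by_engagement(suggestions, stats, prior_clicks=1, prior_shows=10):
    """Re-order suggestions by a smoothed click-through estimate.

    `stats` maps suggestion text -> (clicks, impressions), aggregated from
    live traffic. The prior keeps never-shown suggestions from being pinned
    to the bottom before they accumulate impressions.
    """
    def score(q):
        clicks, shows = stats.get(q, (0, 0))
        return (clicks + prior_clicks) / (shows + prior_shows)

    return sorted(suggestions, key=score, reverse=True)
```

A satisfying-answer signal (e.g. thumbs-up rate) could be folded into the same score; clicks alone reward curiosity, not answerability.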
Optional: short-term cache of suggestion answers¶
If the same suggestion is shown many times, cache its answer. Trades staleness for latency + cost on the answering path.
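A minimal TTL cache over the answering path, keyed by (entity, suggestion). The class name, TTL value, and injected clock are sketch-level assumptions; a production version would bound memory and refresh asynchronously:

```python
import time

class SuggestionAnswerCache:
    """Short-term cache of suggestion answers with a time-to-live.

    Trades staleness for latency and cost: a cached answer is served
    until `ttl_seconds` elapses, then the answering pipeline runs again.
    """

    def __init__(self, ttl_seconds=3600, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    def get_or_answer(self, entity_id, suggestion, answer_fn):
        key = (entity_id, suggestion)
        hit = self._store.get(key)
        now = self.clock()
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # fresh enough: skip the answering pipeline
        answer = answer_fn(entity_id, suggestion)
        self._store[key] = (now, answer)
        return answer
```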
Canonical wiki instance — Yelp BAA (2026-03-27)¶
Source: sources/2026-03-27-yelp-building-biz-ask-anything-from-prototype-to-product
Yelp's prototype used LLM-generated questions at a business category level. The post's framing of the problem:
"Initially, we used an LLM to generate generic questions at a business category level. So, Mexican restaurants, Bars, Parks, each had their own suggestion list. This approach occasionally produced questions that were unanswerable with the available data for the particular business."
The production fix switches the generation input from category to this business's content:
"Therefore, we invested in generating suggestions from the business's actual content. To reduce cost, we used a handful of recent reviews, a summary that we have generated for every business, as well as the business owner's description (instead of using the full content). This surfaces what people really care about and talk about that specific place."
The post's example contrasts the two approaches for [Parc] (a French brunch restaurant):
- Category-level suggestions: not bad, but generic given that this is a French / breakfast / brunch place.
- Content-derived suggestion: "Can you order freshly baked bread to go?" — landing on a signature detail of this specific restaurant ("known for its freshly baked bread basket that is offered complementary to customers").
Measurable outcomes after the switch:
- +~50% engagement with suggested questions.
- -~26% inability-to-answer rate on suggested questions.
Next-step directions Yelp names:
"Next, we will be incorporating user engagement signals to adjust the suggested question ranking and invest in short-term caching of suggested question answers to improve the user experience with faster answers and cost reduction."
Why it works¶
- Suggestion–answerability alignment. The suggestion is produced from the same content substrate the answer will be drawn from — if the generator sees a salient detail in the reviews, the answerer can cite it.
- Signal-to-noise at the category level is terrible for entities with distinctive offerings. "Popular breakfast items?" is generic; "Can you order freshly baked bread to go?" is specific and high-intent.
- Cold-start handled gracefully. For new entities with thin content, suggestions naturally fall back toward category-generic — no special-case code needed.
Failure modes¶
- Review skew toward extremes. If recent reviews are dominated by complaints, suggestions may highlight negatives. Mitigation: diversify the review sample.
- Stale summary / owner description. If the per-business summary is cached and not refreshed, suggestions drift from current reality.
- Over-specific suggestions that match only a rare content slice — users click and get "we don't have info on that". Mitigation: eval suggestion set against the full content before shipping.
- New business, thin content — suggestions tend to be generic. Yelp names this: "For businesses with limited content (e.g. new businesses) the questions tend to be more generic but will adjust as content increased on the business page over time." Accept the trade-off explicitly.
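The "eval suggestion set against the full content" mitigation can be approximated with a crude lexical overlap check before shipping a suggestion. This function and its threshold are hypothetical; a real system would run the answering pipeline or an embedding-similarity check instead:

```python
def filter_answerable(suggestions, full_content, min_overlap=2):
    """Drop suggestions whose key terms barely appear in the full content.

    Keeps a suggestion only if at least `min_overlap` of its longer words
    occur in the entity's full content — a cheap lexical proxy for
    "the answerer will find something to cite".
    """
    content_words = set(full_content.lower().split())
    kept = []
    for q in suggestions:
        terms = {w.strip("?.,!") for w in q.lower().split() if len(w) > 3}
        if len(terms & content_words) >= min_overlap:
            kept.append(q)
    return kept
```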
Relation to sibling patterns¶
- patterns/parallel-pre-retrieval-classifier-pipeline — suggested questions are routed through the same pre-retrieval classifier pipeline as free-form user questions; a bad suggestion still hits the inquiry-type gate.
- The content-grounded-answer discipline — see concepts/content-grounded-answer.
Seen in¶
- sources/2026-03-27-yelp-building-biz-ask-anything-from-prototype-to-product — canonical wiki instance. Yelp replaced category-level LLM-generated suggestions with per-business content-derived suggestions (handful of recent reviews + pre-generated summary + owner description). Measurable impact: +~50% engagement and -~26% unable-to-answer rate on suggested questions.