PATTERN
Pre-select AI suggestions with visual disclosure¶
Intent¶
When an AI system produces suggested values for fields in a UI, pre-populate the fields with the suggestions so the default user path is accept rather than enter, and visibly mark each AI-suggested value with a uniform provenance indicator so the user can distinguish AI-produced from human-produced values at a glance.
The pattern deliberately biases the workflow toward review (fast when most suggestions are right) and away from entry (faster when most suggestions are wrong). It is appropriate when the AI is accurate enough that pre-selection saves more time than review-of-bad-suggestions costs, AND when the downstream process requires a final human signoff anyway.
When to use¶
- AI accuracy is high enough that most suggestions are correct — the pre-selection default pays off. If accuracy is marginal, users resent the pre-fill.
- The workflow already requires human QA / approval of the final output. Pre-selection doesn't remove QA; it shifts the human's work from enter-then-QA to QA-only.
- The domain has a four-eyes / compliance requirement that demands AI-produced values be distinguishable from human-produced values in the audit trail.
- Adoption velocity matters: reviewing a pre-filled form is a smaller behaviour change from the pre-AI workflow than adopting a hover-for-suggestion UI.
When NOT to use¶
- Accuracy is low or highly skewed — pre-selecting bad suggestions traps users in correction mode, worse than no suggestion.
- The field is safety-critical (medical dosing, financial transactions, legal documents) — pre-selection biases the human toward the AI's answer in exactly the scenarios where independent judgement matters most. Use suggest-on-demand, not pre-select.
- The consumer of the field is another AI system — there's no human to disclose to, and the marker is noise.
Mechanism¶
- Backend produces suggestions. An aggregator service sitting over one or more AI backends returns suggested values + a uniform provenance marker per field.
- UI applies the suggestions as field defaults. The input form is rendered with AI values pre-selected / pre-filled.
- UI marks each AI-filled field with the provenance indicator — a dot, underline, or tint that is visually subtle but unambiguous.
- Human reviews, edits, or accepts. Edits remove the marker (the value is now human-owned); acceptance preserves it (the value was AI-owned and human-verified).
- Downstream system receives the final form plus per-field provenance metadata so audit trails and quality analytics can distinguish AI-accepted, AI-edited, and human-entered values.
Interaction with the existing workflow¶
The pattern preserves the four-eyes principle: the same human who would have entered values before now reviews pre-selected ones. The role didn't change; the starting point did. From Zalando's post:
"the attributes are already pre-filled and marked with a purple dot to make users aware that these attributes were auto-suggested. This visual cue helps streamline the workflow, allowing users to concentrate more on QA rather than the time-consuming task of enriching content."
In Zalando's case, a previously two-step enrich + QA workflow collapses to one-step QA-only, displacing ~25% of the content-production timeline (sources/2024-09-17-zalando-content-creation-copilot-ai-assisted-product-onboarding).
Trade-offs / gotchas¶
- Rubber-stamping risk. Users trained on a high-accuracy system may stop distinguishing "AI filled and I verified" from "AI filled and I glanced at it". Pair with an orthogonal quality-monitoring layer — e.g. patterns/human-in-the-loop-quality-sampling — that randomly samples accepted-as-filled outputs and re-reviews them for drift detection.
- Accuracy ceiling becomes UX ceiling. If the model declines in accuracy (drift, domain shift), users' review attention may not catch up fast enough because they've been trained by the high-accuracy regime. Monitor acceptance-rate-without-edit as a proxy for user attention-decay.
- Marker must be visible but not obstructive. A too-subtle indicator fails to disclose (compliance risk); a too-loud indicator creates visual noise and user fatigue. The Zalando purple dot is on the subtle end of this axis.
- Pre-selection is a privacy / data-use disclosure. Users' edits reveal which AI suggestions they corrected, which is signal for the AI team. Be explicit about whether edits are captured as training data and under what consent.
- Confidence-aware pre-selection. An advanced variant: pre-select only if the model's confidence score is above a threshold; leave low-confidence fields blank. Zalando does not disclose confidence, so its pre-selection is uniform — every suggestion is pre-selected regardless of certainty.
- Edit costs retraining. If the model uses user edits as fine-tuning signal, you're coupled to the UI's edit semantics. Changing the UI later changes the training-signal distribution — a hidden cost.
Related patterns / concepts¶
- concepts/ai-provenance-ui-indicator — the visual marker this pattern makes mandatory.
- patterns/model-agnostic-suggestion-aggregator — the upstream service that makes uniform-across-backends markers possible.
- patterns/human-in-the-loop-quality-sampling — the drift-detection pattern that catches rubber-stamping.
- patterns/low-confidence-to-human-review — the complementary HITL pattern that routes low-confidence outputs to richer review, whereas this pattern pre-selects all outputs uniformly.
- patterns/llm-attribute-extraction-platform — the platform pattern this UX pattern sits on top of.
Seen in¶
- sources/2024-09-17-zalando-content-creation-copilot-ai-assisted-product-onboarding — canonical wiki instance. Zalando's Content Creation Tool pre-selects AI-suggested attribute values and marks them with a purple dot. The design rationale is explicit: shift the human's workload from enrichment-then-QA to QA-only, preserving the four-eyes principle and the auditable AI-provenance story. Displaces ~25% of the content-production timeline in exchange for review-mode-by-default.
Related¶
- systems/zalando-content-creation-tool — the UI
- systems/zalando-content-creation-copilot — upstream aggregator that powers the pre-selection
- concepts/ai-provenance-ui-indicator
- patterns/model-agnostic-suggestion-aggregator
- patterns/human-in-the-loop-quality-sampling
- patterns/llm-attribute-extraction-platform