AI-provenance UI indicator¶
Definition¶
An AI-provenance UI indicator is a visual marker attached to a value in a UI whose sole purpose is to disclose that the value was suggested by an AI system rather than entered by a human. It does not encode the model's confidence, the data source, or any downstream status — only the fact that the value's origin is the AI path, so the human can decide whether to accept, edit, or reject it.
Examples in the wild include Zalando's purple dot next to AI-suggested product attributes (sources/2024-09-17-zalando-content-creation-copilot-ai-assisted-product-onboarding), IDE copilot inline ghost text, and email clients' "Smart Compose" underlines.
Design properties¶
What it is¶
- Binary disclosure. The indicator is present (AI-suggested) or absent (human-entered / system default).
- Uniform across backends. Whether the suggestion came from GPT-4, GPT-4o, a brand data dump, or a fine-tuned model, the indicator is the same marker — in a multi-backend aggregator the end-user does not need to know which backend produced the suggestion.
- Visually subtle. A dot, underline, or tinted background — not a banner or modal. The UI remains legible; the indicator recedes once the human has verified the value.
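The three properties above can be sketched as a data shape — a minimal, illustrative model (none of these names come from the source): provenance is one boolean per field, not a backend name, a score, or a status.

```python
from dataclasses import dataclass

# Illustrative sketch: the whole provenance contract is a single boolean.
# It carries no backend identity, no confidence, no workflow state.
@dataclass
class FieldValue:
    name: str
    value: str
    ai_suggested: bool = False

def render_indicator(field: FieldValue) -> str:
    # One uniform, subtle marker for every backend; absent for human input.
    return f"\u25cf {field.value}" if field.ai_suggested else field.value

color = FieldValue("color", "aubergine", ai_suggested=True)
material = FieldValue("material", "cotton")

print(render_indicator(color))     # marked: value came via the AI path
print(render_indicator(material))  # unmarked: human-entered
```

The point of the sketch is what is *absent*: there is nowhere to put a confidence score or a source label, which is exactly the discipline the next section describes.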
What it is not¶
- Not a confidence score. It does not communicate "how sure is the model" — that's a separate primitive, and Zalando's copilot explicitly does not disclose one.
- Not a status flag. It doesn't indicate approved / pending / rejected — those are workflow states orthogonal to origin.
- Not a source attribution. In an aggregator setting the indicator is backend-agnostic by design.
Why this matters¶
Preserves the four-eyes principle¶
In regulated or quality-critical workflows (Zalando's content QA is a four-eyes process), the human reviewer must be able to distinguish what they themselves entered from what the system entered, or the four-eyes contract breaks. The indicator is the smallest-possible UI element that preserves the contract.
Enables reversible pre-selection¶
Without an indicator, pre-selecting AI suggestions (see patterns/pre-select-ai-suggestions-with-visual-disclosure) would be indistinguishable from user intent — the user couldn't tell which values they need to re-verify. With the indicator, pre-selection becomes safe: the AI-suggested defaults are visibly marked until the human confirms.
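A minimal sketch of the mechanics (function and field names are illustrative, not from the source): pre-filled values carry the marker until a human touches them, so the set of still-marked fields is exactly the set needing re-verification.

```python
def pre_fill(fields: dict, suggestions: dict) -> dict:
    """Pre-select AI suggestions; every pre-filled value is marked."""
    for name, value in suggestions.items():
        fields[name] = {"value": value, "ai_suggested": True}
    return fields

def human_edit(fields: dict, name: str, value: str) -> dict:
    """A human edit replaces the value and clears the provenance marker."""
    fields[name] = {"value": value, "ai_suggested": False}
    return fields

fields = pre_fill({}, {"color": "navy", "fit": "slim"})
fields = human_edit(fields, "fit", "regular")

# 'color' still carries the marker and needs re-verification; 'fit' does not.
unverified = [n for n, f in fields.items() if f["ai_suggested"]]
```

This is what makes pre-selection reversible: rejecting or editing a suggestion is just an edit, and acceptance is never silently conflated with user intent.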
Creates auditable history¶
Downstream systems (audit logs, analytics) can read the indicator state per-field to measure acceptance rate, edit rate, and category-level accuracy — critical data for product-quality monitoring and for tuning which attributes should be pre-selected vs. left empty.
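As a sketch of the downstream read (the log schema here is invented for illustration): if the submit-time state of each marked field is logged, acceptance and edit rates fall out of a simple aggregation.

```python
from collections import Counter

# Illustrative audit log: one record per AI-suggested field at submit time.
# 'accepted' = submitted unchanged; 'edited' = human changed the value.
log = [
    {"field": "color", "outcome": "accepted"},
    {"field": "color", "outcome": "edited"},
    {"field": "color", "outcome": "accepted"},
    {"field": "material", "outcome": "edited"},
]

def acceptance_rate(log: list, field: str):
    """Fraction of suggestions for `field` kept unchanged (None if unseen)."""
    outcomes = Counter(r["outcome"] for r in log if r["field"] == field)
    total = sum(outcomes.values())
    return outcomes["accepted"] / total if total else None

print(acceptance_rate(log, "color"))     # 2 of 3 suggestions kept as-is
print(acceptance_rate(log, "material"))  # every suggestion was edited
```

Per-field rates like these are what inform the tuning decision the source describes: which attributes are accurate enough to pre-select, and which should be left empty.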
Trade-offs¶
- Visual noise at high density. A form with many AI-suggested fields has many indicators; the marker must be unobtrusive enough that the form remains scannable.
- Marker exhaustion. If the indicator is overloaded to carry more information (colour for source, shape for confidence), it stops being a provenance marker and becomes a compound widget. Zalando's choice of a single purple dot is the disciplined form.
- Persistence contract. Should the indicator disappear once the human clicks the field, or persist until submit? Each has failure modes — disappearing masks late edits, persisting clutters the post-review form. No universal answer.
- Training effect. Users eventually learn to rubber-stamp fields with the indicator. The pre-selection + indicator combination still has value (auditability, spot-review) but users may stop distinguishing "AI pre-filled and I verified" from "AI pre-filled and I glanced at it". Random HITL sampling (patterns/human-in-the-loop-quality-sampling) is the downstream safety net.
Seen in¶
- sources/2024-09-17-zalando-content-creation-copilot-ai-assisted-product-onboarding — canonical wiki instance. Zalando's Content Creation Tool marks AI-suggested attributes with a purple dot. Explicit design rationale: "attributes are already pre-filled and marked with a purple dot to make users aware that these attributes were auto-suggested."
Related¶
- systems/zalando-content-creation-tool — the UI where the indicator lives
- systems/zalando-content-creation-copilot — the upstream suggestion system
- patterns/pre-select-ai-suggestions-with-visual-disclosure — the paired UX pattern (pre-selection is only safe given an indicator)
- patterns/model-agnostic-suggestion-aggregator — why the indicator should be backend-uniform