

Custom TalkBack actions as gesture alternative

Pattern

Custom TalkBack actions as gesture alternative is the Android pattern of exposing any gesture-only interaction — drag-and-drop, long-press, swipe-to-dismiss, multi-finger manipulation — as a named action invocable from TalkBack's context menu, via AccessibilityNodeInfoCompat.addAction(...). The pattern addresses two distinct user groups: users with motor impairments who cannot perform the gesture physically, and screen-reader users who cannot see the on-screen affordances the gesture depends on.

Why the pattern exists

Touchscreen gestures encode interactions in movement (drag distance, direction, finger count), which assumes both vision (to see drop targets, drag handles, available gestures) and dexterity (to hold, drag, and release smoothly). Three specific failure modes follow:

  1. Motor impairment — even with vision, users may lack the dexterity to initiate and hold a drag.
  2. Vision impairment — drag handles are often undiscoverable without sight; even with a handle, the drop-target geometry is inaccessible to screen readers.
  3. Discoverability gap — gesture affordances that depend on the user knowing the gesture exists (long-press, swipe-from-edge) are invisible to everyone by default.

The custom-action mechanism solves all three by giving every such interaction a first-class named action that assistive tech can enumerate and invoke.
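A minimal sketch of the mechanism, using the androidx.core convenience wrapper around AccessibilityNodeInfoCompat.addAction(...); `archiveItem` is a hypothetical app-level callback, not from the Slack post:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat

// Sketch: expose a gesture-only interaction (here, swipe-to-archive) as a
// named custom action. TalkBack lists the label in its context menu and
// runs the lambda when the user picks it — no gesture required.
// `archiveItem` is a hypothetical app callback.
fun exposeArchiveAction(row: View, archiveItem: () -> Unit) {
    ViewCompat.addAccessibilityAction(row, "Archive") { _, _ ->
        archiveItem()   // same effect as the swipe
        true            // report the action as handled
    }
}
```

ViewCompat.addAccessibilityAction returns an action id, which matters when the action later needs to be removed (see the gating step below in Structure).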

Canonical instance: Slack workspace switcher rearrangement

From Slack's 2025-11-19 VPAT post, this is the P2 theme "Drag and drop in the workspace switcher is inaccessible". Slack's resolution layered three complementary mechanisms:

  1. Edit-mode toggle — a new explicit Edit mode in the workspace switcher. Entering Edit mode reveals six-dot drag handles on each workspace row; before the fix, nothing signalled that rows were draggable at all. The visible handle addresses both the discoverability gap and the sighted motor-impaired case.
  2. Custom TalkBack actions — "Move before" and "Move after", attached to each workspace row while Edit mode is active and invocable from the TalkBack context menu, which is opened by a three-finger tap on the row or by the L or r drawing gestures.
  3. Done button — exits Edit mode, hiding the drag handles.

Slack's post verbatim: "we introduced custom actions (Move before and after) for TalkBack users so users can move each row item from the TalkBack context menu by three-finger tap on a row item or L or r drawing gestures."

Structure

For each gesture-only interaction:

  1. Enumerate the atomic operations the gesture accomplishes — for drag-reorder, the atoms are "move up by one", "move down by one".
  2. Expose each atom as a custom AccessibilityAction with a user-facing label. Labels are read verbatim by TalkBack in the context menu.
  3. Attach the actions only when the interaction is available (e.g. Slack gates theirs on Edit-mode being on).
  4. Ship an accompanying visual affordance (drag handle, mode toggle) so sighted motor-impaired users also benefit — the custom-actions path is for screen-reader users, but the visual cue helps users who lack dexterity but have vision.
  5. Regression-test the actions by name — in an automated accessibility test, assert that each node exposes the expected action label and that invoking the action performs the operation.
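Steps 1–3 can be sketched for a Slack-style reorder row as follows. This is an illustrative shape, not Slack's implementation; `moveRow` is a hypothetical helper that moves a row within the backing list:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat

// Sketch of Structure steps 1-3. `moveRow(from, to)` is a hypothetical
// app helper that reorders the backing list and rebinds the rows.
class RowActionBinder(private val moveRow: (from: Int, to: Int) -> Unit) {
    // Track the ids returned by addAccessibilityAction so the actions
    // can be removed again when Edit mode ends.
    private val actionIds = mutableMapOf<View, MutableList<Int>>()

    // Step 3: attach the actions only while the interaction is available.
    fun onEditModeChanged(row: View, position: Int, editing: Boolean) {
        if (editing) {
            val ids = actionIds.getOrPut(row) { mutableListOf() }
            // Steps 1-2: one labelled action per atomic operation.
            ids += ViewCompat.addAccessibilityAction(row, "Move before") { _, _ ->
                moveRow(position, position - 1); true
            }
            ids += ViewCompat.addAccessibilityAction(row, "Move after") { _, _ ->
                moveRow(position, position + 1); true
            }
        } else {
            actionIds.remove(row)?.forEach { id ->
                ViewCompat.removeAccessibilityAction(row, id)
            }
        }
    }
}
```

Removing the actions on exit (rather than leaving them attached and failing silently) keeps the TalkBack context menu honest about what the row can currently do.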

Generalisation

The same shape applies to each category of gesture-only interaction Slack's post flags:

  • Drag-and-drop (canonical Slack instance).
  • Swipe-to-dismiss — expose a "Dismiss" action.
  • Long-press-to-select — expose "Select" / "Enter selection mode" actions.
  • Pinch-to-zoom — expose "Zoom in" / "Zoom out" actions.
  • Multi-finger gestures — expose the semantic atom the gesture invokes.

From the Slack post's conclusion: "we recognized the need to be more diligent in adding TalkBack custom actions for gestures that may not be easily discoverable, like drag-and-drop or swipe-to-dismiss."
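For swipe-to-dismiss specifically, Android already defines a standard dismiss action, so a sketch can back the framework's ACTION_DISMISS with the app's dismissal logic instead of inventing a custom label; `dismissCard` is a hypothetical app callback:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat
import androidx.core.view.accessibility.AccessibilityNodeInfoCompat.AccessibilityActionCompat

// Sketch: wire the standard ACTION_DISMISS to the app's dismissal logic,
// so TalkBack offers "Dismiss" without requiring the swipe.
// `dismissCard` is a hypothetical app callback.
fun exposeDismissAction(card: View, dismissCard: () -> Unit) {
    ViewCompat.replaceAccessibilityAction(
        card,
        AccessibilityActionCompat.ACTION_DISMISS,
        null  // null keeps the platform's default label for the action
    ) { _, _ ->
        dismissCard()
        true
    }
}
```

Preferring a standard action over a custom one, where the platform defines a matching semantic, gives assistive technologies a machine-readable hint about what the action does.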

When to avoid

  • If the gesture is already accessible via a first-class platform control (e.g. a visible button that does the same thing), an explicit duplicate custom action may be redundant.
  • If the atom-expansion blows up the context-menu size (> ~5-6 actions makes the menu unwieldy), redesign the interaction to a simpler model rather than pile on custom actions.

Cross-platform analogs

  • iOS VoiceOver — accessibilityCustomActions on UIView / UIAccessibilityElement serves the same role; actions are invoked via the VoiceOver rotor with vertical swipes.
  • Web ARIA — no direct analog; typical response is to expose a visible button that does the atomic action, rather than a custom-action list.
