Build Alexa Presentation Language Visuals that Support Screen Readers
An Alexa device with a screen can include screen reader features to help visually impaired users interact with the content on the screen. For example, Echo Show devices have the VoiceView screen reader. Users can tap and swipe on the screen to hear descriptions of the content.
Alexa Presentation Language (APL) supports screen readers. To take advantage of this support and make your skill accessible to more users, provide information about the components in your APL document. The screen reader uses this information to render descriptions to the user.
Screen reader user experience
Echo Show devices include the VoiceView screen reader. To enable VoiceView by voice, say "Alexa, turn on voice view."
When VoiceView is active, Alexa highlights each component displayed on the screen. Tap on a component to hear a description of that item. You can also explore the screen by swiping.
Normal touch interactions for interactive components work differently with VoiceView.
To activate a button or link
- Tap the component to select it.
- Double-tap anywhere on the screen to activate it.
To adjust a slider or swipe-to-action component
- Tap the component to select it.
- Double-tap and hold anywhere on the screen, and then slide right or left.
For detailed instructions about how to use the screen reader, see Guide to the VoiceView Screen Reader on Your Echo Device.
When building accessibility into your skill, enable VoiceView on your device as you test.
How the screen reader handles APL components
When a user taps on an APL component on the screen, the screen reader constructs and reads the following string:
${accessibilityLabel}. ${role}. ${usageHint}.
For example, tapping a button might render the following spoken string: "Remove Item. Button. Double tap to activate."
This string corresponds to the following parts:
- accessibilityLabel – "Remove Item"
- role – "Button"
- usageHint – "Double tap to activate"
In your APL document, you control portions of this string with the base component properties accessibilityLabel and role. The screen reader generates the usageHint based on the role specified for the component.
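For example, a button that produces the spoken string shown earlier might be built from a TouchWrapper with both properties set. The following is a minimal sketch; the child Text component and the SendEvent argument are assumptions, not part of the original example.
{
  "type": "TouchWrapper",
  "accessibilityLabel": "Remove Item",
  "role": "button",
  "item": {
    "type": "Text",
    "text": "Remove Item"
  },
  "onPress": [
    {
      "type": "SendEvent",
      "arguments": ["removeItem"]
    }
  ]
}
With VoiceView enabled, selecting this component produces the "Remove Item. Button. Double tap to activate." string described above.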
The accessibilityLabel describes the component
Set the accessibilityLabel component property to a string describing the component. For Text components, this property is often the same text displayed by the component. For an Image, this property should be a description of the image.
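For example, an Image component might pair its source with a short description of what the image shows. The following is a minimal sketch; the source URL and the label text are placeholders.
{
  "type": "Image",
  "source": "https://example.com/images/product.png",
  "accessibilityLabel": "Red ceramic coffee mug on a wooden table"
}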
A screen reader might not support markup or inline formatting characters in the accessibilityLabel property. The text property of the Text component does support these characters. Strip these characters out of the string before passing it to accessibilityLabel. For a list of markup you can use for inline formatting in a Text component, see text.
The following example shows a Text component that sets the text and accessibilityLabel properties to different strings provided by the data source.
{
  "type": "Text",
  "text": "${accessibilityExampleData.mainTextContent}",
  "accessibilityLabel": "${accessibilityExampleData.sanitizedAccessibilityString}"
}
Data source with the strings:
{
  "accessibilityExampleData": {
    "mainTextContent": "This is an informative component that uses some <b>bold formatting</b> in the text string that it displays. It also uses <code>code formatting</code>.",
    "sanitizedAccessibilityString": "This is an informative component that uses some bold formatting in the text string that it displays. It also uses code formatting."
  }
}
For best practices on when to label different types of components, see Best Practices for Screen Reader Support.
The role describes the purpose and usage for the component
Set the role component property to a value from a list of possible roles. The role describes the purpose of the component. For example, the role of button means that the component is a button that the user can activate. For the full list of values you can assign to role, see role.
The screen reader uses the role to determine the usageHint text to speak in the accessibility string:
- When role describes an interactive component, such as button or link, the usageHint explains how to interact with that component. For most interactive components, the usageHint is "Double-tap to activate."
- When role describes a static component, such as text or image, the usageHint is omitted. Therefore, setting role on static, informative components isn't required.
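For example, a static Text component typically needs only an accessibilityLabel, while an interactive TouchWrapper also sets role so that the screen reader adds the usage hint. The following is a minimal sketch; the label strings and the SendEvent argument are assumptions.
{
  "type": "Container",
  "items": [
    {
      "type": "Text",
      "text": "Your order has shipped.",
      "accessibilityLabel": "Your order has shipped"
    },
    {
      "type": "TouchWrapper",
      "accessibilityLabel": "Track package",
      "role": "button",
      "item": {
        "type": "Text",
        "text": "Track package"
      },
      "onPress": [
        {
          "type": "SendEvent",
          "arguments": ["trackPackage"]
        }
      ]
    }
  ]
}
With VoiceView enabled, the first component reads only its label, while the second reads the label, the role ("Button"), and the usage hint.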
When you don't set accessibilityLabel and role
When you don't set the accessibilityLabel and role properties for a component, selecting the component in the screen reader might not give the user any information about the component or how to interact with it. For example, on some devices, tapping a Text component with no accessibilityLabel doesn't highlight the component or speak out any description. A TouchWrapper with no role might not tell the user that it's a button that can be activated with a double-tap.
A screen reader can use the text assigned to the component, such as the text property on a Text component, to render the string. The screen reader also might attempt to provide the usageHint based on the component type for a touchable component. However, this functionality varies across devices and versions of the assistive software installed on the device. For the best user experience, set the accessibility properties directly in your document. Don't rely on the screen reader interpreting your document correctly.
For best practices on when to label different types of components, see Best Practices for Screen Reader Support.
Example
For example, consider an APL document with a Text component, a TouchWrapper containing a text string, and a Frame with a border and background. Activating the TouchWrapper changes the border color on the Frame.
When the document doesn't set the accessibility properties on the components, a device with VoiceView enabled provides the following experience:
User: Taps on the item 'Text component that provides information'.
(Item is not highlighted.)
User: Taps on the item 'TouchWrapper component that changes the border color.'
(Item is highlighted, but Alexa doesn't describe the item.)
User: Double-taps on the screen.
Runs the command and changes the border color from red to blue.
To test this flow yourself, copy the following code to the authoring tool, and then use Send to Device to display the document on a device with VoiceView enabled.
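A minimal sketch of such a document, with no accessibility properties set, might look like the following; the APL version, component IDs, sizing, and colors are assumptions.
{
  "type": "APL",
  "version": "2023.2",
  "mainTemplate": {
    "items": [
      {
        "type": "Container",
        "width": "100%",
        "height": "100%",
        "items": [
          {
            "type": "Text",
            "text": "Text component that provides information"
          },
          {
            "type": "TouchWrapper",
            "item": {
              "type": "Text",
              "text": "TouchWrapper component that changes the border color"
            },
            "onPress": [
              {
                "type": "SetValue",
                "componentId": "borderFrame",
                "property": "borderColor",
                "value": "blue"
              }
            ]
          },
          {
            "type": "Frame",
            "id": "borderFrame",
            "width": "50vw",
            "height": "20vh",
            "borderWidth": 4,
            "borderColor": "red",
            "backgroundColor": "gray"
          }
        ]
      }
    ]
  }
}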
In contrast, when the document sets accessibilityLabel and role on all informational and interactive components, a device with VoiceView enabled provides the following experience:
User: Taps on the item 'Text component that provides information'.
VoiceView: Text component that provides information (Item is highlighted.)
User: Taps on the item 'TouchWrapper component that changes the border color.'
VoiceView: TouchWrapper component that changes the border color. Button. Double-tap to activate (Item is highlighted.)
User: Double-taps on the screen.
Runs the command and changes the border color from red to blue.
To use the same string for both display and the screen reader, use data binding to avoid duplication. The following example uses a data source and data binding to set the text and accessibilityLabel properties.
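A minimal sketch of that pattern, applied to the TouchWrapper from the example above, might look like the following; the data source name (exampleData) and its property name are assumptions.
{
  "type": "TouchWrapper",
  "accessibilityLabel": "${exampleData.touchWrapperText}",
  "role": "button",
  "item": {
    "type": "Text",
    "text": "${exampleData.touchWrapperText}"
  },
  "onPress": [
    {
      "type": "SetValue",
      "componentId": "borderFrame",
      "property": "borderColor",
      "value": "blue"
    }
  ]
}
Data source with the string:
{
  "exampleData": {
    "touchWrapperText": "TouchWrapper component that changes the border color"
  }
}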
Use the screen reader status in conditional statements
The environment.screenReader property in the data-binding context returns true when the screen reader on the device is enabled. You can use this property in conditional logic to change the experience based on whether the screen reader is in use.
For example, the following code runs the AutoPage command when the screen reader is not enabled.
{
  "when": "${!environment.screenReader}",
  "type": "AutoPage",
  "componentId": "pagerId",
  "duration": 3000,
  "delay": 2000
}
For an example of this scenario, see Avoid the AutoPage command when the screen reader is active.
Last updated: Nov 28, 2023