Enable User Interactions in Your Visual Content
Alexa Presentation Language (APL) provides multiple ways for users to interact with your visual content, such as tapping buttons on the screen and scrolling through content. You enable these interactions by using specific components, event handlers, and commands.
This document provides an overview of the different user interactions you can enable.
How user interactions work
Most user interactions require an event handler that can capture the user action, and one or more commands that run when the event occurs. APL provides event handlers as properties on components and documents. For example, a touchable component such as a `TouchWrapper` has the `onPress` property, which generates a press event when the user taps the component. Assign commands to `onPress` to define the action to run when the user taps the component.
The commands you run in response to an event can change the presentation on the screen, or can send a request to your skill. For example, use the `SetValue` command to change content displayed on the screen in response to a button press. Use the `SendEvent` command to send your skill a `UserEvent` request with information about the button press, so your skill can do additional processing as needed. Your skill can send commands to the document on screen with the Alexa.Presentation.APL.ExecuteCommands directive.
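For example, the following minimal sketch shows a `TouchWrapper` that both updates the screen and notifies the skill when pressed. The component id `statusText` and the `SendEvent` arguments are placeholders.

```json
{
  "type": "TouchWrapper",
  "item": {
    "type": "Text",
    "id": "statusText",
    "text": "Tap me"
  },
  "onPress": [
    {
      "description": "Update the Text component displayed on the screen.",
      "type": "SetValue",
      "componentId": "statusText",
      "property": "text",
      "value": "Pressed!"
    },
    {
      "description": "Send a UserEvent request to the skill.",
      "type": "SendEvent",
      "arguments": ["buttonPressed"]
    }
  ]
}
```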
Finally, some components provide interactivity automatically. For example, a `Sequence` automatically lets the user scroll by touch if the displayed content exceeds the size of the component.
Available user interactions
To enable a particular interaction, use a component that supports the interaction, and then define appropriate commands to run in the relevant event handler property.
The following table summarizes the user interactions you can enable in your APL documents. See the sections following the table for details about each interaction.
| Interaction | Description | Components and handlers |
|---|---|---|
| Scroll content | Let users scroll when the content exceeds the size of the component. | Basic scrolling is automatic with the `ScrollView`, `Sequence`, and `GridSequence` components. To run commands when the content scrolls, use the `onScroll` handler. |
| Navigate through pages of content | Display a series of pages that the user can navigate through. | Paging through content is automatic with the `Pager` component. To run commands during or after page changes, use the `handlePageMove` and `onPageChanged` handlers. |
| Tap buttons and images | Let users tap buttons on the screen to initiate actions. | All touchable components (`TouchWrapper`, `VectorGraphic`) support the `onPress`, `onDown`, `onMove`, and `onUp` properties. |
| Use additional touch gestures | Let users activate additional touch gestures, such as long press and double-press. | All touchable components support gesture handlers such as `DoublePress`, `LongPress`, `SwipeAway`, and `Tap`. Use the `gestures` property to assign the handlers. |
| Hover over a component | Run commands when the user hovers the pointer over a component. | All components support the `onCursorEnter` and `onCursorExit` properties. |
| Enter content with a keyboard | Define a text entry field on the screen to collect user input. | The `EditText` component provides the `onTextChange` and `onSubmit` handlers. |
| Respond to keyboard input | Capture key presses from an external keyboard and run commands. | All actionable components support the `handleKeyDown` and `handleKeyUp` keyboard event handlers. |
| Select items by voice | Let users select items on the screen with utterances such as "Select the third one" or "Select this." | Use the built-in intent `AMAZON.SelectIntent` and handle the resulting request in your skill code. |
| Control the video player | When playing a video, you can provide on-screen controls such as buttons. You can let users control playback with voice intents. | The `Video` component provides playback event handlers. Use the `PlayMedia` and `ControlMedia` commands to control playback. |
The remaining sections of this document describe the available types of interactions and the components and handlers you must use to enable them.
Scroll content
Display longer content in a scrollable component to automatically enable scrolling. Users can touch and drag to scroll the content when it exceeds the size of the component. Users can also scroll the content by voice if you enable the scrolling intents.
The following components support scrolling:
- `ScrollView` – Displays a child component and scrolls vertically. Typically used for longer text content.
- `Sequence` – Displays a list of child items in a scrolling vertical or horizontal list.
- `GridSequence` – Displays a list of child items in a scrolling grid format.
APL handles basic scrolling automatically, so you aren't required to use any event handlers. Optionally, you can do the following:
- Use the `onScroll` property on the scrollable component to run commands when the content scrolls.
- Use the `Scroll`, `ScrollToIndex`, or `ScrollToComponent` commands to scroll content programmatically, as shown in the sketch that follows this list.
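For example, a skill response might include an Alexa.Presentation.APL.ExecuteCommands directive like the following to scroll a list to a specific item. The component id `myList` and the token are placeholders.

```json
{
  "type": "Alexa.Presentation.APL.ExecuteCommands",
  "token": "scrollDocToken",
  "commands": [
    {
      "description": "Scroll the list with id 'myList' so the third item is centered.",
      "type": "ScrollToIndex",
      "componentId": "myList",
      "index": 2,
      "align": "center"
    }
  ]
}
```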
Users can scroll your content by voice when you set the `id` property for the scrollable component, and add the following built-in intents to the interaction model for your skill:

- `AMAZON.ScrollDownIntent`
- `AMAZON.ScrollUpIntent`
- `AMAZON.PageUpIntent`
- `AMAZON.PageDownIntent`
- `AMAZON.MoreIntent`

Users can then say commands such as "Alexa, scroll down" to scroll the content.
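Built-in intents don't require sample utterances. The following fragment sketches how they might appear in the `intents` array of your interaction model; the rest of the model is omitted.

```json
{
  "interactionModel": {
    "languageModel": {
      "intents": [
        { "name": "AMAZON.ScrollDownIntent", "samples": [] },
        { "name": "AMAZON.ScrollUpIntent", "samples": [] },
        { "name": "AMAZON.PageUpIntent", "samples": [] },
        { "name": "AMAZON.PageDownIntent", "samples": [] },
        { "name": "AMAZON.MoreIntent", "samples": [] }
      ]
    }
  }
}
```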
For details about the built-in intents, see Available standard built-in intents for Alexa-enabled devices with a screen.
Examples of responsive templates that support scrolling content:
Navigate through pages of content
Display a series of pages, such as a slide show, in a Pager component. Users can swipe to navigate through the pages.
APL handles paging through the content automatically, so you aren't required to use any event handlers. Optionally, you can do the following:
- Use the `handlePageMove` handler on the `Pager` component to create a custom transition between pages.
- Use the `onPageChanged` property on the `Pager` to run commands after the pager has fully changed pages, as shown in the sketch that follows this list.
- Use the `AutoPage` and `SetPage` commands to control the pager programmatically.
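For example, a `Pager` might report each completed page change back to the skill. This sketch uses illustrative content and `SendEvent` arguments; `event.source.value` reports the index of the new page.

```json
{
  "type": "Pager",
  "id": "slideShowPager",
  "width": "100%",
  "height": "100%",
  "items": [
    { "type": "Text", "text": "Page 1" },
    { "type": "Text", "text": "Page 2" },
    { "type": "Text", "text": "Page 3" }
  ],
  "onPageChanged": [
    {
      "description": "Tell the skill which page is now displayed.",
      "type": "SendEvent",
      "arguments": ["pageChanged", "${event.source.value}"]
    }
  ]
}
```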
Paging through a pager by voice isn't automatic. To provide voice paging, you must do the following:
- Create intents for paging. You can create custom intents, or use the built-in `AMAZON.NextIntent` and `AMAZON.PreviousIntent` intents.
- In your skill code, write request handlers to handle the paging intents. Your handler can respond to the voice request with the Alexa.Presentation.APL.ExecuteCommands directive and the `SetPage` command to page through the pager, as shown in the sketch after this list. Use the visual context in the skill request to get information about the pager displayed on the screen.
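For example, a handler for `AMAZON.NextIntent` might return a directive like the following to advance the pager by one page. The component id `slideShowPager` and the token are placeholders.

```json
{
  "type": "Alexa.Presentation.APL.ExecuteCommands",
  "token": "pagerDocToken",
  "commands": [
    {
      "description": "Move the pager forward one page relative to the current page.",
      "type": "SetPage",
      "componentId": "slideShowPager",
      "position": "relative",
      "value": 1
    }
  ]
}
```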
For details about the visual context, see APL Visual Context in the Skill Request. For details about built-in intents, see Standard Built-in Intents. For details about writing request handlers, see Handle Requests Sent by Alexa.
Examples of responsive templates that support navigating through a series of pages:
Tap buttons and images
Use a touchable component to create a target users can tap to invoke actions. You can define the commands to run when the user taps a touchable component.
All touchable components support tap targets:
- `TouchWrapper` – Wraps a child component and makes it touchable. Use a `TouchWrapper` to define custom buttons. You can also use a `TouchWrapper` as the item to display in a `Sequence` or `GridSequence`, to create a list of items that the user can tap.
- `VectorGraphic` – Displays a scalable vector graphic image defined with the Alexa Vector Graphics (AVG) format.
All touchable components have an `onPress` handler that defines the commands to run when a press event occurs. A press event occurs when a pointer down event is followed by a pointer up event, such as when the user touches and releases the component. To run commands when users touch and drag the component, use the `onDown`, `onMove`, and `onUp` handlers. For details, see Touchable Component Properties.
You can also use the `pressed` state for a component in a style to change the appearance of the component when the user presses the item. For details, see APL Style Definition and Evaluation.
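The following trimmed document sketches both patterns: a style named `pressableText` (the name is a placeholder) changes the text color while the component is pressed, and the child inherits the `TouchWrapper` state through `inheritParentState`.

```json
{
  "type": "APL",
  "version": "2023.3",
  "styles": {
    "pressableText": {
      "values": [
        { "color": "#FAFAFA" },
        { "when": "${state.pressed}", "color": "#00CAFF" }
      ]
    }
  },
  "mainTemplate": {
    "items": [
      {
        "type": "TouchWrapper",
        "onPress": [
          { "type": "SendEvent", "arguments": ["buttonPressed"] }
        ],
        "item": {
          "type": "Text",
          "style": "pressableText",
          "inheritParentState": true,
          "text": "Tap me"
        }
      }
    ]
  }
}
```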
Examples of responsive components that provide tap targets that run commands:
All these examples except Slider and SliderRadial provide a `primaryAction` property for the commands to run. The responsive component passes the value of the `primaryAction` property to `onPress`.
Some responsive templates also provide tap targets:
- AlexaDetail includes buttons.
- The list templates all let you define actions to run when the user taps an item in the list:
Use additional touch gestures
Use a touchable component to create a target for additional gestures beyond "press." Supported gestures include double-press, long press, swipe away, and a more restrictive form of tap.
All touchable components support gestures:
- `TouchWrapper` – Wraps a child component and makes it a target for gestures.
- `VectorGraphic` – Displays a scalable vector graphic image defined with the Alexa Vector Graphics (AVG) format.
All touchable components have a `gestures` property that takes an array of gesture handlers. A gesture handler defines the type of gesture, such as `DoublePress`, and then an array of commands to run when that type of gesture occurs. Use the gesture handlers to override the default `onPress` event for a component.
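For example, the following `TouchWrapper` sketch distinguishes between a single press and a double press. The `SendEvent` arguments are placeholders.

```json
{
  "type": "TouchWrapper",
  "item": { "type": "Text", "text": "Double-tap me" },
  "gestures": [
    {
      "type": "DoublePress",
      "onSinglePress": [
        { "type": "SendEvent", "arguments": ["singlePress"] }
      ],
      "onDoublePress": [
        { "type": "SendEvent", "arguments": ["doublePress"] }
      ]
    }
  ]
}
```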
For details about how gestures work, see Gestures. For details about a specific gesture, see the following:
Examples of responsive components that use gestures:
- AlexaSwipeToAction uses the `SwipeAway` gesture.
- The list items in AlexaTextList use the `SwipeAway` gesture.
Hover over a component
On devices with a cursor or mouse pointer, users can hover the pointer over a component. You can define commands to run when this hover occurs. Many Alexa devices don't have a cursor, so hover isn't always available.
All components can be a hover target. All components have the `onCursorEnter` and `onCursorExit` event handlers:

- `onCursorEnter` defines commands to run when a cursor enters the active region for the component.
- `onCursorExit` defines commands to run when a cursor exits the active region for the component.
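The following sketch brightens a `Frame` while the cursor is over it. The component id `hoverFrame` and the opacity values are placeholders.

```json
{
  "type": "Frame",
  "id": "hoverFrame",
  "backgroundColor": "#232F3E",
  "opacity": 0.7,
  "onCursorEnter": [
    {
      "description": "Make the frame fully opaque while the cursor is over it.",
      "type": "SetValue",
      "componentId": "hoverFrame",
      "property": "opacity",
      "value": 1
    }
  ],
  "onCursorExit": [
    {
      "type": "SetValue",
      "componentId": "hoverFrame",
      "property": "opacity",
      "value": 0.7
    }
  ],
  "item": { "type": "Text", "text": "Hover over me" }
}
```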
You can also use the `hover` state for a component in a style to change the appearance of the component when the user hovers the cursor over the item. For details, see APL Style Definition and Evaluation.
The responsive components and templates don't use the `onCursorEnter` or `onCursorExit` handlers to run commands, but some components do use the `hover` state:

- The list templates, such as AlexaImageList, highlight each list item when the cursor hovers over it.
- When you configure AlexaTextList to show ordinal numbers (`hideOrdinal` is `false`), each list item number highlights when the cursor hovers over it.
- The AlexaCard component highlights the card when the cursor hovers over it.
Enter content with a keyboard
Collect input from users with an EditText component. On devices without a hardware keyboard, the component displays an on-screen keyboard when it gets focus.
The `EditText` component has the following event handlers:

- `onTextChange` defines commands to run when the user enters text in the component.
- `onSubmit` defines commands to run when the user submits the updated text.
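The following sketch sends the submitted text to the skill. The component id, hint text, and `SendEvent` arguments are placeholders; `event.source.value` holds the current text.

```json
{
  "type": "EditText",
  "id": "searchField",
  "hint": "Enter a search term",
  "onSubmit": [
    {
      "type": "SendEvent",
      "arguments": ["searchSubmitted", "${event.source.value}"]
    }
  ]
}
```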
Respond to keyboard input
Use keyboard event handlers to capture key presses and run commands.
All actionable components support keyboard event handlers with the `handleKeyUp` and `handleKeyDown` properties. The following components are actionable:
You can also define these handlers at the document level.
You can capture `KeyDown` and `KeyUp` events and use `when` to run the commands for specific keystrokes.
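For example, a document-level handler might page forward only when the user presses the right arrow key. The `SetPage` target shown here is a placeholder.

```json
{
  "handleKeyDown": [
    {
      "when": "${event.keyboard.code == 'ArrowRight'}",
      "commands": [
        {
          "type": "SetPage",
          "componentId": "slideShowPager",
          "position": "relative",
          "value": 1
        }
      ]
    }
  ]
}
```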
Select items by voice
Use the AMAZON.SelectIntent built-in intent and a handler in your code to let users select items on the screen by voice. You can enable utterances such as the following:
- For a list of items: "Select the third one."
- For any APL document: "Select this"
With these utterances, your skill gets an `IntentRequest` that includes information about the content on the screen. Your skill can then take the appropriate action based on the user's request. In the code for your skill, include a handler for the `AMAZON.SelectIntent` request that does the following:

- Use the visual context in the request to determine the document that was displayed when the user made the utterance. The visual context is available in the `Alexa.Presentation.APL` property within the top-level `context` property in the request. The `token` property contains the token you used for the document in the `RenderDocument` directive that displayed the document.
- Use the slot values provided in the `AMAZON.SelectIntent` request to determine the user's request and then take an appropriate action.
The following `IntentRequest` example shows an `AMAZON.SelectIntent` request. In this example, the device was displaying a list and the user said "Select the third one." The `token` that identifies the document is `launchRequestDocument`, and the `ListPosition` slot shows that the user asked for the third item. For brevity, the example omits several request properties.
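A trimmed sketch of such a request follows; property values are illustrative.

```json
{
  "context": {
    "Alexa.Presentation.APL": {
      "token": "launchRequestDocument"
    }
  },
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "AMAZON.SelectIntent",
      "slots": {
        "ListPosition": {
          "name": "ListPosition",
          "value": "3"
        }
      }
    }
  }
}
```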
The following example shows the `IntentRequest` when the user said "Select this" on the same document. The `Anaphor` slot shows the value "this." Again, use the `token` to determine the document the user was viewing when they made the utterance.
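A trimmed sketch of this request follows; property values are illustrative.

```json
{
  "context": {
    "Alexa.Presentation.APL": {
      "token": "launchRequestDocument"
    }
  },
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "AMAZON.SelectIntent",
      "slots": {
        "Anaphor": {
          "name": "Anaphor",
          "value": "this"
        }
      }
    }
  }
}
```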
For details about the visual context, see APL Visual Context in the Skill Request. For details about the `AMAZON.SelectIntent` intent, see APL Support for Item Selection. For details about writing request handlers, see Handle Requests Sent by Alexa.
The `AMAZON.SelectIntent` intent works best with APL documents that display multiple items in an ordinal order. To voice-enable other items, such as buttons or other controls, create custom intents to get the user's request. Use the visual context to determine the content the user could see and the item they referred to in the request.
Control the video player
Use the Video component to play a video or series of videos. You can provide buttons and voice controls to let users control the video playback:
- Set the `autoplay` property to `true` to start playback automatically when the document loads.
- Use the PlayMedia command to start playback.
- Use the ControlMedia command to control the playback, with sub-commands such as `play`, `pause`, and `rewind`.
The `Video` component includes several event handlers that run when the player changes state:

- `onEnd` – Runs when the last video track is finished playing.
- `onPause` – Runs when the video switches from playing to paused.
- `onPlay` – Runs when the video switches from paused to playing.
- `onTimeUpdate` – Runs when the playback position changes.
- `onTrackUpdate` – Runs when the current video track changes.
- `onTrackReady` – Runs when the current track state changes to `ready`.
- `onTrackFail` – Runs when an error occurs and the video player can't play the media.
For example, a common use of the `onPlay` and `onPause` handlers is to update the state of a "Play/Pause" button on the screen. Using the `Video` handlers ensures that the button stays in sync with the video state, regardless of whether the user tapped a button or used a voice command.
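The following sketch shows this pattern with a bound `playing` flag that a play/pause button elsewhere in the document could read. The ids, bind name, and video URL are placeholders.

```json
{
  "type": "Container",
  "id": "videoContainer",
  "bind": [ { "name": "playing", "value": false } ],
  "items": [
    {
      "type": "Video",
      "id": "myVideoPlayer",
      "width": "100%",
      "height": "80%",
      "source": "https://example.com/video.mp4",
      "onPlay": [
        {
          "description": "Record that the video is playing so the button can show 'Pause'.",
          "type": "SetValue",
          "componentId": "videoContainer",
          "property": "playing",
          "value": true
        }
      ],
      "onPause": [
        {
          "type": "SetValue",
          "componentId": "videoContainer",
          "property": "playing",
          "value": false
        }
      ]
    }
  ]
}
```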
Controlling video by voice isn't automatic. To provide voice control, you must do the following:
- Add intents for video control. There are several built-in intents you can use for common utterances, such as `AMAZON.PauseIntent`, `AMAZON.ResumeIntent`, and `AMAZON.NextIntent`.
- In your skill code, write request handlers to handle the intents. Your handler can respond to the voice request with the Alexa.Presentation.APL.ExecuteCommands directive and the `ControlMedia` command, as shown in the sketch after this list. Use the visual context in the skill request to get information about the video displayed on the screen.
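For example, a handler for `AMAZON.PauseIntent` might return a directive like the following. The component id `myVideoPlayer` and the token are placeholders.

```json
{
  "type": "Alexa.Presentation.APL.ExecuteCommands",
  "token": "videoDocToken",
  "commands": [
    {
      "type": "ControlMedia",
      "componentId": "myVideoPlayer",
      "command": "pause"
    }
  ]
}
```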
For details about the visual context, see APL Visual Context in the Skill Request. For details about built-in intents, see Standard Built-in Intents. For details about writing request handlers, see Handle Requests Sent by Alexa.
The AlexaTransportControls responsive component provides a set of video player controls. You provide the component with the identifier for the `Video` component in your document.
Related topics
Last updated: Mar 19, 2024