What Makes Up an APL Visual Response?
The following sections provide an overview of key concepts for incorporating Alexa Presentation Language (APL) into your custom skill built with the Alexa Skills Kit (ASK).
Build a visual template from components and layouts
An APL document is a JSON object that defines a template. The document provides the structure and layout for the response. You build a document from components and layouts:
- An APL component is a primitive UI element that displays on the viewport, such as a text box. Components are the basic building blocks for constructing a document.
- A layout combines visual components into a reusable template and gives it a name. You can then place the layout in your document by referencing its name. Referencing a name results in a more modular design that's easier to maintain.
APL supports other structures to facilitate reuse and modular design in a visual response:
- A style assigns a name to a set of visual characteristics, such as color and font size. You can then assign the style to a component. Use styles to keep your visual design consistent. Styles can also respond to changes in state. For example, you can use a style to define a visual change when the user presses on a button.
- A resource is a named constant you can use rather than hard-coding values. For example, you could create a resource called myRed that defines a particular shade of red, and then use that resource name to specify a color for a component. A sketch showing styles and resources in a document follows this list.
- A package bundles together all the parts of a document so that you can use them across multiple APL documents. You import the package into your document.
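The following fragment is a minimal sketch (the resource name, style name, and property values are hypothetical) of how a document can define a myRed color resource and a style that uses it, and how a Text component can reference both:

{
  "type": "APL",
  "version": "2023.3",
  "resources": [
    {
      "colors": {
        "myRed": "#d32f2f"
      }
    }
  ],
  "styles": {
    "textStylePrimary": {
      "values": [
        {
          "color": "@myRed",
          "fontSize": "40dp"
        }
      ]
    }
  },
  "mainTemplate": {
    "items": [
      {
        "type": "Text",
        "style": "textStylePrimary",
        "text": "Hello from a styled component"
      }
    ]
  }
}

A component can reference the resource directly with @myRed, or pick up the color through the assigned style.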
The Alexa Design System for APL provides a set of responsive components and responsive templates. These combine APL components, styles, and resources into modular, responsive layouts you can use to build a document that works well on different viewports. You can also build your own reusable layouts and bundle them into packages to use across multiple documents.
APL documents vary in complexity. The following example shows a document that displays a green background, a header, and the Text component. The document uses a Container component to arrange the three components. A Container is a component that can contain multiple items.
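A minimal document along these lines might look like the following sketch. It assumes the alexa-layouts package provides the AlexaBackground and AlexaHeader responsive components used for the background and header; the import version and property values are illustrative.

{
  "type": "APL",
  "version": "2023.3",
  "import": [
    {
      "name": "alexa-layouts",
      "version": "1.7.0"
    }
  ],
  "mainTemplate": {
    "items": [
      {
        "type": "Container",
        "width": "100vw",
        "height": "100vh",
        "items": [
          {
            "type": "AlexaBackground",
            "backgroundColor": "green"
          },
          {
            "type": "AlexaHeader",
            "headerTitle": "Hello World"
          },
          {
            "type": "Text",
            "text": "Hello World!"
          }
        ]
      }
    ]
  }
}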
For details about these concepts, see the following:
Provide content to display in a data source
APL supports data-binding, which lets your document retrieve data from a separate data source that you provide. Data binding lets you separate your presentation logic (the APL document) from your source data.
In the earlier "Hello World" document example, the text to display is hard-coded in the document. For easier maintenance, put the data in a separate data source, and then point to that data source from the document. Your skill code provides the data source when you send a directive to display the document. You should always pass content that might change based on user input or other logic within your skill code in a data source instead of the document.
The following example updates the "Hello World" document to retrieve its text from a data source.
In the document, the value of the Text component's text property now contains an expression set off with a dollar sign ($) and curly brackets ({ }). This syntax is a data-binding expression. In this example, the expression ${helloworldData.properties.helloText} tells the document to retrieve the text to display from the data source called helloworldData. The text to display is in the properties.helloText property of helloworldData.
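As a sketch of how the pieces fit together (the names follow the helloworldData example described above; the layout is illustrative), the data source you pass alongside the document might look like this:

{
  "helloworldData": {
    "type": "object",
    "properties": {
      "helloText": "Hello World!"
    }
  }
}

In the document, the mainTemplate lists helloworldData as a parameter so the data source binds by name, and the Text component references it with a data-binding expression:

{
  "mainTemplate": {
    "parameters": [
      "helloworldData"
    ],
    "items": [
      {
        "type": "Text",
        "text": "${helloworldData.properties.helloText}"
      }
    ]
  }
}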
For more details about data sources and data binding:
Make the document interactive with commands
APL supports touchable components to make your visuals interactive. You can configure a component to run a command when the user taps the component on the screen. An APL command can do the following:
- Change the visual experience on the screen. For example, you can use a command to change the value of a component property, which then changes the appearance or behavior of the component.
- Send a message to your skill, in the form of a request. Your skill handles the request, similar to the way your code handles an IntentRequest. Your skill can respond with speech, a new document to display, or a directive to run another APL command.
The following example shows the SetValue command configured to change the text property of the component with the ID buttonDescriptionText.
{
"type": "SetValue",
"componentId": "buttonDescriptionText",
"property": "text",
"value": "You pressed the 'Click me' button!"
}
You can run APL commands in multiple ways:
- Touchable components have event handlers that can trigger commands. For example, you can assign commands to the onPress property on a touchable component. The commands run when the user selects the component on the screen. Several of the responsive components and templates use touchable components. These components provide properties you can use to run commands. For example, the AlexaButton responsive component has a primaryAction property that specifies the command to use when the user selects the button.
- The APL document itself has an onMount property to run a command when the document loads. This property is useful for creating welcome animations that play when the user launches your skill.
- Each component has an onMount property to run a command when that component loads on the screen.
- The ExecuteCommands directive sends APL commands from your skill code.
For example, you could use the primaryAction property on an AlexaButton to trigger the SetValue command.
The following example displays the AlexaButton responsive component. When you select the button, the SetValue command updates the text property for a Text component to display a message.
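A sketch of that pattern might look like the following fragment. It assumes the document imports the alexa-layouts package for AlexaButton; the button text and message are illustrative.

{
  "type": "Container",
  "items": [
    {
      "type": "AlexaButton",
      "buttonText": "Click me",
      "primaryAction": [
        {
          "type": "SetValue",
          "componentId": "buttonDescriptionText",
          "property": "text",
          "value": "You pressed the 'Click me' button!"
        }
      ]
    },
    {
      "type": "Text",
      "id": "buttonDescriptionText",
      "text": "Press the button to see a message."
    }
  ]
}

Selecting the button runs the SetValue command, which replaces the text of the Text component identified by buttonDescriptionText.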
For details about commands, see the following topics:
Build for different devices with conditional logic
You use conditional logic to adapt your APL documents to different situations. You can create an APL document that displays the content in different ways depending on the viewport characteristics or other factors. For example, you could create a list that displays a continuous vertical scrolling list on larger screens, but presents each list item on a single page on small screens.
Every APL component and command has an optional when property that must be true or false. This value determines whether the device displays the component or runs the command. The when property defaults to true when you don't provide a value.
To use the when property, write a data-binding expression that evaluates to true or false. For example, the following statement evaluates to true when the device is a small, landscape hub.
"when": "${@viewportProfile == @hubLandscapeSmall}"
In this example, the constants viewportProfile and hubLandscapeSmall are resources provided as part of the alexa-viewport-profiles package. The "at" sign (@) is the standard syntax to reference a resource.
You can also use data-binding expressions to assign property values on components and commands conditionally. For example, the following expression returns the string "center" for small, round hubs, and "left" for all others.
${@viewportProfile == @hubRoundSmall ? 'center' : 'left'}
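A sketch that combines both techniques might look like the following Text component. It assumes the document imports the alexa-viewport-profiles resources (directly or through another Alexa package); the text and the profiles chosen are illustrative.

{
  "type": "Text",
  "when": "${@viewportProfile != @tvLandscapeXLarge}",
  "textAlign": "${@viewportProfile == @hubRoundSmall ? 'center' : 'left'}",
  "text": "Welcome!"
}

This component displays on every viewport except extra-large landscape TVs, and centers its text only on small round hubs.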
Conditional logic is a key ingredient when you write responsive APL documents. A responsive APL document can adjust to the characteristics of the viewport. Because users can invoke skills on devices with a wide variety of screen sizes, shapes, and aspect ratios, responsive APL is critical to creating a good user experience. For more details and recommendations, see Build Responsive APL Documents.
Send your document and data source to Alexa
Your AWS Lambda function or web service communicates APL-related information with the directives and requests defined in the Alexa.Presentation.APL interface:
- Send the Alexa.Presentation.APL.RenderDocument directive in a response to tell the device to display APL content. Include both the document and the associated data source as part of the directive. A sketch of this directive follows this list.
- Send the Alexa.Presentation.APL.ExecuteCommands directive to run commands. These commands target specific parts of the document. For example, the SpeakItem command tells the device to speak the text defined for a particular component (such as a Text component).
- Use the SendEvent command in your document to send an Alexa.Presentation.APL.UserEvent request. The request tells your skill about user actions that take place on the device, such as when the user taps a button. In your code, write handlers to accept and process these types of events.
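As a sketch, a RenderDocument directive in a skill response might look like the following. The token value is illustrative, and the document is abbreviated to a single Text component bound to the helloworldData data source:

{
  "type": "Alexa.Presentation.APL.RenderDocument",
  "token": "helloworldToken",
  "document": {
    "type": "APL",
    "version": "2023.3",
    "mainTemplate": {
      "parameters": ["helloworldData"],
      "items": [
        {
          "type": "Text",
          "text": "${helloworldData.properties.helloText}"
        }
      ]
    }
  },
  "datasources": {
    "helloworldData": {
      "type": "object",
      "properties": {
        "helloText": "Hello World!"
      }
    }
  }
}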
Use the directives and requests to build a user interface that works with both voice and touch. For example, a trivia game skill could let users select the trivia category to play in two ways:
- An APL document displays a list of categories on the screen. The user taps the category they want to play. Tapping the category runs the SendEvent command, which sends a request to the skill to start the game, as shown in the sketch after this list.
- The interaction model for the skill has a ChooseCategoryIntent intent with utterances like "play the {category} category." The user speaks the utterance to choose the category and start the game.
The skill code then has a handler that responds to either the Alexa.Presentation.APL.UserEvent request or the IntentRequest for ChooseCategoryIntent. The handler gets the selected category and starts the game.
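When the user taps the category, the skill receives a UserEvent request whose arguments array carries the values from the SendEvent command. Abbreviated, the request might look like the following sketch; the request ID, timestamp, token, and source details are illustrative.

{
  "type": "Alexa.Presentation.APL.UserEvent",
  "requestId": "amzn1.echo-api.request.example",
  "timestamp": "2023-11-28T12:00:00Z",
  "arguments": ["categorySelected", "science"],
  "source": {
    "type": "TouchWrapper",
    "handler": "Press",
    "id": "categoryScience"
  },
  "token": "triviaToken"
}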
For more about the directives and requests, see:
- Use APL with the ASK SDK v2
- Alexa.Presentation.APL Interface
- Custom Skill Request and Response JSON Reference
Related topics
Last updated: Nov 28, 2023