Document (APL for Audio)
An APL document is a JSON object that defines a template to create an audio response your skill sends to an Alexa device. The document defines and arranges a set of audio clips. You build these audio clips from text-to-speech and audio files using APL components. For example, the document might specify a series of audio clips to play one after another, or multiple audio clips to mix and play at the same time. You send the document to Alexa with the `Alexa.Presentation.APLA.RenderDocument` directive. Alexa renders the audio as part of your skill response.
For APL for Audio, use the document type `APLA` instead of `APL`.
Examples
A trivial APLA document might have a single component, such as `Speech`. The following document generates the speech "Hello world".
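A minimal sketch of such a document, assuming the `Speech` component's text is set with its `content` property:

```json
{
  "type": "APLA",
  "version": "0.91",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Speech",
        "content": "Hello world"
      }
    ]
  }
}
```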
A richer APLA document might include multiple components to create different audio effects. The following example uses a `Sequencer` component to join together multiple `Speech` components and play them one after another.
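A sketch of such a document; the `Sequencer` plays its child components in order, and the phrases here are placeholders:

```json
{
  "type": "APLA",
  "version": "0.91",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Sequencer",
        "items": [
          { "type": "Speech", "content": "Welcome back." },
          { "type": "Speech", "content": "Here is today's update." }
        ]
      }
    ]
  }
}
```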
Document properties
An APLA document has the following top-level properties.
Property | Type | Required | Description |
---|---|---|---|
`description` | String | No | An optional description of this document. |
`mainTemplate` | Composition | Yes | The starting composition. Contains the components to inflate into an audio response. |
`type` | "APLA" | Yes | Must be `"APLA"`. |
`compositions` | Map of Composition | No | Custom compositions. |
`resources` | Array of resources | No | Resource definitions. |
`version` | "0.91" | Yes | Version string of the APLA specification. Currently `"0.91"`. |
mainTemplate
Contains the composition to inflate when Alexa renders the document. The following table shows the `mainTemplate` properties.
Property | Type | Required | Description |
---|---|---|---|
`parameters` | Array of strings | No | A name you use to map your data sources to your document. Defaults to `payload`. You use this name in your data-binding expressions to use data from the data source in your document. |
`items` | Array of compositions | Yes | The compositions and components to render. When `items` contains multiple items, Alexa renders the first one where `when` evaluates to `true`. |
When you send your document to Alexa with the `RenderDocument` directive, you provide an object with your data sources in the `datasources` property of the directive. Alexa maps this object to the name you provide in the `mainTemplate.parameters` property.
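A sketch of such a directive, assuming the entire `datasources` object binds to the default `payload` parameter name; the token, data source name `greetingData`, and text are placeholders:

```json
{
  "type": "Alexa.Presentation.APLA.RenderDocument",
  "token": "sampleToken",
  "document": {
    "type": "APLA",
    "version": "0.91",
    "mainTemplate": {
      "parameters": ["payload"],
      "items": [
        { "type": "Speech", "content": "${payload.greetingData.text}" }
      ]
    }
  },
  "datasources": {
    "greetingData": {
      "text": "Welcome back to the sample skill."
    }
  }
}
```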
For details about the `RenderDocument` directive, see Alexa.Presentation.APLA Interface Reference. For details about data sources, see Data Sources.
compositions
The `compositions` property is a string/object map of composition names to composition definitions. A composition combines primitive components and other compositions into a new custom component you can use within the main template of the document. For details about compositions, see Compositions.
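A sketch of a custom composition, assuming a composition definition takes a `parameters` list and an `item` to inflate; the `Announcement` name, the `message` parameter, and the audio URL are hypothetical:

```json
{
  "type": "APLA",
  "version": "0.91",
  "compositions": {
    "Announcement": {
      "parameters": [
        { "name": "message", "type": "string" }
      ],
      "item": {
        "type": "Sequencer",
        "items": [
          { "type": "Audio", "source": "https://example.com/chime.mp3" },
          { "type": "Speech", "content": "${message}" }
        ]
      }
    }
  },
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      { "type": "Announcement", "message": "You have three new notifications." }
    ]
  }
}
```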
resources
Contains an array of resource blocks. Resources are named entities you can access through data-binding and value resolution. For details about resource blocks, see Resources.
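A sketch of a resources block and a component that references it with the `@` prefix; the `welcome` string resource is hypothetical:

```json
{
  "type": "APLA",
  "version": "0.91",
  "resources": [
    {
      "description": "Reusable strings for this response",
      "strings": {
        "welcome": "Welcome back to the sample skill."
      }
    }
  ],
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      { "type": "Speech", "content": "@welcome" }
    ]
  }
}
```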
version
The version of the APLA specification that the APLA document uses. The APLA rendering engine uses `version` to identify required features and render the document accurately. Set this property correctly.
An APLA rendering engine should refuse to render a document if the engine doesn't support the document's version number. APLA rendering engines are backwards compatible. Therefore, a device that supports "0.91" also supports "0.8" documents.
Inflation
Alexa inflates the document into an audio response, using the following steps:
- Construct an initial data-binding context.
- For each parameter in the `mainTemplate`:
  - Identify a data source with the same name.
  - Update the data-binding context to set that name to the value in the data source.
- Use the single-child inflation approach to convert `mainTemplate` into an audio response.
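The following sketch of a `RenderDocument` fragment illustrates the name matching in the second step: a `mainTemplate` parameter named `newsData` picks up the data source of the same name, so the document can read it through data binding. The names and text are placeholders:

```json
{
  "datasources": {
    "newsData": {
      "headline": "Local weather improves this weekend."
    }
  },
  "document": {
    "type": "APLA",
    "version": "0.91",
    "mainTemplate": {
      "parameters": ["newsData"],
      "items": [
        { "type": "Speech", "content": "${newsData.headline}" }
      ]
    }
  }
}
```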
Last updated: Nov 28, 2023