Speech markers are triggers that can be included within the Digital Person's text response to make an action happen. Speech markers always begin with an `@` symbol and often require one or more parameters to be passed to them.
Note: Depending on your conversation provider, you may need to escape speech markers (for example, in IBM Watson Assistant).
Speech markers are not spoken by the Digital Person, but they do cause the related action to occur at the point in the speech where the speech marker appears in the text.
The use of any of the following speech markers in a particular response will disable the equivalent autonomous behavior for that response.
Gazing at content
@attendObject
```
@attendObject([object_id: str], [start_time(optional): float], [duration(optional): float])
```
The Digital Person should look towards the on-screen object with the given id. Requires Real-Time Gesturing and a UI that supports Content Awareness.
| Parameter | Description |
|---|---|
| object_id: string | The id of the on-screen element that the Digital Person should look towards. For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker. For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness]. |
| start_time: float | The number of seconds delay before the Digital Person should look towards the on-screen element. The delay is relative to the point in speech where the speech marker is reached. Default: 0 (no delay). |
| duration: float | The number of seconds that the Digital Person should continue looking at the on-screen element. Default: 1 (one second). |
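For example, a corpus line like the following (the object id promoCard and the wording are illustrative) makes the Digital Person glance at a card for two seconds, starting half a second after the marker is reached:

```
@showcards(promoCard) Take a look at this offer. @attendObject(promoCard, 0.5, 2) It is only available this week.
```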
Gesturing at content
@gestureObject
```
@gestureObject([object_id: str], [start_time(optional): float], [duration(optional): float])
```
The Digital Person should gesture towards the on-screen object with the given id. Requires Real-Time Gesturing and a UI that supports Content Awareness.
The Digital Person will only perform the gesture if the screen aspect ratio is wide enough for the gesture to look natural and if the content does not overlap the Digital Person.
| Parameter | Description |
|---|---|
| object_id: string | The id of the on-screen element that the Digital Person should gesture towards. For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker. For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness]. |
| start_time: float | The number of seconds delay before the Digital Person should gesture towards the on-screen element. The delay is relative to the point in speech where the speech marker is reached. Default: 0 (no delay). |
| duration: float | The number of seconds that the Digital Person should continue gesturing at the on-screen element. Default: 1 (one second). |
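For example, assuming a content block with the illustrative id mapCard, this response gestures at the card for three seconds, starting one second after the marker is reached:

```
@showcards(mapCard) Our nearest branch is right here. @gestureObject(mapCard, 1, 3) Come and visit us any weekday.
```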
@gestureObjectBothSides
```
@gestureObjectBothSides([object_id_1: str], [object_id_2: str], [start_time(optional): float], [duration(optional): float])
```
The Digital Person should gesture with both hands simultaneously towards the two on-screen objects with the given ids. Requires Real-Time Gesturing and a UI that supports Content Awareness.
The Digital Person will always perform this gesture, so make sure the objects are placed on both sides of the Digital Person (one on the left-hand side and one on the right).
| Parameter | Description |
|---|---|
| object_id_1: string, object_id_2: string | The ids of the on-screen elements that the Digital Person should gesture towards. For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker. For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness]. |
| start_time: float | The number of seconds delay before the Digital Person should gesture towards the on-screen elements. The delay is relative to the point in speech where the speech marker is reached. Default: 0 (no delay). |
| duration: float | The number of seconds that the Digital Person should continue gesturing at the on-screen elements. Default: 1 (one second). |
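For example, given two illustrative content blocks planA (displayed to the Digital Person's left) and planB (displayed to their right), a response could gesture at both at once:

```
@showcards(planA,planB) You can compare both plans side by side. @gestureObjectBothSides(planA, planB) Take your time.
```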
Pointing at content
Pointing speech markers are used to draw attention to on-screen content, either on their own or mixed with gesturing. The existing cue hand gestures are the automatic default; if you use a pointing command, the Digital Person will point instead of performing a cue gesture. Pointing is enabled on both arms: the Digital Person will point with the arm on the side of the displayed content rather than reaching across their own body.
Autonomous pointing can be enabled with the "Real-Time Gesturing" toggle in Digital DNA Studio. Pointing has the highest priority in the animation system and will override Real-Time Gesturing. Pointing at on-screen content can also be customized through the corpus. You have a choice of handshape for pointing at content:
- Finger point: `@PointObject(x)`
- Palm half up: `@PointObjectPalmUp(x)`
Follow the tips below to maximize your experience with the pointing feature:
To use pointing effectively: put the showcards command into the corpus first, then add the point command at least three words later.
Example: We can help you find the perfect present, whether @showcards(x) it’s a fun@PointObject(x) game for children or the latest novel to take on a beach holiday with friends.
To use multiple pointing commands close together: separate individual pointing commands by at least three words. In the example below, the two pointing commands are separated by six words.
Example: We can help you find the perfect present, whether @showcards(x) it’s a fun@PointObject(x) game for children@showcards(x) or the latest@PointObject(x) novel to take on a beach holiday with friends.
To use both pointing and gesturing at content in the same sentence: gesture commands must be inserted manually, because using a pointing command in a sentence disables the otherwise automatic insertion of a gesture for additional content.
Example: We can help you find the perfect present, whether @showcards(x) it’s a fun@PointObject(x) game for children@showcards(x) or the latest@GestureObject(x) novel to take on a beach holiday with friends.
@PointObject
```
@PointObject([object_id: str], [start_time(optional): float], [duration(optional): float])
```
The Digital Person should point with the finger at the on-screen object with the given id. Requires Real-Time Gesturing and a UI that supports Content Awareness.
The Digital Person will point with either the right or left hand, depending on the placement of the content relative to the Digital Person.
| Parameter | Description |
|---|---|
| object_id: string | The id of the on-screen element that the Digital Person should point at. By default, the target of pointing is the center of the content box. For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker. For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness]. |
| start_time: float | The number of seconds delay before the Digital Person should point at the on-screen element. The delay is relative to the point in speech where the speech marker is reached. Default: 0 (no delay). |
| duration: float | The number of seconds that the Digital Person should continue pointing at the on-screen element. Default: 1 (one second). |

The pointing target can be moved away from the center of the content box by adding a location entry with u/v coordinates to the content block's meta, as in this fragment for a content block with id choiceA:

```json
"component": "image",
"meta": {
  "choiceA-location": {
    "v": 0.9,
    "u": 0.9
  }
},
```
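For a custom UI, the pointing target only needs the data-sm-content markup described above. A minimal illustrative fragment (the element's contents are placeholders):

```html
<!-- Any element carrying data-sm-content="objectId" can be targeted
     by @PointObject(objectId) and the other content-aware markers. -->
<div data-sm-content="choiceA">
  <img src="https://placekitten.com/300/300" alt="A cute kitten" />
</div>
```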
@PointObjectPalmUp
```
@PointObjectPalmUp([object_id: str], [start_time(optional): float], [duration(optional): float])
```
The Digital Person should point with the palm halfway up at the on-screen object with the given id. Requires Real-Time Gesturing and a UI that supports Content Awareness.
The Digital Person performs pointing in the same way as for the @PointObject() command, but with a different pointing style: the palm is turned halfway up and fully open. See the parameter descriptions for the @PointObject() command.
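For example, a corpus line such as the following (the object id menuCard and the wording are illustrative) points at a card with the palm half up for two seconds:

```
@showcards(menuCard) Here is today's menu. @PointObjectPalmUp(menuCard, 0, 2) The specials are at the top.
```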
Pointing Demo
In the example below, the @PointObject and @GestureObject speech markers are used together in the corpus.
Here is a multi-card response @showcards(choiceA,choiceB) I can point at the first card@PointObject(choiceA). Also, I can gesture at the second card@GestureObject(choiceB).
Sample payload:
```json
{
  "soulmachines": {
    "choiceB": {
      "component": "image",
      "position": "left",
      "data": {
        "url": "https://placekitten.com/300/300",
        "alt": "A cute kitten"
      }
    },
    "choiceA": {
      "position": "right",
      "component": "image",
      "data": {
        "url": "https://placekitten.com/300/300",
        "alt": "A cute kitten"
      }
    }
  }
}
```