Speech markers are triggers that can be included within the Digital Person's text response to make an action happen. Speech markers always begin with an `@` symbol and often require one or more parameters to be passed to them.

Note: Depending on your conversation provider, you may need to escape speech markers. For example, in IBM Watson Assistant, @attendObject would have to be written as \@attendObject.

Speech markers are not spoken by the Digital Person, but they do cause the related action to occur at the point in the speech where the speech marker appears in the text.

Using any of the following speech markers in a particular response disables the equivalent autonomous behavior for that response.
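
For example, a response that directs the Digital Person's gaze to an on-screen content block might look like the following, where infoCard is an illustrative object id rather than a built-in value:

Here is the information you asked for. @attendObject(infoCard)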

@attendObject

@attendObject([object_id: str], [start_time(optional): float], [duration(optional): float])

The Digital Person should look towards the on-screen object with the given id. Requires Real-Time Gesturing and a UI that supports Content Awareness.

object_id: string
(required)

The id of the on-screen element that the Digital Person should look towards.

For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker.

For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness].

start_time: float

The number of seconds to wait before the Digital Person looks towards the on-screen element. The delay is measured from the point in the speech at which the speech marker is reached.

default: 0 (zero seconds, no delay)

duration: float

The number of seconds that the Digital Person should continue looking at the on-screen element.

default: 1 (one second)
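
A usage sketch, assuming a content block with the hypothetical id mapCard is on screen: half a second after this point in the speech, the Digital Person looks at the block for two seconds.

You can find our office on the map. @attendObject(mapCard, 0.5, 2)

In a custom UI, the element would carry the same id in its markup, for example <div data-sm-content="mapCard">...</div>.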

@gestureObject

@gestureObject([object_id: str], [start_time(optional): float], [duration(optional): float])

The Digital Person should gesture towards the on-screen object with the given id. Requires Real-Time Gesturing and a UI which supports Content Awareness.

The Digital Person will only perform the gesture if the screen aspect ratio is wide enough for the gesture to look natural, and if the content does not overlap the Digital Person.

object_id: string
(required)

The id of the on-screen element that the Digital Person should gesture towards.

For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker.

For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness].

start_time: float

The number of seconds to wait before the Digital Person gestures towards the on-screen element. The delay is measured from the point in the speech at which the speech marker is reached.

default: 0 (zero seconds, no delay)

duration: float

The number of seconds that the Digital Person should continue gesturing towards the on-screen element.

default: 1 (one second)
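
A similar sketch for gesturing, using a hypothetical weatherCard content block: the Digital Person gestures towards the block for three seconds, starting as soon as the marker is reached.

Here is today's forecast. @gestureObject(weatherCard, 0, 3)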

Pointing at content

  • Pointing is enabled for customers via a corpus-based command.

  • The customer can draw attention to on-screen content with pointing, with gesturing, or with a mix of both.

  • The existing CUE hand gestures are the automatic default. If a customer adds a pointing command, the Digital Person will point instead of performing a CUE gesture.

  • Pointing is enabled on both arms. The Digital Person will point with the arm on the same side as the displayed content, and will not reach across her own body.

  • Pointing has the highest priority in the animation system and will override Real-Time Gesturing/NLP-TTG animations.

  • The customer has a choice of hand shape for pointing:

    • Finger point: @PointObject()

    • Palm half up: @PointObjectPalmUp()


@PointObject

@PointObject([object_id: str], [start_time(optional): float], [duration(optional): float])

The Digital Person should point with a finger at the on-screen object with the given id. Requires Real-Time Gesturing and a UI which supports Content Awareness.

The Digital Person will point with either the right or left hand, depending on the placement of the content relative to the Digital Person.


object_id: string
(required)

The id of the on-screen element that the Digital Person should point at. By default, the pointing target is the center of the content box. If the object's metadata provides UV coordinates, for example:

"component": "image",
"meta": {
  "choiceA-location": {
    "v": 0.9,
    "u": 0.9
  }
},

then the given UV coordinate becomes the pointing target. A UV coordinate is a local coordinate relative to the content box.

For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker.

For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness].

start_time: float

The number of seconds delay before the Digital Person should point at the on-screen element. This delay is relative from the point in speech that the speech marker is reached.

default: 0 (zero seconds, no delay)

duration: float

The number of seconds that the Digital Person should continue pointing at the on-screen element.

default: 1 (one second)
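
A usage sketch building on the metadata example above, assuming that the choiceA-location meta key belongs to a content block with the id choiceA: the Digital Person points at the UV position (0.9, 0.9) inside that block for two seconds.

This one is our most popular option. @PointObject(choiceA, 0, 2)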

@PointObjectPalmUp

@PointObjectPalmUp([object_id: str], [start_time(optional): float], [duration(optional): float])

The Digital Person should point, with the palm half-way up, at the on-screen object with the given id. Requires Real-Time Gesturing and a UI which supports Content Awareness.

The Digital Person points in the same way as for the @PointObject() command, but with a different hand shape: the palm is turned half-way up and fully open. See the parameter descriptions for the @PointObject() command.
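
For example, reusing the hypothetical choiceA content block from above:

Either of these options would suit you. @PointObjectPalmUp(choiceA, 0.5, 2)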
