
Digital People direct the attention of users to onscreen content, with glances, hand gestures, and pointing.

How content awareness works

When the @showcards command is used to display onscreen content, content awareness behavior is triggered autonomously.

@showcards triggers:

  • a glance at the content

  • an arm gesture towards the content

@showcards does not trigger:

  • pointing at content (this must be triggered separately using the @PointObject command)

How commands work

Commands placed after @showcards in a sentence strip out the autonomous behavior and replace it with the specific behavior you want. The available commands are:

Glancing:

  • @AttendObject

Gesturing:

  • @GestureObject

  • @GestureObjectBothSides

Pointing:

  • @PointObject

  • @PointObjectPalmUp

Examples:

Here is a picture of my cat, @showcards(cat) her name is Calliope.
  • Using @showcards(cat) on its own displays the image of the cat and triggers both a glance and an arm gesture.

Here is a picture of my cat, @showcards(cat) her @AttendObject(cat) name is Calliope.
  • The subsequent use of @AttendObject(cat) in the sentence refines the autonomous behavior to the glance only, deactivating the arm gesture.

Here is a picture of my cat, @showcards(cat) her @PointObject(cat) name is Calliope.
  • The subsequent use of @PointObject(cat) in the sentence refines the autonomous behavior: the glance is kept, the arm gesture is deactivated, and a pointing gesture is added.

Screen zones

Content awareness behavior is also controlled by the zone of the screen in which the onscreen content is placed, the size of the frame, and the location of the Digital Person.

  • Content in the yellow zones triggers glancing.

  • Content in the blue zones triggers glancing and an arm gesture.

  • Content in the pink zone triggers glancing and can trigger a pointing gesture.

How commands and screen zone rules interact

  • If an object is placed inside the pointing zone, both @PointObject and @GestureObject will result in pointing behavior.

  • If an object is placed in the gesturing zones, both @PointObject and @GestureObject will result in gesturing behavior.

  • If an object is placed in the glancing zones, both @PointObject and @GestureObject are disabled, and glancing is performed.

  • If the frame area is too small to perform gesturing or pointing (not enough space for arm movement, for example in the widget), both @PointObject and @GestureObject are disabled, and glancing is performed.

Behavior hierarchy

Our autonomous animation system is built with a hierarchy, so that all aspects of our gesturing and behavior work seamlessly together.

1

Glancing at onscreen content

Glancing at onscreen content takes precedence. However, glancing is performed simultaneously with either gesturing and emotional behavior from Real-Time Gesturing and Behavior Styles, or with Pointing and Gesturing at onscreen content. Glancing also takes precedence over conflicting behavior from Behavior Tags.

2

Pointing and Gesturing at onscreen content

Pointing and Gesturing at onscreen content override arm gestures triggered by Real-Time Gesturing and Behavior Styles within the same sentence. Some head and neck gestures, and all emotional behavior from Behavior Tags and from Real-Time Gesturing and Behavior Styles, will be performed simultaneously.

3

Behavior Tags

Behavior Tags override some head and neck gestures and all emotional behavior from Real-Time Gesturing and Behavior Styles.

4

Real-Time Gesturing and Behavior Styles

Autonomous emotional and gestural behavior from Real-Time Gesturing and Behavior Styles will be performed at all times when not overridden by glancing, pointing or gesturing at onscreen content, or Behavior Tags.

Glancing at content

@attendObject

@attendObject([object_id: str], [start_time(optional): float], [duration(optional): float])

object_id: string
(required)

The id of the on-screen element that the Digital Person should look towards.

For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker.

For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness].
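
For example, a custom UI image element could be marked up like this (the element, file name, and object id are illustrative):

    <img src="cat.jpg" alt="A cute kitten" data-sm-content="cat" />

The value of data-sm-content ("cat" here) is the object_id to pass to @attendObject.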

start_time: float

The number of seconds of delay before the Digital Person looks towards the on-screen element. This delay is relative to the point in speech at which the speech marker is reached.

default: 0 (zero seconds, no delay)

duration: float

The number of seconds that the Digital Person should continue looking at the on-screen element.

default: 1 (one second)
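
For example, a corpus line combining the optional parameters (the object_id cat follows the earlier examples; the timing values are illustrative):

    Here is a picture of my cat, @showcards(cat) her @attendObject(cat,0.5,2) name is Calliope.

Here the glance begins half a second after the @attendObject marker is reached and is held for two seconds.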

Gesturing at content

@gestureObject

@gestureObject([object_id: str], [start_time(optional): float], [duration(optional): float])

object_id: string
(required)

The id of the on-screen element that the Digital Person should gesture towards.

For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker.

For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness].

start_time: float

The number of seconds of delay before the Digital Person gestures towards the on-screen element. This delay is relative to the point in speech at which the speech marker is reached.

default: 0 (zero seconds, no delay)

duration: float

The number of seconds that the Digital Person should continue gesturing towards the on-screen element.

default: 1 (one second)
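
For example (the object_id cat follows the earlier examples; the timing values are illustrative):

    Here is a picture of my cat, @showcards(cat) her @gestureObject(cat,0,2) name is Calliope.

Here the arm gesture towards the card begins as soon as the marker is reached and is held for two seconds.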

@GestureObjectBothSides

@gestureObjectBothSides([object_id_1: str],[object_id_2: str], [start_time(optional): float], [duration(optional): float])

The developer must make sure that objects are placed on both sides of the Digital Person, in the blue zones shown above, as the Digital Person will always perform the gesture.

object_id_1: string
(required)

object_id_2: string
(required)

The ids of the two on-screen elements that the Digital Person should gesture towards, one on each side.

For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker.

For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness].

start_time: float

The number of seconds of delay before the Digital Person gestures towards the on-screen elements. This delay is relative to the point in speech at which the speech marker is reached.

default: 0 (zero seconds, no delay)

duration: float

The number of seconds that the Digital Person should continue gesturing towards the on-screen elements.

default: 1 (one second)
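
For example (the object ids dog and cat are illustrative, and assume the two cards are placed in the blue zones on opposite sides of the Digital Person):

    Here are my two pets: @showcards(dog) @showcards(cat) my @gestureObjectBothSides(dog,cat,0,2) dog and my cat Calliope.

Here the Digital Person gestures towards both cards at once, starting immediately and holding the gesture for two seconds.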

Pointing at content

The Digital Person will perform pointing with either their right or left hand depending on the content placement with respect to the Digital Person. They will not reach across their own body.

Pointing has the highest priority in the animation system, overriding Real-Time Gesturing and Behavior Styles. This feature is enabled via commands in the corpus. You can choose the hand shape the Digital Person uses when pointing at content:

  • Finger point @PointObject(x)

  • Palm half up @PointObjectPalmUp(x)

Using pointing effectively

  1. To use pointing effectively, put the @showcards command into the corpus first, then add the pointing command at least three words later.

    We can help you find the perfect present, whether @showcards(x) it’s a fun @PointObject(x) game for children or the latest novel to take on a beach holiday with friends.
  2. To use multiple pointing commands close together, separate individual pointing commands by a minimum of three words. In the example below, the two pointing commands are separated by six words.

    Example: We can help you find the perfect present, whether @showcards(x) it’s a fun @PointObject(x) game for children  @showcards(x)  or the latest  @PointObject(x)  novel to take on a beach holiday with friends.
  3. To use both pointing and gesturing at content in the same sentence, gesture commands must be inserted manually. This is because using a pointing command in a sentence disables the otherwise automatic insertion of a gesture for additional content.

    Example: We can help you find the perfect present, whether @showcards(x) it’s a fun @PointObject(x) game for children  @showcards(x)  or the latest  @GestureObject(x)  novel to take on a beach holiday with friends.

@PointObject

@PointObject([object_id: str], [start_time(optional): float], [duration(optional): float])

object_id: string
(required)

The id of the on-screen element that the Digital Person should point at. By default, the target of pointing is the center of a content box. If an object is specified with UV coordinates, for example:

"component": "image",
"meta": {
  "choiceA-location": {
    "v": 0.9,
    "u": 0.9
  }
},

then the pointing target will be the specified UV coordinate. A UV coordinate is a local coordinate relative to the content box, with u and v each ranging from 0 to 1; for example, u = 0.9 and v = 0.9 targets a point near the bottom-right corner of the box.

For the Default UI this is supported for content blocks, and should use the same object_id as would be used for the @showcards() speech marker.

For a custom UI this is supported for any element marked up with the data-sm-content="objectId" HTML attribute. See [WebSDK / Content Awareness].

start_time: float

The number of seconds of delay before the Digital Person points at the on-screen element. This delay is relative to the point in speech at which the speech marker is reached.

default: 0 (zero seconds, no delay)

duration: float

The number of seconds that the Digital Person should continue pointing at the on-screen element.

default: 1 (one second)
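
For example (the object_id cat follows the earlier examples; the timing values are illustrative):

    Here is a picture of my cat, @showcards(cat) her @PointObject(cat,0,3) name is Calliope.

Here the pointing gesture begins as soon as the marker is reached and is held for three seconds.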

@PointObjectPalmUp

@PointObjectPalmUp([object_id: str], [start_time(optional): float], [duration(optional): float])

The Digital Person will point in the same way as for the @PointObject() command, but with a different pointing style: the palm is turned halfway up and fully open. See the parameter description for the @PointObject() command.

In the following examples, both the @PointObject and @PointObjectPalmUp commands are used in the corpus.

Pointing without coordinates

Here is a multi-card response @showcards(choiceA,choiceB) I can point at the first card @PointObject(choiceA). Also, I can gesture at the second card @GestureObject(choiceB).
pointing-short.mp4

Sample payload for cards:

{
  "soulmachines": {
    "choiceB": {
      "component": "image",
      "position": "left",
      "data": {
        "url": "https://placekitten.com/300/300",
        "alt": "A cute kitten"
      }
    },
    "choiceA": {
      "position": "right",
      "component": "image",
      "data": {
        "url": "https://placekitten.com/300/300",
        "alt": "A cute kitten"
      }
    }
  }
}

Pointing with coordinates

@showcards(choiceA)@showcards(choiceB)This is an example of pointing using two pointing styles. I can point @PointObjectPalmUp(choiceB,0,3) with my right arm palm up. I can point @PointObjectPalmUp(choiceA,0,3) with my left arm palm up.  I can use my finger to point at the bottom left corner  @PointObject(choiceA-child-1,0,3) and bottom right corner @PointObject(choiceA-child-2,2.1,3) of the card.
pointing.mp4

Sample payload for cards:

{
  "soulmachines": {
    "choiceB": {
      "component": "image",
      "position": "left",
      "data": {
        "url": "https://placekitten.com/300/300",
        "alt": "A cute kitten"
      }
    },
    "choiceA": {
      "meta": {
        "choiceA-child-1": {
          "u": 0,
          "v": 0.9
        },
        "choiceA-child-2": {
          "u": 0.9,
          "v": 0.9
        }
      },
      "position": "right",
      "component": "image",
      "data": {
        "url": "https://placekitten.com/300/300",
        "alt": "A cute kitten"
      }
    }
  }
}

Content awareness is available for Digital People who support Real-Time Gesturing or Behavior Styles, and requires a UI that supports Content Awareness.
