
This section describes the options available in Digital DNA Studio that allow you to control and fine-tune the behavior of your Digital Person.

Behavior Adjustments

Note: These used to be called Personality Tags. They are now in the Behavior Adjustments section, as Sentence-Based and Word-Based tags.

Behavior adjustments using tags enable you to add a limited set of facial expressions and gestures to selected words, phrases, and sentences in your digital person's speech. When Real-Time Gesturing is unavailable (due to language choice or Human OS version), Behavior Tags are the primary tool for adding appropriate head and neck gesturing and emotional expressions to your digital person's behavior.

When Real-Time Gesturing is available, Behavior Tags can be used to adjust the autonomous behavior of the digital person:

  • You can use Behavior Tags to override Real-Time Gesturing for specific sentences or words spoken by your digital person.

  • You can use Behavior Tags to replace the autonomous behavior with something more appropriate to your use case. For example, you might add brand and product-related words to Smiling Gesture (Word-Based), as sketched after this list.

  • You can also use Neutral Long (Sentence-Based) and Neutral Short (Word-Based) to define sentences and words that temporarily deactivate Real-Time Gesturing entirely.
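
For a rough, purely illustrative picture of how these tags relate to your content, the sketch below shows the kind of tag-to-text mapping the settings above describe. Behavior Adjustments are configured in the Digital DNA Studio UI, not in code, and the structure, brand names, and phrases here are hypothetical.

```typescript
// Conceptual illustration only: Behavior Adjustments are configured in the
// Digital DNA Studio UI, not in code. The structure, brand names, and phrases
// below are hypothetical, chosen to mirror the examples in this section.
const behaviorAdjustments = {
  sentenceBased: {
    "Neutral Long": [
      "Let me read you the full terms and conditions.", // suppress gesturing for this sentence
    ],
  },
  wordBased: {
    "Smiling Gesture": ["Acme", "AcmeCloud"], // hypothetical brand and product names
    "Neutral Short": ["liability"],           // keep the face neutral on this word
  },
};

// Words tagged with Smiling Gesture (Word-Based):
console.log(behaviorAdjustments.wordBased["Smiling Gesture"]); // ["Acme", "AcmeCloud"]
```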

Other Behavioral Settings

Boost Expressiveness with additional Iconic Gestures toggle

When enabled, the digital person will automatically insert iconic gestures into the conversation. Iconic gestures such as a thumbs-up, a heart sign, and a wave were previously only possible if they were manually added to conversations using Gesture Markup. Note that these gestures are not triggered by a single word, but by the context of the conversation.

To see some of these gestures in action in preview mode, ask your digital person to repeat some of the following phrases:

  • After so much effort, seeing the project come to life is amazing.

  • You're doing an amazing job!

  • It is best to greet elders with a respectful bow.

  • I'm not quite finished yet.

  • I can't believe you just said that.

Note: to see this feature in preview mode, it is best to select the widest static camera view.

Be sure to click Update Preview after enabling this toggle to see these gestures.

Note: this feature is currently in beta and only works on the latest version of Human OS and in the English language.

My Digital Person should greet me at start

This option makes your Digital Person greet users at startup, providing a verbal cue that the interaction has commenced. This setting is enabled by default. If this option is turned off, the Digital Person will wait for the user to initiate the conversation.

End session after x minutes of inactivity

This option makes your Digital Person automatically end the conversation session after a specified period of user inactivity, e.g. after 5 minutes. This setting is enabled by default.
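
For intuition only, here is a minimal sketch of the kind of inactivity timer this setting describes: the countdown restarts on every user turn and the session ends when the limit elapses. This is not the Digital Person runtime; the 5-minute value simply mirrors the example above.

```typescript
// Minimal sketch of an inactivity timeout. Not the Digital Person runtime.
const INACTIVITY_LIMIT_MS = 5 * 60 * 1000; // e.g. 5 minutes, as in the example above

let inactivityTimer: ReturnType<typeof setTimeout> | undefined;

function endSession(): void {
  console.log("Ending the session after a period of user inactivity.");
}

// Call this on every user turn (speech or typed input) to restart the countdown.
function onUserActivity(): void {
  clearTimeout(inactivityTimer);
  inactivityTimer = setTimeout(endSession, INACTIVITY_LIMIT_MS);
}

onUserActivity(); // start the countdown when the session begins
```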

Empathetic Reaction to User Facial Expression

Digital People can track a user's face and detect, analyze, and respond to the user's emotional expressions in real time.

React to negative facial expressions toggle

When enabled, this toggle allows the digital person to react to both positive and negative expressions on users' faces. When disabled, the digital person will only react to positive facial expressions. The toggle is enabled by default.

When using Real-Time Gesturing and Behavior Styles, each behavior style has its own behavior-appropriate reactive style.

Note: This feature is supported for all languages on a Digital Person running Human OS 2.3+.

Note: this feature does not disable the digital person's ability to perform emotionally appropriate behavior when delivering negative speech, whether through Real-Time Gesturing or through added Behavior Tags.

Empathetic Reaction to User Speech

Digital People can analyze the emotional content of the words a user says to them and produce an emotionally appropriate facial performance in reaction to those words, in real time.

Digital People can express a full range of emotions in reaction to emotional user speech. When using Real-Time Gesturing and Behavior Styles, each behavior style has its own behavior-appropriate reactive style.

Note: This feature is supported in English and Japanese on a Digital Person running Human OS 2.3+, and in Korean on a Digital Person running Human OS 2.4+.

React to negative speech toggle

This toggle allows the digital person to react to both positive and negative user speech. When disabled, the digital person will only react to positive user speech. The toggle is enabled by default.

“React to negative speech” turned on (enabled): notice how she responds emotionally to both the negative sentences and the positive sentences.

Video: react-to-negative-speech-on.mp4

“React to negative speech” turned off (disabled): notice how she does not respond emotionally to the negative sentences, but remains neutral.

Video: react-to-negative-speech-off.mp4
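
To summarize how both of the “React to negative …” toggles gate empathetic reactions, here is a rough sketch of the logic described in the two sections above. It is purely illustrative (not the Digital Person runtime or any Soul Machines API) and assumes a hypothetical sentiment score between -1 and 1 produced by face or speech analysis.

```typescript
// Purely illustrative: not the Digital Person runtime or any Soul Machines API.
// Assumes a hypothetical sentiment score in [-1, 1] from face or speech analysis.
type Reaction = "positive" | "negative" | "neutral";

function pickReaction(sentiment: number, reactToNegative: boolean): Reaction {
  if (sentiment > 0.2) return "positive"; // positive input always gets a warm reaction
  if (sentiment < -0.2 && reactToNegative) return "negative";
  return "neutral"; // with the toggle off, negative input is met with a neutral performance
}

console.log(pickReaction(-0.8, true));  // "negative" (toggle enabled)
console.log(pickReaction(-0.8, false)); // "neutral"  (toggle disabled)
console.log(pickReaction(0.9, false));  // "positive" (unaffected by the toggle)
```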
