Human OS
This page lists the release notes for Soul Machines' Human OS platform.
The Soul Machines Human OS Platform features a patented Digital Brain that makes the benefits of human and machine collaboration possible. Human OS autonomously animates Digital People, combining the quality of hyper-realistic CGI with a fully autonomous, fully animated digital character.
Please visit our website: http://soulmachines.com to find out more about our solutions and services. You can also contact our Customer Success Team if you want to know more about how we can help you in your digital transformation journey.
Release Version 2.6 | 29 May 2023
FEATURE
New Thinking Behaviors
We are excited to introduce new variations of thinking behaviors, such as re-engaging eye contact when answering and changing facial expressions during 'thinking'. With these enhancements, interactions will feel smoother and more natural, providing a seamless conversational experience and effectively minimizing the perceived latency in conversation.
New BowHandsPressedTogether hashtag gestures
We have introduced new hashtag gestures for Digital People running on Human OS 2.6+, available in all languages:
- #BowHandsPressedTogether (#Wai) - Bow Hands Pressed Together gesture
- #BowHandsPressedTogetherDeep (#WaiDeep) - Bow Hands Pressed Together Deep gesture
Note that #Wai and #WaiDeep are Thai-specific names for #BowHandsPressedTogether and #BowHandsPressedTogetherDeep, respectively. Despite the different names, they trigger the same animations.
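As an illustration, a hashtag gesture is placed directly in a response at the point where you want the gesture to play. The wording below is a hypothetical example, not taken from our documentation:

```
Thank you for visiting us today. #BowHandsPressedTogether We hope to see you again soon.
```

The hashtag itself is not spoken, and #Wai can be used in its place to trigger the same animation.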
BUG FIXES
The problems with lip jitter after pauses, while holding breath, and at the start of sentences have been resolved.
Real-Time Gesturing now continues when a sentence includes a hand and body hashtag. This means you can pop a #HeartSign in and you’ll still see autonomous arm gesturing before and after it.
Release Version 2.5 | 23 January 2023
FEATURE
Gesture Markup
In addition to the Real-Time Gesturing, Behavior Styles and Behavior Tags features in DDNA, your Digital Person’s behavior can now be adjusted by inserting hand and body gesture markup directly into your corpus.
This is a fantastic way to add further fine-tuning to your Digital Person should you require it, or add a fun pop of personality like a heart sign or wave to the perfect moment in your Digital Person’s conversation. Just another great way to bring your Digital Person to life!
We have also included a list of head and face gesture markup for all clients, especially designed to enable clients with older versions of Human OS, or languages not currently supporting Real-Time Gesturing, to build great experiences.
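As a hypothetical illustration (the response wording is ours, not from the product documentation), a hand or body gesture such as the #HeartSign mentioned above can be written directly into a response at the moment you want the gesture to land:

```
Congratulations on your anniversary! #HeartSign We have put together a special offer just for you.
```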
Release Version 2.5 | 14 December 2022
FEATURE
Continuous Gesturing
We are excited to announce that our digital people are now able to perform arm gestures continuously. Previously, they returned to the 'arms by their sides' pose between each gesture. While they will continue to return to this pose in between sentences, within sentences arm gestures now blend from one to the other, allowing strings of gestures to be performed and much more communicative and lively gesturing.
UPDATE
Larger Smiles for Elated and Bubbly
We have added larger, open-mouthed smiles to these two super-happy behavior styles, to give them an even bigger boost of positivity. You can see them at the end of the video below.
Release Version 2.5 | 28 November 2022
UPDATE
Improvements to facial emotional analysis
We have improved the accuracy of our detection and classification of facial expressions. Our system is now a composite driven by research-based rules and deep learning techniques. The improvements mean that our Digital People respond to users' facial expressions with a higher degree of accuracy. The improved accuracy also contributes to the overall value of our multi-modal emotional analysis, accessed via the Insights API.
Release Version 2.4 | 14 November 2022
FEATURE
Interrupt your Digital Person by holding up your hand in a 'stop talking' gesture.
You are now able to interrupt a digital person using a hand gesture. The user does this by holding up their hand in a 'stop' gesture. The feature can be enabled and disabled directly in your conversational corpus using simple high-level commands: #EnableStopTalkingGesture and #DisableStopTalkingGesture.
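A minimal sketch of how these commands could sit in your corpus, switching the gesture detection on before a long explanation and off again afterwards (the surrounding wording is illustrative):

```
#EnableStopTalkingGesture I'm going to walk you through the full returns policy now. Hold up your hand at any time if you'd like me to stop.

#DisableStopTalkingGesture That covers everything. Is there anything else I can help you with?
```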
Release Version 2.4 | 31 October 2022
FEATURE
Elated, an all-new Behavior Style for English, Japanese and Korean.
Elated is a new Behavior Style designed with a very high degree of internal happiness. Elated will give a positive boost to your hardworking digital people in roles or industries that require them to be happy and upbeat most of the time. Like all Behavior Styles, Elated will behave appropriately and deliver dialogue with sensitivity. For example, if given something sad to say, they will reflect that in their behavior while speaking, before returning to their truly sunny nature.
BUG FIXES
Fix for abrupt pointing behavior when using Cinematic or Responsive Cameras.
A bug could occur when gesturing at content overlapped with Cinematic or Responsive camera cuts, creating confusion about where the target was and resulting in 'choppy' or 'karate-like' behavior. This bug has now been addressed.
Release Version 2.4 | 17 October 2022
UPDATED
Content Awareness
We’ve added a new dialog command: @showcardsNoGesture. This command disables our autonomous content awareness behavior, meaning the Digital Person will not glance, gesture, or point at onscreen content. See our documentation on using commands to optimize your conversation, and on making the most of content awareness for developers, conversation designers, and engineers.
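For example, assuming the card-name argument style of the existing @showcards command (the card name openingHours is hypothetical), a response can surface content without any accompanying glance or gesture:

```
@showcardsNoGesture(openingHours) Our opening hours are on screen now. Take your time, and let me know when you're ready to continue.
```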
Release Version 2.4 | 5 October 2022
UPDATED
Content Awareness
Content awareness is Digital People directing the attention of users to onscreen content, with glances, hand gestures, and pointing. Our content awareness behavior has been upgraded to work with the Web Widget.
BUG FIXES
Content awareness in the Web Widget
When content is populated in the browser outside of the widget, the digital person’s glance is now directed at the content correctly.
Release Version 2.4 | September 2022
BUG FIXES
Japanese Behavior Styles
Fix to ensure both Elegant and Dramatic styles have correct character files.
Abrupt Gesturing
Digital People now smoothly transition between gestures again. The bug was introduced in release 8b.
Shock or Surprise reaction
Fix to ensure Digital People do not register shock or surprise in response to changes in their environment, or on startup. This was an intermittent bug.
Release Version 2.4 | July 2022
UPDATED
We have updated our digital people’s back-channeling behavior to include a range of thinking behaviors.
Release Version 2.4 | June 2022
Bug Fixes
Neck and gaze behavior bug fix
Correction to head and neck behavior when gesturing and pointing at content in custom UIs. This includes a minor improvement to gaze behavior overall.
KNOWN BUGS
Pointing behaviour
If pointing has been utilized in your existing conversational experience, we recommend not upgrading at this time. There is a known bug causing the appearance of popping in the animation. The bug fix will be included in our upcoming release.
Release Version 2.4 | May 2022
Feature
Support for Korean language speaking Digital People
Real-Time Gesturing
With Real-Time Gesturing enabled, your Korean-speaking digital person analyzes what they’re saying via Natural Language Processing and adds emotionally appropriate gesturing and behavior to their speech in real time.
Behavior Styles
Selecting a Behavior Style in Digital DNA Studio allows you to effortlessly match the gestural and emotional behavior of your Korean-speaking digital person with the persona you’ve created and your use case. Behavior Styles affect the overall ‘feel’ of the character and empathy of your digital person. Real-Time Gesturing is a prerequisite for this feature. A selection of eight behavior styles has been enabled for Korean.
Responding Empathetically to User Speech
Korean-speaking digital people can now analyze the emotional content of the words a user says to them, and produce an emotionally appropriate facial performance in reaction to those words, in real-time. Learn more.
Toggle: React to Negative Speech
With this toggle you can allow your Korean-speaking digital person to respond to the full range of detected emotions in user speech, or you can choose to allow your digital person to respond only to positive speech. Learn more.
Updated
Behavior Styles for Japanese speaking Digital People expanded
An expanded selection of eight Behavior Styles has been enabled for Japanese, and the style selected by default has been renamed Elegant. This update is compatible with digital people running Human OS 2.3. Learn more.
Release Version 2.3 | April 2022
Feature
Behavior Styles
Selecting a Behavior Style in Digital DNA Studio allows you to effortlessly match the gestural and emotional behavior of your digital person with the persona you’ve created and your use case. Behavior Styles affect the overall ‘feel’ of the character and empathy of your digital person. Real-Time Gesturing is a prerequisite for this feature. Available in English and Japanese.
Real-Time Gesturing for Japanese
With Real-Time Gesturing enabled, your Japanese-speaking digital person analyzes what they’re saying via Natural Language Processing and adds emotionally appropriate gesturing and behavior to their speech in real time. Real-Time Gesturing must be enabled for Behavior Styles to work.
Responding Empathetically to User Speech
Digital people can now analyse the emotional content of the words a user says to them, and produce an emotionally appropriate facial performance in reaction to those words, in real-time. Learn more.
Toggles: React to Negative Facial Expressions and Negative Speech
With these two toggles, you can allow your digital person to respond to the full range of detected emotions in user facial expressions and user speech, or you can choose to allow your digital person to respond only to positive facial expressions and speech.
A toggle to enable/disable responding to negative user facial expressions
A toggle to enable/disable responding to negative user speech
Behavior Tags
We have renamed the Personality Tags feature, to Behavior Tags. You will find them in the Behavior Adjustments section. We have updated the documentation around how to use them, with and without Real-Time Gesturing.
Updated
In the Camera Behavior section, under Customise your Interaction, we have updated the Default Camera frame to fully show the badge, and allow plenty of space above the digital people’s heads. A preview of the new frame is below.
Bug Fixes
We have resolved an issue where the digital person’s gaze would ‘overshoot’ when performing autonomous gesturing at content.
Release Version 2.2 | September 2021
Feature
Real-time Gesturing v2
Our last release of Real-Time Gesturing made the lion's share of the gesturing and behavior tuning process autonomous. Customers simply needed to focus on training their domain-specific phrases. Gesture selection became more accurate, and emotional expression was optimized to meet our customers' needs.
With 2.2, we have expanded this system to add dozens of new autonomous gestures that make the interaction feel even more life-like. By incorporating speaking gestures, attention guiding gestures, and gestures that are symbolic of the concepts that your Digital Person is trying to share, we’ve created the most life-like autonomous body gesturing system on the market.
And now, for the first time, Digital People can use their hands to guide end-user attention towards the content on the screen!
Content Awareness
Our customers love to use Digital People to emphasize content and create deeper engagement with it. To help them build those deeper connections, we’ve made Digital People content-aware.
With Release Version 2.2 Digital People are aware of the digital content around them on your webpage or in your UI. As that content appears, they’ll guide end-user attention towards the content by glancing and gesturing towards it, autonomously! You can also trigger gestures specifically towards the content, easily.
For customers that are using the Auckland theme, this is available out of the box.
For customers using a custom UI, we have also built new functionality into the SDK API allowing developers to easily make their custom UIs enabled for content awareness.
All-New Camera Behaviour
The film, television, and content creation industries have long used changes in perspective to enhance a story and keep viewers engaged.
Now, your Digital People can do the same thing! You can now manually, or autonomously, adjust the framing and perspective of the camera to create the high level of engagement that content creators use to maintain attention and increase understanding.
The Responsive Camera is dynamic, automatically cutting to a different framing to introduce content cards in more compelling ways. This option is great for content-rich interactions.
The Cinematic Camera has all of the features of the Responsive Camera. It will also dynamically change the framing of the Digital Person during extended periods of conversation (no content).
There are 3 camera framings included in the set of cuts: The standard ‘head & shoulders’ shot, a closeup shot for focused attention, and a wide shot to show off autonomous gesturing and body movement.
For customers that are using the Auckland theme, this is available out of the box.
We wanted to make sure everyone could have a Digital Person who is content-aware, so for customers using a custom UI, we are also releasing a new API via our SDK that allows developers to mark up the content they care about and easily enable content awareness in their custom UIs. We also have developer resources to help you set the optimal framing for your cameras. Manual adjustment of camera behavior is available in our SDK for customers using custom UIs only.
All-New Digital Personality
We’ve always been told that it’s not just about “what” you say, but also “how” you say it. How we gesticulate, how open our gestures and expressions are, and how animated we are all contribute to building trust and driving engagement. Alongside the updates to Real-time Gesturing, we’re also releasing our first Digital Personality.
This personality is tuned to be expressive, engaging, and polite. We really think it’s going to open the door to brand-new use cases and exciting new experiences.
Active Listening
Real-time gesturing is already the most advanced autonomous animation system available. It can pick up on the nuance of what you want your Digital Person to say while also animating appropriately. There is also a simple set of gestures that can be used to train your Digital Person to appropriately animate brand-specific words and phrases. But there is another essential component to communicating, and that is listening.
Now, your Digital People are capable of active listening and non-verbal communication while listening to an end-user, showing them that they are being heard and understood. This will drive trust and connection, and solve an age-old problem in communication where your customer “doesn’t feel heard.” Now, your Digital People will nod reassuringly to create empathy as end-users derive utility from their interactions.
Expanding our Integrated AI Ecosystem
No code, no fuss: Soul Machines now natively supports any conversation built in Dialogflow CX.
We have also added new voices: Microsoft Neural Voices, Google Wavenet Voices, and Amazon Neural Voices.
New Stock Digital People
With this release, we are adding 4 new stock digital people, all running Human OS 2.2 and all auto-built via the new platform.
| Rylie | Rua | Anele | Tanaka |
| --- | --- | --- | --- |
| Female | Female | Female | Female |
+ Viola and Gabriella are already available
Release Version 2.0 | June 2021
New Features and Improvements
The following new features and improvements have been added to Human OS platform version 2.0:
1. Real-time Gesturing
Prior to Human OS 2.0, Digital People had to be trained through simple gestures and expressions. End users were required to configure this training through "Personality Tags". Now, thanks to the advancements of Human OS 2.0, Personality Tags are only required to add nuance to your brand- and use case-specific terms and phrases.
With the introduction of Real-time Gesturing, Digital People can now analyze the inputs from their changing environment to dynamically change how they deliver a message:
As the content changes, a DP will analyze what it’s saying via Natural Language Processing, and add emotionally appropriate gesturing and behavior to its speech and expressions in real time.
If they're discussing a problem with a user and their speech expresses concern for the user’s problem, then their behavior will autonomously express that concern too.
2. Improved Eye Contact
We’re constantly discovering new ways to deepen the connection between the Digital Person and the user. Eye contact is a critical part of creating that lifelike connection. Now, with improved eye contact, our Digital People will build a deeper trust with everyone they talk to.
3. Enhanced Expressions
The range of expressiveness for our Digital People is constantly expanding to be more lifelike. Now, each of our Digital People can respond to a user’s changing emotions and express a higher fidelity of moods and expressions to allow more depth and responsiveness.
4. Expanding our Integrated AI Ecosystem
No code, no fuss: Soul Machines now natively supports any conversation built in Microsoft LUIS.
We have also added some of the richest voices on the market: Microsoft Neural Voices and Amazon Neural Voices. SoundHound TTS is now natively supported on our platform as well.
5. Native Mobile App Support (iOS & Android)
Bring your Digital Person to your native mobile applications! Now with an iOS and Android SDK, you can extend the reach of your DP to provide your customers with a delightful omni-channel experience.
Reach out to your Customer Success Manager for more information.
Key Bug Fixes
Enabled initialization message for orchestration conversation.
Enabled orchestration conversation to work with "control via browser".
Release Version 1.7 | July 2020
The enhancements in this release are focused on providing a more lively and positive experience to users by improving the natural demeanor of the digital people.
The following features and improvements have been added to Human OS platform version 1.7:
1. Personality configuration upgrade
We have improved the natural behavior of the digital people to deliver a more positive experience with stronger smiles and more liveliness to expressions.
2. Better expression of emotions
Digital people now express their emotions even better and react to users in a friendly manner.
3. Improved understanding of users' expressions
We have boosted our expression detection system to allow our digital people to behave with increased empathy and better understand the feelings of users.
Release Version 0.8 | February 2020
The following features and improvements were added to Human OS platform version 0.8:
1. Improved privacy with Watson speech-to-text
Watson speech-to-text: By default we opt-out from “Watson STT learning” to improve privacy for all of our Digital People.
2. Your Digital Person can use voices and speech-to-text from Amazon
Amazon Polly and Amazon Transcribe: You can now have your Digital People use Amazon services for speech-to-text as well as text-to-speech with neural voice support and custom vocabulary support.
3. If you want better camera synchronisation with UI
Camera state updates: Camera animation information is now included in state updates, allowing the web user interface to be synchronized with camera movement.
4. Upgrade of our powerful emotion detection
Emotion detection: Our emotion detection has improved to give more reliable and performant visual emotion detection, improving empathetic reactions and reflection analytics.
5. For even more natural animation
Animation optimisation: This is the result of our continuous focus on making all digital people’s animations more natural.
Release Version 0.7 | October 2019
The following features and improvements were added to Human OS platform version 0.7:
1. Your digital people can now speak Mandarin
Mandarin text-to-speech: Our Mandarin TTS with natural lip-sync animation is now complete, giving digital people the ability to speak in this language.
2. Digital People can now welcome your users
Welcome message support: You can now have your digital people initiate a conversation with your customers to engage them right away instead of waiting for them to speak first.
3. If you want even more natural animation
Animation optimisation: This is the result of our continuous focus on making all digital people’s animations more natural. We focused on improving idle movements and smiling.
4. You can further upgrade our powerful emotion detection
Emotion detection: Our systems have improved to give more reliable and performant visual emotion detection, improving empathetic reactions and reflection analytics.
5. We now support Dialogflow API V2
Dialogflow support: We have added support for Google’s Dialogflow API version 2 as Google plans to retire the original version of the API on October 23, 2019.
Are you experiencing any issues?
As always, a number of bug fixes have been provided to improve the quality of digital people.
Some digital people may encounter issues with EQ data not being submitted to the EQ Dashboard system. If you encounter any discrepancies, please contact your Solution Architect.
If you would like to find out more about these features, please contact your Solution Architect.
FAQ
How can I get the new version of Human OS? For our clients, the update will be managed under our standard maintenance process. If you have any questions, please follow up with your Soul Machines solutions architect.
What are the bug fixes? No software development, even with the help of artificial intelligence, comes without bugs. These usually have a minor impact, and we tackle them as part of our quality assurance process. For any bug fixes with a major impact or of specific interest to a client, we will contact you directly.