- Created by Jon Borromeo (Deactivated), last modified by Chitra Borker (Deactivated) on Apr 13, 2022
This page lists the release notes for Soul Machines' Human OS platform.
The Soul Machines Human OS Platform features a patented Digital Brain that makes it possible to deliver the benefits of human and machine collaboration. Its autonomous animation combines the quality of hyper-realistic CGI with a fully autonomous, fully animated digital character.
Please visit our website: http://soulmachines.com to find out more about our solutions and services. You can also contact our Customer Success Team if you want to know more about how we can help you in your digital transformation journey.
Release Version 2.3 | April 2022
Behavior Styles
Selecting a Behavior Style in Digital DNA Studio allows you to effortlessly match the gestural and emotional behavior of your digital person with the persona you’ve created and your use-case. Behavior Styles affect the overall ‘feel’ of the character and empathy of your digital person. Real-Time Gesturing is a prerequisite for this feature. Available for English and Japanese.
Real-Time Gesturing for Japanese
With Real-Time Gesturing enabled, your Japanese-speaking digital person analyzes what it’s saying via Natural Language Processing and adds emotionally appropriate gesturing and behavior to its speech in real time. Real-Time Gesturing must be enabled for Behavior Styles to work.
Responding Empathetically to User Speech
Digital people can now analyse the emotional content of the words a user says to them, and produce an emotionally appropriate facial performance in reaction to those words, in real time.
Toggles: React to Negative Facial Expressions and Negative Speech
With these two toggles you can allow your digital person to respond to the full range of detected emotions in user facial expressions and user speech, or you can choose to allow your digital person to respond only to positive facial expressions and speech:
A toggle to enable/disable responding to negative user facial expressions
A toggle to enable/disable responding to negative user speech
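The toggles above can be thought of as a filter on detected emotions. The sketch below is illustrative only: the emotion labels and function names are invented for this example and are not the actual Soul Machines configuration schema.

```python
# Hypothetical emotion categories; the real detection system may use different labels.
POSITIVE = {"joy", "surprise"}
NEGATIVE = {"anger", "sadness", "fear", "disgust"}

def emotions_to_respond_to(detected, react_to_negative):
    """Return the subset of detected emotions the digital person may react to.

    detected: set of emotion labels from face or speech analysis
    react_to_negative: the toggle; when False, negative emotions are filtered out
    """
    if react_to_negative:
        return detected & (POSITIVE | NEGATIVE)
    return detected & POSITIVE

# With the toggle off, only positive signals pass through:
print(emotions_to_respond_to({"joy", "anger"}, react_to_negative=False))
```

In this sketch, each toggle would be applied independently: one instance of the filter for facial-expression input, another for speech input.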
Behavior Tags
We have renamed the Personality Tags feature, to Behavior Tags. You will find them in the Behavior Adjustments section. We have updated the documentation around how to use them, with and without Real-Time Gesturing.
In the Camera Behavior section, under Customise your Interaction, we have updated the default camera frame to fully show the badge and to allow plenty of space above the digital person’s head.
We have resolved an issue where the digital person’s gaze would ‘overshoot’ when performing the autonomous gesturing at content.
Release Version 2.2 | September 2021
Real-time Gesturing v2
Our last release of Real-Time Gesturing made the lion's share of the gesturing and behavior tuning process autonomous. Customers simply needed to focus on training domain-specific phrases. Gesture selection became more accurate, and emotions were optimized to our customers' needs.
With 2.2, we have expanded this system to add dozens of new autonomous gestures that make the interaction feel even more life-like. By incorporating speaking gestures, attention guiding gestures, and gestures that are symbolic of the concepts that your Digital Person is trying to share, we’ve created the most life-like autonomous body gesturing system on the market.
And now, for the first time, Digital People can use their hands to guide end-user attention towards the content on the screen!
Content Awareness
Our customers love to use Digital People to emphasize and create deeper engagement with content. To further enable them to create deeper connections with their content, we’ve enabled Digital People to be content-aware.
With Release Version 2.2 Digital People are aware of the digital content around them on your webpage or in your UI. As that content appears, they’ll guide end-user attention towards the content by glancing and gesturing towards it, autonomously! You can also trigger gestures specifically towards the content, easily.
For customers that are using the Auckland theme, this is available out of the box.
For customers using a custom UI, we have also built new functionality into the SDK API allowing developers to easily make their custom UIs enabled for content awareness.
All-New Camera Behaviour
The film, television, and content creation industries have long used changes in perspective to enhance a story and keep viewers engaged.
Now, your Digital People can do the same thing! You can now manually, or autonomously, adjust the framing and perspective of the camera to create the high level of engagement that content creators use to maintain attention and increase understanding.
The Responsive Camera is dynamic, automatically cutting to a different framing to introduce content cards in more compelling ways. This option is great for content-rich interactions.
The Cinematic Camera has all of the features of the Responsive Camera. It will also dynamically change the framing of the Digital Person during extended stretches of conversation with no content on screen.
There are 3 camera framings included in the set of cuts: The standard ‘head & shoulders’ shot, a closeup shot for focused attention, and a wide shot to show off autonomous gesturing and body movement.
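The cut logic described above can be sketched roughly as follows. The framing names mirror the release notes, but the selection rules here are invented for illustration and are not the platform's actual algorithm.

```python
# The three framings named in the release notes.
FRAMINGS = ("head_and_shoulders", "closeup", "wide")

def choose_framing(content_on_screen, long_conversation_stretch, cinematic):
    """Pick a camera framing for the next cut (illustrative rules only)."""
    if content_on_screen:
        # Step back so gesturing toward the content card is visible.
        return "wide"
    if cinematic and long_conversation_stretch:
        # The Cinematic Camera also varies perspective during long conversation.
        return "closeup"
    # The standard default shot.
    return "head_and_shoulders"
```

The Responsive Camera would correspond to calling this with `cinematic=False`, so cuts happen only when content appears.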
For customers that are using the Auckland theme, this is available out of the box.
We wanted to make sure everyone could have a content-aware Digital Person, so for customers using a custom UI we are also releasing a new API via our SDK that allows developers to mark up the content they care about and enable content awareness in their custom UIs. We also provide developer resources to help you set the optimal framing for your cameras. Manual adjustment of camera behavior is available in our SDK for customers using custom UIs only.
All-New Digital Personality
We’ve always been told that it’s not just about “what” you say, but also “how” you say it. How we gesticulate, how open our gestures and expressions are, and how animated we are all contribute to building trust and driving engagement. Alongside the updates to Real-time Gesturing, we’re also releasing our first Digital Personality.
This personality is tuned to be expressive, engaging, and polite. We really think it’s going to open the door to brand new use cases and exciting new experiences.
Active Listening
Real-time gesturing is already the most advanced autonomous animation system available. It can pick up on the nuance of what you want your Digital Person to say while also animating appropriately. There is also a simple set of gestures that can be used to train your Digital Person to appropriately animate brand-specific words and phrases. But there is another essential component to communicating, and that is listening.
Now, your Digital People are capable of active listening and non-verbal communication while listening to an end-user to show them that they’re hearing and understanding them. This will drive trust, connection, and solve an age-old problem in communication, where your customer “doesn’t feel heard.” Now, your Digital People will nod reassuringly to create empathy as end-users derive utility from their interactions.
Expanding our Integrated AI Ecosystem
No code, no fuss: Soul Machines now natively supports any conversation built in Dialogflow CX.
We have also added new voices: Microsoft Neural Voices, Google WaveNet Voices, and Amazon Neural Voices.
New Stock Digital People
With this release, we are adding 4 new stock digital people, all running Human OS 2.2 and all auto-built via the new platform.
| Rylie | Rua | Anele | Tanaka |
| --- | --- | --- | --- |
| Female | Female | Female | Female |
+ Viola and Gabriella are already available
Release Version 2.0 | June 2021
New Features and Improvements
The following new features and improvements have been added to Human OS platform version 2.0:
1. Real-time Gesturing
Prior to Human OS 2.0, Digital People had to be trained with simple gestures and expressions, which you configured through "Personality Tags". Now, thanks to the advancements of Human OS 2.0, Personality Tags are only required to add nuance to your brand- and use case-specific terms and phrases.
With the introduction of Real-time Gesturing, Digital People can now analyze the inputs from their changing environment to dynamically change how they deliver a message:
As the content changes, a DP will analyze what it’s saying via Natural Language Processing, and add emotionally appropriate gesturing and behavior to their speech and expressions in real time.
If they're discussing a problem with a user, if their speech expresses concern for the user’s problem, then their behavior will autonomously express concern too.
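The idea above can be illustrated with a minimal sketch: classify the sentiment of an utterance, then attach a matching behavior. The keyword lists and behavior names below are invented for illustration; the platform uses full Natural Language Processing, not keyword matching.

```python
# Hypothetical cue words standing in for real NLP sentiment analysis.
CONCERN_WORDS = {"problem", "sorry", "unfortunately", "issue"}
DELIGHT_WORDS = {"great", "congratulations", "welcome", "thanks"}

def behavior_for_utterance(text):
    """Map an utterance to an emotionally appropriate behavior (illustrative)."""
    words = set(text.lower().split())
    if words & CONCERN_WORDS:
        return "express_concern"
    if words & DELIGHT_WORDS:
        return "express_delight"
    return "neutral"

print(behavior_for_utterance("Sorry to hear about your problem"))
```

In the real system this analysis runs in real time as the Digital Person speaks, so the gesturing and expression track the emotional content of each phrase.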
2. Improved Eye Contact
We’re constantly discovering new ways to deepen the connection between the Digital Person and our customer’s audience. Eye contact is a critical part of creating that lifelike connection. Now, with improved eye contact, our Digital People will build a deeper trust with everyone they talk to.
3. Enhanced Expressions
The range of expressiveness for our Digital People is constantly expanding to be more lifelike. Now, each of our Digital People can respond to a user’s changing emotions and express a higher fidelity of moods and expressions to allow more depth and responsiveness.
4. Expanding our Integrated AI Ecosystem
No code, no fuss: Soul Machines now natively supports any conversation built in Microsoft LUIS.
We have also added some of the richest voices on the market, Microsoft Neural Voices and Amazon Neural Voices, and SoundHound TTS is now natively supported on our platform.
5. Native Mobile App Support (iOS & Android)
Bring your Digital Person to your native mobile applications! Now with an iOS and Android SDK, you can extend the reach of your DP to provide your customers with a delightful omni-channel experience.
Reach out to your Customer Success Manager for more information.
Key Bug Fixes
Enabled initialization message for orchestration conversation.
Enabled orchestration conversation to work with "control via browser".
Release Version 1.7 | July 2020
The enhancements in this release are focused on providing a more lively and positive experience to users by improving the natural demeanor of the digital people.
The following features and improvements have been added to Human OS platform version 1.7:
1. Personality configuration upgrade
We have improved the natural behavior of the digital people to deliver a more positive experience with stronger smiles and more liveliness to expressions.
2. Better expression of emotions
Digital people learn to even better express their emotions and react to users in a friendly manner.
3. Improved understanding of users' expressions
We have boosted our expression detection system to allow our digital people to behave with increased empathy and better understand the feelings of users.
Release Version 0.8 | February 2020
The following features and improvements were added to Human OS platform version 0.8:
1. Improved privacy with Watson speech-to-text
Watson speech-to-text: By default we opt-out from “Watson STT learning” to improve privacy for all of our Digital People.
2. Your Digital Person can use voices and speech-to-text from Amazon
Amazon Polly and Amazon Transcribe: You can now have your Digital People use Amazon services for speech-to-text as well as text-to-speech with neural voice support and custom vocabulary support.
3. If you want better camera synchronisation with UI
Camera state updates: Camera animation information is now included in state updates, allowing the web user interface to be synchronised with camera movement.
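A UI might consume such a state update as sketched below. The field names (`camera`, `pan_x`, `zoom`) are hypothetical, invented to illustrate the idea of syncing a UI element with the reported camera animation; they are not the actual message schema.

```python
def on_state_update(state, ui):
    """Apply camera animation info from a state update to a UI model (illustrative)."""
    camera = state.get("camera")
    if camera:
        # Copy camera fields into the UI model, keeping old values as fallbacks.
        ui["pan_x"] = camera.get("pan_x", ui.get("pan_x", 0.0))
        ui["zoom"] = camera.get("zoom", ui.get("zoom", 1.0))
    return ui

ui = on_state_update({"camera": {"pan_x": 0.2, "zoom": 1.5}}, {})
```

Because camera information arrives in the same state-update stream as the rest of the session state, the UI can reposition overlays in step with camera movement rather than polling separately.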
4. Upgrade of our powerful emotion detection
Emotion detection: Our emotion detection has improved to give more reliable and performant visual emotion detection, improving empathetic reactions and reflection analytics.
5. For even more natural animation
Animation optimisation: This is the result of our continuous focus on making all digital people’s animations more natural.
Release Version 0.7 | October 2019
The following features and improvements were added to Human OS platform version 0.7:
1. Your digital people can now speak Mandarin
Mandarin text-to-speech: Our Mandarin TTS with natural lip-sync animation is now complete, giving digital people the ability to speak the language.
2. Digital People can now welcome your users
Welcome message support: You can now have your digital people initiate a conversation with your customers to engage them right away instead of waiting for them to speak first.
3. If you want even more natural animation
Animation optimisation: This is the result of our continuous focus on making all digital people’s animations more natural. We focused on improving idle movements and smiling.
4. You can further upgrade our powerful emotion detection
Emotion detection: Our systems have improved to give more reliable and performant visual emotion detection, improving empathetic reactions and reflection analytics.
5. We now support Dialogflow API V2
Dialogflow support: We have added support for Google’s Dialogflow API version 2 as Google plans to retire the original version of the API on October 23, 2019.
Are you experiencing any issues?
As always, a number of bug fixes have been delivered to improve the quality of digital people.
Some digital people may encounter some issues with EQ data not being submitted to the EQ Dashboard system. If you encounter any discrepancies, please contact your Solution Architect.
If you would like to find out more about these features, please contact your Solution Architect.
FAQ
How can I get the new version of Human OS? For our clients, the update will be managed under our standard maintenance process. If you have any questions, please follow up with your Soul Machines solutions architect.
What are the bug fixes? No software development, even with the help of artificial intelligence, comes without bugs. These usually have minor impact, and we tackle them as part of our quality assurance process. For any bug fixes with major impact or of specific interest to a client, we will contact you directly.