
Soul Machines uses a bi-directional WebSocket connection to send messages back and forth between our servers and your Orchestration Server. Each message is JSON encoded and sent as a WebSocket 'text' message over HTTPS. Each JSON encoded message includes a set of standard fields that identify the kind of message being communicated.

There are three kinds of messages: event, request and response. This document covers only the messages relevant to connecting to the NLP platform.
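
As a rough guide, the sketch below models these standard fields as a TypeScript interface, based on the example messages in this document; the field list is an assumption and individual messages may carry additional fields.

// Common envelope for messages exchanged over the WebSocket connection.
// The field list is inferred from the examples in this document; some
// messages may include additional fields.
interface SceneMessage {
  category: string;                        // e.g. "scene"
  kind: "event" | "request" | "response";  // the three kinds of messages
  name: string;                            // e.g. "recognizeResults", "startSpeaking"
  body: Record<string, unknown>;           // message-specific payload
}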

recognizeResults

Each time a user speaks to a Digital Hero, your STT service transcribes their utterance from audio to text. The output from your STT service for each utterance is a series of intermediate recognizeResults messages followed by one final recognizeResults message. These messages are sent from the Soul Machines servers to your Orchestration Server via the WebSocket connection.

Below is an example of a "final" recognizeResults message. Final messages can be identified by the "final" attribute being set to true. The transcript text from each final message must be sent to your NLP for a response; all other recognizeResults messages (i.e. where "final" is false) can be ignored.

{
  "category": "scene",
  "kind": "event",
  "name": "recognizeResults",
  "body": {
    "results": [{
      "alternatives": [{
        "confidence": 0.8000,
        "transcript": "tell me a joke"
      }],
      "final": true
    }],
    "status": 0
  }
}
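
As an illustration, the sketch below shows how a Node.js Orchestration Server might handle these messages. The 'ws' package and the queryNlp() and sendStartSpeaking() helpers are assumptions, used only to show the flow of filtering final results and forwarding the transcript to your NLP.

import WebSocket from "ws";

// Hypothetical helpers: queryNlp() sends the transcript to your NLP and
// resolves with the reply text; sendStartSpeaking() is covered in the
// next section.
declare function queryNlp(transcript: string): Promise<string>;
declare function sendStartSpeaking(ws: WebSocket, text: string): void;

function handleRecognizeResults(ws: WebSocket, raw: string): void {
  const message = JSON.parse(raw);
  if (message.kind !== "event" || message.name !== "recognizeResults") {
    return;
  }

  for (const result of message.body?.results ?? []) {
    // Only "final" results carry a transcript to send to your NLP;
    // intermediate results are ignored.
    if (!result.final) continue;
    const transcript = result.alternatives?.[0]?.transcript;
    if (transcript) {
      queryNlp(transcript).then((reply) => sendStartSpeaking(ws, reply));
    }
  }
}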

startSpeaking Request

To instruct your Digital Hero to speak a response from your NLP, send a startSpeaking command from your Orchestration Server via the WebSocket connection.

Below is an example of a startSpeaking command message.
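
The sketch below assumes the same envelope fields as the recognizeResults example above, with the response text for the Digital Hero carried in the body; the exact body fields are an assumption.

{
  "category": "scene",
  "kind": "request",
  "name": "startSpeaking",
  "body": {
    "text": "Why did the chicken cross the road? To get to the other side."
  }
}

In the handler sketched earlier, sendStartSpeaking() would serialize a message of this shape and write it to the WebSocket connection with ws.send().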
