Hi Team,
I am rather new to Unity and am planning to build an Android AR application using multiple SDKs. I will need TTS, STT, and lip sync. I plan to use the Vuforia SDK, the IBM Watson SDK for TTS, and a character model created in Adobe Fuse, together with SALSA LipSync and RandomEyes.
I want to know whether there will be any compatibility issues when using SALSA with a Fuse model, or SALSA together with the IBM Watson SDK. Please share your thoughts.
Thanks in advance.
Regards,
Krishna Chandran
As you can see, once the "play" variable is set to true, the procedure in the Update() method runs. This is where the welcome message is played. The "server listener" output marks the point where the speech has finished and the microphone is now active and listening. It then displays "[DEBUG] telling me a joke", which is the phrase recognized by the Speech to Text service; that result is then passed on to the Watson Assistant service. As I said, this is a good way to see the output of each step and to analyze the information in more detail. If you select a line in the DEBUG output, a small window at the bottom of the panel shows more in-depth information; this is really useful for reviewing the contents of the JSON messages that are passed back and forth.
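The flow described above can be sketched as a flag-gated Update() loop in Unity. This is only a minimal illustration of the pattern, not the actual Watson SDK sample code: the class name, the "play" flag, the StartListening method, and the log strings are all assumptions made for the example, and the real Speech to Text and Assistant calls from the IBM Watson Unity SDK are stood in for by comments.

```csharp
using UnityEngine;

public class AssistantFlowSketch : MonoBehaviour
{
    // Set to true elsewhere (e.g. once the Watson services have authenticated).
    private bool play = false;

    void Update()
    {
        // The procedure only runs once "play" has been set to true.
        if (play)
        {
            play = false;
            Debug.Log("[DEBUG] playing welcome message");
            // After the welcome speech finishes, activate the microphone.
            StartListening();
        }
    }

    private void StartListening()
    {
        Debug.Log("[DEBUG] server listener: microphone active");
        // In the real project, the Speech to Text service would invoke a
        // callback here with the recognized phrase (e.g. "tell me a joke"),
        // which is then sent on to the Watson Assistant service.
    }
}
```

Logging each stage this way, and inspecting the JSON in the Console's detail pane, makes it easy to see exactly where the STT → Assistant → TTS hand-offs happen.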