Microsoft Cognitive Speech SDK supporting multiple language translations
DevUser2048 · 0 Reputation points
Hi,

I'm using the Microsoft Cognitive Speech SDK Node package to do speech translation. For our requirement, we want to support translation into multiple target languages. This works when we add the target languages while creating the SpeechTranslationConfig instance.

But in our case, the target languages are decided later, based on which users join the discussion. So instead of recreating the SpeechTranslationConfig instance every time, can we add a target language after the instance is created?
import { SpeechTranslationConfig, TranslationRecognizer } from "microsoft-cognitiveservices-speech-sdk";

// Target languages are fixed here, at config-creation time.
const speechTranslationConfig = SpeechTranslationConfig.fromAuthorizationToken(token, region);
speechTranslationConfig.speechRecognitionLanguage = "source language";
speechTranslationConfig.addTargetLanguage('targetLocale1');
speechTranslationConfig.addTargetLanguage('targetLocale2');

const audioConfig = getAudioConfig();
const speechRecognizer = new TranslationRecognizer(speechTranslationConfig, audioConfig);
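
For context, a recognizer built this way is typically started like so (a minimal sketch; the recognized event and the translations map keyed by target locale are standard SDK surface, and 'targetLocale1' is the placeholder from the snippet above):

// Translation results arrive in the recognized event,
// with one entry per target locale.
speechRecognizer.recognized = (_sender, event) => {
    const translated = event.result.translations.get('targetLocale1');
    console.log(translated);
};
speechRecognizer.startContinuousRecognitionAsync();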
1 answer

DevUser2048 · 0 Reputation points
2023-10-29T05:22:42.29+00:00

Looks like it's possible. You have to add the target language to the TranslationRecognizer instance (speechRecognizer above) rather than to the SpeechTranslationConfig instance.
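
A minimal sketch of the dynamic case, assuming an SDK version whose TranslationRecognizer exposes addTargetLanguage (newLocale is a hypothetical BCP-47 code chosen when a user joins):

// Add a translation target on the existing recognizer at runtime;
// no need to rebuild the SpeechTranslationConfig.
const newLocale = "fr"; // hypothetical: decided when a new user joins
speechRecognizer.addTargetLanguage(newLocale);

Subsequent recognized events then include an entry for newLocale in event.result.translations alongside the targets set on the original config.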