Can 'microsoft-cognitiveservices-speech-sdk' handle two speech recognition languages at the same time?

Erik Nguyen 0 Reputation points
2024-08-15T14:49:32.6833333+00:00

For context, I'm building a translation app.

I want a single speech translation recognizer to be able to understand two different input languages, just like a real interpreter.

Right now, I'm setting the speech config as shown below. It would be great if multiple speech recognition languages could be added.

Or is there some other way to achieve this?

    import { SpeechTranslationConfig } from "microsoft-cognitiveservices-speech-sdk";

    const speechTranslationConfig = SpeechTranslationConfig.fromSubscription(
      SUBSCRIPTION_KEY,
      REGION
    );
    speechTranslationConfig.speechRecognitionLanguage = SPEECH_LANGUAGE;
    speechTranslationConfig.addTargetLanguage(TARGET_LANGUAGE);
Azure AI Speech

1 answer

  1. VasaviLankipalle-MSFT 18,676 Reputation points Moderator
    2024-08-16T04:48:25.6366667+00:00

    Hello @Erik Nguyen, thanks for using the Microsoft Q&A platform.

    Using language identification, you can detect which of up to 10 candidate input languages is being spoken and automatically translate into your target languages. For example, with the Python SDK:

        import azure.cognitiveservices.speech as speechsdk

        # Candidate input languages the service should listen for (up to 10).
        auto_detect_source_language_config = \
            speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])

    For a complete code sample, please refer to the language identification documentation.
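    Since your snippet uses the JavaScript SDK (microsoft-cognitiveservices-speech-sdk), a minimal sketch of the same approach in JavaScript might look like the following. This is a sketch based on the SDK's language identification docs rather than a verified implementation; in particular, the TranslationRecognizer.FromConfig overload and the detected-language property lookup are my assumptions, so please check them against the official sample:

        import * as sdk from "microsoft-cognitiveservices-speech-sdk";

        const speechTranslationConfig = sdk.SpeechTranslationConfig.fromSubscription(
          SUBSCRIPTION_KEY,
          REGION
        );
        // Translate into French; add more target languages as needed.
        speechTranslationConfig.addTargetLanguage("fr");

        // Candidate input languages; the service detects which one is spoken.
        const autoDetectSourceLanguageConfig =
          sdk.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE"]);

        const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();

        // Build a translation recognizer from both configs.
        const recognizer = sdk.TranslationRecognizer.FromConfig(
          speechTranslationConfig,
          autoDetectSourceLanguageConfig,
          audioConfig
        );

        recognizer.recognized = (_sender, event) => {
          if (event.result.reason === sdk.ResultReason.TranslatedSpeech) {
            // (Assumption) the detected input language is exposed via this property.
            const detectedLanguage = event.result.properties.getProperty(
              sdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult
            );
            console.log(`Detected ${detectedLanguage}: ${event.result.text}`);
            console.log(`Translation (fr): ${event.result.translations.get("fr")}`);
          }
        };

        recognizer.startContinuousRecognitionAsync();

    Note that continuous language identification (switching languages mid-session, the way a real interpreter would) may require additional configuration such as the v2 endpoint; the language identification documentation covers this.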

    Is this something you are looking for?

