cannot construct SpeechConfig with the given arguments

BOB_HUANG (黃俊穎) 0 Reputation points
2024-02-05T09:49:27.8966667+00:00

Hi, I'm encountering the error "cannot construct SpeechConfig with the given arguments" after running the program. I'm currently following the quickstart steps from the Speech SDK and speech translation guide, but the above error keeps appearing and I don't know how to solve it. I'm currently in Taiwan and the resource is created in the Southeast Asia region. Thank you.

Azure AI Speech
An Azure service that integrates speech processing into apps and services.
Azure AI services
A group of Azure services, SDKs, and APIs designed to make apps more intelligent, engaging, and discoverable.

1 answer

  1. dupammi 8,035 Reputation points Microsoft Vendor
    2024-02-07T08:51:58.3+00:00

    Hi @BOB_HUANG (黃俊穎)

    Thank you for providing the program details.

    I tried to reproduce the issue on my end using the southeastasia region. See the code below.

    import os
    import azure.cognitiveservices.speech as speechsdk
    def recognize_from_microphone():
        # For debugging, paste your Speech resource key and region directly
        # instead of reading them from the SPEECH_KEY and SPEECH_REGION environment variables.
        speech_key = "PASTE_YOUR_SPEECH_KEY_HERE"
        service_region = "southeastasia"
        speech_translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)
        speech_translation_config.speech_recognition_language="en-US"
        target_language="it"
        speech_translation_config.add_target_language(target_language)
        audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
        translation_recognizer = speechsdk.translation.TranslationRecognizer(translation_config=speech_translation_config, audio_config=audio_config)
        print("Speak into your microphone.")
        translation_recognition_result = translation_recognizer.recognize_once_async().get()
        if translation_recognition_result.reason == speechsdk.ResultReason.TranslatedSpeech:
            print("Recognized: {}".format(translation_recognition_result.text))
            print("""Translated into '{}': {}""".format(
                target_language, translation_recognition_result.translations[target_language]))
        elif translation_recognition_result.reason == speechsdk.ResultReason.NoMatch:
            print("No speech could be recognized: {}".format(translation_recognition_result.no_match_details))
        elif translation_recognition_result.reason == speechsdk.ResultReason.Canceled:
            cancellation_details = translation_recognition_result.cancellation_details
            print("Speech Recognition canceled: {}".format(cancellation_details.reason))
            if cancellation_details.reason == speechsdk.CancellationReason.Error:
                print("Error details: {}".format(cancellation_details.error_details))
                print("Did you set the speech resource key and region values?")
    recognize_from_microphone()
    

    And I got the result below successfully from the above code execution.
    [Screenshot: successful translation output]

    So, for debugging purposes, please try the code above and provide your key and region directly. This will help you determine whether the environment variables were not set properly in your local Windows environment.
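
    For reference, here is a minimal sketch of the environment-variable pattern from the quickstart (assuming the same azure-cognitiveservices-speech package): if SPEECH_KEY or SPEECH_REGION is not set in the shell that launches the script, os.environ.get returns None and constructing the config fails with "cannot construct SpeechConfig with the given arguments".

    import os
    import azure.cognitiveservices.speech as speechsdk

    # Read the key and region from environment variables, as in the quickstart.
    # If either variable is missing, its value is None and constructing the
    # config fails with "cannot construct SpeechConfig with the given arguments".
    speech_key = os.environ.get("SPEECH_KEY")
    service_region = os.environ.get("SPEECH_REGION")
    if not speech_key or not service_region:
        raise RuntimeError("SPEECH_KEY and/or SPEECH_REGION are not set in this environment.")

    speech_translation_config = speechsdk.translation.SpeechTranslationConfig(
        subscription=speech_key, region=service_region)
    print("SpeechTranslationConfig constructed successfully.")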

    Regarding your latest query about Azure AI services -> Speech service: yes, you need to create a Speech service resource.

    Please see the screenshot below.
    [Screenshot: Speech service in the Azure portal]

    Please use the Create button shown below to create the Speech resource.

    [Screenshot: Create button for the Speech service]

    After the Speech service has been created successfully, copy and paste its key and region into the code.
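
    As a quick sanity check (a minimal sketch; the placeholder key and region below are assumptions to be replaced with the values from your new Speech resource), you can confirm that the values are accepted before running the full translation sample:

    import azure.cognitiveservices.speech as speechsdk

    # Placeholder values; paste the key and region of your newly created Speech resource.
    speech_config = speechsdk.SpeechConfig(subscription="PASTE_YOUR_SPEECH_KEY_HERE",
                                           region="southeastasia")
    print("SpeechConfig constructed for region:", speech_config.region)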

    I hope this helps. Thank you.


    Please do not forget to click Accept Answer and select Yes for "Was this answer helpful" wherever the information provided helps you. This can be beneficial to other community members.
