Quickstart: Recognize and translate speech to text

Reference documentation | Package (NuGet) | Additional samples on GitHub

In this quickstart, you run an application to translate speech from one language to text in another language.

Tip

Try out the Azure AI Speech Toolkit to easily build and run samples on Visual Studio Code.

Prerequisites

  • An Azure subscription. You can create one for free.
  • Create a Speech resource in the Azure portal.
  • Get the Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys.

Set up the environment

The Speech SDK is available as a NuGet package and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the SDK installation guide for any more requirements.

Set environment variables

You need to authenticate your application to access Azure AI services. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

Important

We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

To set the environment variables for your Speech resource key and region, open a console window, and follow the instructions for your operating system and development environment.

  • To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
  • To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region
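
The setx commands above apply to Windows. On Linux or macOS with a bash-compatible shell, for example, you can set the same variables with export (add the lines to a shell profile such as ~/.bashrc to persist them across sessions):

export SPEECH_KEY=your-key
export SPEECH_REGION=your-region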

Note

If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.

After you add the environment variables, you might need to restart any programs that need to read the environment variables, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.

Translate speech from a microphone

Follow these steps to create a new console application and install the Speech SDK.

  1. Open a command prompt where you want the new project, and create a console application with the .NET CLI. The Program.cs file should be created in the project directory.

    dotnet new console
    
  2. Install the Speech SDK in your new project with the .NET CLI.

    dotnet add package Microsoft.CognitiveServices.Speech
    
  3. Replace the contents of Program.cs with the following code.

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Audio;
    using Microsoft.CognitiveServices.Speech.Translation;
    
    class Program 
    {
        // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
        static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
        static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION");
    
        static void OutputSpeechRecognitionResult(TranslationRecognitionResult translationRecognitionResult)
        {
            switch (translationRecognitionResult.Reason)
            {
                case ResultReason.TranslatedSpeech:
                    Console.WriteLine($"RECOGNIZED: Text={translationRecognitionResult.Text}");
                    foreach (var element in translationRecognitionResult.Translations)
                    {
                        Console.WriteLine($"TRANSLATED into '{element.Key}': {element.Value}");
                    }
                    break;
                case ResultReason.NoMatch:
                    Console.WriteLine($"NOMATCH: Speech could not be recognized.");
                    break;
                case ResultReason.Canceled:
                    var cancellation = CancellationDetails.FromResult(translationRecognitionResult);
                    Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
    
                    if (cancellation.Reason == CancellationReason.Error)
                    {
                        Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                        Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                        Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
                    }
                    break;
            }
        }
    
        async static Task Main(string[] args)
        {
            var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion);        
            speechTranslationConfig.SpeechRecognitionLanguage = "en-US";
            speechTranslationConfig.AddTargetLanguage("it");
    
            using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
            using var translationRecognizer = new TranslationRecognizer(speechTranslationConfig, audioConfig);
    
            Console.WriteLine("Speak into your microphone.");
            var translationRecognitionResult = await translationRecognizer.RecognizeOnceAsync();
            OutputSpeechRecognitionResult(translationRecognitionResult);
        }
    }
    
  4. To change the speech recognition language, replace en-US with another supported language. Specify the full locale with a dash (-) separator. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.

  5. To change the translation target language, replace it with another supported language. With few exceptions, you only specify the language code that precedes the locale dash (-) separator. For example, use es for Spanish (Spain) instead of es-ES. The default language is en if you don't specify a language. A short sketch of both changes follows this list.
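
For example, here's a minimal sketch of both changes; the language codes es-ES, fr, and de are just placeholders for any supported pair, and AddTargetLanguage can be called once per target language:

    speechTranslationConfig.SpeechRecognitionLanguage = "es-ES";
    speechTranslationConfig.AddTargetLanguage("fr");
    speechTranslationConfig.AddTargetLanguage("de");

Each target language then appears as its own entry in translationRecognitionResult.Translations.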

Run your new console application to start speech recognition from a microphone:

dotnet run

Speak into your microphone when prompted. What you speak should be output as translated text in the target language:

Speak into your microphone.
RECOGNIZED: Text=I'm excited to try speech translation.
TRANSLATED into 'it': Sono entusiasta di provare la traduzione vocale.

Remarks

Now that you've completed the quickstart, here are some additional considerations:

  • This example uses the RecognizeOnceAsync operation to transcribe utterances of up to 30 seconds, or until silence is detected. For information about continuous recognition for longer audio, including multi-lingual conversations, see How to translate speech. A minimal continuous-recognition sketch also follows this list.
  • To recognize speech from an audio file, use FromWavFileInput instead of FromDefaultMicrophoneInput:
    using var audioConfig = AudioConfig.FromWavFileInput("YourAudioFile.wav");
    
  • For compressed audio files such as MP4, install GStreamer and use PullAudioInputStream or PushAudioInputStream. For more information, see How to use compressed input audio.
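
If you need continuous recognition instead of a single utterance, here's a minimal sketch, assuming the same speechTranslationConfig and audioConfig as in the example above; the Recognized event fires once per recognized utterance:

    using var translationRecognizer = new TranslationRecognizer(speechTranslationConfig, audioConfig);

    // Print each translated utterance as it's recognized.
    translationRecognizer.Recognized += (s, e) =>
    {
        if (e.Result.Reason == ResultReason.TranslatedSpeech)
        {
            foreach (var element in e.Result.Translations)
            {
                Console.WriteLine($"TRANSLATED into '{element.Key}': {element.Value}");
            }
        }
    };

    await translationRecognizer.StartContinuousRecognitionAsync();
    Console.WriteLine("Press Enter to stop.");
    Console.ReadLine();
    await translationRecognizer.StopContinuousRecognitionAsync();

See How to translate speech for the complete pattern, including session and cancellation events.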

Clean up resources

You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.

Reference documentation | Package (NuGet) | Additional samples on GitHub

In this quickstart, you run an application to translate speech from one language to text in another language.

Tip

Try out the Azure AI Speech Toolkit to easily build and run samples on Visual Studio Code.

Prerequisites

  • An Azure subscription. You can create one for free.
  • Create a Speech resource in the Azure portal.
  • Get the Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys.

Set up the environment

The Speech SDK is available as a NuGet package and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the SDK installation guide for any more requirements.

Set environment variables

You need to authenticate your application to access Azure AI services. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

Important

We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

To set the environment variables for your Speech resource key and region, open a console window, and follow the instructions for your operating system and development environment.

  • To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
  • To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region

Note

If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.

After you add the environment variables, you might need to restart any programs that need to read the environment variables, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.

Translate speech from a microphone

Follow these steps to create a new console application and install the Speech SDK.

  1. Create a new C++ console project in Visual Studio Community 2022 named SpeechTranslation.

  2. Install the Speech SDK in your new project with the NuGet package manager.

    Install-Package Microsoft.CognitiveServices.Speech
    
  3. Replace the contents of SpeechTranslation.cpp with the following code:

    #include <iostream> 
    #include <stdlib.h>
    #include <speechapi_cxx.h>
    
    using namespace Microsoft::CognitiveServices::Speech;
    using namespace Microsoft::CognitiveServices::Speech::Audio;
    using namespace Microsoft::CognitiveServices::Speech::Translation;
    
    std::string GetEnvironmentVariable(const char* name);
    
    int main()
    {
        // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
        auto speechKey = GetEnvironmentVariable("SPEECH_KEY");
        auto speechRegion = GetEnvironmentVariable("SPEECH_REGION");
    
        auto speechTranslationConfig = SpeechTranslationConfig::FromSubscription(speechKey, speechRegion);
        speechTranslationConfig->SetSpeechRecognitionLanguage("en-US");
        speechTranslationConfig->AddTargetLanguage("it");
    
        auto audioConfig = AudioConfig::FromDefaultMicrophoneInput();
        auto translationRecognizer = TranslationRecognizer::FromConfig(speechTranslationConfig, audioConfig);
    
        std::cout << "Speak into your microphone.\n";
        auto result = translationRecognizer->RecognizeOnceAsync().get();
    
        if (result->Reason == ResultReason::TranslatedSpeech)
        {
            std::cout << "RECOGNIZED: Text=" << result->Text << std::endl;
            for (auto pair : result->Translations)
            {
                auto language = pair.first;
                auto translation = pair.second;
                std::cout << "Translated into '" << language << "': " << translation << std::endl;
            }
        }
        else if (result->Reason == ResultReason::NoMatch)
        {
            std::cout << "NOMATCH: Speech could not be recognized." << std::endl;
        }
        else if (result->Reason == ResultReason::Canceled)
        {
            auto cancellation = CancellationDetails::FromResult(result);
            std::cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;
    
            if (cancellation->Reason == CancellationReason::Error)
            {
                std::cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
                std::cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
                std::cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
            }
        }
    }
    
    std::string GetEnvironmentVariable(const char* name)
    {
    #if defined(_MSC_VER)
        size_t requiredSize = 0;
        (void)getenv_s(&requiredSize, nullptr, 0, name);
        if (requiredSize == 0)
        {
            return "";
        }
        auto buffer = std::make_unique<char[]>(requiredSize);
        (void)getenv_s(&requiredSize, buffer.get(), requiredSize, name);
        return buffer.get();
    #else
        auto value = getenv(name);
        return value ? value : "";
    #endif
    }
    
  4. To change the speech recognition language, replace en-US with another supported language. Specify the full locale with a dash (-) separator. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.

  5. To change the translation target language, replace it with another supported language. With few exceptions, you only specify the language code that precedes the locale dash (-) separator. For example, use es for Spanish (Spain) instead of es-ES. The default language is en if you don't specify a language. A short sketch of both changes follows this list.
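
For example, here's a minimal sketch of both changes; the language codes es-ES, fr, and de are placeholders for any supported pair, and AddTargetLanguage can be called once per target language:

    speechTranslationConfig->SetSpeechRecognitionLanguage("es-ES");
    speechTranslationConfig->AddTargetLanguage("fr");
    speechTranslationConfig->AddTargetLanguage("de");

Each target language then appears as its own entry in result->Translations.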

Build and run your new console application to start speech recognition from a microphone.

Speak into your microphone when prompted. What you speak should be output as translated text in the target language:

Speak into your microphone.
RECOGNIZED: Text=I'm excited to try speech translation.
Translated into 'it': Sono entusiasta di provare la traduzione vocale.

Remarks

Now that you've completed the quickstart, here are some additional considerations:

  • This example uses the RecognizeOnceAsync operation to transcribe utterances of up to 30 seconds, or until silence is detected. For information about continuous recognition for longer audio, including multi-lingual conversations, see How to translate speech.
  • To recognize speech from an audio file, use FromWavFileInput instead of FromDefaultMicrophoneInput:
    auto audioInput = AudioConfig::FromWavFileInput("YourAudioFile.wav");
    
  • For compressed audio files such as MP4, install GStreamer and use PullAudioInputStream or PushAudioInputStream. For more information, see How to use compressed input audio.

Clean up resources

You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.

Reference documentation | Package (Go) | Additional samples on GitHub

The Speech SDK for Go doesn't support speech translation. Select another programming language, or see the Go reference and samples linked at the beginning of this article.

Reference documentation | Additional samples on GitHub

In this quickstart, you run an application to translate speech from one language to text in another language.

Tip

Try out the Azure AI Speech Toolkit to easily build and run samples on Visual Studio Code.

Prerequisites

  • An Azure subscription. You can create one for free.
  • Create a Speech resource in the Azure portal.
  • Get the Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys.

Set up the environment

Before you can do anything, you need to install the Speech SDK. The sample in this quickstart works with the Java Runtime.

  1. Install Apache Maven. Then run mvn -v to confirm successful installation.
  2. Create a new pom.xml file in the root of your project, and copy the following into it:
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.microsoft.cognitiveservices.speech.samples</groupId>
        <artifactId>quickstart-eclipse</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <build>
            <sourceDirectory>src</sourceDirectory>
            <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.7.0</version>
                <configuration>
                <source>1.8</source>
                <target>1.8</target>
                </configuration>
            </plugin>
            </plugins>
        </build>
        <dependencies>
            <dependency>
            <groupId>com.microsoft.cognitiveservices.speech</groupId>
            <artifactId>client-sdk</artifactId>
            <version>1.40.0</version>
            </dependency>
        </dependencies>
    </project>
    
  3. Install the Speech SDK and dependencies.
    mvn clean dependency:copy-dependencies
    

Set environment variables

You need to authenticate your application to access Azure AI services. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

Important

We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

To set the environment variables for your Speech resource key and region, open a console window, and follow the instructions for your operating system and development environment.

  • To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
  • To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region

Note

If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.

After you add the environment variables, you might need to restart any programs that need to read the environment variables, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.

Translate speech from a microphone

Follow these steps to create a new console application for speech recognition.

  1. Create a new file named SpeechTranslation.java in the same project root directory.

  2. Copy the following code into SpeechTranslation.java:

    import com.microsoft.cognitiveservices.speech.*;
    import com.microsoft.cognitiveservices.speech.audio.AudioConfig;
    import com.microsoft.cognitiveservices.speech.translation.*;
    
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;
    import java.util.Map;
    
    public class SpeechTranslation {
        // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
        private static String speechKey = System.getenv("SPEECH_KEY");
        private static String speechRegion = System.getenv("SPEECH_REGION");
    
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            SpeechTranslationConfig speechTranslationConfig = SpeechTranslationConfig.fromSubscription(speechKey, speechRegion);
            speechTranslationConfig.setSpeechRecognitionLanguage("en-US");
    
            String[] toLanguages = { "it" };
            for (String language : toLanguages) {
                speechTranslationConfig.addTargetLanguage(language);
            }
    
            recognizeFromMicrophone(speechTranslationConfig);
        }
    
        public static void recognizeFromMicrophone(SpeechTranslationConfig speechTranslationConfig) throws InterruptedException, ExecutionException {
            AudioConfig audioConfig = AudioConfig.fromDefaultMicrophoneInput();
            TranslationRecognizer translationRecognizer = new TranslationRecognizer(speechTranslationConfig, audioConfig);
    
            System.out.println("Speak into your microphone.");
            Future<TranslationRecognitionResult> task = translationRecognizer.recognizeOnceAsync();
            TranslationRecognitionResult translationRecognitionResult = task.get();
    
            if (translationRecognitionResult.getReason() == ResultReason.TranslatedSpeech) {
                System.out.println("RECOGNIZED: Text=" + translationRecognitionResult.getText());
                for (Map.Entry<String, String> pair : translationRecognitionResult.getTranslations().entrySet()) {
                    System.out.printf("Translated into '%s': %s\n", pair.getKey(), pair.getValue());
                }
            }
            else if (translationRecognitionResult.getReason() == ResultReason.NoMatch) {
                System.out.println("NOMATCH: Speech could not be recognized.");
            }
            else if (translationRecognitionResult.getReason() == ResultReason.Canceled) {
                CancellationDetails cancellation = CancellationDetails.fromResult(translationRecognitionResult);
                System.out.println("CANCELED: Reason=" + cancellation.getReason());
    
                if (cancellation.getReason() == CancellationReason.Error) {
                    System.out.println("CANCELED: ErrorCode=" + cancellation.getErrorCode());
                    System.out.println("CANCELED: ErrorDetails=" + cancellation.getErrorDetails());
                    System.out.println("CANCELED: Did you set the speech resource key and region values?");
                }
            }
    
            System.exit(0);
        }
    }
    
  3. To change the speech recognition language, replace en-US with another supported language. Specify the full locale with a dash (-) separator. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.

  4. To change the translation target language, replace it with another supported language. With few exceptions, you only specify the language code that precedes the locale dash (-) separator. For example, use es for Spanish (Spain) instead of es-ES. The default language is en if you don't specify a language. A short sketch of translating into multiple target languages follows this list.
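
For example, to translate into multiple target languages at once, add more codes to the existing toLanguages array; the codes fr and de here are just placeholders:

    String[] toLanguages = { "fr", "de" };

Each target language then appears as its own entry in translationRecognitionResult.getTranslations().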

Run your new console application to start speech recognition from a microphone:

javac SpeechTranslation.java -cp ".;target\dependency\*"
java -cp ".;target\dependency\*" SpeechTranslation
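
The commands above use the Windows classpath separator (;). On Linux or macOS, for example, use a colon instead:

javac SpeechTranslation.java -cp ".:target/dependency/*"
java -cp ".:target/dependency/*" SpeechTranslation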

Speak into your microphone when prompted. What you speak should be output as translated text in the target language:

Speak into your microphone.
RECOGNIZED: Text=I'm excited to try speech translation.
Translated into 'it': Sono entusiasta di provare la traduzione vocale.

Remarks

Now that you've completed the quickstart, here are some additional considerations:

  • This example uses the RecognizeOnceAsync operation to transcribe utterances of up to 30 seconds, or until silence is detected. For information about continuous recognition for longer audio, including multi-lingual conversations, see How to translate speech.
  • To recognize speech from an audio file, use fromWavFileInput instead of fromDefaultMicrophoneInput:
    AudioConfig audioConfig = AudioConfig.fromWavFileInput("YourAudioFile.wav");
    
  • For compressed audio files such as MP4, install GStreamer and use PullAudioInputStream or PushAudioInputStream. For more information, see How to use compressed input audio.

Clean up resources

You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.

Reference documentation | Package (npm) | Additional samples on GitHub | Library source code

In this quickstart, you run an application to translate speech from one language to text in another language.

Tip

Try out the Azure AI Speech Toolkit to easily build and run samples on Visual Studio Code.

Prerequisites

  • An Azure subscription. You can create one for free.
  • Create a Speech resource in the Azure portal.
  • Get the Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys.

Set up the environment

Before you can do anything, you need to install the Speech SDK for JavaScript. If you just want the package name to install, run npm install microsoft-cognitiveservices-speech-sdk. For guided installation instructions, see the SDK installation guide.

Set environment variables

You need to authenticate your application to access Azure AI services. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

Important

We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

To set the environment variables for your Speech resource key and region, open a console window, and follow the instructions for your operating system and development environment.

  • To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
  • To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region

Note

If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.

After you add the environment variables, you might need to restart any programs that need to read the environment variables, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.

Translate speech from a file

Follow these steps to create a Node.js console application for speech recognition.

  1. Open a command prompt where you want the new project, and create a new file named SpeechTranslation.js.

  2. Install the Speech SDK for JavaScript:

    npm install microsoft-cognitiveservices-speech-sdk
    
  3. Copy the following code into SpeechTranslation.js:

    const fs = require("fs");
    const sdk = require("microsoft-cognitiveservices-speech-sdk");
    
    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
    const speechTranslationConfig = sdk.SpeechTranslationConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
    speechTranslationConfig.speechRecognitionLanguage = "en-US";
    
    var language = "it";
    speechTranslationConfig.addTargetLanguage(language);
    
    function fromFile() {
        let audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync("YourAudioFile.wav"));
        let translationRecognizer = new sdk.TranslationRecognizer(speechTranslationConfig, audioConfig);
    
        translationRecognizer.recognizeOnceAsync(result => {
            switch (result.reason) {
                case sdk.ResultReason.TranslatedSpeech:
                    console.log(`RECOGNIZED: Text=${result.text}`);
                    console.log("Translated into [" + language + "]: " + result.translations.get(language));
    
                    break;
                case sdk.ResultReason.NoMatch:
                    console.log("NOMATCH: Speech could not be recognized.");
                    break;
                case sdk.ResultReason.Canceled:
                    const cancellation = sdk.CancellationDetails.fromResult(result);
                    console.log(`CANCELED: Reason=${cancellation.reason}`);
    
                    if (cancellation.reason == sdk.CancellationReason.Error) {
                        console.log(`CANCELED: ErrorCode=${cancellation.ErrorCode}`);
                        console.log(`CANCELED: ErrorDetails=${cancellation.errorDetails}`);
                        console.log("CANCELED: Did you set the speech resource key and region values?");
                    }
                    break;
            }
            translationRecognizer.close();
        });
    }
    fromFile();
    
  4. In SpeechTranslation.js, replace YourAudioFile.wav with your own WAV file. This example only recognizes speech from a WAV file. For information about other audio formats, see How to use compressed input audio. This example supports audio of up to 30 seconds.

  5. To change the speech recognition language, replace en-US with another supported language. Specify the full locale with a dash (-) separator. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.

  6. To change the translation target language, replace it with another supported language. With few exceptions, you only specify the language code that precedes the locale dash (-) separator. For example, use es for Spanish (Spain) instead of es-ES. The default language is en if you don't specify a language. A short sketch of adding another target language follows this list.
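
For example, here's a minimal sketch of adding a second target language; the code fr is just a placeholder, and each target can be read from the result by its language code:

    speechTranslationConfig.addTargetLanguage("fr");
    // Later, inside the recognizeOnceAsync callback:
    console.log("Translated into [fr]: " + result.translations.get("fr"));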

Run your new console application to start speech recognition from a file:

node.exe SpeechTranslation.js

The speech from the audio file should be output as translated text in the target language:

RECOGNIZED: Text=I'm excited to try speech translation.
Translated into [it]: Sono entusiasta di provare la traduzione vocale.

Remarks

Now that you've completed the quickstart, here are some additional considerations:

This example uses the recognizeOnceAsync operation to transcribe utterances of up to 30 seconds, or until silence is detected. For information about continuous recognition for longer audio, including multi-lingual conversations, see How to translate speech.

Note

Recognizing speech from a microphone is not supported in Node.js. It's supported only in a browser-based JavaScript environment.

Clean up resources

You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.

Reference documentation | Package (download) | Additional samples on GitHub

The Speech SDK for Objective-C does support speech translation, but we haven't yet included a guide here. Please select another programming language to get started and learn about the concepts, or see the Objective-C reference and samples linked from the beginning of this article.

Reference documentation | Package (download) | Additional samples on GitHub

The Speech SDK for Swift does support speech translation, but we haven't yet included a guide here. Please select another programming language to get started and learn about the concepts, or see the Swift reference and samples linked from the beginning of this article.

Reference documentation | Package (PyPi) | Additional samples on GitHub

In this quickstart, you run an application to translate speech from one language to text in another language.

Tip

Try out the Azure AI Speech Toolkit to easily build and run samples on Visual Studio Code.

Prerequisites

  • An Azure subscription. You can create one for free.
  • Create a Speech resource in the Azure portal.
  • Get the Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys.

Set up the environment

The Speech SDK for Python is available as a Python Package Index (PyPI) module. The Speech SDK for Python is compatible with Windows, Linux, and macOS.

Install Python 3.7 or later. First check the SDK installation guide for any other requirements.

Set environment variables

You need to authenticate your application to access Azure AI services. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

Important

We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

To set the environment variables for your Speech resource key and region, open a console window, and follow the instructions for your operating system and development environment.

  • To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
  • To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region

Note

If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.

After you add the environment variables, you might need to restart any programs that need to read the environment variables, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.

Translate speech from a microphone

Follow these steps to create a new console application.

  1. Open a command prompt where you want the new project, and create a new file named speech_translation.py.

  2. Run this command to install the Speech SDK:

    pip install azure-cognitiveservices-speech
    
  3. Copy the following code into speech_translation.py:

    import os
    import azure.cognitiveservices.speech as speechsdk
    
    def recognize_from_microphone():
        # This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
        speech_translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
        speech_translation_config.speech_recognition_language="en-US"
    
        to_language ="it"
        speech_translation_config.add_target_language(to_language)
    
        audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
        translation_recognizer = speechsdk.translation.TranslationRecognizer(translation_config=speech_translation_config, audio_config=audio_config)
    
        print("Speak into your microphone.")
        translation_recognition_result = translation_recognizer.recognize_once_async().get()
    
        if translation_recognition_result.reason == speechsdk.ResultReason.TranslatedSpeech:
            print("Recognized: {}".format(translation_recognition_result.text))
            print("""Translated into '{}': {}""".format(
                to_language, 
                translation_recognition_result.translations[to_language]))
        elif translation_recognition_result.reason == speechsdk.ResultReason.NoMatch:
            print("No speech could be recognized: {}".format(translation_recognition_result.no_match_details))
        elif translation_recognition_result.reason == speechsdk.ResultReason.Canceled:
            cancellation_details = translation_recognition_result.cancellation_details
            print("Speech Recognition canceled: {}".format(cancellation_details.reason))
            if cancellation_details.reason == speechsdk.CancellationReason.Error:
                print("Error details: {}".format(cancellation_details.error_details))
                print("Did you set the speech resource key and region values?")
    
    recognize_from_microphone()
    
  4. To change the speech recognition language, replace en-US with another supported language. Specify the full locale with a dash (-) separator. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.

  5. To change the translation target language, replace it with another supported language. With few exceptions, you only specify the language code that precedes the locale dash (-) separator. For example, use es for Spanish (Spain) instead of es-ES. The default language is en if you don't specify a language. A short sketch of both changes follows this list.
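
For example, here's a minimal sketch of both changes; the language codes es-ES, fr, and de are placeholders for any supported pair, and add_target_language can be called once per target language:

    speech_translation_config.speech_recognition_language = "es-ES"
    for language in ["fr", "de"]:
        speech_translation_config.add_target_language(language)
    # Each target then appears in translation_recognition_result.translations,
    # for example translation_recognition_result.translations["fr"].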

Run your new console application to start speech recognition from a microphone:

python speech_translation.py

Speak into your microphone when prompted. What you speak should be output as translated text in the target language:

Speak into your microphone.
Recognized: I'm excited to try speech translation.
Translated into 'it': Sono entusiasta di provare la traduzione vocale.

Remarks

Now that you've completed the quickstart, here are some additional considerations:

  • This example uses the recognize_once_async operation to transcribe utterances of up to 30 seconds, or until silence is detected. For information about continuous recognition for longer audio, including multi-lingual conversations, see How to translate speech. A minimal continuous-recognition sketch also follows this list.
  • To recognize speech from an audio file, use filename instead of use_default_microphone:
    audio_config = speechsdk.audio.AudioConfig(filename="YourAudioFile.wav")
    
  • For compressed audio files such as MP4, install GStreamer and use PullAudioInputStream or PushAudioInputStream. For more information, see How to use compressed input audio.
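
If you need continuous recognition instead of a single utterance, here's a minimal sketch, assuming the speech_translation_config and audio_config from the example above; the recognized event fires once per recognized utterance:

    import time

    translation_recognizer = speechsdk.translation.TranslationRecognizer(
        translation_config=speech_translation_config, audio_config=audio_config)

    def handle_recognized(evt):
        # evt.result is a TranslationRecognitionResult
        if evt.result.reason == speechsdk.ResultReason.TranslatedSpeech:
            for language, translation in evt.result.translations.items():
                print("Translated into '{}': {}".format(language, translation))

    translation_recognizer.recognized.connect(handle_recognized)
    translation_recognizer.start_continuous_recognition()
    time.sleep(30)  # Listen for 30 seconds, then stop.
    translation_recognizer.stop_continuous_recognition()

See How to translate speech for the complete pattern, including session and canceled events.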

Clean up resources

You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.

Speech to text REST API reference | Speech to text REST API for short audio reference | Additional samples on GitHub

The REST API doesn't support speech translation. Please select another programming language or tool from the top of this page.

In this quickstart, you run an application to translate speech from one language to text in another language.

Tip

Try out the Azure AI Speech Toolkit to easily build and run samples on Visual Studio Code.

Prerequisites

  • An Azure subscription. You can create one for free.
  • Create a Speech resource in the Azure portal.
  • Get the Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys.

Set up the environment

Follow these steps and see the Speech CLI quickstart for other requirements for your platform.

  1. Run the following .NET CLI command to install the Speech CLI:

    dotnet tool install --global Microsoft.CognitiveServices.Speech.CLI
    
  2. Run the following commands to configure your Speech resource key and region. Replace SUBSCRIPTION-KEY with your Speech resource key and replace REGION with your Speech resource region.

    spx config @key --set SUBSCRIPTION-KEY
    spx config @region --set REGION
    

Translate speech from a microphone

Run the following command to translate speech from the microphone from English to Italian:

spx translate --source en-US --target it --microphone

Speak into the microphone, and you see the transcription of your translated speech in real time. The Speech CLI stops after a period of silence, 30 seconds, or when you press Ctrl+C.

Connection CONNECTED...
TRANSLATING into 'it': Sono (from 'I'm')
TRANSLATING into 'it': Sono entusiasta (from 'I'm excited to')
TRANSLATING into 'it': Sono entusiasta di provare la parola (from 'I'm excited to try speech')
TRANSLATED into 'it': Sono entusiasta di provare la traduzione vocale. (from 'I'm excited to try speech translation.')

Remarks

Now that you've completed the quickstart, here are some additional considerations:

  • To get speech from an audio file, use --file instead of --microphone. For compressed audio files such as MP4, install GStreamer and use --format. For more information, see How to use compressed input audio.
    spx translate --source en-US --target it --file YourAudioFile.wav
    spx translate --source en-US --target it --file YourAudioFile.mp4 --format any
    
  • To improve recognition accuracy of specific words or utterances, use a phrase list. You include a phrase list in-line or with a text file:
    spx translate --source en-US --target it --microphone --phrases "Contoso;Jessie;Rehaan;"
    spx translate --source en-US --target it --microphone --phrases @phrases.txt
    
  • To change the speech recognition language, replace en-US with another supported language. Specify the full locale with a dash (-) separator. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language.
    spx translate --microphone --source es-ES
    
  • To change the translation target language, replace it with another supported language. With few exceptions, you only specify the language code that precedes the locale dash (-) separator. For example, use es for Spanish (Spain) instead of es-ES. The default language is en if you don't specify a language.
    spx translate --microphone --target es
    
  • For continuous recognition of audio longer than 30 seconds, append --continuous:
    spx translate --source en-US --target it --microphone --continuous
    

Run this command for information about additional speech translation options such as file input and output:

spx help translate

Clean up resources

You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.

Next steps