Implement language identification

Language identification is used to identify languages spoken in audio when compared against a list of supported languages.

Language identification (LID) use cases include:

  • Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
  • Speech translation when you need to identify the language in an audio source and then translate it to another language.

For speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.

Set configuration options

Whether you use language identification with speech to text or with speech translation, there are some common concepts and configuration options.

Then you make a recognize once or continuous recognition request to the Speech service.

Important

Speech SDK version 1.25 and later simplifies the language identification APIs. The SpeechServiceConnection_SingleLanguageIdPriority and SpeechServiceConnection_ContinuousLanguageIdPriority properties were removed. A single property, SpeechServiceConnection_LanguageIdMode, replaces them. You no longer need to prioritize between low latency and high accuracy. For continuous speech recognition or translation, you only need to select whether to run at-start or continuous language identification.

This article provides code snippets to describe the concepts. Links to complete samples for each use case are provided.

Candidate languages

You provide candidate languages with the AutoDetectSourceLanguageConfig object. You expect that at least one of the candidates is in the audio. You can include up to four languages for at-start LID, or up to 10 languages for continuous LID. The Speech service returns one of the candidate languages provided even if those languages weren't in the audio. For example, if fr-FR (French) and en-US (English) are provided as candidates, but German is spoken, the service returns either fr-FR or en-US.

You must provide the full locale with the dash (-) separator, but language identification only uses one locale per base language. Don't include multiple locales for the same language, for example en-US and en-GB.

var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
auto autoDetectSourceLanguageConfig = 
    AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
auto_detect_source_language_config = \
    speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE", "zh-CN"));
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE", "zh-CN"]);
NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
    [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];

For more information, see supported languages.

At-start and continuous language identification

Speech supports both at-start and continuous language identification (LID).

Note

Continuous language identification is only supported with the Speech SDK in C#, C++, Java (for speech to text only), JavaScript (for speech to text only), and Python.

  • At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change. With at-start LID, a single language is detected and returned in less than 5 seconds.
  • Continuous LID can identify multiple languages during the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it doesn't detect the language change per word.

You implement at-start LID or continuous LID by calling methods to recognize once or continuously. Continuous LID is only supported with continuous recognition.

Recognize once or continuous

Language identification is completed with recognition objects and operations. Make a request to the Speech service for recognition of audio.

Note

Don't confuse recognition with identification. Recognition can be used with or without language identification.

Either call the recognize once method, or the start and stop continuous recognition methods. You choose from:

  • Recognize once with at-start LID. Continuous LID isn't supported for recognize once.
  • Continuous recognition with at-start LID.
  • Continuous recognition with continuous LID.

The SpeechServiceConnection_LanguageIdMode property is only required for continuous LID. Without it, the Speech service defaults to at-start LID. The supported values are AtStart for at-start LID or Continuous for continuous LID.

// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
var result = await recognizer.RecognizeOnceAsync();

// Start and stop continuous recognition with At-start LID
await recognizer.StartContinuousRecognitionAsync();
await recognizer.StopContinuousRecognitionAsync();

// Start and stop continuous recognition with Continuous LID
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
await recognizer.StartContinuousRecognitionAsync();
await recognizer.StopContinuousRecognitionAsync();
// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
auto result = recognizer->RecognizeOnceAsync().get();

// Start and stop continuous recognition with At-start LID
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get();

// Start and stop continuous recognition with Continuous LID
speechConfig->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get();
// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();

// Start and stop continuous recognition with At-start LID
recognizer.startContinuousRecognitionAsync().get();
recognizer.stopContinuousRecognitionAsync().get();

// Start and stop continuous recognition with Continuous LID
speechConfig.setProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
recognizer.startContinuousRecognitionAsync().get();
recognizer.stopContinuousRecognitionAsync().get();
# Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
result = recognizer.recognize_once()

# Start and stop continuous recognition with At-start LID
recognizer.start_continuous_recognition()
recognizer.stop_continuous_recognition()

# Start and stop continuous recognition with Continuous LID
speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
recognizer.start_continuous_recognition()
recognizer.stop_continuous_recognition()

Use speech to text

Use speech to text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see the speech to text overview.

Note

Speech to text recognition with at-start language identification is supported with the Speech SDK in C#, C++, Python, Java, JavaScript, and Objective-C. Speech to text recognition with continuous language identification is only supported with the Speech SDK in C#, C++, Java, JavaScript, and Python.

Currently, for speech to text recognition with continuous language identification, you must create a SpeechConfig from the wss://{region}.stt.speech.microsoft.com/speech/universal/v2 endpoint string, as shown in the code examples. In a future SDK release, you won't need to set it.

See more examples of speech to text recognition with language identification on GitHub.

using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey","YourServiceRegion");

var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromLanguages(
        new string[] { "en-US", "de-DE", "zh-CN" });

using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using (var recognizer = new SpeechRecognizer(
    speechConfig,
    autoDetectSourceLanguageConfig,
    audioConfig))
{
    var speechRecognitionResult = await recognizer.RecognizeOnceAsync();
    var autoDetectSourceLanguageResult =
        AutoDetectSourceLanguageResult.FromResult(speechRecognitionResult);
    var detectedLanguage = autoDetectSourceLanguageResult.Language;
}
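
The preceding snippet uses recognize once with at-start LID. For continuous LID with continuous recognition, a minimal sketch might look like the following, assuming the same candidate languages and placeholder key and region; the v2 endpoint and the SpeechServiceConnection_LanguageIdMode property are required as noted earlier.

using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Continuous LID currently requires the v2 endpoint.
var region = "YourServiceRegion";
var endpointUrl = new Uri($"wss://{region}.stt.speech.microsoft.com/speech/universal/v2");
var speechConfig = SpeechConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");

// Select continuous LID instead of the at-start default.
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");

var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });

using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SpeechRecognizer(speechConfig, autoDetectSourceLanguageConfig, audioConfig);

// Each recognized result carries the language that was detected for it.
recognizer.Recognized += (s, e) =>
{
    if (e.Result.Reason == ResultReason.RecognizedSpeech)
    {
        var lidResult = AutoDetectSourceLanguageResult.FromResult(e.Result);
        Console.WriteLine($"RECOGNIZED in '{lidResult.Language}': {e.Result.Text}");
    }
};

await recognizer.StartContinuousRecognitionAsync();
// ... speak, then stop when done.
await recognizer.StopContinuousRecognitionAsync();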

See more examples of speech to text recognition with language identification on GitHub.

using namespace std;
using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;

auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey","YourServiceRegion");

auto autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });

auto recognizer = SpeechRecognizer::FromConfig(
    speechConfig,
    autoDetectSourceLanguageConfig
    );

auto speechRecognitionResult = recognizer->RecognizeOnceAsync().get();
auto autoDetectSourceLanguageResult =
    AutoDetectSourceLanguageResult::FromResult(speechRecognitionResult);
auto detectedLanguage = autoDetectSourceLanguageResult->Language;

See more examples of speech to text recognition with language identification on GitHub.

AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE"));

SpeechRecognizer recognizer = new SpeechRecognizer(
    speechConfig,
    autoDetectSourceLanguageConfig,
    audioConfig);

Future<SpeechRecognitionResult> future = recognizer.recognizeOnceAsync();
SpeechRecognitionResult result = future.get(30, TimeUnit.SECONDS);
AutoDetectSourceLanguageResult autoDetectSourceLanguageResult =
    AutoDetectSourceLanguageResult.fromResult(result);
String detectedLanguage = autoDetectSourceLanguageResult.getLanguage();

recognizer.close();
speechConfig.close();
autoDetectSourceLanguageConfig.close();
audioConfig.close();
result.close();

See more examples of speech to text recognition with language identification on GitHub.

auto_detect_source_language_config = \
        speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE"])
speech_recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, 
        auto_detect_source_language_config=auto_detect_source_language_config, 
        audio_config=audio_config)
result = speech_recognizer.recognize_once()
auto_detect_source_language_result = speechsdk.AutoDetectSourceLanguageResult(result)
detected_language = auto_detect_source_language_result.language
NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
        [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
SPXSpeechRecognizer* speechRecognizer = \
        [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
                           autoDetectSourceLanguageConfiguration:autoDetectSourceLanguageConfig
                                              audioConfiguration:audioConfig];
SPXSpeechRecognitionResult *result = [speechRecognizer recognizeOnce];
SPXAutoDetectSourceLanguageResult *languageDetectionResult = [[SPXAutoDetectSourceLanguageResult alloc] init:result];
NSString *detectedLanguage = [languageDetectionResult language];
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE"]);
var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, autoDetectSourceLanguageConfig, audioConfig);
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult) => {
        var languageDetectionResult = SpeechSDK.AutoDetectSourceLanguageResult.fromResult(result);
        var detectedLanguage = languageDetectionResult.language;
},
{});

Speech to text custom models

Note

Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for default base models.

This example shows how to use language detection with a custom endpoint. If the detected language is en-US, the example uses the default model. If the detected language is fr-FR, the example uses the custom model endpoint. For more information, see Deploy a custom speech model.

var sourceLanguageConfigs = new SourceLanguageConfig[]
{
    SourceLanguageConfig.FromLanguage("en-US"),
    SourceLanguageConfig.FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR")
};
var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromSourceLanguageConfigs(
        sourceLanguageConfigs);
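
As a minimal sketch of how this config is then used (assuming speechConfig and audioConfig are created as in the earlier speech to text example), pass it to the recognizer constructor the same way as a config built from a language list:

// speechConfig and audioConfig are assumed to exist, as in the earlier example.
using var recognizer = new SpeechRecognizer(
    speechConfig,
    autoDetectSourceLanguageConfig,
    audioConfig);
var result = await recognizer.RecognizeOnceAsync();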

This example shows how to use language detection with a custom endpoint. If the detected language is en-US, the example uses the default model. If the detected language is fr-FR, the example uses the custom model endpoint. For more information, see Deploy a custom speech model.

std::vector<std::shared_ptr<SourceLanguageConfig>> sourceLanguageConfigs;
sourceLanguageConfigs.push_back(
    SourceLanguageConfig::FromLanguage("en-US"));
sourceLanguageConfigs.push_back(
    SourceLanguageConfig::FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));

auto autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig::FromSourceLanguageConfigs(
        sourceLanguageConfigs);

This example shows how to use language detection with a custom endpoint. If the detected language is en-US, the example uses the default model. If the detected language is fr-FR, the example uses the custom model endpoint. For more information, see Deploy a custom speech model.

List<SourceLanguageConfig> sourceLanguageConfigs = new ArrayList<SourceLanguageConfig>();
sourceLanguageConfigs.add(
    SourceLanguageConfig.fromLanguage("en-US"));
sourceLanguageConfigs.add(
    SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));

AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs(
        sourceLanguageConfigs);

This example shows how to use language detection with a custom endpoint. If the detected language is en-US, the example uses the default model. If the detected language is fr-FR, the example uses the custom model endpoint. For more information, see Deploy a custom speech model.

en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
fr_language_config = speechsdk.languageconfig.SourceLanguageConfig("fr-FR", "The Endpoint Id for custom model of fr-FR")
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
        sourceLanguageConfigs=[en_language_config, fr_language_config])

This example shows how to use language detection with a custom endpoint. If the detected language is en-US, the example uses the default model. If the detected language is fr-FR, the example uses the custom model endpoint. For more information, see Deploy a custom speech model.

SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
SPXSourceLanguageConfiguration* frLanguageConfig = \
        [[SPXSourceLanguageConfiguration alloc]initWithLanguage:@"fr-FR"
                                                     endpointId:@"The Endpoint Id for custom model of fr-FR"];
NSArray *languageConfigs = @[enLanguageConfig, frLanguageConfig];
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
        [[SPXAutoDetectSourceLanguageConfiguration alloc]initWithSourceLanguageConfigurations:languageConfigs];
var enLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("en-US");
var frLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR");
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs([enLanguageConfig, frLanguageConfig]);

Run speech translation

Use speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see the speech translation overview.

Note

Speech translation with language identification is only supported with the Speech SDK in C#, C++, JavaScript, and Python. Currently, for speech translation with language identification, you must create a SpeechConfig from the wss://{region}.stt.speech.microsoft.com/speech/universal/v2 endpoint string, as shown in the code examples. In a future SDK release, you won't need to set it.

See more examples of speech translation with language identification on GitHub.

using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Translation;

public static async Task RecognizeOnceSpeechTranslationAsync()
{
    var region = "YourServiceRegion";
    // Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
    var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
    var endpointUrl = new Uri(endpointString);

    var speechTranslationConfig = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");

    // Source language is required, but currently ignored. 
    string fromLanguage = "en-US";
    speechTranslationConfig.SpeechRecognitionLanguage = fromLanguage;

    speechTranslationConfig.AddTargetLanguage("de");
    speechTranslationConfig.AddTargetLanguage("fr");

    var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });

    using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();

    using (var recognizer = new TranslationRecognizer(
        speechTranslationConfig, 
        autoDetectSourceLanguageConfig,
        audioConfig))
    {

        Console.WriteLine("Say something or read from file...");
        var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

        if (result.Reason == ResultReason.TranslatedSpeech)
        {
            var lidResult = result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);

            Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={result.Text}");
            foreach (var element in result.Translations)
            {
                Console.WriteLine($"    TRANSLATED into '{element.Key}': {element.Value}");
            }
        }
    }
}
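
The preceding example uses recognize once with at-start LID. Where continuous LID is supported for translation, a minimal sketch of continuous recognition might look like the following, reusing the speechTranslationConfig, autoDetectSourceLanguageConfig, and audioConfig from the example above:

// Select continuous LID instead of the at-start default.
speechTranslationConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");

using (var recognizer = new TranslationRecognizer(
    speechTranslationConfig,
    autoDetectSourceLanguageConfig,
    audioConfig))
{
    // Each translated result carries the language that was detected for it.
    recognizer.Recognized += (s, e) =>
    {
        if (e.Result.Reason == ResultReason.TranslatedSpeech)
        {
            var lidResult = e.Result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
            Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={e.Result.Text}");
            foreach (var element in e.Result.Translations)
            {
                Console.WriteLine($"    TRANSLATED into '{element.Key}': {element.Value}");
            }
        }
    };

    await recognizer.StartContinuousRecognitionAsync();
    // ... speak, then stop when done.
    await recognizer.StopContinuousRecognitionAsync();
}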

See more examples of speech translation with language identification on GitHub.

auto region = "YourServiceRegion";
// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");

auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE" });

// Sets source and target languages
// The source language will be detected by the language detection feature. 
// However, SpeechRecognitionLanguage still needs to be set with a locale string, but it won't be used as the source language.
// This will be fixed in a future version of the Speech SDK.
auto fromLanguage = "en-US";
config->SetSpeechRecognitionLanguage(fromLanguage);
config->AddTargetLanguage("de");
config->AddTargetLanguage("fr");

// Creates a translation recognizer using microphone as audio input.
auto recognizer = TranslationRecognizer::FromConfig(config, autoDetectSourceLanguageConfig);
cout << "Say something...\n";

// Starts translation, and returns after a single utterance is recognized. The end of a
// single utterance is determined by listening for silence at the end or until a maximum of 15
// seconds of audio is processed. The task returns the recognized text as well as the translation.
// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
// shot recognition like command or query.
// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
auto result = recognizer->RecognizeOnceAsync().get();

// Checks result.
if (result->Reason == ResultReason::TranslatedSpeech)
{
    cout << "RECOGNIZED: Text=" << result->Text << std::endl;

    for (const auto& it : result->Translations)
    {
        cout << "TRANSLATED into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
    }
}
else if (result->Reason == ResultReason::RecognizedSpeech)
{
    cout << "RECOGNIZED: Text=" << result->Text << " (text could not be translated)" << std::endl;
}
else if (result->Reason == ResultReason::NoMatch)
{
    cout << "NOMATCH: Speech could not be recognized." << std::endl;
}
else if (result->Reason == ResultReason::Canceled)
{
    auto cancellation = CancellationDetails::FromResult(result);
    cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;

    if (cancellation->Reason == CancellationReason::Error)
    {
        cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
        cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
        cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
    }
}

See more examples of speech translation with language identification on GitHub.

import azure.cognitiveservices.speech as speechsdk
import time
import json

speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
weatherfilename="en-us_zh-cn.wav"

# set up translation parameters: source language and target languages
# Currently the v2 endpoint is required. In a future SDK release you won't need to set it. 
endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription=speech_key,
    endpoint=endpoint_string,
    speech_recognition_language='en-US',
    target_languages=('de', 'fr'))
audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)

# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])

# Creates a translation recognizer using an audio file as input.
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config, 
    audio_config=audio_config,
    auto_detect_source_language_config=auto_detect_source_language_config)

# Starts translation, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. The task returns the recognition text as result.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
result = recognizer.recognize_once()

# Check the result
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("""Recognized: {}
    German translation: {}
    French translation: {}""".format(
        result.text, result.translations['de'], result.translations['fr']))
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized: {}".format(result.text))
    detectedSrcLang = result.properties[speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
    print("Detected Language: {}".format(detectedSrcLang))
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
    print("Translation canceled: {}".format(result.cancellation_details.reason))
    if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
        print("Error details: {}".format(result.cancellation_details.error_details))

Run and use a container

Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.

When you run language ID in a container, use the SourceLanguageRecognizer object instead of SpeechRecognizer or TranslationRecognizer.
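
As a minimal sketch (assuming the language identification container is listening at ws://localhost:5003; adjust the host URL and port to your deployment):

// The container host URL below is an assumption for illustration.
var speechConfig = SpeechConfig.FromHost(new Uri("ws://localhost:5003"));

var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });

using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SourceLanguageRecognizer(
    speechConfig, autoDetectSourceLanguageConfig, audioConfig);

var result = await recognizer.RecognizeOnceAsync();
var detectedLanguage = AutoDetectSourceLanguageResult.FromResult(result).Language;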

For more information about containers, see the language identification speech containers how-to guide.

Implement speech to text batch transcription

To identify languages with the Batch transcription REST API, use the languageIdentification property in the body of your Transcriptions_Create request.

Warning

Batch transcription only supports language identification for default base models. If both language identification and a custom model are specified in the transcription request, the service falls back to use the base models for the specified candidate languages. This might result in unexpected recognition results.

If your speech to text scenario requires both language identification and custom models, use real-time speech to text instead of batch transcription.

The following example shows the usage of the languageIdentification property with four candidate languages. For more information about request properties, see Create a batch transcription.

{
    <...>

    "properties": {
        <...>

        "languageIdentification": {
            "candidateLocales": [
                "en-US",
                "ja-JP",
                "zh-CN",
                "hi-IN"
            ]
        },
        <...>
    }
}
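
As a minimal sketch, submitting such a request to Transcriptions_Create might look like the following, assuming the v3.1 REST API version and a placeholder subscription key, region, and audio content URL:

using System;
using System.Net.Http;
using System.Text;

var region = "YourServiceRegion";
// "locale" is still required in the request; use one of the candidate locales.
var requestBody = @"{
    ""displayName"": ""Transcription with language identification"",
    ""locale"": ""en-US"",
    ""contentUrls"": [ ""https://crbn.us/whatstheweatherlike.wav"" ],
    ""properties"": {
        ""languageIdentification"": {
            ""candidateLocales"": [ ""en-US"", ""ja-JP"", ""zh-CN"", ""hi-IN"" ]
        }
    }
}";

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSubscriptionKey");
var response = await client.PostAsync(
    $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    new StringContent(requestBody, Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode); // 201 Created on success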