SpeechRecognitionEngine.RecognizeAsync Method

Definition

Starts an asynchronous speech recognition operation.

Overloads

RecognizeAsync()

Performs a single, asynchronous speech recognition operation.

RecognizeAsync(RecognizeMode)

Performs one or more asynchronous speech recognition operations.

Remarks

These methods perform single or multiple asynchronous recognition operations. The recognizer performs each operation against its loaded and enabled speech recognition grammars.

During a call to either of these methods, the recognizer can raise events such as SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, SpeechRecognized, and RecognizeCompleted; the examples in each overload section below attach handlers for each of these.

To retrieve the result of an asynchronous recognition operation, attach an event handler to the recognizer's SpeechRecognized event. The recognizer raises this event whenever it successfully completes a synchronous or asynchronous recognition operation. If recognition was not successful, the Result property on the RecognizeCompletedEventArgs object, which you can access in the handler for the RecognizeCompleted event, will be null.

An asynchronous recognition operation can fail for the following reasons:

  • Speech is not detected before the timeout intervals expire for the BabbleTimeout or InitialSilenceTimeout properties.

  • The recognition engine detects speech, but cannot find a match in any of its loaded and enabled Grammar objects.

RecognizeAsync()

Source:
SpeechRecognitionEngine.cs

Performs a single, asynchronous speech recognition operation.

public:
 void RecognizeAsync();
public void RecognizeAsync();
member this.RecognizeAsync : unit -> unit
Public Sub RecognizeAsync ()

Example

The following example shows part of a console application that demonstrates basic asynchronous speech recognition. The example creates a speech recognition grammar, loads it into an in-process speech recognizer, and performs one asynchronous recognition operation. Event handlers are included to demonstrate the events that the recognizer raises during the operation.

using System;
using System.Globalization;
using System.Speech.Recognition;
using System.Threading;

namespace AsynchronousRecognition
{
  class Program
  {
    // Indicate whether asynchronous recognition is complete.
    static bool completed;

    static void Main(string[] args)
    {
      // Create an in-process speech recognizer.
      using (SpeechRecognitionEngine recognizer =
        new SpeechRecognitionEngine(new CultureInfo("en-US")))
      {
        // Create a grammar for choosing cities for a flight.
        Choices cities = new Choices(new string[]
        { "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });

        GrammarBuilder gb = new GrammarBuilder();
        gb.Append("I want to fly from");
        gb.Append(cities);
        gb.Append("to");
        gb.Append(cities);

        // Construct a Grammar object and load it to the recognizer.
        Grammar cityChooser = new Grammar(gb);
        cityChooser.Name = ("City Chooser");
        recognizer.LoadGrammarAsync(cityChooser);

        // Attach event handlers.
        recognizer.SpeechDetected +=
          new EventHandler<SpeechDetectedEventArgs>(
            SpeechDetectedHandler);
        recognizer.SpeechHypothesized +=
          new EventHandler<SpeechHypothesizedEventArgs>(
            SpeechHypothesizedHandler);
        recognizer.SpeechRecognitionRejected +=
          new EventHandler<SpeechRecognitionRejectedEventArgs>(
            SpeechRecognitionRejectedHandler);
        recognizer.SpeechRecognized +=
          new EventHandler<SpeechRecognizedEventArgs>(
            SpeechRecognizedHandler);
        recognizer.RecognizeCompleted +=
          new EventHandler<RecognizeCompletedEventArgs>(
            RecognizeCompletedHandler);

        // Assign input to the recognizer and start an asynchronous
        // recognition operation.
        recognizer.SetInputToDefaultAudioDevice();

        completed = false;
        Console.WriteLine("Starting asynchronous recognition...");
        recognizer.RecognizeAsync();

        // Wait for the operation to complete.
        while (!completed)
        {
          Thread.Sleep(333);
        }
        Console.WriteLine("Done.");
      }

      Console.WriteLine();
      Console.WriteLine("Press any key to exit...");
      Console.ReadKey();
    }

    // Handle the SpeechDetected event.
    static void SpeechDetectedHandler(object sender, SpeechDetectedEventArgs e)
    {
      Console.WriteLine(" In SpeechDetectedHandler:");
      Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
    }

    // Handle the SpeechHypothesized event.
    static void SpeechHypothesizedHandler(
      object sender, SpeechHypothesizedEventArgs e)
    {
      Console.WriteLine(" In SpeechHypothesizedHandler:");

      string grammarName = "<not available>";
      string resultText = "<not available>";
      if (e.Result != null)
      {
        if (e.Result.Grammar != null)
        {
          grammarName = e.Result.Grammar.Name;
        }
        resultText = e.Result.Text;
      }

      Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
        grammarName, resultText);
    }

    // Handle the SpeechRecognitionRejected event.
    static void SpeechRecognitionRejectedHandler(
      object sender, SpeechRecognitionRejectedEventArgs e)
    {
      Console.WriteLine(" In SpeechRecognitionRejectedHandler:");

      string grammarName = "<not available>";
      string resultText = "<not available>";
      if (e.Result != null)
      {
        if (e.Result.Grammar != null)
        {
          grammarName = e.Result.Grammar.Name;
        }
        resultText = e.Result.Text;
      }

      Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
        grammarName, resultText);
    }

    // Handle the SpeechRecognized event.
    static void SpeechRecognizedHandler(
      object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine(" In SpeechRecognizedHandler.");

      string grammarName = "<not available>";
      string resultText = "<not available>";
      if (e.Result != null)
      {
        if (e.Result.Grammar != null)
        {
          grammarName = e.Result.Grammar.Name;
        }
        resultText = e.Result.Text;
      }

      Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
        grammarName, resultText);
    }

    // Handle the RecognizeCompleted event.
    static void RecognizeCompletedHandler(
      object sender, RecognizeCompletedEventArgs e)
    {
      Console.WriteLine(" In RecognizeCompletedHandler.");

      if (e.Error != null)
      {
        Console.WriteLine(
          " - Error occurred during recognition: {0}", e.Error);
        return;
      }
      if (e.InitialSilenceTimeout || e.BabbleTimeout)
      {
        Console.WriteLine(
          " - BabbleTimeout = {0}; InitialSilenceTimeout = {1}",
          e.BabbleTimeout, e.InitialSilenceTimeout);
        return;
      }
      if (e.InputStreamEnded)
      {
        Console.WriteLine(
          " - AudioPosition = {0}; InputStreamEnded = {1}",
          e.AudioPosition, e.InputStreamEnded);
      }
      if (e.Result != null)
      {
        Console.WriteLine(
          " - Grammar = {0}; Text = {1}; Confidence = {2}",
          e.Result.Grammar.Name, e.Result.Text, e.Result.Confidence);
        Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
      }
      else
      {
        Console.WriteLine(" - No result.");
      }

      completed = true;
    }
  }
}

Remarks

This method performs a single asynchronous recognition operation. The recognizer performs the operation against its loaded and enabled speech recognition grammars.

During a call to this method, the recognizer can raise events such as SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, SpeechRecognized, and RecognizeCompleted; the example above attaches handlers for each of these.

To retrieve the result of an asynchronous recognition operation, attach an event handler to the recognizer's SpeechRecognized event. The recognizer raises this event whenever it successfully completes a synchronous or asynchronous recognition operation. If recognition was not successful, the Result property on the RecognizeCompletedEventArgs object, which you can access in the handler for the RecognizeCompleted event, will be null.
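
The fragment below isolates that pattern as a minimal sketch; it is not part of the original example and assumes a SpeechRecognitionEngine named recognizer that already has a grammar loaded and an input source set.

// Minimal sketch (assumes a configured SpeechRecognitionEngine "recognizer").
recognizer.SpeechRecognized += (sender, e) =>
{
    // Raised only when a recognition operation succeeds.
    Console.WriteLine("Recognized: {0}", e.Result.Text);
};
recognizer.RecognizeCompleted += (sender, e) =>
{
    // A null Result indicates that recognition did not succeed.
    if (e.Result == null)
    {
        Console.WriteLine("Recognition was not successful.");
    }
};
recognizer.RecognizeAsync();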

To perform synchronous recognition, use one of the Recognize methods.
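
As a minimal sketch of the synchronous alternative (assuming the same recognizer setup as the example above), Recognize blocks until a single recognition attempt completes and returns the result directly:

// Synchronous counterpart: blocks until one recognition attempt completes.
// Returns null if recognition did not succeed.
RecognitionResult result = recognizer.Recognize();
if (result != null)
{
    Console.WriteLine("Recognized: {0}", result.Text);
}
else
{
    Console.WriteLine("Recognition was not successful.");
}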

This method stores in the task it returns all non-usage exceptions that the method's synchronous counterpart can throw. If an exception is stored in the returned task, that exception is thrown when the task is awaited. Usage exceptions, such as ArgumentException, are still thrown synchronously. For the stored exceptions, see the exceptions thrown by Recognize().

See also

Applies to

RecognizeAsync(RecognizeMode)

Source:
SpeechRecognitionEngine.cs

Performs one or more asynchronous speech recognition operations.

public:
 void RecognizeAsync(System::Speech::Recognition::RecognizeMode mode);
public void RecognizeAsync(System.Speech.Recognition.RecognizeMode mode);
member this.RecognizeAsync : System.Speech.Recognition.RecognizeMode -> unit
Public Sub RecognizeAsync (mode As RecognizeMode)

Parameters

mode
RecognizeMode

Indicates whether to perform one recognition operation or multiple operations.
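
A minimal sketch of the two modes (assuming a configured recognizer; the two calls are alternatives, not a sequence):

// Perform exactly one recognition operation, then stop.
recognizer.RecognizeAsync(RecognizeMode.Single);

// Or: keep performing recognition operations until
// RecognizeAsyncCancel or RecognizeAsyncStop is called.
recognizer.RecognizeAsync(RecognizeMode.Multiple);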

Example

The following example shows part of a console application that demonstrates basic asynchronous speech recognition. The example creates a speech recognition grammar, loads it into an in-process speech recognizer, and performs multiple asynchronous recognition operations. The asynchronous operations are canceled after 30 seconds. Event handlers are included to demonstrate the events that the recognizer raises during the operations.

using System;
using System.Globalization;
using System.Speech.Recognition;
using System.Threading;

namespace AsynchronousRecognition
{
  class Program
  {
    // Indicate whether asynchronous recognition is complete.
    static bool completed;

    static void Main(string[] args)
    {
      // Create an in-process speech recognizer.
      using (SpeechRecognitionEngine recognizer =
        new SpeechRecognitionEngine(new CultureInfo("en-US")))
      {
        // Create a grammar for choosing cities for a flight.
        Choices cities = new Choices(new string[] { "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });

        GrammarBuilder gb = new GrammarBuilder();
        gb.Append("I want to fly from");
        gb.Append(cities);
        gb.Append("to");
        gb.Append(cities);

        // Construct a Grammar object and load it to the recognizer.
        Grammar cityChooser = new Grammar(gb);
        cityChooser.Name = ("City Chooser");
        recognizer.LoadGrammarAsync(cityChooser);

        // Attach event handlers.
        recognizer.SpeechDetected +=
          new EventHandler<SpeechDetectedEventArgs>(
            SpeechDetectedHandler);
        recognizer.SpeechHypothesized +=
          new EventHandler<SpeechHypothesizedEventArgs>(
            SpeechHypothesizedHandler);
        recognizer.SpeechRecognitionRejected +=
          new EventHandler<SpeechRecognitionRejectedEventArgs>(
            SpeechRecognitionRejectedHandler);
        recognizer.SpeechRecognized +=
          new EventHandler<SpeechRecognizedEventArgs>(
            SpeechRecognizedHandler);
        recognizer.RecognizeCompleted +=
          new EventHandler<RecognizeCompletedEventArgs>(
            RecognizeCompletedHandler);

        // Assign input to the recognizer and start asynchronous
        // recognition.
        recognizer.SetInputToDefaultAudioDevice();

        completed = false;
        Console.WriteLine("Starting asynchronous recognition...");
        recognizer.RecognizeAsync(RecognizeMode.Multiple);

        // Wait 30 seconds, and then cancel asynchronous recognition.
        Thread.Sleep(TimeSpan.FromSeconds(30));
        recognizer.RecognizeAsyncCancel();

        // Wait for the operation to complete.
        while (!completed)
        {
          Thread.Sleep(333);
        }
        Console.WriteLine("Done.");
      }

      Console.WriteLine();
      Console.WriteLine("Press any key to exit...");
      Console.ReadKey();
    }

    // Handle the SpeechDetected event.
    static void SpeechDetectedHandler(object sender, SpeechDetectedEventArgs e)
    {
      Console.WriteLine(" In SpeechDetectedHandler:");
      Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
    }

    // Handle the SpeechHypothesized event.
    static void SpeechHypothesizedHandler(
      object sender, SpeechHypothesizedEventArgs e)
    {
      Console.WriteLine(" In SpeechHypothesizedHandler:");

      string grammarName = "<not available>";
      string resultText = "<not available>";
      if (e.Result != null)
      {
        if (e.Result.Grammar != null)
        {
          grammarName = e.Result.Grammar.Name;
        }
        resultText = e.Result.Text;
      }

      Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
        grammarName, resultText);
    }

    // Handle the SpeechRecognitionRejected event.
    static void SpeechRecognitionRejectedHandler(
      object sender, SpeechRecognitionRejectedEventArgs e)
    {
      Console.WriteLine(" In SpeechRecognitionRejectedHandler:");

      string grammarName = "<not available>";
      string resultText = "<not available>";
      if (e.Result != null)
      {
        if (e.Result.Grammar != null)
        {
          grammarName = e.Result.Grammar.Name;
        }
        resultText = e.Result.Text;
      }

      Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
        grammarName, resultText);
    }

    // Handle the SpeechRecognized event.
    static void SpeechRecognizedHandler(
      object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine(" In SpeechRecognizedHandler.");

      string grammarName = "<not available>";
      string resultText = "<not available>";
      if (e.Result != null)
      {
        if (e.Result.Grammar != null)
        {
          grammarName = e.Result.Grammar.Name;
        }
        resultText = e.Result.Text;
      }

      Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
        grammarName, resultText);
    }

    // Handle the RecognizeCompleted event.
    static void RecognizeCompletedHandler(
      object sender, RecognizeCompletedEventArgs e)
    {
      Console.WriteLine(" In RecognizeCompletedHandler.");

      if (e.Error != null)
      {
        Console.WriteLine(
          " - Error occurred during recognition: {0}", e.Error);
        return;
      }
      if (e.InitialSilenceTimeout || e.BabbleTimeout)
      {
        Console.WriteLine(
          " - BabbleTimeout = {0}; InitialSilenceTimeout = {1}",
          e.BabbleTimeout, e.InitialSilenceTimeout);
        return;
      }
      if (e.InputStreamEnded)
      {
        Console.WriteLine(
          " - AudioPosition = {0}; InputStreamEnded = {1}",
          e.AudioPosition, e.InputStreamEnded);
      }
      if (e.Result != null)
      {
        Console.WriteLine(
          " - Grammar = {0}; Text = {1}; Confidence = {2}",
          e.Result.Grammar.Name, e.Result.Text, e.Result.Confidence);
        Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
      }
      else
      {
        Console.WriteLine(" - No result.");
      }

      completed = true;
    }
  }
}

Remarks

If mode is Multiple, the recognizer continues performing asynchronous recognition operations until you call the RecognizeAsyncCancel or RecognizeAsyncStop method.
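
The difference between the two stop methods, as a minimal sketch (assuming a recognizer currently running in Multiple mode):

// End the session immediately, discarding any recognition in progress.
recognizer.RecognizeAsyncCancel();

// Or: let a recognition operation already in progress finish first,
// then stop recognizing.
recognizer.RecognizeAsyncStop();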

During a call to this method, the recognizer can raise events such as SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, SpeechRecognized, and RecognizeCompleted; the example above attaches handlers for each of these.

To retrieve the result of an asynchronous recognition operation, attach an event handler to the recognizer's SpeechRecognized event. The recognizer raises this event whenever it successfully completes a synchronous or asynchronous recognition operation. If recognition was not successful, the Result property on the RecognizeCompletedEventArgs object, which you can access in the handler for the RecognizeCompleted event, will be null.

An asynchronous recognition operation can fail for the following reasons:

  • Speech is not detected before the timeout intervals expire for the BabbleTimeout or InitialSilenceTimeout properties.

  • The recognition engine detects speech, but cannot find a match in any of its loaded and enabled Grammar objects.
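
Both timeout conditions above can be tuned on the recognizer before starting recognition. A minimal sketch with illustrative values (assuming a configured recognizer named recognizer):

// Illustrative timeout settings (example values, not recommendations).
recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(5); // max leading silence
recognizer.BabbleTimeout = TimeSpan.FromSeconds(3);         // max non-speech input

recognizer.RecognizeCompleted += (sender, e) =>
{
    // These flags report which timeout, if any, ended the operation.
    if (e.InitialSilenceTimeout || e.BabbleTimeout)
    {
        Console.WriteLine("Recognition timed out before a match was made.");
    }
};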

To perform synchronous recognition, use one of the Recognize methods.

See also

Applies to