Note

Please see the Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

SpeechRecognitionEngine.Recognize Method

Performs a synchronous speech recognition operation.

Namespace:  Microsoft.Speech.Recognition
Assembly:  Microsoft.Speech (in Microsoft.Speech.dll)

Syntax

Visual Basic

'Declaration
Public Function Recognize As RecognitionResult
'Usage
Dim instance As SpeechRecognitionEngine
Dim returnValue As RecognitionResult

returnValue = instance.Recognize()

C#

public RecognitionResult Recognize()

Return Value

Type: Microsoft.Speech.Recognition.RecognitionResult
The recognition result for the input, or a null reference (Nothing in Visual Basic) if the operation is not successful.

Remarks

This method performs a single, synchronous recognition operation. The recognizer performs this operation against its loaded and enabled speech recognition grammars.
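
For example, only grammars whose Enabled property is true participate in recognition. The following is a brief sketch; the cityChooser and dateChooser Grammar objects are illustrative, and recognizer is assumed to be a configured SpeechRecognitionEngine:

// Sketch: only loaded and enabled grammars participate in recognition.
// 'cityChooser' and 'dateChooser' are illustrative Grammar objects, and
// 'recognizer' is assumed to be a configured SpeechRecognitionEngine.
recognizer.LoadGrammar(cityChooser);
recognizer.LoadGrammar(dateChooser);

// Temporarily exclude the date grammar from recognition.
dateChooser.Enabled = false;

// Only cityChooser is matched against the input.
RecognitionResult result = recognizer.Recognize();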

During a call to this method, the recognizer can raise events such as SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, and SpeechRecognized. The recognizer does not raise the RecognizeCompleted event when using this method.
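
For example, a sketch like the following attaches handlers for the SpeechHypothesized and SpeechRecognitionRejected events before the synchronous call; the handler bodies are illustrative, and recognizer is assumed to have a loaded grammar and a configured input source:

// Sketch: subscribe to recognition events before a synchronous call.
// 'recognizer' is assumed to have a loaded grammar and a configured input.
recognizer.SpeechHypothesized += (sender, e) =>
  Console.WriteLine("Hypothesis: {0}", e.Result.Text);
recognizer.SpeechRecognitionRejected += (sender, e) =>
  Console.WriteLine("Speech detected, but no grammar matched.");

RecognitionResult result = recognizer.Recognize();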

The Recognize() method returns a RecognitionResult object, or a null reference (Nothing in Visual Basic) if the operation is not successful.

A synchronous recognition operation can fail for the following reasons:

  • Speech is not detected before the intervals set by the BabbleTimeout or InitialSilenceTimeout properties expire (see the timeout sketch after this list).

  • The recognition engine detects speech but finds no matches in any of its loaded and enabled Grammar objects.
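
The following sketch shows one way to adjust the timeout properties before calling Recognize and to check for a failed operation; the specific TimeSpan values are illustrative:

// Sketch: adjust recognition timeouts (the values are illustrative).
// If recognizable speech does not arrive within these intervals,
// Recognize() returns null.
recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(5);
recognizer.BabbleTimeout = TimeSpan.FromSeconds(3);

RecognitionResult result = recognizer.Recognize();
if (result == null)
{
  Console.WriteLine("No recognition result; a timeout may have expired.");
}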

To perform asynchronous recognition, use one of the RecognizeAsync methods.
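
A minimal sketch of the asynchronous pattern handles the RecognizeCompleted event; the handler body is illustrative, and recognizer is assumed to be configured as in the example below:

// Sketch: a single asynchronous recognition operation.
// 'recognizer' is assumed to be configured as in the example below.
recognizer.RecognizeCompleted += (sender, e) =>
{
  if (e.Result != null)
  {
    Console.WriteLine("Recognized text = {0}", e.Result.Text);
  }
};

recognizer.RecognizeAsync(RecognizeMode.Single);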

Examples

The following example shows part of a console application that demonstrates basic speech recognition. The example creates a speech recognition grammar for choosing cities for a flight, constructs a Grammar object from it, loads the grammar into the SpeechRecognitionEngine, and performs one synchronous recognition operation.

using System;
using Microsoft.Speech.Recognition;

namespace SynchronousRecognition
{
  class Program
  {
    static void Main(string[] args)
    {
      // Create an in-process speech recognizer for the en-US locale.
      using (SpeechRecognitionEngine recognizer =
        new SpeechRecognitionEngine(
          new System.Globalization.CultureInfo("en-US")))
      {

        // Create a grammar for choosing cities for a flight.
        Choices cities = new Choices(new string[] 
        { "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });

        GrammarBuilder gb = new GrammarBuilder();
        gb.Append("I want to fly from");
        gb.Append(cities);
        gb.Append("to");
        gb.Append(cities);

        // Construct a Grammar object and load it into the recognizer.
        // Load synchronously so the grammar is ready before Recognize() is called.
        Grammar cityChooser = new Grammar(gb);
        cityChooser.Name = "City Chooser";
        recognizer.LoadGrammar(cityChooser);

        // Configure input to the speech recognizer.
        recognizer.SetInputToDefaultAudioDevice();

        // Start synchronous speech recognition.
        RecognitionResult result = recognizer.Recognize();

        if (result != null)
        {
          Console.WriteLine("Recognized text = {0}", result.Text);
        }
        else
        {
          Console.WriteLine("No recognition result available.");
        }
      }

      Console.WriteLine();
      Console.WriteLine("Press any key to continue...");
      Console.ReadKey();
    }
  }
}

See Also

Reference

SpeechRecognitionEngine Class

SpeechRecognitionEngine Members

Recognize Overload

Microsoft.Speech.Recognition Namespace