System.Speech.Recognition Namespace

Contains Windows Desktop Speech technology types for implementing speech recognition.



Classes

AudioLevelUpdatedEventArgs
Provides data for the AudioLevelUpdated event of the SpeechRecognizer or the SpeechRecognitionEngine class.

AudioSignalProblemOccurredEventArgs
Provides data for the AudioSignalProblemOccurred event of a SpeechRecognizer or a SpeechRecognitionEngine.

AudioStateChangedEventArgs
Provides data for the AudioStateChanged event of the SpeechRecognizer or the SpeechRecognitionEngine class.

Choices
Represents a set of alternatives in the constraints of a speech recognition grammar.

DictationGrammar
Represents a speech recognition grammar used for free text dictation.

EmulateRecognizeCompletedEventArgs
Provides data for the EmulateRecognizeCompleted event of the SpeechRecognizer and SpeechRecognitionEngine classes.

Grammar
A runtime object that references a speech recognition grammar, which an application can use to define the constraints for speech recognition.

GrammarBuilder
Provides a mechanism for programmatically building the constraints for a speech recognition grammar.

LoadGrammarCompletedEventArgs
Provides data for the LoadGrammarCompleted event of a SpeechRecognizer or SpeechRecognitionEngine object.

RecognitionEventArgs
Provides information about speech recognition events.

RecognitionResult
Contains detailed information about input that was recognized by instances of SpeechRecognitionEngine or SpeechRecognizer.

RecognizeCompletedEventArgs
Provides data for the RecognizeCompleted event raised by a SpeechRecognitionEngine or a SpeechRecognizer object.

RecognizedAudio
Represents audio input that is associated with a RecognitionResult.

RecognizedPhrase
Contains detailed information, generated by the speech recognizer, about the recognized input.

RecognizedWordUnit
Provides the atomic unit of recognized speech.

RecognizerInfo
Represents information about a SpeechRecognizer or SpeechRecognitionEngine instance.

RecognizerUpdateReachedEventArgs
Returns data from a SpeechRecognitionEngine.RecognizerUpdateReached or a SpeechRecognizer.RecognizerUpdateReached event.

ReplacementText
Contains information about a speech normalization procedure that has been performed on recognition results.

SemanticResultKey
Associates a key string with SemanticResultValue values to define SemanticValue objects.

SemanticResultValue
Represents a semantic value and optionally associates the value with a component of a speech recognition grammar.

SemanticValue
Represents the semantic organization of a recognized phrase.

SpeechDetectedEventArgs
Returns data from the SpeechRecognitionEngine.SpeechDetected or SpeechRecognizer.SpeechDetected event.

SpeechHypothesizedEventArgs
Returns notification from the SpeechRecognitionEngine.SpeechHypothesized or SpeechRecognizer.SpeechHypothesized event. This class supports the .NET Framework infrastructure and is not intended to be used directly from application code.

SpeechRecognitionEngine
Provides the means to access and manage an in-process speech recognition engine.

SpeechRecognitionRejectedEventArgs
Provides information for the SpeechRecognitionEngine.SpeechRecognitionRejected and SpeechRecognizer.SpeechRecognitionRejected events.

SpeechRecognizedEventArgs
Provides information for the SpeechRecognitionEngine.SpeechRecognized, SpeechRecognizer.SpeechRecognized, and Grammar.SpeechRecognized events.

SpeechRecognizer
Provides access to the shared speech recognition service available on the Windows desktop.

SpeechUI
Provides text and status information on recognition operations to be displayed in the Speech platform user interface.

StateChangedEventArgs
Returns data from the StateChanged event.



Enums

AudioSignalProblem
Contains a list of possible problems in the audio signal coming in to a speech recognition engine.

AudioState
Contains a list of possible states for the audio input to a speech recognition engine.

DisplayAttributes
Lists the options that the SpeechRecognitionEngine object can use to specify white space for the display of a word or punctuation mark.

RecognizeMode
Enumerates values of the recognition mode.

RecognizerState
Enumerates values of the recognizer's state.

SubsetMatchingMode
Enumerates values of subset matching mode.


The Windows Desktop Speech Technology software offers a basic speech recognition infrastructure that digitizes acoustical signals, and recovers words and speech elements from audio input.

Applications use the System.Speech.Recognition namespace to access and extend this basic speech recognition technology by defining algorithms for identifying and acting on specific phrases or word patterns, and by managing the runtime behavior of this speech infrastructure.

Create Grammars

You create grammars, which consist of a set of rules or constraints, to define words and phrases that your application will recognize as meaningful input. Using a constructor for the Grammar class, you can create a grammar object at runtime from GrammarBuilder or SrgsDocument instances, or from a file, a string, or a stream that contains a definition of a grammar.

Using the GrammarBuilder and Choices classes, you can programmatically create grammars of low to medium complexity that can be used to perform recognition for many common scenarios. To create grammars programmatically that conform to the Speech Recognition Grammar Specification 1.0 (SRGS) and take advantage of the authoring flexibility of SRGS, use the types of the System.Speech.Recognition.SrgsGrammar namespace. You can also create XML-format SRGS grammars using any text editor and use the result to create GrammarBuilder, SrgsDocument, or Grammar objects.
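The Choices and GrammarBuilder pattern described above can be sketched as follows. This is a hypothetical example; the phrases and the grammar name are illustrative, not part of any standard grammar.

```csharp
using System.Speech.Recognition;

class GrammarExample
{
    static Grammar BuildColorGrammar()
    {
        // Alternatives the recognizer will accept at this point in the phrase.
        Choices colors = new Choices(new string[] { "red", "green", "blue" });

        // Compose a phrase: fixed words followed by one of the alternatives.
        GrammarBuilder builder = new GrammarBuilder("Set background to");
        builder.Append(colors);

        // Create the runtime grammar object from the builder.
        Grammar grammar = new Grammar(builder);
        grammar.Name = "backgroundColor";  // illustrative name, aids identification in events
        return grammar;
    }
}
```

Load the returned Grammar into a SpeechRecognizer or SpeechRecognitionEngine with LoadGrammar to put the constraint into effect.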

In addition, the DictationGrammar class provides a special-case grammar to support a conventional dictation model.

See Create Grammars in the System Speech Programming Guide for .NET Framework for more information and examples.

Manage Speech Recognition Engines

Instances of SpeechRecognizer and SpeechRecognitionEngine supplied with Grammar objects provide the primary access to the speech recognition engines of the Windows Desktop Speech Technology.

You can use the SpeechRecognizer class to create client applications that use the speech recognition technology provided by Windows, which you can configure through the Control Panel. Such applications accept input through a computer's default audio input mechanism.
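A minimal sketch of the shared-recognizer scenario above, assuming Windows Speech Recognition is available on the machine; the grammar phrase is illustrative.

```csharp
using System;
using System.Speech.Recognition;

class SharedRecognizerExample
{
    static void Main()
    {
        // Connects to the shared recognition service configured in Control Panel;
        // input comes from the computer's default audio input mechanism.
        SpeechRecognizer recognizer = new SpeechRecognizer();
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder("start listening test")));

        recognizer.SpeechRecognized += (sender, e) =>
            Console.WriteLine(e.Result.Text);

        Console.ReadLine();  // keep the application alive to receive events
    }
}
```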

For more control over the configuration and type of recognition engine, build an application using SpeechRecognitionEngine, which runs in-process. Using the SpeechRecognitionEngine class, you can also dynamically select audio input from devices, files, or streams.
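The in-process approach can be sketched as follows: construct a SpeechRecognitionEngine, select an audio input, load a grammar, and run a recognition. This assumes a working default microphone; the commented alternative shows file input.

```csharp
using System;
using System.Speech.Recognition;

class EngineExample
{
    static void Main()
    {
        using (SpeechRecognitionEngine engine = new SpeechRecognitionEngine())
        {
            // Input could also come from a file or stream, for example:
            // engine.SetInputToWaveFile("speech.wav");
            engine.SetInputToDefaultAudioDevice();

            // A dictation grammar accepts free text input.
            engine.LoadGrammar(new DictationGrammar());

            // Block until the engine returns a result; null if nothing was recognized.
            RecognitionResult result = engine.Recognize();
            if (result != null)
                Console.WriteLine(result.Text);
        }
    }
}
```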

See Initialize and Manage a Speech Recognition Engine in the System Speech Programming Guide for .NET Framework for more information.

Respond to Events

SpeechRecognizer and SpeechRecognitionEngine objects generate events in response to audio input to the speech recognition engine. The AudioLevelUpdated, AudioSignalProblemOccurred, and AudioStateChanged events are raised in response to changes in the incoming signal. The SpeechDetected event is raised when the speech recognition engine identifies incoming audio as speech. The speech recognition engine raises the SpeechRecognized event when it matches speech input to one of its loaded grammars, and raises the SpeechRecognitionRejected event when speech input does not match any of its loaded grammars.

Other types of events include the LoadGrammarCompleted event, which a speech recognition engine raises when it has loaded a grammar. The StateChanged event is exclusive to the SpeechRecognizer class, which raises it when the state of Windows Speech Recognition changes.

You can register to be notified of events that the speech recognition engine raises, and create handlers using the EventArgs classes associated with each of these events to program your application's behavior when an event is raised.
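The event pattern described above can be sketched as follows: handlers are attached before recognition starts, and each handler receives the EventArgs type associated with its event. This is a hedged example assuming a default microphone is present.

```csharp
using System;
using System.Speech.Recognition;

class EventExample
{
    static void Main()
    {
        SpeechRecognitionEngine engine = new SpeechRecognitionEngine();
        engine.SetInputToDefaultAudioDevice();
        engine.LoadGrammar(new DictationGrammar());

        // Handler receives a SpeechRecognizedEventArgs object.
        engine.SpeechRecognized += (sender, e) =>
            Console.WriteLine("Recognized: " + e.Result.Text);

        // Handler receives a SpeechRecognitionRejectedEventArgs object.
        engine.SpeechRecognitionRejected += (sender, e) =>
            Console.WriteLine("Input did not match a loaded grammar.");

        // Multiple mode keeps the engine recognizing until explicitly stopped.
        engine.RecognizeAsync(RecognizeMode.Multiple);
        Console.ReadLine();  // run until Enter is pressed
        engine.RecognizeAsyncStop();
    }
}
```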

See Using Speech Recognition Events in the System Speech Programming Guide for .NET Framework for more information.
