How to specify the speech recognizer language
Learn how to select an installed language to use for speech recognition.
Here, we enumerate the languages installed on a system, identify which is the default language, and select a different language for recognition.
What you need to know
Technologies
Prerequisites
This topic builds on Quickstart: Speech recognition. You should have a basic understanding of speech recognition and recognition constraints.
To complete this tutorial, review these topics to become familiar with the technologies discussed here:
- Install Microsoft Visual Studio.
- Get a developer license. For instructions, see Develop using Visual Studio 2013.
- Create your first app using JavaScript.
- Roadmap for Windows Store apps using JavaScript
- Learn about events with Quickstart: adding HTML controls and handling events
- See Speech design guidelines for Windows Phone for helpful tips on designing a useful and engaging speech-enabled app.
Instructions
Step 1: Identify the default language
A speech recognizer uses the system speech language as its default recognition language. This language is set by the user on the device Settings > System > Speech > Speech Language screen.
We identify the default language by checking the systemSpeechLanguage static property.
var language = SpeechRecognizer.systemSpeechLanguage;
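For example, you can display the default language to the user. This sketch assumes the Windows.Media.SpeechRecognition namespace is available and uses the displayName and languageTag properties of the Language object:

```javascript
// Look up the default recognition language and log its
// friendly name and BCP-47 tag (for example, "en-US").
var SpeechRecognizer = Windows.Media.SpeechRecognition.SpeechRecognizer;

var language = SpeechRecognizer.systemSpeechLanguage;
console.log("Default speech language: " +
    language.displayName + " (" + language.languageTag + ")");
```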
Step 2: Confirm an installed language
Installed languages can vary between devices. You should verify the existence of a language if you depend on it for a particular constraint.
Note A reboot is required after a new language pack is installed. An exception with error code SPERR_NOT_FOUND (0x8004503a) is raised if the specified language is not supported or has not finished installing.
Determine the supported languages on a device by checking one of two static properties of the SpeechRecognizer class:
- supportedTopicLanguages: The collection of Language objects used with predefined dictation and web search grammars.
- supportedGrammarLanguages: The collection of Language objects used with a list constraint or a Speech Recognition Grammar Specification (SRGS) file.
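The following sketch enumerates both collections and checks for a specific language tag ("fr-FR" is used here only as an example) before depending on it:

```javascript
var SpeechRecognizer = Windows.Media.SpeechRecognition.SpeechRecognizer;

// List the languages available for dictation and web search grammars.
var topicLangs = SpeechRecognizer.supportedTopicLanguages;
for (var i = 0; i < topicLangs.length; i++) {
    console.log("Topic language: " + topicLangs[i].languageTag);
}

// Check whether a particular language can be used with list
// constraints or SRGS grammar files.
var hasFrench = false;
var grammarLangs = SpeechRecognizer.supportedGrammarLanguages;
for (var j = 0; j < grammarLangs.length; j++) {
    if (grammarLangs[j].languageTag === "fr-FR") {
        hasFrench = true;
    }
}
```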
Step 3: Specify a language
To specify a language, pass a Language object in the SpeechRecognizer constructor.
Here, we specify "en-US" as the recognition language.
var language = new Windows.Globalization.Language("en-US");
var recognizer = new SpeechRecognizer(language);
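Because the constructor raises an exception if the requested language is not supported or has not finished installing (see the note in Step 2), it can be worth guarding the call. A sketch, using "de-DE" as a hypothetical target language and falling back to the system speech language:

```javascript
var ns = Windows.Media.SpeechRecognition;
var recognizer;
try {
    var language = new Windows.Globalization.Language("de-DE");
    recognizer = new ns.SpeechRecognizer(language);
} catch (e) {
    // SPERR_NOT_FOUND (0x8004503a): the language is not supported
    // or its language pack is still installing. Fall back to the
    // default recognizer, which uses the system speech language.
    recognizer = new ns.SpeechRecognizer();
}
```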
Remarks
A topic constraint can be configured by adding a SpeechRecognitionTopicConstraint to the constraints collection of the SpeechRecognizer and then calling compileConstraintsAsync. A SpeechRecognitionResultStatus of TopicLanguageNotSupported is returned if the recognizer is not initialized with a supported topic language.
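A sketch of compiling a dictation topic constraint and checking the result status (the topic hint string "dictation" is arbitrary):

```javascript
var ns = Windows.Media.SpeechRecognition;
var recognizer = new ns.SpeechRecognizer();

// Add a predefined dictation grammar, then compile.
var topicConstraint = new ns.SpeechRecognitionTopicConstraint(
    ns.SpeechRecognitionScenario.dictation, "dictation");
recognizer.constraints.append(topicConstraint);

recognizer.compileConstraintsAsync().then(function (result) {
    if (result.status ===
        ns.SpeechRecognitionResultStatus.topicLanguageNotSupported) {
        // The recognizer language is not in supportedTopicLanguages.
    }
});
```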
A list constraint is configured by adding a SpeechRecognitionListConstraint to the constraints collection of the SpeechRecognizer and then calling compileConstraintsAsync. You cannot specify the language of a custom list directly. Instead, the list is processed using the language of the recognizer.
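For example, the phrases below carry no language tag of their own; they are interpreted in the recognizer's language, "en-US" in this sketch (the phrase list and the tag "answers" are illustrative):

```javascript
var ns = Windows.Media.SpeechRecognition;

// The recognizer language determines how the list is processed.
var recognizer = new ns.SpeechRecognizer(
    new Windows.Globalization.Language("en-US"));

var listConstraint = new ns.SpeechRecognitionListConstraint(
    ["yes", "no", "maybe"], "answers");
recognizer.constraints.append(listConstraint);

recognizer.compileConstraintsAsync();
```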
An SRGS grammar is an open-standard XML format represented by the SpeechRecognitionGrammarFileConstraint class. Unlike custom lists, you can specify the language of the grammar in the SRGS markup. compileConstraintsAsync fails with a SpeechRecognitionResultStatus of TopicLanguageNotSupported if the recognizer is not initialized to the same language as the SRGS markup.
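A sketch of loading an SRGS grammar file from the app package and compiling it; "colors.grxml" and the tag "colors" are hypothetical names:

```javascript
var ns = Windows.Media.SpeechRecognition;
var recognizer = new ns.SpeechRecognizer();

Windows.ApplicationModel.Package.current.installedLocation
    .getFileAsync("colors.grxml")
    .then(function (file) {
        recognizer.constraints.append(
            new ns.SpeechRecognitionGrammarFileConstraint(file, "colors"));
        return recognizer.compileConstraintsAsync();
    })
    .then(function (result) {
        if (result.status ===
            ns.SpeechRecognitionResultStatus.topicLanguageNotSupported) {
            // The xml:lang of the SRGS markup does not match the
            // language the recognizer was initialized with.
        }
    });
```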
Related topics
Responding to speech interactions
Designers