Training
Module
Add Azure AI services to your mixed reality project - Training
This course explores the use of Azure speech services by integrating them into a HoloLens 2 application. You can also deploy your project to a HoloLens.
The Windows Speech package adds WindowsKeywordRecognitionSubsystem to your project, which offers keyword recognition capabilities on the Windows and UWP platforms. As an MRTK KeywordRecognitionSubsystem, the subsystem can work with SpeechInteractor to trigger select events on StatefulInteractables based on the settings of the interactables. You can also register arbitrary UnityActions with a keyword of your choice so that the action is invoked when that word is spoken. For general information on KeywordRecognitionSubsystem in MRTK, refer to the documentation.
Refer to the Setup and Using KeywordRecognitionSubsystem sections of the KeywordRecognitionSubsystem article. For WindowsKeywordRecognitionSubsystem, a configuration asset is used to set the Confidence Level for interpreting speech. The only capability needed in the player settings is Microphone.
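The keyword registration described above can be sketched as follows. This is a minimal example, assuming an MRTK3 scene where the subsystem is running; the keyword string and log message are illustrative, and the namespaces shown match recent MRTK3 packages (adjust for your version):

```csharp
using MixedReality.Toolkit.Subsystems;
using UnityEngine;

public class SpeechCommandExample : MonoBehaviour
{
    void Start()
    {
        // Retrieve the first running KeywordRecognitionSubsystem, if any.
        var keywordSubsystem =
            XRSubsystemHelpers.GetFirstRunningSubsystem<KeywordRecognitionSubsystem>();

        if (keywordSubsystem != null)
        {
            // Register an arbitrary UnityAction with a keyword of your choice.
            // The action is invoked whenever the keyword is recognized.
            keywordSubsystem
                .CreateOrGetEventForKeyword("toggle lights")
                .AddListener(() => Debug.Log("Keyword 'toggle lights' recognized."));
        }
    }
}
```

CreateOrGetEventForKeyword returns a UnityEvent, so multiple listeners can be attached to the same keyword from different scripts.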
The Windows Speech package also adds WindowsDictationSubsystem to your project, which offers dictation capabilities on the Windows and UWP platforms. As an MRTK DictationSubsystem, the subsystem allows you to start and stop a dictation session, and provides the following events:

- Recognizing is triggered when the recognizer is processing the input and returns a tentative result.
- Recognized is triggered when the recognizer has recognized the input and returns a final result.
- RecognitionFinished is triggered when the recognition session is finished and returns a reason.
- RecognitionFaulted is triggered when recognition is faulted (i.e., an error occurred) and returns a reason.

For general information on DictationSubsystem in MRTK, refer to the documentation.
Refer to the Setup and Using DictationSubsystem sections of the DictationSubsystem article. For WindowsDictationSubsystem, a configuration asset is used to set the Confidence Level for interpreting speech, and the Auto Silence Timeout and Initial Silence Timeout for a dictation session. The only capability needed in the player settings is Microphone.
Lastly, the Windows Speech package adds WindowsTextToSpeechSubsystem to your project, which offers the capability to synthesize and speak a text phrase on the Windows and UWP platforms. For general information on TextToSpeechSubsystem in MRTK, refer to the documentation.
Refer to the Setup and Using TextToSpeechSubsystem sections of the TextToSpeechSubsystem article. For WindowsTextToSpeechSubsystem, a configuration asset is used to select the Voice used. No capabilities are required in the player settings.
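A minimal usage sketch, assuming an MRTK3 scene with a running text-to-speech subsystem and an AudioSource assigned in the Inspector; the spoken phrase is illustrative:

```csharp
using MixedReality.Toolkit.Subsystems;
using UnityEngine;

public class TextToSpeechExample : MonoBehaviour
{
    // AudioSource that will play the synthesized speech.
    [SerializeField] private AudioSource audioSource;

    void Start()
    {
        var ttsSubsystem =
            XRSubsystemHelpers.GetFirstRunningSubsystem<TextToSpeechSubsystem>();

        // Synthesize the phrase and play it through the given AudioSource.
        ttsSubsystem?.TrySpeak("Hello from MRTK", audioSource);
    }
}
```

The voice used for synthesis comes from the subsystem's configuration asset, not from the call site.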
Note

As the names suggest, WindowsKeywordRecognitionSubsystem, WindowsDictationSubsystem, and WindowsTextToSpeechSubsystem only work on the Windows standalone and UWP platforms.
Documentation
Core Subsystem KeywordRecognitionSubsystem - MRTK3
Info on the MRTK3 subsystem responsible for keyword recognition
Speech input in MRTK3
Core Subsystem TextToSpeechSubsystem - MRTK3
Info on the MRTK3 subsystem responsible for text to speech