Speech input providers, like Windows Speech Input, don't create any controllers but instead let you define keywords that raise speech input events when recognized. The Speech Commands Profile in the Input System Profile is where you configure the keywords to recognize. For each command, you can also select an input action to map it to and specify a key code that raises the same speech event when pressed.
The Speech Input Handler script can be added to a GameObject to handle speech commands using UnityEvents. It automatically shows the list of keywords defined in the Speech Commands Profile. Assign the optional SpeechConfirmationTooltip.prefab to display an animated confirmation tooltip label when a keyword is recognized.
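The handler is normally wired up in the inspector, but responses can also be added from code. A minimal sketch, assuming the `AddResponse(keyword, action)` helper on `SpeechInputHandler` in MRTK 2.x; the "Toggle Color" keyword is a hypothetical entry that would need to exist in your Speech Commands Profile:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical component that registers a response on a SpeechInputHandler
// at runtime instead of through the inspector.
[RequireComponent(typeof(SpeechInputHandler))]
public class ToggleColorOnSpeech : MonoBehaviour
{
    private Renderer targetRenderer;
    private bool isRed;

    private void Start()
    {
        targetRenderer = GetComponent<Renderer>();

        // "Toggle Color" is assumed to be a keyword defined in the
        // Speech Commands Profile; AddResponse hooks a callback to it.
        var handler = GetComponent<SpeechInputHandler>();
        handler.AddResponse("Toggle Color", ToggleColor);
    }

    private void ToggleColor()
    {
        isRed = !isRed;
        targetRenderer.material.color = isRed ? Color.red : Color.white;
    }
}
```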
Alternatively, developers can implement the IMixedRealitySpeechHandler interface in a custom script component to handle speech input events.
The SpeechInputExample scene, in MRTK/Examples/Demos/Input/Scenes/Speech, shows how to use speech. You can also listen to speech command events directly in your own script by implementing IMixedRealitySpeechHandler (see the table of event handlers).
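As a rough sketch of the handler-interface approach, the component below registers itself as a global speech handler and logs every recognized keyword. The class name and the logging behavior are illustrative assumptions, not part of the SpeechInputExample scene:

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical global listener: logs every keyword recognized by the
// speech input provider, whether or not this GameObject has focus.
public class SpeechKeywordLogger : MonoBehaviour, IMixedRealitySpeechHandler
{
    private void OnEnable()
    {
        // Register as a global handler so speech events are delivered
        // even without pointer focus on this object.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);
    }

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        // eventData.Command.Keyword holds the keyword text defined in
        // the Speech Commands Profile.
        Debug.Log($"Recognized speech command: {eventData.Command.Keyword}");
    }
}
```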