Hi @Cristian Camilo Bonilla Tellez, thanks for using the Microsoft Q&A Platform.
I'm not sure which method you have used, but if you followed the QuickStart, the behavior is likely caused by the recognize_once_async method. The recognize_once_async operation only transcribes utterances of up to 30 seconds, or until silence is detected.
Use continuous recognition when you want to control when to stop recognizing. It requires you to connect to EventSignal to get the recognition results.
To stop recognition, you must call stop_continuous_recognition(), as described here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-recognize-speech?pivots=programming-language-python#use-continuous-recognition
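Here is a minimal sketch of continuous recognition with the Python Speech SDK, based on the pattern in the documentation above. The subscription key, region, and audio file name are placeholders you would replace with your own values:

```python
import time
import azure.cognitiveservices.speech as speechsdk

# Placeholder key/region and audio file - replace with your own values.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="your_audio_file.wav")

speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

done = False

def stop_cb(evt):
    """Stop continuous recognition when the session stops or is canceled."""
    speech_recognizer.stop_continuous_recognition()
    global done
    done = True

# Connect callbacks to the recognizer's events to receive results.
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt.result.text)))
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)

# Start continuous recognition; it keeps transcribing until stopped.
speech_recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
```

Unlike recognize_once_async, this keeps transcribing past the 30-second/silence limit and only stops when stop_continuous_recognition() is called.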
Here's a GitHub sample for your reference: https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py
I hope this helps. Please give this a try, or share more information if the issue persists.
Regards,
Vasavi