Azure AI Speech
An Azure service that integrates speech processing into apps and services.
Hi, I'm building a real-time voice chatbot. For speech recognition and synthesis I'm using Azure Speech: I recognize the user's voice, send the text to an LLM to get a response, and then synthesize the response into audio in real time. My goal is for recognition to keep running while synthesis is playing, so that as soon as the microphone picks up new speech, the synthesis is stopped (synthesizer.stop_speaking()). I currently use one thread for speech recognition and another for speech synthesis.

The code works well, but when synthesis is playing while the microphone is open, part of the synthesized audio is sometimes picked up and recognized by the microphone. Is there a property of speech_recognizer that lets me lower the audio pickup level so there is no interference, or should I take a different approach? Thank you very much in advance.
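For context, the barge-in pattern described above (recognition interrupts playback) is a thread-coordination problem. Below is a minimal, SDK-free sketch using threading.Event; the class name BargeInController, the timings, and the simulated playback loop are illustrative assumptions, not Azure APIs. In real code, the recognizer's recognizing callback would invoke on_speech_detected, and stop_speaking would call the SDK's synthesizer stop method instead of clearing a flag.

```python
import threading
import time

class BargeInController:
    """Coordinates a (simulated) synthesizer and recognizer so that
    incoming speech interrupts playback. In real code, wire the
    recognizer's 'recognizing' event to on_speech_detected() and call
    the SDK synthesizer's stop method inside stop_speaking()."""

    def __init__(self):
        self._speaking = threading.Event()
        self.interrupted = False

    def start_speaking(self, duration: float) -> None:
        """Simulated TTS playback: runs until done or interrupted."""
        self._speaking.set()
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            if not self._speaking.is_set():   # stop_speaking() was called
                self.interrupted = True
                return
            time.sleep(0.01)
        self._speaking.clear()

    def on_speech_detected(self) -> None:
        """Recognizer callback: user spoke, so stop playback at once."""
        self.stop_speaking()

    def stop_speaking(self) -> None:
        self._speaking.clear()

ctrl = BargeInController()
tts = threading.Thread(target=ctrl.start_speaking, args=(5.0,))
tts.start()
time.sleep(0.1)           # the recognizer "hears" the user mid-playback
ctrl.on_speech_detected()
tts.join()
print("interrupted:", ctrl.interrupted)   # -> interrupted: True
```

Note that this coordination alone does not solve the reported problem: if the microphone hears the synthesized audio, barge-in fires on the bot's own voice. That is why the acoustic and filtering measures below matter.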
Hello Gerardo,
Thanks for reaching out to us. If you can provide more details about your case, we can give more specific guidance.
To answer your question generally first, there are a few layers of mitigation:

Physical Setup and Acoustic Treatments
Microphone Placement: keep the microphone as far from the speakers as practical, and pointed away from them.
Speakers Placement: angle the speakers away from the microphone or lower their volume; using a headset avoids the problem entirely.
Room Acoustics: reduce reflective surfaces so the synthesized audio reaches the microphone more weakly.

Electronic and Software Solutions
Echo Cancellation: use acoustic echo cancellation (AEC), in the OS/audio driver or in software, so the known playback signal is subtracted from the microphone input.

Advanced Techniques
Use of Separate Audio Channels: keep the synthesized output on a known channel so it can be used as a reference and removed from the input.
Noise Gate: suppress microphone input below a configurable level so low-level playback bleed is not recognized.
Adaptive Filtering: adaptively estimate the echo path and subtract the estimated echo from the microphone signal.
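To illustrate the last point, here is a minimal sketch of adaptive echo cancellation using a least-mean-squares (LMS) filter in NumPy. The filter length, step size, and simulated echo path are illustrative assumptions; production echo cancellers are considerably more sophisticated.

```python
import numpy as np

def lms_echo_cancel(far_end, mic, n_taps=32, mu=0.01):
    """Subtract an adaptively estimated echo of `far_end` (the TTS
    playback signal) from `mic` (the microphone signal). Returns the
    echo-reduced microphone signal."""
    w = np.zeros(n_taps)               # adaptive filter weights
    buf = np.zeros(n_taps)             # most recent far-end samples
    out = np.zeros_like(mic)
    for i in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[i]
        echo_est = w @ buf             # estimated echo at the mic
        e = mic[i] - echo_est          # error = mic minus estimated echo
        w += mu * e * buf              # LMS weight update
        out[i] = e
    return out

# Simulated scenario: the mic hears a delayed, attenuated copy of the
# playback plus faint near-end noise.
rng = np.random.default_rng(0)
far = rng.standard_normal(20000)                  # TTS playback
echo = 0.5 * np.concatenate(([0.0] * 5, far[:-5]))  # delayed, attenuated echo
near = 0.05 * rng.standard_normal(20000)          # faint near-end signal
mic = echo + near
cleaned = lms_echo_cancel(far, mic)
# After convergence, the residual echo energy is far below the raw echo:
print(np.mean(mic[-5000:] ** 2), np.mean(cleaned[-5000:] ** 2))
```

The key design point is that the cleaned signal, not the raw microphone signal, would be fed to the recognizer, so the recognizer no longer hears the bot's own synthesized speech.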
I hope this helps. Let us know if you need more information.
Regards,
Yutong
-Please kindly accept the answer if you find it helpful, to support the community. Thanks a lot.