zh-CN-XiaochenNeural abnormal timbre
zh-CN-XiaochenNeural has an abnormal timbre. The same problem occurred in October last year: https://learn.microsoft.com/en-us/answers/questions/1431823/the-timbre-of-the-voice-of-zh-cn-xiaochenneural-ha How long will it take to recover…
Speech Recognition live transcription not detecting any language other than English
Hi, I am using the Speech Recognition resource in my application for live transcription. It works perfectly with English, but when I speak in Hindi it is not detected. I want to create my application for multiple languages used in…
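A common cause here is that the recognizer defaults to en-US unless told otherwise. As a sketch (assuming the Python Speech SDK, `azure-cognitiveservices-speech`; key and region are placeholders), candidate languages can be supplied for automatic detection:

```python
# Sketch: multi-language live transcription with the Azure Speech SDK (Python).
# Assumes azure-cognitiveservices-speech is installed; key/region are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# Offer the service a set of candidate languages to auto-detect between;
# without this, recognition defaults to en-US and Hindi audio is not matched.
auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "hi-IN"]
)

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect,
)
recognizer.recognized.connect(lambda evt: print(evt.result.text))
recognizer.start_continuous_recognition()
```

Alternatively, `speech_config.speech_recognition_language = "hi-IN"` pins the recognizer to a single non-English language.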
Azure Pronunciation Assessment recognition offset lag
I'm using the Pronunciation Assessment with the recognizeOnceAsync method. We are presenting a word for assessment and measuring the response time. Sometimes the offset returned with the recognition corresponds closely with the time reported from the…
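When comparing the SDK-reported offset against an externally measured response time, it helps to convert units explicitly: Speech SDK offsets and durations are expressed in 100-nanosecond ticks. A minimal sketch (the tick unit is the SDK's documented convention; the helper names are mine):

```python
TICKS_PER_MS = 10_000  # Speech SDK offsets/durations are in 100-ns ticks

def ticks_to_ms(ticks: int) -> float:
    """Convert a Speech SDK offset (100-ns ticks) to milliseconds."""
    return ticks / TICKS_PER_MS

def offset_lag_ms(recognition_offset_ticks: int, measured_onset_ms: float) -> float:
    """Difference between the SDK-reported onset and an externally measured one."""
    return ticks_to_ms(recognition_offset_ticks) - measured_onset_ms

print(ticks_to_ms(5_000_000))  # 5,000,000 ticks -> 500.0
```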
Android app using the TTS SDK encounters 3 errors
Hello, the Android version of our app uses Microsoft's TTS SDK "com.microsoft.cognitiveservices.speech:client-sdk:1.34.0", but 3 errors appear frequently: Error 1: {CancellationReason:Error ErrorCode: ServiceTimeout ErrorDetails:USP error: timeout…
Speech synthesis language Hebrew not working
Hey, I am reaching out to report an issue I have encountered with the speech synthesis language functionality (microsoft.cognitiveservices.speech.sdk) in JavaScript. I have noticed that when attempting to use the Hebrew language code (he-IL) for…
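For reference, he-IL is covered by neural voices such as he-IL-HilaNeural, and a common pitfall is setting only the language while the default voice does not cover it. A configuration sketch, assuming the Python SDK for brevity (the question uses JavaScript, but the properties are analogous):

```python
# Sketch: Hebrew text-to-speech configuration; key/region are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# Selecting an explicit Hebrew neural voice is more reliable than setting
# only the synthesis language and relying on a default voice.
speech_config.speech_synthesis_language = "he-IL"
speech_config.speech_synthesis_voice_name = "he-IL-HilaNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("שלום עולם").get()
```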
Do I have to be on GovCloud in order to connect/use Azure Speech Services hosted on GovCloud US Virginia?
Hi. I am working with a cloud provider's solution located in Amazon's us-east-2 region. I am hoping you can help confirm whether the Azure Cognitive STT and TTS integration will/should work with Azure Speech Services hosted on GovCloud US Virginia? …
Azure Speech Service bot not working in Firefox
Firefox can’t establish a connection to the server at wss://centralindia.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US&format=simple&Ocp-Apim-Subscription-…
Error when calling "Audio Content Creation" in Speech Service
I (global admin) have assigned the "Cognitive Services Speech Contributor" role to our developer in the Speech Service. When he chooses "Audio Content Creation" he gets the message "The role you've assigned for the resource [...] has not…
Connection to Azure Cognitive Services fails in Firefox
We have a voice-assistant app based on Firefox 84; it worked fine until last Saturday (4.11). Using the Azure API from https://learn.microsoft.com/en-us/azure/ai-services/speech-service/how-to-recognize-speech?pivots=programming-language-javascript the API…
Increase Whisper quota
Hello, I have raised a request to increase our quota for the Whisper model (the Azure OpenAI one) and would like guidance on how to increase the quota. Thanks in advance, Ivan
Is it possible to specify in Speech SDK to always use "lbs" instead of "£" when "pounds" is recognized?
Hi, is it possible to configure the Speech SDK so that when the word "pound" is detected it is always interpreted as lbs, not £? For example, when I say "99 pounds" it should be detected as "99 lbs", but if I said "100…
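The currency symbol comes from the service's inverse text normalization of the display text, and I am not aware of an SDK switch that forces the weight interpretation. One workaround is to post-process the recognized text on the client. A minimal sketch (the rule and function are my own, not an SDK feature):

```python
import re

# Hypothetical post-processing step: rewrite currency-formatted pounds
# back to a weight unit, e.g. "£99" (ITN output for "99 pounds") -> "99 lbs".
def pounds_to_lbs(text: str) -> str:
    return re.sub(r"£\s*(\d+(?:\.\d+)?)", r"\1 lbs", text)

print(pounds_to_lbs("The parcel weighs £99"))  # -> The parcel weighs 99 lbs
```

The obvious caveat is that this rewrites every £ amount, so it only fits applications where pounds always means weight.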
Do Text to Speech containers TTS provide visemes and blendshapes like the API?
I'm currently using the Speech API and consuming the visemes and blendshapes it returns. In an effort to reduce latency I would like to run the speech services locally via the text-to-speech container. Does the response of the container TTS…
Speech_SegmentationSilenceTimeoutMs and speech segmentation
Dear Azure Technical Support, I'm using the Azure Speech Service for continuous speech recognition and I've encountered a behavior that I'd like to clarify. Historically, when using the continuous recognition mode, the service segmented the audio into…
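For reference, the segmentation silence timeout can be set explicitly through a SpeechConfig property, which makes the segmentation behavior deterministic rather than relying on the service default. A configuration sketch in Python (the value is milliseconds, passed as a string; key/region are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# Silence (in ms) that ends a segment during continuous recognition;
# shorter values produce more, smaller segments.
speech_config.set_property(
    speechsdk.PropertyId.Speech_SegmentationSilenceTimeoutMs, "1500"
)
```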
How can I fix WS_OPEN_ERROR_UNDERLYING_IO_OPEN_FAILED?
I have a FastAPI project which uses the uvicorn server to run my application. speechsdk is used for speech-to-text operations; the endpoint I am using is…
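WS_OPEN_ERROR_UNDERLYING_IO_OPEN_FAILED typically surfaces as a transient failure to open the websocket (network, proxy, DNS, or TLS issues). Whatever the root cause, wrapping connection start-up in a retry with exponential backoff is a common mitigation. A generic sketch (the helper names and the stand-in connect function are mine, not SDK API):

```python
import time

def with_retries(action, attempts=3, base_delay=0.2):
    """Call `action`; on a connection failure, retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return action()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Example with a flaky stand-in for recognizer start-up:
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("WS_OPEN_ERROR_UNDERLYING_IO_OPEN_FAILED")
    return "connected"

print(with_retries(flaky_connect))  # -> connected (on the third attempt)
```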
How can I increase the time the Microsoft Speech SDK listens in a single go?
I am using the MS Speech SDK for speech-to-text conversion. When I speak, my speech is converted to text after 60 seconds even if I haven't stopped speaking. It basically treats it as one chunk and starts processing it. What can I do to increase this…
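Single-shot recognition (`recognize_once`) is bounded to one utterance and ends at the first end-of-speech it detects; for open-ended dictation the continuous recognition API is the intended path. A sketch assuming the Python SDK (key/region are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# recognize_once() stops at the first end-of-utterance; continuous
# recognition keeps streaming results until stopped explicitly.
recognizer.recognized.connect(lambda evt: print(evt.result.text))
recognizer.start_continuous_recognition()
# ... keep the process alive while audio streams ...
recognizer.stop_continuous_recognition()
```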
Include custom audio files for keyword recognition training process
I am leveraging the Azure Keyword Recognition service, and it works well except for some false wake-ups. We've collected a number of false-wake-up audio files, and I was wondering whether there is an approach by which we can include these audio files in…
Enabling Voice Interaction for Azure Health Bot website
I took my Azure Health Bot, deployed it to a custom website, and used the Health Bot container sample they have on GitHub. It says voice interaction should be enabled for Google Chrome, but the little microphone within the chat does not appear (even if…
I am happy with the results in "Speech Studio" for a sample wav file. How do I scale this up to longer files?
I have run a 1-minute wav file through the Speech Studio sample process and am pleased with the result. I can't figure out how to move forward in the system to process longer speech files. One branch seems to take me into a training setting where I…
Personal Voice: error 403
Hi, I have access to the Personal Voice preview and have tested the demo. I'm trying to create a real voice to use in my application. I'm able to create the project:…
Speech service with custom endpoints.
When we were using public endpoints previously, we were able to open up to 80 concurrent connections per subscription key and did not experience any issues. However, when we started using custom DNS public endpoints with whitelisted IP addresses, we…