Connection to Azure Cognitive Services failed with Firefox
Our voice-assistant app is based on Firefox 84, and it worked fine until last Saturday (4.11). It uses the Azure API from https://learn.microsoft.com/en-us/azure/ai-services/speech-service/how-to-recognize-speech?pivots=programming-language-javascript. The API…
speechsdk.SpeechRecognizer only works from ipynb notebook, cancels when run from .py script
I have been closely following the MS speech recognition code examples, but I run into an inconsistency in Azure Speech API behavior. When I run the code below from an .ipynb notebook, it works and happily churns out recognition results. import os import time #…
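One likely cause (an assumption, since the excerpt is truncated): a plain .py script reaches its last line and exits while the SDK's asynchronous recognition is still in flight, whereas a notebook kernel stays alive and keeps the callbacks running. A minimal sketch of the keep-alive pattern, using `threading.Timer` as a stand-in for the SDK's callbacks (the Speech SDK itself is not imported here):

```python
import threading

done = threading.Event()
results = []

def on_result(text):
    # In the real app this would be the recognizer's `recognized` callback.
    results.append(text)

def on_session_stopped():
    # In the real app, connect this to `session_stopped` / `canceled`.
    done.set()

# Simulate the SDK delivering a result from a worker thread.
threading.Timer(0.1, on_result, args=["hello world"]).start()
threading.Timer(0.2, on_session_stopped).start()

# Without this wait, a plain .py script would exit before the callbacks fire.
done.wait(timeout=5)
print(results)  # ['hello world']
```

With the real SDK, connect `done.set` to the recognizer's `session_stopped` and `canceled` events before calling `start_continuous_recognition()`, then block on `done.wait()`.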
Connection to Azure Cognitive Services failed with Firefox 124
The latest version of Firefox (124) has enabled the HTTP/2 protocol by default for WebSocket connections. This change causes connections to the Azure Cognitive Services API (for example, speech recognition) to fail with a 404 error. The servers are not capable…
Getting 429 errors with custom endpoints while using ASR
While using ASR with custom endpoints, we are getting 429 errors (too many requests). If we make too many calls, we get these 429 errors. Could you please share the documentation on this? Is it the same for public endpoints? If it is different from custom…
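429 responses are the service's throttling signal, so the usual client-side mitigation is to retry with exponential backoff plus jitter. A hedged sketch of that pattern; the `fake_asr` stub stands in for the real endpoint and is not part of any Azure SDK:

```python
import random
import time

def call_with_backoff(call, max_retries=5, base_delay=0.01):
    """Retry `call` on 429-style throttling with exponential backoff + jitter."""
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        # Sleep base * 2^attempt plus a little jitter, then retry.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    return status, body

# Stub standing in for the ASR endpoint: throttles twice, then succeeds.
calls = {"n": 0}
def fake_asr():
    calls["n"] += 1
    return (429, "throttled") if calls["n"] < 3 else (200, "transcript")

result = call_with_backoff(fake_asr)
print(result)  # (200, 'transcript')
```

In production, honor the `Retry-After` header when the service returns one instead of relying only on the computed delay.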
Android app uses the TTS SDK and 3 errors occur
Hello, the Android version of our app uses Microsoft's TTS SDK "com.microsoft.cognitiveservices.speech:client-sdk:1.34.0", but 3 errors appear frequently: Error 1: {CancellationReason:Error ErrorCode: ServiceTimeout ErrorDetails:USP error: timeout…
Speech synthesis language Hebrew (he-IL) not working
Hey, I am reaching out to address an issue I have encountered with the speech synthesis language functionality (microsoft.cognitiveservices.speech.sdk) in JavaScript. I have noticed that when attempting to use the Hebrew language code (he-IL) for…
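One thing worth ruling out (an assumption, since the excerpt is truncated): when only the synthesis language is set, the service falls back to a default voice for that locale, and pinning an explicit Hebrew voice via SSML often behaves more predictably. A sketch that builds such SSML; the voice name `he-IL-HilaNeural` is taken from the public voice gallery, but verify it is available in your region:

```python
def build_ssml(text, lang="he-IL", voice="he-IL-HilaNeural"):
    # The JS SDK's speakSsmlAsync (or speak_ssml_async in Python) accepts
    # this markup; the voice name here is an assumption -- check the voice
    # gallery for the voices deployed in your region.
    return (
        f'<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        f'xml:lang="{lang}">'
        f'<voice name="{voice}">{text}</voice>'
        f'</speak>'
    )

ssml = build_ssml("שלום עולם")
print(ssml)
```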
iOS app using the Microsoft TTS SDK hits an error
Hello, the iOS version of our app uses the Microsoft TTS SDK, version: pod 'MicrosoftCognitiveServicesSpeech-iOS', '~> 1.35.0'. When calling the official demo, an error occurred, specifically: func synthesisToSpeaker() { var speechConfig:…
I am receiving "Internal Server Error" on all batch speech to text requests
The Azure batch speech-to-text service has been working for us for some time, but today all of our requests started receiving "Internal Server Error" responses. { "properties": { .... "error": { …
zh-CN-XiaochenNeural abnormal timbre
zh-CN-XiaochenNeural has an abnormal timbre. The same problem occurred in October last year: https://learn.microsoft.com/en-us/answers/questions/1431823/the-timbre-of-the-voice-of-zh-cn-xiaochenneural-ha How long will it take to recover…
Text to speech S0 standard tier - 0.5 million characters free, where?
Hello, I am using the cognitive text-to-speech service API with the S0 standard tier. I do not understand why I am not getting the 0.5 million free characters per month that the free tier offers. Based on the support I spoke to over chat, I should be…
Text to speech with Viseme works perfectly with proper lip sync for the blend shapes, but at night the lip sync blend shapes are off. Why?
Hi guys - I am seeing a weird issue with the quality of blend shapes received from the TTS service. For some reason, the blend shapes are not in sync with the audio at night, whereas they work perfectly in the daytime. I am using the Southeast Asia region. …
Increase Whisper quota
Hello, I have raised a request to increase our quota for the Whisper model (the Azure OpenAI one) and would like guidance on how to increase the quota. Thanks in advance, Ivan
Dedicated pool of ASR engines (100–200) on standby
The customer is using real-time speech transcription via custom endpoints and is requesting a dedicated pool of ASR engines (100–200) on standby, specific to the judiciary's usage and not shared with any other customer. The customer…
Azure speech to text giving WebSocket error in docker
I'm making a website using Next.js 14, and I want to add speech-to-text functionality to it. I followed this sample code as a guide: https://github.com/Azure-Samples/AzureSpeechReactSample. Locally, it's working as expected; the token is being fetched…
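A common first check for this class of failure (an assumption about the truncated report): verify that the server-side token route inside the container hits the right STS endpoint with the right region and key, since environment variables often differ between local and Docker runs. A small helper showing how that request is shaped; the region and key values below are placeholders:

```python
def token_request(region, subscription_key):
    """Build the STS token request the server-side route should make.

    The browser then uses the short-lived token instead of the raw key,
    which is the pattern the AzureSpeechReactSample follows.
    """
    url = f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/x-www-form-urlencoded",
    }
    return url, headers

url, headers = token_request("westus", "<your-key>")
print(url)
```

If the URL and headers are correct but the container still fails, check that outbound `wss://` traffic to `*.stt.speech.microsoft.com` is not blocked by the container network.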
When will new voices support blendshape output?
Hello, we are using the text-to-speech service and are relying on blendshapes for facial animations. However, some voices do not support blendshapes and this doesn't seem to be documented. In the voices overview…
Processing customer service calls in Hebrew
How can I transcribe and extract a to-do list from phone calls to a car service company in Hebrew? I need to transcribe the call, summarize the call, create a to-do list for the salesperson, and identify any necessary business procedures that should…
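This workflow naturally splits into two stages: transcription with the locale set to `he-IL`, then summarization and to-do extraction over the transcript (for example with an LLM). A skeleton of the pipeline; both stub bodies and their return values are placeholders, not real service calls:

```python
def transcribe(audio_path, locale="he-IL"):
    # Stub: in production, call Azure Speech batch transcription with the
    # locale set to he-IL and return the recognized text.
    return "הלקוח ביקש לקבוע טיפול לרכב ביום שלישי"

def extract_todos(transcript):
    # Stub: in production, send the transcript to an LLM with a prompt
    # asking for a summary, action items, and required procedures.
    return ["Schedule the car service for Tuesday", "Call the customer back"]

def process_call(audio_path):
    transcript = transcribe(audio_path)
    return {"transcript": transcript, "todos": extract_todos(transcript)}

result = process_call("call_001.wav")
print(result["todos"])
```

The separation keeps the speech and language steps independently testable, and lets you swap either stage without touching the other.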
message: Acoustic data import failed: Zero transcriptions could be parsed from the given input.
In the Speech Studio, I'm trying to train a custom model. I'm using this folder as the template for my zip file. This is the error I get: Number of success: 0 Number of failure: 1 Error message: [ { message: Acoustic data import…
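"Zero transcriptions could be parsed" usually points at the transcript file rather than the audio. For an audio + human-labeled transcript dataset, the zip is expected to contain the audio files plus a tab-delimited `trans.txt` at the root, one `filename<TAB>transcription` line per utterance; if that layout matches your template folder, this sketch builds a conforming zip in memory for inspection:

```python
import io
import zipfile

def build_dataset_zip(utterances):
    """utterances: list of (wav_filename, wav_bytes, transcription)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        lines = []
        for name, wav_bytes, text in utterances:
            zf.writestr(name, wav_bytes)      # audio at the zip root
            lines.append(f"{name}\t{text}")   # tab-separated, one per line
        # "Zero transcriptions parsed" often means this file is missing,
        # misnamed, not tab-delimited, or not UTF-8 encoded.
        zf.writestr("trans.txt", "\n".join(lines).encode("utf-8"))
    return buf.getvalue()

data = build_dataset_zip([("sample1.wav", b"RIFF...", "hello world")])
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    print(zf.namelist())  # ['sample1.wav', 'trans.txt']
```

Opening your real zip this way and printing `trans.txt` is a quick way to spot stray folders, spaces instead of tabs, or a BOM that breaks parsing.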
How to fix Azure Cognitive Speech Services error 0x38
I'm making a Python application with four scripts. Everything works fine in VS Code, but when I build it with the onefile command, with all the necessary libraries and such, it doesn't work; it gives me 0x38. I'm using Azure's functions to turn speech into text. Here's…
Custom list phrase / vocabulary on batch transcriptions?
Hi, I need the ability to provide a custom list of phrases for every transcription, depending on which customer will be transcribing a file. Consequently, I need something like this …
Is it possible to implement real-time streaming and viseme events using the Microsoft NodeJS SDK?
Hi all, I would like to know whether it is possible to implement a Microsoft SDK/NodeJS based app for text-to-speech using real-time streaming (meaning that the server/client starts playback as soon as the first chunk is received) while having access to viseme…
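On the event side, the Speech SDK fires a viseme event during synthesis, and each event carries an audio offset in 100-nanosecond ticks, so the client's job is to map those offsets onto its own playback clock once the first streamed chunk starts playing. A language-agnostic sketch of that mapping (shown here in Python; the tick unit is per the SDK docs, while the scheduling helper itself is hypothetical):

```python
def schedule_visemes(visemes, playback_start_ms):
    """Map viseme events to wall-clock playback times.

    `visemes`: list of (audio_offset_ticks, viseme_id) pairs as delivered by
    the SDK's viseme event; offsets are in 100-nanosecond ticks.
    `playback_start_ms`: the client's clock reading when audio playback began.
    Returns (play_at_ms, viseme_id) pairs to drive the lip-sync animation.
    """
    # 10,000 ticks per millisecond (1 tick = 100 ns).
    return [(playback_start_ms + ticks / 10_000, vid) for ticks, vid in visemes]

events = [(0, 0), (500_000, 12), (1_000_000, 7)]  # 0 ms, 50 ms, 100 ms offsets
schedule = schedule_visemes(events, playback_start_ms=2_000)
print(schedule)  # [(2000.0, 0), (2050.0, 12), (2100.0, 7)]
```

The same arithmetic applies unchanged in a NodeJS client: buffer the viseme events as they arrive and fire each one at its computed offset relative to when playback actually started.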