In, e.g., 0001.sentence.json, a quotation mark present in the original sentence is dropped if it occurs at the beginning or end of the detected sentence. Is this expected behavior?
This is mostly in the title. Initially, I suspected a bug in the JSON serialization, since JSON also uses " to delimit its fields, and quotes also have to be escaped in SSML. Upon further investigation, however, I found it also affects…
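Since quotes have to survive two layers of encoding (XML for SSML, then JSON), one way to rule out escaping as the cause is to escape the raw text explicitly before embedding it. A minimal sketch; `escapeForSsml` is a hypothetical helper, not part of any SDK:

```javascript
// Hypothetical helper: XML-escape raw text before embedding it in SSML,
// so quotation marks at sentence boundaries are preserved verbatim.
function escapeForSsml(text) {
  return text
    .replace(/&/g, "&amp;")   // must run first, or later entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

const sentence = '"Hello," she said.';
const ssml =
  `<speak version="1.0" xml:lang="en-US">` +
  `<voice name="en-US-AvaNeural">${escapeForSsml(sentence)}</voice>` +
  `</speak>`;
```

If the mark still disappears after explicit escaping, the drop happens in the sentence-detection step rather than in serialization.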
TTS mispronounces Traditional Chinese (Mandarin)
「重考」 should be pronounced ㄔㄨㄥˊ ㄎㄠˇ, and 「假期」 should be pronounced ㄐㄧㄚˋ ㄑㄧˊ. TTS is a paid service, so please fix this as soon as possible. Thank you.
Bug Report: Mispronunciation of Welsh Contraction "i’w" in Azure Neural TTS
The Azure Neural TTS system mispronounces the Welsh contraction "i’w". Instead of producing the correct pronunciation…
Will Azure AI Speech generate styles such as "happy", "cheerful", "excited" automatically from the data given?
I've added data with about 750 utterances. 80% are normal sentences, while 10% are questions and the other 10% are exclamations. What will Speech Studio need to generate styles such as Happy, Cheerful, etc? Do I have to give it more data? Or will…
Emotion Detection and Recognition from Text
What are some potential applications of Emotion Detection and Recognition from text, which aims to identify specific feelings like anger, disgust, fear, happiness, sadness, and surprise?
Azure AI - Speech Studio - Error Message
Hi there, I received this error message today. "为资源 xiaoshuoyuedu1 分配的角色尚未生效。 请让资源管理员配置__自定义子域__并启用 VNet 以使你的角色正常工作。" "The role assigned to resource xiaoshuoyuedu1 has not taken effect yet. Please have the resource administrator configure…
Azure pronunciation assessment: async assessment
I'm using the Azure speech recognizer SDK to do pronunciation assessment of an audio file. The problem: when the speech is in French, the results are always low and not expressive. const language = await detectSingleSpeechLanguage(text) …
Speech recognition service is not working correctly
Hi, I'm using your speech service to recognize phrases spoken by a user in real time and evaluate their pronunciation. However, I am facing the following issues: if I pass the reference text and set EnableMiscue=true, then all the wrong words the user…
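For context on the EnableMiscue setting mentioned above: the Speech SDK accepts pronunciation assessment parameters as a JSON string. A minimal sketch; the field names and values are assumptions based on the public docs, so verify them against your SDK version:

```javascript
// Sketch: pronunciation assessment parameters as JSON, with miscue detection on.
// With enableMiscue=true the service compares the recognized words against the
// reference text and flags omissions/insertions, which changes how "wrong" words score.
const paParams = {
  referenceText: "good morning",   // what the user is expected to say
  gradingSystem: "HundredMark",    // or "FivePoint" (assumed values)
  granularity: "Phoneme",          // "Phoneme" | "Word" | "FullText" (assumed values)
  enableMiscue: true
};
const paJson = JSON.stringify(paParams);
// e.g. passed to something like sdk.PronunciationAssessmentConfig.fromJSON(paJson)
```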
Error while trying to train a 20240228 Whisper Large V2 baseline model
When trying to train a custom speech model using a dataset containing an audio file and its transcript, the model failed to train due to an internal error. Can anyone provide any insights on how to troubleshoot this issue?
Azure speech-to-text batch job stuck in "Running" status with no percentage
This is the request: "azureRequest": { "displayName": "job_title...", "description": "job_title...", "locale": "it-it", "contentUrls": [ "{url of a wave…
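For comparison, a hedged sketch of a minimal batch transcription request body. The property names follow my reading of the Speech batch transcription REST API; the URL and names are placeholders, so verify against the API version you call:

```javascript
// Sketch of a minimal batch transcription request body (placeholder values).
const batchRequest = {
  displayName: "job_title",
  description: "job_title",
  locale: "it-IT",                                   // BCP-47 casing
  contentUrls: ["https://example.com/audio.wav"],    // placeholder; use a reachable SAS URL
  properties: {
    wordLevelTimestampsEnabled: false                // assumed property name
  }
};
const body = JSON.stringify(batchRequest);
```

A common cause of jobs stuck in "Running" is a content URL the service cannot actually fetch, so checking that each entry in contentUrls resolves is a cheap first step.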
Handling connection errors in Speech SDK
Hi, we are using the Speech SDK (version 1.35.0, C++) for speech-to-text. We use SpeechRecognizer->StartKeywordRecognitionAsync. While running the application, we sometimes lose the connection; at other times the internet connection is okay, but we get…
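A common pattern for this situation is to watch for the SDK's disconnect notifications and restart recognition with an exponential-backoff schedule. The schedule itself is plain code and not SDK-specific; a minimal sketch (the function name and defaults are my own):

```javascript
// Generic exponential-backoff schedule for reconnect attempts.
// Returns the wait (ms) before each retry: base doubles per attempt, capped at capMs.
function backoffDelays(maxRetries, baseMs = 500, capMs = 8000) {
  return Array.from({ length: maxRetries }, (_, i) => Math.min(baseMs * 2 ** i, capMs));
}
// A caller would sleep delays[i] ms before retry i, then call
// StartKeywordRecognitionAsync again and give up after maxRetries failures.
const delays = backoffDelays(5);
```

Capping the delay keeps a long outage from pushing the retry interval into minutes while still avoiding a tight reconnect loop.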
Sample Data for different styles of Custom Neural Voices (happy, excited, sad).
I could find individual utterances for neutral speech, questions, and exclamations here: https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/Sample%20Data/Individual%20utterances%20%2B%20matching%20script/SampleScript.txt To…
Do we need to close/suspend built-in AI voices (Ava, Andrew, Emma, Brian, etc) after using them to create a file in Audio Content Creation?
Hello, I understand that Custom Neural Voices need to be suspended after use due to their per-hour pricing. Do we also need to suspend anything after using Microsoft's built-in AI voices? I couldn't find specific information on this and want to avoid…
How to estimate the time needed to train a custom STT model?
Hey! I'm thinking about fine-tuning an STT model with audio + human-labeled transcript data in Speech Studio. However, as I read through the docs, I can see that "If you switch to a base model that supports customization with audio data, the training…
How can I get Microsoft to consider adding Faroese to Speech Services?
I need text-to-speech services for Faroese in Speech Services. How would I go about getting Microsoft to consider this request? Is there any way for me to train a custom voice for a language that doesn't yet exist in Microsoft's repository of…
How do you do pronunciation?
Recently I had a script for a programming video, and I needed the word GUID, or "goo id". I tried typing it many different ways, and the only way I could get the word GUID was to type "goo hid" and use an audio editor to remove the H sound. Azure Speech…
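Instead of audio editing, SSML's phoneme element can force a pronunciation directly. A minimal sketch; the voice name is just an example, and the IPA string for "goo-id" is my own transcription, so adjust it to taste:

```javascript
// Sketch: force the pronunciation of "GUID" with an SSML <phoneme> element.
// The ph value is an assumed IPA rendering of "goo-id"; tweak as needed.
const voice = "en-US-AvaNeural";  // example voice name
const ssml =
  `<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">` +
  `<voice name="${voice}">` +
  `Copy the <phoneme alphabet="ipa" ph="ˈɡuːɪd">GUID</phoneme> from the portal.` +
  `</voice></speak>`;
```

The element leaves the display text as "GUID" while the synthesizer reads the IPA string, so no post-processing of the audio is needed.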
400 Bad Request using Whisper with AzureCliCredential
I'm trying to use Whisper with AzureCliCredential and I always get the following error: { code: 'Request is badly formated', message: 'Resource Id is badly formed: NA' } My very simple code is: import * as fs from "fs"; import {…
Training with mixed languages in custom STT (English & Korean)
Hi, I am working on training a Korean custom STT model, but there are a few English words mixed into the training data. Some of them are processed and accepted as training data, but others get rejected, such as winder, insulator, gripper, and rewinding. Below…
Can I re-train an already deployed custom voice model with newly added data without undergoing the entire training time again (approximately 24 hours)?
Here’s the context: we set up a voice talent, added training data, trained the model, and deployed it. We've now updated the dataset with more audio files and transcripts, increasing the number of utterances from 1300 to 1500. When I try to train this voice…
Why is the Isabella Multilingual voice available only in Clipchamp?
Hello, I noticed that the Isabella Multilingual voice for Thai Text to Speech is available in Clipchamp but not in Audio Content Creation. I'm interested in using this voice for my projects. I was wondering if there are any specific reasons why this…