@Geoff Surtees Welcome to Microsoft Q&A Forum, Thank you for posting your query here!
I understand that you would like to use International Phonetic Alphabet (IPA) transcriptions with Azure Text to Speech to produce near-perfect speech output, and you are asking how to do so.
Yes, you can use the International Phonetic Alphabet (IPA) for phonetic pronunciation in Azure Text to Speech. Azure AI services let you specify the phonetic pronunciation of words using the Universal Phone Set (UPS) in a structured text data file. The UPS is a machine-readable phone set based on the IPA. See the Microsoft documentation on the Universal Phone Set for more details.
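As a quick aside, Azure Text to Speech also accepts IPA directly in SSML via the `phoneme` element, which may already be enough if you only need to override how a few words are pronounced. Here is a minimal sketch in Python that just builds such an SSML string; the voice name `en-US-JennyNeural` and the IPA transcription are illustrative placeholders, not values from your scenario:

```python
def build_ssml(word: str, ipa: str, voice: str = "en-US-JennyNeural") -> str:
    """Wrap `word` in a <phoneme> tag so the synthesizer uses the given
    IPA transcription instead of its default pronunciation."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<phoneme alphabet="ipa" ph="{ipa}">{word}</phoneme>'
        "</voice></speak>"
    )

# Example: force an IPA pronunciation of "tomato".
ssml = build_ssml("tomato", "təˈmeɪtoʊ")
print(ssml)
```

You would then pass this SSML to the synthesis API (for example, the Speech SDK's SSML synthesis call or the REST endpoint). The UPS/structured-text approach described below applies when you are training a custom model rather than synthesizing directly.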
Here’s how you can do it:
- Prepare a structured text data file where you specify the phonetic pronunciation of words using the UPS.
- UPS pronunciations consist of a string of UPS phonemes, each separated by whitespace.
- UPS phoneme labels are all defined using ASCII character strings.
- You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file.
- The Speech service doesn’t support training a model with both of those datasets as input.
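To illustrate the two formatting rules above (phonemes separated by whitespace, labels defined as ASCII strings), here is a small, hypothetical validation helper. The rules are the ones stated in this answer, and the example labels are illustrative, not verified UPS symbols:

```python
def is_valid_ups_entry(pronunciation: str) -> bool:
    """Check that a UPS pronunciation is a whitespace-separated string of
    ASCII phoneme labels, per the rules described above."""
    phonemes = pronunciation.split()
    if not phonemes:
        return False
    # Every phoneme label must be a pure-ASCII string.
    return all(p.isascii() for p in phonemes)

print(is_valid_ups_entry("T AH M EY T OW"))  # ASCII labels, whitespace-separated -> True
print(is_valid_ups_entry("təˈmeɪtoʊ"))       # raw IPA contains non-ASCII -> False
```

In other words, raw IPA symbols are not entered directly; each sound is written as an ASCII label from the UPS inventory.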
Please note that structured text phonetic pronunciation data is separate from plain pronunciation data, and the two cannot be used together. Plain pronunciation data is "sounds-like" or spoken-form data: it is provided as its own file and teaches the model what the spoken form of a phrase sounds like.
For more detailed steps on implementing UPS, you can refer to the Structured text phonetic pronunciation guide provided by Microsoft. Structured-text data for training is in public preview.
This should help you achieve near-perfect speech output with Azure Text to Speech. Keep in mind that the quality of the output will also depend on the accuracy of your phonetic transcriptions.
Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.
Please do not forget to "Accept the answer" and "up-vote" wherever the information provided helps you, as this can be beneficial to other community members.