How to make my deployed custom neural voice speak?

マイケル山口 5 Reputation points
2023-03-15T03:18:48.4+00:00

I've tried to make my deployed custom neural voice speak by following "Quickstart: Convert text to speech", but it didn't work. The quickstart program works normally with an existing voice name such as "en-US-JennyNeural" or "ja-JP-NanamiNeural", but when I set my endpoint with "speechConfig.EndpointId= xxxxxxxxx", I get a "Bad Request" error. Does anyone know how to specify the endpoint parameter so the custom voice speaks? Please let me know.

Azure AI Speech
Developer technologies | C#

1 answer

  1. romungi-MSFT 48,916 Reputation points Microsoft Employee Moderator
    2023-03-16T04:18:06.23+00:00

    マイケル山口 Do you see the endpoint URL details in your deploy model pane or tab? It should look like below:

    Screenshot of custom endpoint app settings in Speech Studio.

    Depending on the region of your resource, the base URL should start with https://<region>.voice.speech.microsoft.com/cognitiveservices/v1, with the deployment ID appended as a query parameter.

    speechConfig.EndpointId = "https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=<your_deploymentid>";

    The voice name should be the same as the one used in Speech Studio, and it should also be updated in your SSML.

    Did you get a chance to refer to the documentation on using a custom voice?
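
    For reference, here is a minimal C# sketch of the quickstart program with the custom voice settings plugged in. The key, region, endpoint ID, and voice name are placeholders to replace with the values from your own resource and deployment pane; the same voice name would go in the <voice> element if you synthesize from SSML via SpeakSsmlAsync instead.

        using System;
        using System.Threading.Tasks;
        using Microsoft.CognitiveServices.Speech;

        class Program
        {
            static async Task Main()
            {
                // Placeholders: your Speech resource key and region.
                var speechConfig = SpeechConfig.FromSubscription("<your_speech_key>", "<your_region>");

                // Endpoint ID of the deployed custom neural voice, copied from the
                // deployment pane in Speech Studio.
                speechConfig.EndpointId = "<your_endpoint_id>";

                // Must match the custom voice name used in Speech Studio.
                speechConfig.SpeechSynthesisVoiceName = "<your_custom_voice_name>";

                using var synthesizer = new SpeechSynthesizer(speechConfig);
                var result = await synthesizer.SpeakTextAsync("Hello from my custom neural voice.");

                // A "Bad Request" style failure surfaces here as a canceled result.
                if (result.Reason == ResultReason.Canceled)
                {
                    var details = SpeechSynthesisCancellationDetails.FromResult(result);
                    Console.WriteLine($"Canceled: {details.Reason} {details.ErrorDetails}");
                }
                else
                {
                    Console.WriteLine(result.Reason);
                }
            }
        }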

    1 person found this answer helpful.
