How to make my deployed custom neural voice speak?

マイケル山口 5 Reputation points

I've tried to make my deployed custom neural voice speak by following "Quickstart: Convert text to speech", but it didn't work. The quickstart program works normally with a built-in voice name like "en-US-JennyNeural" or "ja-JP-NanamiNeural", but when I set my endpoint with "speechConfig.EndpointId = xxxxxxxxx", I get a "Bad Request" error. Does anyone know how to specify the endpoint parameter so the custom voice speaks? Please let me know.

Azure AI Speech

1 answer

  1. romungi-MSFT 42,786 Reputation points Microsoft Employee

    マイケル山口 Do you see the endpoint URL details in your deploy model pane or tab? It should look like below:

    Screenshot of custom endpoint app settings in Speech Studio.

    Depending on the region of your resource, the base URL should start with https://<region>, with the deployment id appended as a query parameter.

    speechConfig.EndpointId = "<your_deploymentid>";
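    If you call the service over REST instead of the SDK, the deployment id travels as a query parameter on the base URL. A minimal sketch of assembling that URL in Python (the voice.speech.microsoft.com host and the deploymentId parameter name are assumptions based on the custom voice documentation; region and id are placeholders):

    ```python
    from urllib.parse import urlencode

    def custom_voice_endpoint(region: str, deployment_id: str) -> str:
        """Build a custom neural voice TTS request URL.

        The host and query parameter name here are assumptions taken
        from the custom voice docs; substitute your own values.
        """
        base = f"https://{region}.voice.speech.microsoft.com/cognitiveservices/v1"
        return f"{base}?{urlencode({'deploymentId': deployment_id})}"

    # Example: a resource in the japaneast region with a placeholder id.
    url = custom_voice_endpoint("japaneast", "xxxxxxxxx")
    ```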

    The voice name should be the same as the one used in the studio and should also be updated in your SSML.
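    The SSML body then has to name the custom voice exactly as it appears in Speech Studio. A sketch of building that payload (the voice name "MyCustomVoiceNeural" is a hypothetical placeholder, not a real deployment):

    ```python
    from xml.sax.saxutils import escape

    def build_ssml(voice_name: str, text: str, lang: str = "ja-JP") -> str:
        """Wrap text in a minimal SSML document for the given voice.

        voice_name must match the custom voice's name in Speech Studio;
        the caller is responsible for supplying the real one.
        """
        return (
            f"<speak version='1.0' xml:lang='{lang}'>"
            f"<voice name='{escape(voice_name)}'>{escape(text)}</voice>"
            "</speak>"
        )

    ssml = build_ssml("MyCustomVoiceNeural", "こんにちは")
    ```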

    Did you get a chance to refer to the documentation on using custom voice?

    1 person found this answer helpful.