1. What two core capabilities does the Azure Speech MCP server expose to agents?
   - Language translation and text summarization.
   - Speech-to-text recognition and text-to-speech synthesis.
   - Named entity recognition and sentiment analysis.
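To make the first question concrete: an MCP server advertises capabilities as named tools that an agent can call. The sketch below is illustrative only — the tool names, handler signatures, and return shapes are assumptions for this example, not the Azure Speech MCP server's documented interface:

```python
# Hypothetical sketch: the two speech capabilities exposed as MCP-style tools.
# A real server would call the Azure Speech service inside each handler.

def speech_to_text(audio_blob_url: str) -> dict:
    # Placeholder: would transcribe the audio file referenced by the blob URL.
    return {"tool": "speech_to_text", "input": audio_blob_url}

def text_to_speech(text: str, voice: str = "en-US-AvaNeural") -> dict:
    # Placeholder: would synthesize audio and write the result to storage.
    return {"tool": "text_to_speech", "text": text, "voice": voice}

# Registry mapping tool names to handlers, as an MCP server might expose them.
TOOLS = {
    "speech_to_text": speech_to_text,
    "text_to_speech": text_to_speech,
}

def dispatch(tool_name: str, **kwargs) -> dict:
    # Routes an agent's tool call to the matching handler.
    return TOOLS[tool_name](**kwargs)
```

The point of the sketch is the shape of the interface: two tools, one per capability, selected by name at call time.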
2. Why does the Azure Speech MCP server require an Azure Storage account?
   - To store the agent's instructions and configuration settings.
   - To store input audio files and output audio files generated by the speech tools.
   - To cache the MCP server's tool definitions for faster discovery.
3. What credentials are needed when connecting the Azure Speech MCP server to a Foundry agent?
   - An OAuth 2.0 token and a managed identity endpoint URL.
   - A Foundry resource key and a SAS URL for a blob container.
   - A client certificate and the Azure subscription ID.
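As a reminder of how the two credentials divide responsibilities — the resource key authenticates to the Foundry resource, while the SAS URL grants scoped access to the blob container holding audio files — here is a minimal sketch of assembling them for a connection. The environment variable names and the header name are assumptions for this example, not the documented configuration:

```python
import os

def build_connection_config() -> dict:
    # Both values would come from your Azure portal / storage account;
    # the env var names here are illustrative placeholders.
    resource_key = os.environ.get("FOUNDRY_RESOURCE_KEY", "<your-foundry-resource-key>")
    sas_url = os.environ.get("STORAGE_CONTAINER_SAS_URL", "<your-container-sas-url>")
    return {
        # Assumed header name for key-based auth; check the server's docs.
        "headers": {"Ocp-Apim-Subscription-Key": resource_key},
        # SAS URL scopes the server's access to one blob container.
        "storage_container": sas_url,
    }
```

Keeping the two secrets separate matters: the SAS URL can be time-limited and container-scoped without exposing the Foundry key to the storage layer, and vice versa.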
4. How can you specify a particular voice when using the text-to-speech tool through the agent?
   - By configuring the voice in the MCP server settings before connecting.
   - By including the voice name in your natural language prompt to the agent.
   - By setting an environment variable in the client application code.
You must answer all questions before checking your work.