Hi Andrew,
After enrolling in the private preview program and being in direct contact with Microsoft, I got it working. I'm now working with the public preview code:
- NuGet: Azure.Communication.CallAutomation version 1.1.0-beta.1
- CallAutomationClientOptions.ServiceVersion.V2023_06_15_Preview
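If you want to pin that service version explicitly, it goes into the CallAutomationClientOptions constructor. A minimal sketch (acsConnectionString is just a placeholder for my own configuration value):

// Pin the preview service version when constructing the client options.
var options = new CallAutomationClientOptions(
    CallAutomationClientOptions.ServiceVersion.V2023_06_15_Preview);
var callAutomationClient = new CallAutomationClient(acsConnectionString, options);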
Important to know: I needed a multi-service Azure Cognitive Services resource instead of a single Speech service, and I had to use its endpoint when establishing the call. The endpoint of that multi-service resource includes the resource name, instead of the generic "<location>.api.cognitive.microsoft.com/sts/v1.0/issuetoken".
Note that the documentation on how to link ACS to an Azure Cognitive Services resource was updated in the move from private to public preview: https://learn.microsoft.com/en-us/azure/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration
Please see some of my code snippets below.
Initialization of the CallAutomationClient (setting the CallAutomationClientOptions is optional; it also works without passing it to the constructor):
var callAutomationClient = new CallAutomationClient(currentConfig["ACSConnectionString"], BuildCallAutomationClientOptions());

protected CallAutomationClientOptions BuildCallAutomationClientOptions()
{
    CallAutomationClientOptions ret = new CallAutomationClientOptions();
    // Custom pipeline policy to handle 429 (throttling) responses, see the sketch below.
    ret.AddPolicy(new Catch429Policy(), Azure.Core.HttpPipelinePosition.PerRetry);
    return ret;
}
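(Catch429Policy is my own policy for dealing with throttling; its exact contents don't matter for the text-to-speech part.) A minimal sketch of what such a policy could look like, built on Azure.Core's HttpPipelineSynchronousPolicy; the logging body is only illustrative:

using System;
using Azure.Core;
using Azure.Core.Pipeline;

// Illustrative pipeline policy that flags 429 (Too Many Requests) responses.
public class Catch429Policy : HttpPipelineSynchronousPolicy
{
    public override void OnReceivedResponse(HttpMessage message)
    {
        if (message.HasResponse && message.Response.Status == 429)
        {
            // Placeholder: log, alert or back off as needed in your own implementation.
            Console.WriteLine($"Throttled (429) on {message.Request.Uri}");
        }
    }
}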
Establishing the call:
var identifierFrom = new PhoneNumberIdentifier(Args.PhoneNumberFrom);
var identifierTo = new PhoneNumberIdentifier(Args.PhoneNumberTo);
var callInvite = new CallInvite(identifierTo, identifierFrom);
CreateCallOptions createCallOption = new(callInvite, callbackUri)
{
    CognitiveServicesEndpoint = new Uri(cognitiveServicesEndPointString)
};

await callAutomationClient.CreateCallAsync(createCallOption);
Note: cognitiveServicesEndPointString = https://<my-multi-service-account-cognitive-service-name>.cognitiveservices.azure.com/
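For context, callbackUri points at the webhook that receives the Call Automation events. A minimal sketch of handling them in ASP.NET Core minimal-API style (route and names are illustrative; app and callAutomationClient are assumed to be in scope from the setup above). This is also where the callConnectionMedia used below comes from:

using Azure.Messaging;
using Azure.Communication.CallAutomation;

// Illustrative webhook for the callbackUri passed to CreateCallAsync.
app.MapPost("/api/callbacks", (CloudEvent[] cloudEvents) =>
{
    foreach (CloudEvent cloudEvent in cloudEvents)
    {
        CallAutomationEventBase parsedEvent = CallAutomationEventParser.Parse(cloudEvent);
        if (parsedEvent is CallConnected callConnected)
        {
            // The call is up; get the media object used for PlayToAllAsync below.
            CallMedia callConnectionMedia = callAutomationClient
                .GetCallConnection(callConnected.CallConnectionId)
                .GetCallMedia();
            // ... play the text-to-speech prompt here (see the next snippet) ...
        }
    }
    return Results.Ok();
});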
Later, in the callback, when the call is connected and I want to use text-to-speech to play an audio prompt, I do this:
TextSource playSource = new TextSource("text to play in the audio message")
{
    VoiceName = "en-US-NancyNeural"
};

PlayToAllOptions playToAllOptions = new PlayToAllOptions(playSource)
{
    Loop = false
};

await callConnectionMedia.PlayToAllAsync(playToAllOptions);
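To check whether the prompt actually played, you can handle the PlayCompleted/PlayFailed events in the same callback. A small illustrative sketch, using the same event parsing as in the webhook sketch above:

// In the callback handler, after parsing the event as shown earlier:
if (parsedEvent is PlayCompleted)
{
    Console.WriteLine("Text-to-speech prompt finished playing.");
}
else if (parsedEvent is PlayFailed playFailed)
{
    // ResultInformation carries the failure code and message from the service.
    Console.WriteLine($"Play failed: {playFailed.ResultInformation?.Message}");
}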