Answer Machine Detection (AMD) helps contact centers identify whether a call is answered by a human or an answering machine. This article describes how to implement an AMD solution using Dual-Tone Multi-Frequency (DTMF) tones with the existing Azure Communication Services Play and Recognize APIs.
To achieve this, developers can implement logic that uses the call connected event to play an automated message. This message asks the callee to press a specific key to verify they're human before connecting them to an agent or playing a more specific message.
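The verification message itself can be assembled from the company name and call reason. A minimal sketch; the class and method names here are illustrative helpers, not part of the Azure Communication Services SDK:

```csharp
using System;

// Illustrative helper (not part of the SDK): builds the TTS prompt
// that is played once the call is connected.
public static class AmdPrompts
{
    public static string BuildVerificationPrompt(string companyName, string reason) =>
        $"This is a call from {companyName} regarding {reason}. " +
        "Press 1 to be connected to an agent.";
}
```

The resulting string can then be handed to a text-to-speech play source for playback.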
Step-by-step guide
- Create an outbound call. For more information about creating outbound calls, see Make an outbound call using Call Automation.
- Once the call is answered, you get a CallConnected event. This event lets your application know that the call is answered. At this stage, it could be a human or an answering machine.
- After receiving the CallConnected event, your application should use the Recognize API to play a message to the callee requesting that they press a number on their dial pad to validate they're human. For example, your application might say "This is a call from [your company name] regarding [reason for call]. Press 1 to be connected to an agent."
- If the user presses a key on the dial pad, Azure Communication Services sends a RecognizeCompleted event to your application. This indicates that a human answered the call, and you should continue with your regular workflow.
- If no DTMF input is received, Azure Communication Services sends a RecognizeFailed event to your application. This indicates that the call went to voicemail, and you should follow your voicemail flow for this call.
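The event flow above can be sketched as a small dispatch table. A hedged sketch, assuming the standard Call Automation webhook event type strings; the webhook wiring around it is omitted and the enum is illustrative:

```csharp
using System;

// Illustrative AMD outcomes for the flow described above.
public enum AmdOutcome { Pending, Human, AnsweringMachine }

public static class AmdClassifier
{
    // Maps an incoming Call Automation webhook event type to an AMD outcome.
    public static AmdOutcome Classify(string eventType) => eventType switch
    {
        // Call answered: play the prompt and start DTMF recognition.
        "Microsoft.Communication.CallConnected" => AmdOutcome.Pending,
        // DTMF received: a human answered.
        "Microsoft.Communication.RecognizeCompleted" => AmdOutcome.Human,
        // No DTMF before the timeout: treat as voicemail.
        "Microsoft.Communication.RecognizeFailed" => AmdOutcome.AnsweringMachine,
        _ => AmdOutcome.Pending
    };
}
```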
Example code
//... rest of your code

// Build the TTS prompt asking the callee to verify they're human
var ttsMessage = "This is a call from [your company name] regarding [reason for call]. Please press 1 to be connected to an agent.";
var playSource = new TextSource(ttsMessage)
{
    PlaySourceId = "playSourceId"
};
var playOptions = new PlayOptions
{
    Loop = false
};
callConnection.Play(playSource, playOptions);

// Recognize a single DTMF tone (1), timing out after 5 seconds of silence
var recognizeOptions = new RecognizeOptions(new DtmfOptions(new[] { DtmfTone.One }))
{
    InterruptPrompt = false,
    InitialSilenceTimeout = TimeSpan.FromSeconds(5),
    PlayPrompt = playSource
};
var recognizeResult = callConnection.Recognize(recognizeOptions);

// Handle the recognition result
if (recognizeResult.Status == RecognizeResultStatus.Recognized && recognizeResult.RecognizedTone == DtmfTone.One)
{
    // Connect the call to an agent
    Console.WriteLine("Human detected. Connecting to an agent...");
    // Add your logic to connect the call to an agent
}
else
{
    // Classify the call as an answering machine
    Console.WriteLine("No response detected. Classifying as an answering machine...");
    // Add your logic to handle answering machine
}

//... rest of your code
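Once the result is classified, each branch typically plays a different follow-up message before transferring the call or hanging up. A minimal sketch; the helper name and message texts are illustrative, not part of the SDK:

```csharp
using System;

// Illustrative helper: picks the follow-up TTS message for each branch
// of the recognition result handling above.
public static class AmdFollowUp
{
    public static string NextMessage(bool humanDetected) =>
        humanDetected
            ? "Please hold while we connect you to an agent."
            : "Sorry we missed you. We'll call back at another time.";
}
```

In the example above, the human branch would play NextMessage(true) before transferring the call to an agent, while the answering-machine branch could play NextMessage(false) as a voicemail before ending the call.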
Next steps
- Learn more about Call Automation and its features.
- Learn more about Play action.
- Learn more about Recognize action.