Events
May 19, 6 PM - May 23, 12 AM
Calling all developers, creators, and AI innovators to join us in Seattle @Microsoft Build May 19-22.
Register today
Azure AI Language and Azure AI Speech can help you realize partial or full automation of telephony-based customer interactions and provide accessibility across multiple channels. With the Language and Speech services, you can further analyze call center transcriptions: extract and redact personally identifiable information (PII), summarize the transcription, and detect sentiment.
Example scenarios for implementing Azure AI services in call and contact centers include automated call handling, call transcription, and post-call analytics.
Tip
Try the Language Studio or Speech Studio for a demonstration of how to use the Language and Speech services to analyze call center conversations.
To deploy a call center transcription solution to Azure with a no-code approach, try the Ingestion Client.
A holistic call center implementation typically incorporates technologies from the Language and Speech services.
Call center audio captured from landlines, mobile phones, and radios is often narrowband, in the range of 8 kHz, which can create challenges when you're converting speech to text. The Speech service recognition models are trained to ensure that you can get high-quality transcriptions, however you choose to capture the audio.
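As a quick local check before sending audio for transcription, you can read a WAV file's header to see whether a recording is narrowband. This is a minimal sketch using only the Python standard library; the file name and the 16 kHz wideband threshold are illustrative assumptions, not part of the Speech service.

```python
import wave

def is_narrowband(path: str, threshold_hz: int = 16000) -> bool:
    """Return True when the WAV file's sample rate is below wideband (16 kHz)."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate() < threshold_hz

# Write one second of 8 kHz mono silence to illustrate a telephony-style file.
with wave.open("call.wav", "wb") as w:
    w.setnchannels(1)       # mono, typical for telephony captures
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(8000)    # 8 kHz narrowband
    w.writeframes(b"\x00\x00" * 8000)
```

The 8 kHz file written here stands in for the kind of landline or mobile recording the Speech service recognition models are trained to handle.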
Once you transcribe your audio with the Speech service, you can use the Language service to perform analytics on your call center data, such as sentiment analysis, summarizing the reason for customer calls and how they were resolved, and extracting and redacting conversation PII.
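To illustrate the redaction step, here is a minimal local sketch that replaces matched spans in a transcript with entity-category placeholders. The regex patterns and sample transcript are hypothetical stand-ins; the Language service's PII detection uses trained models covering far more entity categories than any hand-written pattern.

```python
import re

# Illustrative patterns only -- not the Language service's PII detection.
PII_PATTERNS = {
    "PhoneNumber": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def redact(transcript: str) -> str:
    """Replace each matched PII span with its entity-category placeholder."""
    for category, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{category}]", transcript)
    return transcript

redacted = redact("Call me at 555-867-5309 or mail jane.doe@contoso.com.")
# -> "Call me at [PhoneNumber] or mail [Email]."
```

Keeping the entity category in the placeholder, as the sketch does, preserves the structure of the conversation for downstream analytics such as summarization.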
The Speech service offers speech to text and text to speech features that can be used for call center use cases.
The Speech service works well with prebuilt models. However, you might want to further customize and tune the experience for your product or environment. Typical examples for Speech customization include:
| Speech customization | Description |
|---|---|
| Custom speech | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alphanumeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
| Custom neural voice | A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. |
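To give a feel for why domain vocabulary matters, here is a rough local sketch that snaps near-miss transcribed tokens to a list of known terms. The vocabulary and the similarity cutoff are hypothetical, and this post-processing trick is not how custom speech works internally (custom speech adapts the recognition model itself); it only illustrates the kind of misrecognition that training on your own terminology prevents.

```python
import difflib

# Hypothetical domain vocabulary -- the kind of use-case specific terms
# (product names, industry terminology) a custom speech model is trained on.
DOMAIN_TERMS = ["Contoso", "Fabrikam", "premium-care"]

def snap_to_vocabulary(token: str, cutoff: float = 0.7) -> str:
    """Snap a near-miss token to the closest known domain term, if any."""
    match = difflib.get_close_matches(token, DOMAIN_TERMS, n=1, cutoff=cutoff)
    return match[0] if match else token

# "contosso" is a plausible misrecognition of the product name "Contoso".
corrected = [snap_to_vocabulary(t) for t in "please renew my contosso plan".split()]
```

In practice, alphanumeric IDs and license plates are even harder to patch up after the fact, which is why evaluating and improving the recognition model with custom speech is the supported route.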
The Language service offers features such as PII detection and redaction, conversation summarization, and sentiment analysis that can be used for call center use cases.
While the Language service works well with prebuilt models, you might want to further customize and tune models to extract more information from your data. Typical examples for Language customization include:
| Language customization | Description |
|---|---|
| Custom NER (named entity recognition) | Improve the detection and extraction of entities in transcriptions. |
| Custom text classification | Classify and label transcribed utterances with either single or multiple classifications. |
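To show what multi-label classification of utterances looks like, here is a toy keyword-based labeler. The labels and keyword sets are invented for illustration; the Language service's custom text classification uses models you train on labeled examples, not keyword matching.

```python
# Hypothetical labels and keywords -- a stand-in for a trained custom
# text classification model, which learns these associations from data.
LABEL_KEYWORDS = {
    "Billing": {"invoice", "charge", "refund"},
    "TechnicalSupport": {"error", "outage", "reset"},
}

def classify(utterance: str) -> list[str]:
    """Return every label whose keywords appear in the utterance (multi-label)."""
    words = set(utterance.lower().split())
    return sorted(label for label, kws in LABEL_KEYWORDS.items() if words & kws)

labels = classify("I need a refund and a password reset")
# An utterance can receive multiple labels, matching the "single or
# multiple classifications" behavior described above.
```

A trained model generalizes beyond exact keyword matches (for example, "charged twice" implying Billing), which is the gap custom text classification closes.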
You can find an overview of all Language service features and customization options in the Language service documentation.