An active Log Analytics workspace, also known as Azure Monitor Logs. See End of Call Survey Logs.
To conduct a survey with custom questions using free-form text, you need an Application Insights resource.
Important
End of Call Survey is available starting with version 1.13.1 of the Calling SDK. Make sure to use that version or later when following these instructions.
Active Node.js Long Term Support (LTS) versions are recommended.
Sample of API usage
The End of Call Survey feature should be used after the call ends. Users can rate any kind of VoIP call: 1:1, group, or meeting, both outgoing and incoming. Once a user's call ends, your application can show a UI to the end user that lets them choose a rating score and, if needed, pick issues they encountered during the call from our predefined list.
The following code snippets show an example of a one-to-one call. After the call ends, your application can show a survey UI and, once the user chooses a rating, your application should call the feature API to submit the survey with the user's choices.
We encourage you to use the default rating scale. However, you can submit a survey with a custom rating scale. Check out the sample application for sample API usage.
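For example, here's a minimal sketch of submitting a survey that uses the default scale once the call has ended. It assumes call is the Call object that just ended and Features comes from the Calling SDK, mirroring the snippet shown later in this article.
JavaScript
const callSurveyFeature = call.feature(Features.CallSurvey);
try {
    // Only the overall rating is required; audio, video, and screenshare ratings are optional.
    const result = await callSurveyFeature.submitSurvey({
        overallRating: { score: 5 }
    });
    console.log(`Survey submitted, id: ${result.id}`);
} catch (error) {
    console.error('Failed to submit survey', error);
}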
The API returns the following error messages if data validation fails or the survey can't be submitted.
- At least one survey rating is required.
- In default scale X should be 1 to 5. - where X is either of:
  - overallRating.score
  - audioRating.score
  - videoRating.score
  - ScreenshareRating.score
- {propertyName}: {rating.score} should be between {rating.scale?.lowerBound} and {rating.scale?.upperBound}.
- {propertyName}: {rating.scale?.lowScoreThreshold} should be between {rating.scale?.lowerBound} and {rating.scale?.upperBound}.
- {propertyName} lowerBound: {rating.scale?.lowerBound} and upperBound: {rating.scale?.upperBound} should be between 0 and 100.
- Please try again [ACS failed to submit survey, due to network or other error].
The API returns an error code with every message:
- Error code 400 (bad request) for all validation error messages: { message: validationErrorMessage, code: 400 }
- Error code 408 (timeout) when the event is discarded: { message: "Please try again.", code: 408 }
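Continuing the sketch above, here's one way to surface these errors in your application, assuming the promise returned by submitSurvey rejects with the { message, code } shape shown above:
JavaScript
callSurveyFeature.submitSurvey({ overallRating: { score: 9 } }) // 9 is out of range on the default 1-5 scale
    .then(result => console.log(`Survey id: ${result.id}`))
    .catch(error => {
        if (error.code === 400) {
            console.error(`Survey rejected by validation: ${error.message}`);
        } else if (error.code === 408) {
            console.error('Survey submission timed out; ask the user to try again.');
        }
    });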
All possible values
Default survey API configuration
| API Rating Categories | Cutoff Value* | Input Range | Comments |
| --- | --- | --- | --- |
| Overall Call | 2 | 1 - 5 | Surveys a calling participant's overall quality experience on a scale of 1-5. A response of 1 indicates an imperfect call experience and 5 indicates a perfect call. The cutoff value of 2 means that a customer response of 1 or 2 indicates a less than perfect call experience. |
| Audio | 2 | 1 - 5 | A response of 1 indicates an imperfect audio experience and 5 indicates no audio issues were experienced. |
| Video | 2 | 1 - 5 | A response of 1 indicates an imperfect video experience and 5 indicates no video issues were experienced. |
| Screenshare | 2 | 1 - 5 | A response of 1 indicates an imperfect screen share experience and 5 indicates no screen share issues were experienced. |
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
You can choose to collect each of the four API values or only the ones
you find most important. For example, you can choose to only ask
customers about their overall call experience instead of asking them
about their audio, video, and screen share experience. You can also
customize input ranges to suit your needs. The default input range is 1
to 5 for Overall Call, Audio, Video, and
Screenshare. However, each API value can be customized from a minimum of
0 to maximum of 100.
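For example, here's a sketch of an overall rating on a custom 0-10 scale, continuing the earlier snippet. The property names follow the validation messages listed earlier (overallRating, scale, lowerBound, upperBound, lowScoreThreshold); check the SDK typings for the exact shape.
JavaScript
await callSurveyFeature.submitSurvey({
    overallRating: {
        score: 8,
        scale: {
            lowerBound: 0,        // worst possible experience
            upperBound: 10,       // perfect experience
            lowScoreThreshold: 3  // custom cutoff used when your survey data is analyzed
        }
    }
});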
Customization examples
| API Rating Categories | Cutoff Value* | Input Range |
| --- | --- | --- |
| Overall Call | 0 - 100 | 0 - 100 |
| Audio | 0 - 100 | 0 - 100 |
| Video | 0 - 100 | 0 - 100 |
| Screenshare | 0 - 100 | 0 - 100 |
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
Custom questions
In addition to using the End of Call Survey API, you can create your own survey questions and incorporate them with the End of Call Survey results. The following steps show how to incorporate your own custom questions into a survey and how to query the results of both the End of Call Survey API and your own survey questions.
Build a UI in your application that serves custom questions to the user and gathers their input. Let's assume your application gathered the responses as a string in the improvementSuggestion variable.
Submit the survey results to ACS and send the user response using App Insights:
JavaScript
currentCall.feature(SDK.Features.CallSurvey).submitSurvey(survey).then(res => {
// `improvementSuggestion` contains the custom user response
if (improvementSuggestion !== '') {
appInsights.trackEvent({
name: "CallSurvey", properties: {
// Survey ID to correlate the survey
id: res.id,
// Other custom properties as key value pair
improvementSuggestion: improvementSuggestion
}
});
}
});
appInsights.flush();
User responses that were sent using App Insights are available in your Application Insights resource. You can use Workbooks to query across multiple resources and correlate call ratings with custom survey data. To correlate the call ratings and custom survey data:
Create new Workbooks (Your ACS Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your ACS resource.
Add new query (+Add -> Add query)
Make sure Data source is Logs and Resource type is Communication
You can rename the query (Advanced Settings -> Step name [example: call-survey])
Be aware that it can take up to two hours before survey data becomes visible in the Azure portal. Query the call rating data:
KQL
ACSCallSurvey
| where TimeGenerated > now(-24h)
Add another query to get data from App Insights (+Add -> Add query)
Make sure Data source is Logs and Resource type is Application Insights
Query the custom events:
KQL
customEvents
| where timestamp > now(-24h)
| where name == 'CallSurvey'
| extend d=parse_json(customDimensions)
| project SurveyId = d.id, ImprovementSuggestion = d.improvementSuggestion
You can rename the query (Advanced Settings -> Step name [example: custom-call-survey])
Finally, merge these two queries by survey ID. Create a new query (+Add -> Add query).
Make sure the Data source is Merge and select the Merge type as needed (for example, a left outer join on the survey ID columns). A cross-resource query alternative is sketched below.
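Alternatively, if your Application Insights resource is reachable from the same Log Analytics query, a single cross-resource query can perform the join in one step. This is a sketch only: it assumes the ACSCallSurvey table exposes the survey identifier that the merge step above joins on, and you need to substitute your own Application Insights resource name.
KQL
ACSCallSurvey
| where TimeGenerated > now(-24h)
| join kind=leftouter (
    app('<your-app-insights-resource>').customEvents
    | where timestamp > now(-24h)
    | where name == 'CallSurvey'
    | extend d = parse_json(customDimensions)
    | project SurveyId = tostring(d.id), ImprovementSuggestion = tostring(d.improvementSuggestion)
) on SurveyId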
Important
End of Call Survey is available starting with version 2.10.0 of the Android Calling SDK. Make sure to use that version or later when following these instructions.
Sample of API usage
The End of Call Survey feature should be used after the call ends. Users can rate any kind of VoIP call: 1:1, group, or meeting, both outgoing and incoming. Once a user's call ends, your application can show a UI to the end user that lets them choose a rating score and, if needed, pick issues they encountered during the call from our predefined list.
The following code snippets show an example of a one-to-one call. After the call ends, your application can show a survey UI and, once the user chooses a rating, your application should call the feature API to submit the survey with the user's choices.
We encourage you to use the default rating scale, which is the five star rating (between 1-5). However, you can submit a survey with a custom rating scale.
Start a survey
You create a CallSurvey object by starting a survey. This records a survey intent. If this particular CallSurvey object isn't submitted afterward, the survey is treated as skipped or ignored by the end customer.
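The Android snippets that follow assume a CallSurvey object named callSurvey was already obtained by starting a survey. As a rough sketch, assuming the survey feature is obtained from the call like other call features and that startSurvey completes asynchronously (check the Android SDK surface for the exact accessor and return type):
Java
// Assumption: the feature accessor and async completion shown here are illustrative, not the exact SDK signatures.
SurveyCallFeature surveyCallFeature = call.feature(Features.SURVEY);
surveyCallFeature.startSurvey().whenComplete((survey, error) -> {
    if (error != null) {
        Log.e("CallSurvey", "Failure to start survey", error);
        return;
    }
    callSurvey = survey; // keep a reference so it can be rated and submitted later
});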
When rating calls, you must respect the values defined in the scale field. The lowerBound value denotes the worst possible experience, while the upperBound value denotes a perfect experience. Both values are inclusive.
OverallRating is a required category for all surveys.
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
For more information on suggested survey use, see Survey Concepts
Rate call only - no custom scale
Java
SurveyScore overall = new SurveyScore();
overall.setScore(5);
callSurvey.setOverallScore(overall);
Rate call only - with custom scale and issues
Java
// configuring scale
CallSurveyRatingScale ratingScale = new CallSurveyRatingScale();
ratingScale.setLowerBound(0);
ratingScale.setUpperBound(1);
ratingScale.setLowScoreThreshold(0);
SurveyScore overall = new SurveyScore();
overall.setScale(ratingScale);
// setting score according to scale
overall.setScore(1);
callSurvey.setOverallScore(overall);
// reporting one or more issues
callSurvey.setCallIssues(CallIssues.HAD_TO_REJOIN);
Rate overall, audio, and video with a sample issue
Java
SurveyScore overall = new SurveyScore();
overall.setScore(3);
SurveyScore audio = new SurveyScore();
audio.setScore(4);
SurveyScore video = new SurveyScore();
video.setScore(3);
callSurvey.setOverallScore(overall);
callSurvey.setAudioScore(audio);
callSurvey.setVideoScore(video);
callSurvey.setVideoIssues(VideoIssues.FREEZES);
The submitSurvey API can return an error in the following scenarios:
Overall survey rating is required.
CallSurveyRatingScale bounds must be within 0 and 100. LowerBound should be less than UpperBound. LowScoreThreshold should be within bounds.
Any of the scores must respect the bounds defined by the CallSurveyRatingScale. All values in the CallSurveyRatingScale object are inclusive. Using the default scale, the score value should be between 1 and 5.
Survey can't be submitted because of network/service error.
Available survey tags
Overall call
| Tag | Description |
| --- | --- |
| CannotJoin | Customer wasn't able to join a call |
| CannotInvite | Customer wasn't able to add a new participant to the call |
| HadToRejoin | Customer left and rejoined the call as a workaround for an issue |
| CallEndedUnexpectedly | Customer's call ended for no apparent reason |
| OtherIssues | Any issue that doesn't fit the previous descriptions |
Audio issues
| Tag | Description |
| --- | --- |
| NoLocalAudio | No Audio on the customer machine from the call, inability to hear anyone in the call |
| NoRemoteAudio | Missing audio from a specific participant |
| Echo | Echo being perceived in the call |
| AudioNoise | Audio received with unintended noise |
| LowVolume | Audio too low |
| AudioStoppedUnexpectedly | Audio stopped with no clear reason (e.g. no one is muted) |
| DistortedSpeech | A participant's voice is distorted, different from their expected voice |
| AudioInterruption | Customer experiences audio interruptions, voice cuts, etc. |
| OtherIssues | Any issue that doesn't fit previous descriptions |
Video issues
| Tag | Description |
| --- | --- |
| NoVideoReceived | Customer doesn't receive video from a participant |
| NoVideoSent | Customer starts video but no one in the call is able to see it |
| LowQuality | Low quality video |
| Freezes | Video freezes |
| StoppedUnexpectedly | Video stops with no clear reason (e.g. camera is on and video calling is on) |
| DarkVideoReceived | Video is being sent but participant sees only a dark box (or another single color) |
| AudioVideoOutOfSync | Video and audio don't seem to be in sync |
| OtherIssues | Any issue that doesn't fit the previous descriptions |
Screen share issues
| Tag | Description |
| --- | --- |
| NoContentLocal | Customer doesn't receive screen share from a participant that is sharing |
| NoContentRemote | Customer is sharing their screen, but one or more other participants are unable to see it |
| CannotPresent | Unable to start screen share |
| LowQuality | Low quality on screen share video, e.g. unable to read |
| Freezes | Screen share freezes during presentation |
| StoppedUnexpectedly | Screen share stops with no clear reason (e.g. screen share wasn't stopped by the customer) |
| LargeDelay | Perceived delay between what is being shared and what is seen |
| OtherIssues | Any issue that doesn't fit the previous descriptions |
Customization options
You can choose to collect each of the four API values or only the ones
you find most important. For example, you can choose to only ask
customers about their overall call experience instead of asking them
about their audio, video, and screen share experience. You can also
customize input ranges to suit your needs. The default input range is 1
to 5 for Overall Call, Audio, Video, and
Screen share. However, each API value can be customized from a minimum of
0 to maximum of 100.
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
Custom questions
In addition to using the End of Call Survey API, you can create your own survey questions and incorporate them with the End of Call Survey results.
The result payload of the SubmitSurvey operation provides data that you can use to correlate ACS survey data with your own custom data and storage. The CallSurveyResult class has a SurveyId field that uniquely identifies the survey, and a CallId field that identifies the call where the survey was generated. Saving these identifiers along with your customized data allows the records to be associated uniquely.
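For example, here's a sketch of submitting the survey and persisting the identifiers next to a custom answer. The method names on the feature and result objects are assumptions based on the description above, and saveCustomAnswers and improvementSuggestion are hypothetical application code:
Java
// Assumption: submitSurvey completes asynchronously with the CallSurveyResult described above.
surveyCallFeature.submitSurvey(callSurvey).whenComplete((result, error) -> {
    if (error != null) {
        Log.e("CallSurvey", "Survey not submitted", error);
        return;
    }
    // Store SurveyId and CallId alongside your own data so the records can be joined later.
    saveCustomAnswers(result.getSurveyId(), result.getCallId(), improvementSuggestion);
});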
Important
End of Call Survey is available starting with version 2.10.0 of the iOS Calling SDK. Make sure to use that version or later when following these instructions.
Sample of API usage
The End of Call Survey feature should be used after the call ends. Users can rate any kind of VoIP call: 1:1, group, or meeting, both outgoing and incoming. Once a user's call ends, your application can show a UI to the end user that lets them choose a rating score and, if needed, pick issues they encountered during the call from our predefined list.
The following code snippets show an example of a one-to-one call. After the call ends, your application can show a survey UI and, once the user chooses a rating, your application should call the feature API to submit the survey with the user's choices.
We encourage you to use the default rating scale, which is the five star rating (between 1-5). However, you can submit a survey with a custom rating scale.
Start a survey
You create a CallSurvey object by starting a survey. This records a survey intent. If this particular CallSurvey object isn't submitted afterward, the survey is treated as skipped or ignored by the end customer.
Swift
let surveyCallFeature = self.call.feature(Features.survey)
do {
    self.callSurvey = try await surveyCallFeature.startSurvey()
} catch {
    print("Failure to start survey")
}
General usage
When rating calls, you must respect the values defined in the scale field. The lowerBound value denotes the worst possible experience, while the upperBound value denotes a perfect experience. Both values are inclusive.
OverallRating is a required category for all surveys.
For more information on suggested survey use, see Survey Concepts
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
Rate call only - no custom scale
Swift
let overallScore = SurveyScore()
overallScore.score = Int32(5)
callSurvey.overallScore = overallScore
Rate call only - with custom scale and issues
Swift
// configuring scale
let ratingScale = CallSurveyRatingScale()
ratingScale.lowerBound = 0
ratingScale.upperBound = 1
ratingScale.lowScoreThreshold = 0
let overall = SurveyScore()
overall.scale = ratingScale
// setting score according to scale
overall.score = 1
callSurvey.overallScore = overall
// reporting one or more issues
callSurvey.callIssues = [CallIssues.hadToRejoin]
Rate overall, audio, and video with a sample issue
Swift
let overall = SurveyScore();
overall.score = 3;
let audio = SurveyScore();
audio.score = 4;
let video = SurveyScore();
video.score = 3;
callSurvey.overallScore = overall;
callSurvey.audioScore = audio;
callSurvey.videoScore = video;
callSurvey.videoIssues = [ VideoIssues.freezes ];
Submit Survey and handle errors the SDK can send
Swift
do {
    let result = try await self.surveyFeature!.submit(survey: callSurvey)
} catch let error as NSError {
    print("==> Survey Not Submitted " + error.localizedDescription)
}
Find different types of errors
Failures while submitting survey:
The submitSurvey API can return an error in the following scenarios:
Overall survey rating is required.
CallSurveyRatingScale bounds must be within 0 and 100. LowerBound should be less than UpperBound. LowScoreThreshold should be within bounds.
Any of the scores must respect the bounds defined by the CallSurveyRatingScale. All values in the CallSurveyRatingScale object are inclusive. Using the default scale, the score value should be between 1 and 5.
Survey can't be submitted because of network/service error.
Available Survey tags
Overall Call
| Tag | Description |
| --- | --- |
| CannotJoin | Customer wasn't able to join a call |
| CannotInvite | Customer wasn't able to add a new participant to the call |
| HadToRejoin | Customer left and rejoined the call as a workaround for an issue |
| CallEndedUnexpectedly | Customer's call ended for no apparent reason |
| OtherIssues | Any issue that does not fit the previous descriptions |
Audio issues
| Tag | Description |
| --- | --- |
| NoLocalAudio | No Audio on the customer machine from the call, inability to hear anyone in the call |
| NoRemoteAudio | Missing audio from a specific participant |
| Echo | Echo being perceived in the call |
| AudioNoise | Audio received with unintended noise |
| LowVolume | Audio too low |
| AudioStoppedUnexpectedly | Audio stopped with no clear reason (e.g. no one is muted) |
| DistortedSpeech | A participant's voice is distorted, different from their expected voice |
| AudioInterruption | Customer experiences audio interruptions, voice cuts, etc. |
| OtherIssues | Any issue that does not fit previous descriptions |
Video issues
| Tag | Description |
| --- | --- |
| NoVideoReceived | Customer doesn't receive video from a participant |
| NoVideoSent | Customer starts video but no one in the call is able to see it |
| LowQuality | Low quality video |
| Freezes | Video freezes |
| StoppedUnexpectedly | Video stops with no clear reason (e.g. camera is on and video calling is on) |
| DarkVideoReceived | Video is being sent but participant sees only a dark box (or another single color) |
| AudioVideoOutOfSync | Video and audio do not seem to be in sync |
| OtherIssues | Any issue that does not fit the previous descriptions |
Screen share issues
| Tag | Description |
| --- | --- |
| NoContentLocal | Customer doesn't receive screen share from a participant that is sharing |
| NoContentRemote | Customer is sharing their screen, but one or more other participants are unable to see it |
| CannotPresent | Unable to start screen share |
| LowQuality | Low quality on screen share video, e.g. unable to read |
| Freezes | Screen share freezes during presentation |
| StoppedUnexpectedly | Screen share stops with no clear reason (e.g. screen share was not stopped by the customer) |
| LargeDelay | Perceived delay between what is being shared and what is seen |
| OtherIssues | Any issue that does not fit the previous descriptions |
Customization options
You can choose to collect each of the four API values or only the ones
you find most important. For example, you can choose to only ask
customers about their overall call experience instead of asking them
about their audio, video, and screen share experience. You can also
customize input ranges to suit your needs. The default input range is 1
to 5 for Overall Call, Audio, Video, and
Screen share. However, each API value can be customized from a minimum of
0 to maximum of 100.
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
Custom questions
In addition to using the End of Call Survey API, you can create your own survey questions and incorporate them with the End of Call Survey results.
The result payload of the SubmitSurvey operation provides data that you can use to correlate ACS survey data with your own custom data and storage. The CallSurveyResult class has a SurveyId field that uniquely identifies the survey, and a CallId field that identifies the call where the survey was generated. Saving these identifiers along with your customized data allows the records to be associated uniquely.
Important
End of Call Survey is available starting with version 1.8.0 of the Windows Calling SDK. Make sure to use that version or later when following these instructions.
Sample of API usage
The End of Call Survey feature should be used after the call ends. Users can rate any kind of VoIP call: 1:1, group, or meeting, both outgoing and incoming. Once a user's call ends, your application can show a UI to the end user that lets them choose a rating score and, if needed, pick issues they encountered during the call from our predefined list.
The following code snippets show an example of a one-to-one call. After the call ends, your application can show a survey UI and, once the user chooses a rating, your application should call the feature API to submit the survey with the user's choices.
We encourage you to use the default rating scale, which is the five star rating (between 1-5). However, you can submit a survey with a custom rating scale.
Start a survey
You create a CallSurvey object by starting a survey. This records a survey intent. If this particular CallSurvey object isn't submitted afterward, the survey is treated as skipped or ignored by the end customer.
C#
var surveyCallFeature = call.Features.Survey;
var survey = await surveyCallFeature.StartSurveyAsync();
General usage
When rating calls, you must respect the values defined in the scale field. The lowerBound value denotes the worst possible experience, while the upperBound value denotes a perfect experience. Both values are inclusive.
OverallRating is a required category for all surveys.
For more information on suggested survey use, see Survey Concepts
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
Rate call only - no custom scale
C#
survey.OverallScore = new CallSurveyScore() { Score = 5 };
Rate call only - with custom scale and issues
C#
// configuring scale and score
survey.OverallScore = new CallSurveyScore() {
Scale = new CallSurveyRatingScale() {
LowerBound = 0,
UpperBound = 1,
LowScoreThreshold = 1,
},
Score = 1
};
// reporting one or more issues
survey.OverallIssues = CallIssues.HadToRejoin;
Rate overall, audio, and video with a sample issue
C#
survey.OverallScore = new CallSurveyScore() {
Score = 5
};
survey.AudioScore = new CallSurveyScore() {
Score = 4
};
survey.VideoScore = new CallSurveyScore() {
Score = 3
};
survey.VideoIssues = VideoIssues.Freezes;
The submitSurvey API can return an error in the following scenarios:
Overall survey rating is required.
CallSurveyRatingScale bounds must be within 0 and 100. LowerBound should be less than UpperBound. LowScoreThreshold should be within bounds.
Any of the scores must respect the bounds defined by the CallSurveyRatingScale. All values in the CallSurveyRatingScale object are inclusive. Using the default scale, the score value should be between 1 and 5.
Survey can't be submitted because of network/service error.
Available Survey tags
Overall Call
| Tag | Description |
| --- | --- |
| CannotJoin | Customer wasn't able to join a call |
| CannotInvite | Customer wasn't able to add a new participant to the call |
| HadToRejoin | Customer left and rejoined the call as a workaround for an issue |
| EndedUnexpectedly | Customer's call ended for no apparent reason |
| OtherIssues | Any issue that doesn't fit the previous descriptions |
Audio Issues
| Tag | Description |
| --- | --- |
| NoLocalAudio | No Audio on the customer machine from the call, inability to hear anyone in the call |
| NoRemoteAudio | Missing audio from a specific participant |
| Echo | Echo being perceived in the call |
| AudioNoise | Audio received with unintended noise |
| LowVolume | Audio too low |
| AudioStoppedUnexpectedly | Audio stopped with no clear reason (e.g. no one is muted) |
| DistortedSpeech | A participant's voice is distorted, different from their expected voice |
| AudioInterruption | Customer experiences audio interruptions, voice cuts, etc. |
| OtherIssues | Any issue that doesn't fit previous descriptions |
Video Issues
| Tag | Description |
| --- | --- |
| NoVideoReceived | Customer doesn't receive video from a participant |
| NoVideoSent | Customer starts video but no one in the call is able to see it |
| LowQuality | Low quality video |
| Freezes | Video freezes |
| StoppedUnexpectedly | Video stops with no clear reason (e.g. camera is on and video calling is on) |
| DarkVideoReceived | Video is being sent but participant sees only a dark box (or another single color) |
| AudioVideoOutOfSync | Video and audio don't seem to be in sync |
| OtherIssues | Any issue that doesn't fit the previous descriptions |
Screen share Issues
| Tag | Description |
| --- | --- |
| NoContentLocal | Customer doesn't receive screen share from a participant that is sharing |
| NoContentRemote | Customer is sharing their screen, but one or more other participants are unable to see it |
| CannotPresent | Unable to start screen share |
| LowQuality | Low quality on screen share video, e.g. unable to read |
| Freezes | Screen share freezes during presentation |
| StoppedUnexpectedly | Screen share stops with no clear reason (e.g. screen share wasn't stopped by the customer) |
| LargeDelay | Perceived delay between what is being shared and what is seen |
| OtherIssues | Any issue that doesn't fit the previous descriptions |
Customization options
You can choose to collect each of the four API values or only the ones
you find most important. For example, you can choose to only ask
customers about their overall call experience instead of asking them
about their audio, video, and screen share experience. You can also
customize input ranges to suit your needs. The default input range is 1
to 5 for Overall Call, Audio, Video, and
Screen share. However, each API value can be customized from a minimum of
0 to maximum of 100.
Note
A question’s indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
Custom questions
In addition to using the End of Call Survey API, you can create your own survey questions and incorporate them with the End of Call Survey results.
The result payload of the SubmitSurvey operation provides data that you can use to correlate ACS survey data with your own custom data and storage. The CallSurveyResult class has a SurveyId field that uniquely identifies the survey, and a CallId field that identifies the call where the survey was generated. Saving these identifiers along with your customized data allows the records to be associated uniquely.
Collect survey data
Important
You must enable a Diagnostic Setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, Event Hubs, or an Azure storage account in order to receive and analyze your survey data. If you don't send survey data to one of these options, your survey data won't be stored and will be lost. To enable these logs for your Communication Services resource, see: End of Call Survey Logs
View survey data with a Log Analytics workspace
You need to enable a Log Analytics Workspace to both store the log data of your surveys and access survey results. To enable these logs for your Communications Service, see: End of Call Survey Logs.
Here are our recommended survey flows and suggested question prompts for consideration. Your development team can use our recommendations or use customized question prompts and flows for your visual interface.
Question 1: How did the users perceive their overall call quality experience?
We recommend you start the survey by asking only about the participant's overall quality. Separating the first and second questions helps ensure you only collect responses about Audio, Video, and Screen Share issues when a survey participant indicates they experienced call quality issues.
Suggested prompt: “How was the call quality?”
API Question Values: Overall Call
Question 2: Did the user perceive any Audio, Video, or Screen Sharing issues in the call?
If a survey participant responded to Question 1 with a score at or below the cutoff value for the overall call, then present the second question.
Suggested prompt: “What could have been better?”
API Question Values: Audio, Video, and Screenshare
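A minimal sketch of this two-question flow on the web platform, where askOverallQuality and askIssueRatings are hypothetical UI helpers in your application and the rating property names follow the validation messages listed earlier (check the SDK typings for exact casing):
JavaScript
// Question 1: "How was the call quality?" (default 1-5 scale)
const overallScore = await askOverallQuality();
const survey = { overallRating: { score: overallScore } };

// Default cutoff for Overall Call is 2: a response of 1 or 2 signals a less than perfect experience.
if (overallScore <= 2) {
    // Question 2: "What could have been better?"
    const detail = await askIssueRatings();
    if (detail.audio) { survey.audioRating = { score: detail.audio }; }
    if (detail.video) { survey.videoRating = { score: detail.video }; }
    if (detail.screenshare) { survey.screenshareRating = { score: detail.screenshare }; }
}

await call.feature(Features.CallSurvey).submitSurvey(survey);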
Surveying Guidelines
To avoid survey burnout, don't survey all call participants.
The order of your questions matters. We recommend you randomize the sequence of optional tags in Question 2, because respondents may focus most of their feedback on the first prompt they see.
Consider using surveys for separate Azure Communication Services Resources in controlled experiments to identify release impacts.