Add audio quality enhancements to your audio calling experience

The Azure Communication Services audio effects noise suppression capability can improve your audio calls by filtering out unwanted background noise. Noise suppression makes it easier to talk and listen, and it can reduce the distraction and fatigue caused by noisy places. For example, if you're taking an Azure Communication Services WebJS call in a noisy coffee shop, turning on noise suppression can make the call experience better.

Important

Functionality described in this article is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Using audio effects - noise suppression

Install the npm package

Use the npm install command to install the Azure Communication Services Audio Effects SDK for JavaScript.

Important

This tutorial requires Azure Communication Services Calling SDK version 1.24.1-beta.1 or later and Azure Communication Services Calling Audio Effects SDK version 1.1.0-beta.1 or later.

npm install @azure/communication-calling-effects --save

Note

The calling effects library can't be used standalone. It works only with the Azure Communication Calling client library for WebJS (https://www.npmjs.com/package/@azure/communication-calling).

You can find more details on the calling effects npm package page.

Note

Browser support for audio noise suppression effects is currently limited to Chrome and Edge desktop browsers.

You can learn about the specifics of the calling API.

To use noise suppression audio effects within the Azure Communication Calling SDK, you need the LocalAudioStream that's currently in the call. Use the AudioEffects feature API of the LocalAudioStream to start and stop audio effects.

import * as AzureCommunicationCallingSDK from '@azure/communication-calling'; 
import { DeepNoiseSuppressionEffect } from '@azure/communication-calling-effects'; 

// Get the LocalAudioStream from the localAudioStream collection on the call object
// 'call' here represents the call object.
const localAudioStreamInCall = call.localAudioStreams[0];

// Get the audio effects feature API from LocalAudioStream
const audioEffectsFeatureApi = localAudioStreamInCall.feature(AzureCommunicationCallingSDK.Features.AudioEffects);

// Subscribe to useful events that show audio effects status
audioEffectsFeatureApi.on('effectsStarted', (activeEffects: ActiveAudioEffects) => {
    console.log(`Current status audio effects: ${activeEffects}`);
});


audioEffectsFeatureApi.on('effectsStopped', (activeEffects: ActiveAudioEffects) => {
    console.log(`Current status audio effects: ${activeEffects}`);
});


audioEffectsFeatureApi.on('effectsError', (error: AudioEffectErrorPayload) => {
    console.log(`Error with audio effects: ${error.message}`);
});

At any time, you can check which noise suppression effects are currently active by using the activeEffects property. The activeEffects property returns an object with the names of the currently active effects.

// Using the audio effects feature api
const currentActiveEffects = audioEffectsFeatureApi.activeEffects;
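
For example, you could use this property to decide whether to show a "turn on noise suppression" control in your UI. The following is a minimal sketch; it assumes the returned object exposes a noiseSuppression entry that mirrors the options passed to startEffects and stopEffects later in this article:

// Minimal sketch (assumption: activeEffects exposes a 'noiseSuppression' entry
// mirroring the startEffects/stopEffects options shown later in this article).
if (currentActiveEffects.noiseSuppression) {
    console.log('Noise suppression is currently active');
} else {
    console.log('Noise suppression is not active');
}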

Start a call with Noise Suppression enabled

To start a call with noise suppression turned on, create a new LocalAudioStream with an AudioDeviceInfo source (to use audio effects, the LocalAudioStream source shouldn't be a raw MediaStream), and pass it in CallStartOptions.audioOptions:

// Create the noise suppression effect instance
// (DeepNoiseSuppressionEffect is imported from '@azure/communication-calling-effects').
const deepNoiseSuppression = new DeepNoiseSuppressionEffect();

// As an example, create a LocalAudioStream using the currently selected microphone on the DeviceManager.
const audioDevice = deviceManager.selectedMicrophone;
const localAudioStreamWithEffects = new AzureCommunicationCallingSDK.LocalAudioStream(audioDevice);
const audioEffectsFeatureApi = localAudioStreamWithEffects.feature(AzureCommunicationCallingSDK.Features.AudioEffects);

// Start the effect.
await audioEffectsFeatureApi.startEffects({
    noiseSuppression: deepNoiseSuppression
});

// Pass the LocalAudioStream in audioOptions in the call start/accept options.
// 'callAgent' is your CallAgent instance; 'participants' is the list of call participants.
await callAgent.startCall(participants, {
    audioOptions: {
        muted: false,
        localAudioStreams: [localAudioStreamWithEffects]
    }
});

How to turn on Noise Suppression during an ongoing call

A user might start a call without noise suppression turned on, and their environment might then become noisy enough that they need to turn it on. To turn on noise suppression, use the audioEffectsFeatureApi.startEffects API.

// Create the noise suppression instance
const deepNoiseSuppression = new DeepNoiseSuppressionEffect();

// It's recommended to check support for the effect in the current environment by using the isSupported method on the feature API. Remember that noise suppression is only supported on desktop browsers for Chrome and Edge.
const isDeepNoiseSuppressionSupported = await audioEffectsFeatureApi.isSupported(deepNoiseSuppression);
if (isDeepNoiseSuppressionSupported) {
    console.log('Noise suppression is supported in this browser environment');
}

// To start ACS Deep Noise Suppression,
await audioEffectsFeatureApi.startEffects({
    noiseSuppression: deepNoiseSuppression
});

// To stop ACS Deep Noise Suppression
await audioEffectsFeatureApi.stopEffects({
    noiseSuppression: true
});
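
Putting these pieces together, you could wrap start and stop in a small toggle helper. The following is an illustrative sketch only; it assumes the audioEffectsFeatureApi and deepNoiseSuppression instances from the previous snippets, and that activeEffects reports a noiseSuppression entry while the effect is running:

// Illustrative helper: toggle ACS Deep Noise Suppression on or off.
// Assumes 'audioEffectsFeatureApi' and 'deepNoiseSuppression' from the snippets above.
async function toggleNoiseSuppression(): Promise<void> {
    if (audioEffectsFeatureApi.activeEffects.noiseSuppression) {
        await audioEffectsFeatureApi.stopEffects({ noiseSuppression: true });
    } else {
        await audioEffectsFeatureApi.startEffects({ noiseSuppression: deepNoiseSuppression });
    }
}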

Learn how to configure the audio filters with the Calling Native SDKs.

The Azure Communication Services audio effects offer filters that can improve your audio call. For native platforms (Android, iOS, and Windows), you can configure the following filters:

Echo cancellation

Echo cancellation eliminates acoustic echo caused by the caller's voice echoing back into the microphone after it's played from the speaker, helping ensure clear communication.

You can configure this filter before and during a call. You can toggle echo cancellation only if music mode is enabled. By default, this filter is enabled.

Noise suppression

Noise suppression improves audio quality by filtering out unwanted background noises such as typing, air conditioning, or street sounds. This technology ensures that the voice is crisp and clear, facilitating more effective communication.

You can configure the filter before and during a call. The currently available modes are Off, Auto, Low, and High. By default, this feature is set to High mode.

Automatic gain control (AGC)

Automatically adjusts the microphone's volume to ensure consistent audio levels throughout the call.

  • Analog automatic gain control is a filter only available before a call. By default, this filter is enabled.
  • Digital automatic gain control is a filter only available before a call. By default, this filter is enabled.

Music Mode

Music mode is a filter available before and during a call. Learn more about music mode here. Music mode works only on native platforms in 1:1 or group calls and doesn't work in 1:1 calls between native and web. By default, music mode is disabled.

Prerequisites

Install the SDK

Locate your project-level build.gradle file and add mavenCentral() to the list of repositories under buildscript and allprojects:

buildscript {
    repositories {
    ...
        mavenCentral()
    ...
    }
}
allprojects {
    repositories {
    ...
        mavenCentral()
    ...
    }
}

Then, in your module-level build.gradle file, add the following lines to the dependencies section:

dependencies {
    ...
    implementation 'com.azure.android:azure-communication-calling:1.0.0'
    ...
}

Initialize the required objects

To create a CallAgent instance, you have to call the createCallAgent method on a CallClient instance. This call asynchronously returns a CallAgent instance object.

The createCallAgent method takes CommunicationTokenCredential as an argument, which encapsulates an access token.

To access DeviceManager, you must create a callAgent instance first. Then you can use the CallClient.getDeviceManager method to get DeviceManager.

String userToken = "<user token>";
CallClient callClient = new CallClient();
CommunicationTokenCredential tokenCredential = new CommunicationTokenCredential(userToken);
android.content.Context appContext = this.getApplicationContext(); // From within an activity, for instance
CallAgent callAgent = callClient.createCallAgent(appContext, tokenCredential).get();
DeviceManager deviceManager = callClient.getDeviceManager(appContext).get();

To set a display name for the caller, use this alternative method:

String userToken = "<user token>";
CallClient callClient = new CallClient();
CommunicationTokenCredential tokenCredential = new CommunicationTokenCredential(userToken);
android.content.Context appContext = this.getApplicationContext(); // From within an activity, for instance
CallAgentOptions callAgentOptions = new CallAgentOptions();
callAgentOptions.setDisplayName("Alice Bob");
DeviceManager deviceManager = callClient.getDeviceManager(appContext).get();
CallAgent callAgent = callClient.createCallAgent(appContext, tokenCredential, callAgentOptions).get();

The audio filter feature allows different audio preprocessing options to be applied to outgoing audio. There are two types of audio filters: OutgoingAudioFilters, which sets options before a call starts, and LiveOutgoingAudioFilters, which changes settings while a call is in progress.

You first need to import the Calling SDK and the associated classes:

import com.azure.android.communication.calling.OutgoingAudioOptions;
import com.azure.android.communication.calling.OutgoingAudioFilters;
import com.azure.android.communication.calling.LiveOutgoingAudioFilters;
import com.azure.android.communication.calling.NoiseSuppressionMode;

Before call starts

OutgoingAudioFilters can be applied when a call starts.

Begin by creating an OutgoingAudioFilters instance and passing it into OutgoingAudioOptions, as shown in the following code:

OutgoingAudioOptions outgoingAudioOptions = new OutgoingAudioOptions();
OutgoingAudioFilters filters = new OutgoingAudioFilters();
filters.setNoiseSuppressionMode(NoiseSuppressionMode.HIGH);
filters.setAnalogAutomaticGainControlEnabled(true);
filters.setDigitalAutomaticGainControlEnabled(true);
filters.setMusicModeEnabled(true);
filters.setAcousticEchoCancellationEnabled(true); 
outgoingAudioOptions.setAudioFilters(filters);
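
You can then supply these options when the call is placed. The following is a sketch under the assumption that StartCallOptions exposes a setOutgoingAudioOptions method in your Calling SDK version; check the API reference for the exact names:

// Sketch: pass the configured outgoing audio options when starting a call.
// Assumption: StartCallOptions.setOutgoingAudioOptions is available in your SDK version.
StartCallOptions startCallOptions = new StartCallOptions();
startCallOptions.setOutgoingAudioOptions(outgoingAudioOptions);

CommunicationUserIdentifier callee = new CommunicationUserIdentifier("<callee id>");
Call call = callAgent.startCall(appContext, Collections.singletonList(callee), startCallOptions);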

During the call

LiveOutgoingAudioFilters can be applied after a call starts. You retrieve this object from the call object once the call is in progress. To change a setting in LiveOutgoingAudioFilters, set its members to valid values; the changes are applied immediately.

Only a subset of the filters available from OutgoingAudioFilters are available during an active call: music mode, echo cancellation, and noise suppression mode.

LiveOutgoingAudioFilters filters = call.getLiveOutgoingAudioFilters();
filters.setMusicModeEnabled(false);
filters.setAcousticEchoCancellationEnabled(false);
filters.setNoiseSuppressionMode(NoiseSuppressionMode.HIGH);

Learn how to configure the audio filters with the Calling Native SDKs.

The Azure Communication Services audio effects offer filters that can improve your audio call. For native platforms (Android, iOS, and Windows), you can configure the following filters:

Echo cancellation

Echo cancellation eliminates acoustic echo caused by the caller's voice echoing back into the microphone after it's played from the speaker, helping ensure clear communication.

You can configure this filter before and during a call. You can toggle echo cancellation only if music mode is enabled. By default, this filter is enabled.

Noise suppression

Noise suppression improves audio quality by filtering out unwanted background noises such as typing, air conditioning, or street sounds. This technology ensures that the voice is crisp and clear, facilitating more effective communication.

You can configure the filter before and during a call. The currently available modes are Off, Auto, Low, and High. By default, this feature is set to High mode.

Automatic gain control (AGC)

Automatically adjusts the microphone's volume to ensure consistent audio levels throughout the call.

  • Analog automatic gain control is a filter only available before a call. By default, this filter is enabled.
  • Digital automatic gain control is a filter only available before a call. By default, this filter is enabled.

Music Mode

Music mode is a filter available before and during a call. Learn more about music mode here. Music mode works only on native platforms in 1:1 or group calls and doesn't work in 1:1 calls between native and web. By default, music mode is disabled.

Prerequisites

Set up your system

Create the Xcode project

In Xcode, create a new iOS project and select the Single View App template. This quickstart uses the SwiftUI framework, so you should set Language to Swift and set Interface to SwiftUI.

You're not going to create tests during this quickstart. Feel free to clear the Include Tests checkbox.

Screenshot that shows the window for creating a project within Xcode.

Install the package and dependencies by using CocoaPods

  1. Create a Podfile for your application, like this example:

    platform :ios, '13.0'
    use_frameworks!
    target 'AzureCommunicationCallingSample' do
        pod 'AzureCommunicationCalling', '~> 1.0.0'
    end
    
  2. Run pod install.

  3. Open .xcworkspace by using Xcode.

Request access to the microphone

To access the device's microphone, you need to update your app's information property list by using NSMicrophoneUsageDescription. You set the associated value to a string that will be included in the dialog that the system uses to request access from the user.

Right-click the Info.plist entry of the project tree, and then select Open As > Source Code. Add the following lines in the top-level <dict> section, and then save the file.

<key>NSMicrophoneUsageDescription</key>
<string>Need microphone access for VOIP calling.</string>

Set up the app framework

Open your project's ContentView.swift file. Add an import declaration to the top of the file to import the AzureCommunicationCalling library. In addition, import AVFoundation. You'll need it for audio permission requests in the code.

import AzureCommunicationCalling
import AVFoundation

Initialize CallAgent

To create a CallAgent instance from CallClient, you have to use a callClient.createCallAgent method that asynchronously returns a CallAgent object after it's initialized.

To create a call client, pass a CommunicationTokenCredential object:

import AzureCommunication

let tokenString = "token_string"
var userCredential: CommunicationTokenCredential?
do {
    let options = CommunicationTokenRefreshOptions(initialToken: tokenString, refreshProactively: true, tokenRefresher: self.fetchTokenSync)
    userCredential = try CommunicationTokenCredential(withOptions: options)
} catch {
    updates("Couldn't created Credential object", false)
    initializationDispatchGroup!.leave()
    return
}

// tokenProvider needs to be implemented by Contoso, which fetches a new token
public func fetchTokenSync(then onCompletion: TokenRefreshOnCompletion) {
    let newToken = self.tokenProvider!.fetchNewToken()
    onCompletion(newToken, nil)
}

Pass the CommunicationTokenCredential object that you created to CallClient, and set the display name:

self.callClient = CallClient()
let callAgentOptions = CallAgentOptions()
callAgentOptions.displayName = "iOS Azure Communication Services User"

self.callClient!.createCallAgent(userCredential: userCredential!,
    options: callAgentOptions) { (callAgent, error) in
        if error == nil {
            print("Create agent succeeded")
            self.callAgent = callAgent
        } else {
            print("Create agent failed")
        }
}

The audio filter feature allows different audio preprocessing options to be applied to outgoing audio. There are two types of audio filters: OutgoingAudioFilters, which sets options before a call starts, and LiveOutgoingAudioFilters, which changes settings while a call is in progress.

You first need to import the Calling SDK:

import AzureCommunicationCalling

Before call starts

OutgoingAudioFilters can be applied when a call starts.

Begin by creating an OutgoingAudioFilters instance and passing it into OutgoingAudioOptions, as shown in the following code:

let outgoingAudioOptions = OutgoingAudioOptions()
let filters = OutgoingAudioFilters()
filters.noiseSuppressionMode = NoiseSuppressionMode.high
filters.analogAutomaticGainControlEnabled = true
filters.digitalAutomaticGainControlEnabled = true
filters.musicModeEnabled = true
filters.acousticEchoCancellationEnabled = true
outgoingAudioOptions.audioFilters = filters
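
You can then supply these options when the call is placed. The following is a sketch under the assumption that StartCallOptions exposes an outgoingAudioOptions property in your Calling SDK version; check the API reference for the exact names:

// Sketch: pass the configured outgoing audio options when starting a call.
// Assumption: StartCallOptions.outgoingAudioOptions is available in your SDK version.
let startCallOptions = StartCallOptions()
startCallOptions.outgoingAudioOptions = outgoingAudioOptions

let callees = [CommunicationUserIdentifier("<callee id>")]
self.callAgent?.startCall(participants: callees, options: startCallOptions) { (call, error) in
    if error == nil {
        print("Call started with audio filters applied")
    } else {
        print("Failed to start the call")
    }
}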

During the call

LiveOutgoingAudioFilters can be applied after a call starts. You retrieve this object from the call object once the call is in progress. To change a setting in LiveOutgoingAudioFilters, set its members to valid values; the changes are applied immediately.

Only a subset of the filters available from OutgoingAudioFilters are available during an active call: music mode, echo cancellation, and noise suppression mode.

let filters = call.liveOutgoingAudioFilters
filters.musicModeEnabled = true
filters.acousticEchoCancellationEnabled = true
filters.noiseSuppressionMode = NoiseSuppressionMode.high

Learn how to configure the audio filters with the Calling Native SDKs.

The Azure Communication Services audio effects offer filters that can improve your audio call. For native platforms (Android, iOS, and Windows), you can configure the following filters:

Echo cancellation

Echo cancellation eliminates acoustic echo caused by the caller's voice echoing back into the microphone after it's played from the speaker, helping ensure clear communication.

You can configure this filter before and during a call. You can toggle echo cancellation only if music mode is enabled. By default, this filter is enabled.

Noise suppression

Noise suppression improves audio quality by filtering out unwanted background noises such as typing, air conditioning, or street sounds. This technology ensures that the voice is crisp and clear, facilitating more effective communication.

You can configure the filter before and during a call. The currently available modes are Off, Auto, Low, and High. By default, this feature is set to High mode.

Automatic gain control (AGC)

Automatically adjusts the microphone's volume to ensure consistent audio levels throughout the call.

  • Analog automatic gain control is a filter only available before a call. By default, this filter is enabled.
  • Digital automatic gain control is a filter only available before a call. By default, this filter is enabled.

Music Mode

Music mode is a filter available before and during a call. Learn more about music mode here. Music mode works only on native platforms in 1:1 or group calls and doesn't work in 1:1 calls between native and web. By default, music mode is disabled.

Prerequisites

Set up your system

Create the Visual Studio project

For a UWP app, in Visual Studio 2022, create a new Blank App (Universal Windows) project. After you enter the project name, feel free to choose any Windows SDK later than 10.0.17763.0.

For a WinUI 3 app, create a new project with the Blank App, Packaged (WinUI 3 in Desktop) template to set up a single-page WinUI 3 app. Windows App SDK version 1.3 or later is required.

Install the package and dependencies by using NuGet Package Manager

The Calling SDK APIs and libraries are publicly available via a NuGet package.

The following steps show how to find, download, and install the Calling SDK NuGet package:

  1. Open NuGet Package Manager by selecting Tools > NuGet Package Manager > Manage NuGet Packages for Solution.
  2. Select Browse, and then enter Azure.Communication.Calling.WindowsClient in the search box.
  3. Make sure that the Include prerelease check box is selected.
  4. Select the Azure.Communication.Calling.WindowsClient package, and then select Azure.Communication.Calling.WindowsClient 1.4.0-beta.1 or a newer version.
  5. Select the checkbox that corresponds to the Communication Services project on the right-side tab.
  6. Select the Install button.

The audio filter feature allows different audio preprocessing options to be applied to outgoing audio. There are two types of audio filters: OutgoingAudioFilters, which sets options before a call starts, and LiveOutgoingAudioFilters, which changes settings while a call is in progress.

You first need to import the Calling SDK:

using Azure.Communication;
using Azure.Communication.Calling.WindowsClient;

Before call starts

OutgoingAudioFilters can be applied when a call starts.

Begin by creating an OutgoingAudioFilters instance and passing it into OutgoingAudioOptions, as shown in the following code:

var outgoingAudioOptions = new OutgoingAudioOptions();
var filters = new OutgoingAudioFilters()
{
    AnalogAutomaticGainControlEnabled = true,
    DigitalAutomaticGainControlEnabled = true,
    MusicModeEnabled = true,
    AcousticEchoCancellationEnabled = true,
    NoiseSuppressionMode = NoiseSuppressionMode.High
};
outgoingAudioOptions.Filters = filters;
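
You can then supply these options when the call is placed. The following is a sketch under the assumption that StartCallOptions exposes an OutgoingAudioOptions property and that CallAgent.StartCallAsync is available in your Calling SDK version; check the API reference for the exact names:

// Sketch: pass the configured outgoing audio options when starting a call.
// Assumptions: StartCallOptions.OutgoingAudioOptions and CallAgent.StartCallAsync
// are available in your SDK version; 'callees' is your list of call participants.
var startCallOptions = new StartCallOptions()
{
    OutgoingAudioOptions = outgoingAudioOptions
};
var call = await callAgent.StartCallAsync(callees, startCallOptions);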

During the call

LiveOutgoingAudioFilters can be applied after a call starts. You retrieve this object from the call object once the call is in progress. To change a setting in LiveOutgoingAudioFilters, set its members to valid values; the changes are applied immediately.

Only a subset of the filters available from OutgoingAudioFilters are available during an active call: music mode, echo cancellation, and noise suppression mode.

LiveOutgoingAudioFilters filter = call.LiveOutgoingAudioFilters;
filter.MusicModeEnabled = true;
filter.AcousticEchoCancellationEnabled = true;
filter.NoiseSuppressionMode = NoiseSuppressionMode.Auto;