Quickstart: Real-time diarization (Preview)
Reference documentation | Package (NuGet) | Additional Samples on GitHub
In this quickstart, you run an application for speech to text transcription with real-time diarization. Here, diarization is distinguishing between the different speakers participating in the conversation. The Speech service provides information about which speaker was speaking a particular part of transcribed speech.
Note
Real-time diarization is currently in public preview.
The speaker information is included in the result in the speaker ID field. The speaker ID is a generic identifier that the service assigns to each conversation participant during recognition, as different speakers are identified in the provided audio content.
Tip
You can try real-time speech-to-text in Speech Studio without signing up or writing any code. However, the Speech Studio doesn't yet support diarization.
Prerequisites
- Azure subscription - Create one for free.
- Create a Speech resource in the Azure portal.
- Your Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys. For more information about Azure AI services resources, see Get the keys for your resource.
Set up the environment
The Speech SDK is available as a NuGet package and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the SDK installation guide for any more requirements.
Set environment variables
Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you get a key for your Speech resource, write it to a new environment variable on the local machine that runs the application.
Tip
Don't include the key directly in your code, and never post it publicly. See Azure AI services security for more authentication options such as Azure Key Vault.
To set the environment variable for your Speech resource key, open a console window, and follow the instructions for your operating system and development environment.
- To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
- To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region
Note
If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.
After you add the environment variables, you might need to restart any programs that need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.
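Optionally, you can confirm that a new process can read the values before you run the quickstart. The following is a minimal, hypothetical C# sketch (the CheckSpeechEnv name isn't part of the quickstart); it only reports whether the variables are visible and doesn't print your key.

using System;

class CheckSpeechEnv
{
    static void Main()
    {
        // Read the same variables the quickstart code expects.
        var key = Environment.GetEnvironmentVariable("SPEECH_KEY");
        var region = Environment.GetEnvironmentVariable("SPEECH_REGION");

        // If either value is missing, restart the console (or Visual Studio)
        // so it picks up the values you set with setx.
        Console.WriteLine($"SPEECH_KEY is {(string.IsNullOrEmpty(key) ? "not set" : "set")}");
        Console.WriteLine($"SPEECH_REGION is {(string.IsNullOrEmpty(region) ? "not set" : "set")}");
    }
}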
Diarization from file with conversation transcription
Follow these steps to create a new console application and install the Speech SDK.
Open a command prompt where you want the new project, and create a console application with the .NET CLI. The Program.cs file should be created in the project directory.
dotnet new console
Install the Speech SDK in your new project with the .NET CLI.
dotnet add package Microsoft.CognitiveServices.Speech
Replace the contents of Program.cs with the following code.

using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Transcription;

class Program 
{
    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
    static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
    static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION");

    async static Task Main(string[] args)
    {
        var filepath = "katiesteve.wav";
        var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
        speechConfig.SpeechRecognitionLanguage = "en-US";

        var stopRecognition = new TaskCompletionSource<int>(TaskCreationOptions.RunContinuationsAsynchronously);

        // Create an audio stream from a wav file or from the default microphone
        using (var audioConfig = AudioConfig.FromWavFileInput(filepath))
        {
            // Create a conversation transcriber using audio stream input
            using (var conversationTranscriber = new ConversationTranscriber(speechConfig, audioConfig))
            {
                conversationTranscriber.Transcribing += (s, e) =>
                {
                    Console.WriteLine($"TRANSCRIBING: Text={e.Result.Text}");
                };

                conversationTranscriber.Transcribed += (s, e) =>
                {
                    if (e.Result.Reason == ResultReason.RecognizedSpeech)
                    {
                        Console.WriteLine($"TRANSCRIBED: Text={e.Result.Text} Speaker ID={e.Result.SpeakerId}");
                    }
                    else if (e.Result.Reason == ResultReason.NoMatch)
                    {
                        Console.WriteLine($"NOMATCH: Speech could not be transcribed.");
                    }
                };

                conversationTranscriber.Canceled += (s, e) =>
                {
                    Console.WriteLine($"CANCELED: Reason={e.Reason}");

                    if (e.Reason == CancellationReason.Error)
                    {
                        Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
                        Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
                        Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
                        stopRecognition.TrySetResult(0);
                    }

                    stopRecognition.TrySetResult(0);
                };

                conversationTranscriber.SessionStopped += (s, e) =>
                {
                    Console.WriteLine("\n Session stopped event.");
                    stopRecognition.TrySetResult(0);
                };

                await conversationTranscriber.StartTranscribingAsync();

                // Waits for completion. Use Task.WaitAny to keep the task rooted.
                Task.WaitAny(new[] { stopRecognition.Task });

                await conversationTranscriber.StopTranscribingAsync();
            }
        }
    }
}
Replace katiesteve.wav with the filepath and filename of your .wav file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the sample audio file provided in the Speech SDK samples repository on GitHub.
Note
The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise, the speaker ID is returned as Unknown.
To change the speech recognition language, replace en-US with another supported language. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.
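For example, the only change needed in the quickstart code to transcribe a Spanish (Spain) conversation is the language set on the speech config. The following is a minimal sketch, assuming the rest of Program.cs stays the same; the class name and the spanish-conversation.wav file name are placeholders, not part of the quickstart.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Transcription;

class SpanishTranscriptionSketch
{
    async static Task Main(string[] args)
    {
        var speechConfig = SpeechConfig.FromSubscription(
            Environment.GetEnvironmentVariable("SPEECH_KEY"),
            Environment.GetEnvironmentVariable("SPEECH_REGION"));

        // The only change from the quickstart: transcribe Spanish (Spain)
        // instead of the default en-US.
        speechConfig.SpeechRecognitionLanguage = "es-ES";

        // "spanish-conversation.wav" is a placeholder; use your own Spanish audio file.
        using var audioConfig = AudioConfig.FromWavFileInput("spanish-conversation.wav");
        using var conversationTranscriber = new ConversationTranscriber(speechConfig, audioConfig);

        // Subscribe to the same Transcribing/Transcribed/Canceled/SessionStopped events,
        // then wait for the stop signal and call StopTranscribingAsync,
        // exactly as shown in Program.cs above.
        await conversationTranscriber.StartTranscribingAsync();
    }
}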
Run your new console application to start conversation transcription:
dotnet run
Important
Make sure that you set the SPEECH_KEY and SPEECH_REGION environment variables as described above. If you don't set these variables, the sample will fail with an error message.
The transcribed conversation should be output as text:
TRANSCRIBED: Text=Good morning, Steve. Speaker ID=Unknown
TRANSCRIBED: Text=Good morning. Katie. Speaker ID=Unknown
TRANSCRIBED: Text=Have you tried the latest real time diarization in Microsoft Speech Service which can tell you who said what in real time? Speaker ID=Guest-1
TRANSCRIBED: Text=Not yet. I've been using the batch transcription with diarization functionality, but it produces diarization result until whole audio get processed. Speaker ID=Guest-2
TRANSCRIBED: Text=Is the new feature can diarize in real time? Speaker ID=Guest-2
TRANSCRIBED: Text=Absolutely. Speaker ID=Guest-1
TRANSCRIBED: Text=That's exciting. Let me try it right now. Speaker ID=Guest-2
CANCELED: Reason=EndOfStream
Speakers are identified as Guest-1, Guest-2, and so on, depending on the number of speakers in the conversation.
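If you want to do more than print each result, one straightforward option is to collect the Transcribed results into a per-speaker transcript. The following is a small, hypothetical C# sketch (the TranscriptBuilder class isn't part of the Speech SDK); it only relies on the Text and SpeakerId values shown above.

using System;
using System.Collections.Generic;

// A minimal sketch for collecting Transcribed results into a simple transcript.
class TranscriptBuilder
{
    private readonly List<(string Speaker, string Text)> _lines = new();

    // Call this from the conversationTranscriber.Transcribed handler, for example:
    //   builder.Add(e.Result.SpeakerId, e.Result.Text);
    public void Add(string speakerId, string text)
    {
        if (!string.IsNullOrEmpty(text))
        {
            _lines.Add((speakerId, text));
        }
    }

    // Print the conversation in order, prefixed with the speaker label
    // (Guest-1, Guest-2, or Unknown).
    public void Print()
    {
        foreach (var (speaker, text) in _lines)
        {
            Console.WriteLine($"{speaker}: {text}");
        }
    }
}

You would call Add from the Transcribed event handler and call Print after transcription stops.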
Clean up resources
You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.
Reference documentation | Package (NuGet) | Additional Samples on GitHub
In this quickstart, you run an application for speech to text transcription with real-time diarization. Here, diarization is distinguishing between the different speakers participating in the conversation. The Speech service provides information about which speaker was speaking a particular part of transcribed speech.
Note
Real-time diarization is currently in public preview.
The speaker information is included in the result in the speaker ID field. The speaker ID is a generic identifier that the service assigns to each conversation participant during recognition, as different speakers are identified in the provided audio content.
Tip
You can try real-time speech-to-text in Speech Studio without signing up or writing any code. However, the Speech Studio doesn't yet support diarization.
Prerequisites
- Azure subscription - Create one for free.
- Create a Speech resource in the Azure portal.
- Your Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys. For more information about Azure AI services resources, see Get the keys for your resource.
Set up the environment
The Speech SDK is available as a NuGet package and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the SDK installation guide for any more requirements.
Set environment variables
Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you get a key for your Speech resource, write it to a new environment variable on the local machine that runs the application.
Tip
Don't include the key directly in your code, and never post it publicly. See Azure AI services security for more authentication options such as Azure Key Vault.
To set the environment variable for your Speech resource key, open a console window, and follow the instructions for your operating system and development environment.
- To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
- To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region
Note
If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.
After you add the environment variables, you might need to restart any programs that need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.
Diarization from file with conversation transcription
Follow these steps to create a new console application and install the Speech SDK.
Create a new C++ console project in Visual Studio Community 2022 named ConversationTranscription.
Install the Speech SDK in your new project with the NuGet package manager.
Install-Package Microsoft.CognitiveServices.Speech
Replace the contents of ConversationTranscription.cpp with the following code:

#include <iostream>
#include <stdlib.h>
#include <speechapi_cxx.h>
#include <future>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;
using namespace Microsoft::CognitiveServices::Speech::Transcription;

std::string GetEnvironmentVariable(const char* name);

int main()
{
    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
    auto speechKey = GetEnvironmentVariable("SPEECH_KEY");
    auto speechRegion = GetEnvironmentVariable("SPEECH_REGION");

    if ((size(speechKey) == 0) || (size(speechRegion) == 0)) {
        std::cout << "Please set both SPEECH_KEY and SPEECH_REGION environment variables." << std::endl;
        return -1;
    }

    auto speechConfig = SpeechConfig::FromSubscription(speechKey, speechRegion);
    speechConfig->SetSpeechRecognitionLanguage("en-US");

    auto audioConfig = AudioConfig::FromWavFileInput("katiesteve.wav");
    auto conversationTranscriber = ConversationTranscriber::FromConfig(speechConfig, audioConfig);

    // promise for synchronization of recognition end.
    std::promise<void> recognitionEnd;

    // Subscribes to events.
    conversationTranscriber->Transcribing.Connect([](const ConversationTranscriptionEventArgs& e)
        {
            std::cout << "TRANSCRIBING:" << e.Result->Text << std::endl;
        });

    conversationTranscriber->Transcribed.Connect([](const ConversationTranscriptionEventArgs& e)
        {
            if (e.Result->Reason == ResultReason::RecognizedSpeech)
            {
                std::cout << "TRANSCRIBED: Text=" << e.Result->Text << std::endl;
                std::cout << "Speaker ID=" << e.Result->SpeakerId << std::endl;
            }
            else if (e.Result->Reason == ResultReason::NoMatch)
            {
                std::cout << "NOMATCH: Speech could not be transcribed." << std::endl;
            }
        });

    conversationTranscriber->Canceled.Connect([&recognitionEnd](const ConversationTranscriptionCanceledEventArgs& e)
        {
            auto cancellation = CancellationDetails::FromResult(e.Result);
            std::cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;

            if (cancellation->Reason == CancellationReason::Error)
            {
                std::cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
                std::cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
                std::cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
            }
            else if (cancellation->Reason == CancellationReason::EndOfStream)
            {
                std::cout << "CANCELED: Reach the end of the file." << std::endl;
            }
        });

    conversationTranscriber->SessionStopped.Connect([&recognitionEnd](const SessionEventArgs& e)
        {
            std::cout << "Session stopped.";
            recognitionEnd.set_value(); // Notify to stop recognition.
        });

    conversationTranscriber->StartTranscribingAsync().wait();

    // Waits for recognition end.
    recognitionEnd.get_future().wait();

    conversationTranscriber->StopTranscribingAsync().wait();
}

std::string GetEnvironmentVariable(const char* name)
{
#if defined(_MSC_VER)
    size_t requiredSize = 0;
    (void)getenv_s(&requiredSize, nullptr, 0, name);
    if (requiredSize == 0)
    {
        return "";
    }
    auto buffer = std::make_unique<char[]>(requiredSize);
    (void)getenv_s(&requiredSize, buffer.get(), requiredSize, name);
    return buffer.get();
#else
    auto value = getenv(name);
    return value ? value : "";
#endif
}
Replace katiesteve.wav with the filepath and filename of your .wav file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the sample audio file provided in the Speech SDK samples repository on GitHub.
Note
The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise, the speaker ID is returned as Unknown.
To change the speech recognition language, replace en-US with another supported language. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.
Build and run your application to start conversation transcription:
Important
Make sure that you set the SPEECH_KEY and SPEECH_REGION environment variables as described above. If you don't set these variables, the sample will fail with an error message.
The transcribed conversation should be output as text:
TRANSCRIBED: Text=Good morning, Steve. Speaker ID=Unknown
TRANSCRIBED: Text=Good morning. Katie. Speaker ID=Unknown
TRANSCRIBED: Text=Have you tried the latest real time diarization in Microsoft Speech Service which can tell you who said what in real time? Speaker ID=Guest-1
TRANSCRIBED: Text=Not yet. I've been using the batch transcription with diarization functionality, but it produces diarization result until whole audio get processed. Speaker ID=Guest-2
TRANSCRIBED: Text=Is the new feature can diarize in real time? Speaker ID=Guest-2
TRANSCRIBED: Text=Absolutely. Speaker ID=Guest-1
TRANSCRIBED: Text=That's exciting. Let me try it right now. Speaker ID=Guest-2
CANCELED: Reason=EndOfStream
Speakers are identified as Guest-1, Guest-2, and so on, depending on the number of speakers in the conversation.
Clean up resources
You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.
Reference documentation | Package (Go) | Additional Samples on GitHub
The Speech SDK for Go doesn't support conversation transcription. Select another programming language, or see the Go reference and samples linked from the beginning of this article.
Reference documentation | Additional Samples on GitHub
In this quickstart, you run an application for speech to text transcription with real-time diarization. Here, diarization is distinguishing between the different speakers participating in the conversation. The Speech service provides information about which speaker was speaking a particular part of transcribed speech.
Note
Real-time diarization is currently in public preview.
The speaker information is included in the result in the speaker ID field. The speaker ID is a generic identifier that the service assigns to each conversation participant during recognition, as different speakers are identified in the provided audio content.
Tip
You can try real-time speech-to-text in Speech Studio without signing up or writing any code. However, the Speech Studio doesn't yet support diarization.
Prerequisites
- Azure subscription - Create one for free.
- Create a Speech resource in the Azure portal.
- Your Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys. For more information about Azure AI services resources, see Get the keys for your resource.
Set up the environment
Before you can do anything, you need to install the Speech SDK. The sample in this quickstart works with the Java Runtime.
- Install Apache Maven. Then run mvn -v to confirm successful installation.
- Create a new pom.xml file in the root of your project, and copy the following into it:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.microsoft.cognitiveservices.speech.samples</groupId>
    <artifactId>quickstart-eclipse</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <build>
        <sourceDirectory>src</sourceDirectory>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.7.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>com.microsoft.cognitiveservices.speech</groupId>
            <artifactId>client-sdk</artifactId>
            <version>1.33.0</version>
        </dependency>
    </dependencies>
</project>
- Install the Speech SDK and dependencies.
mvn clean dependency:copy-dependencies
Set environment variables
Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you get a key for your Speech resource, write it to a new environment variable on the local machine that runs the application.
Tip
Don't include the key directly in your code, and never post it publicly. See Azure AI services security for more authentication options such as Azure Key Vault.
To set the environment variable for your Speech resource key, open a console window, and follow the instructions for your operating system and development environment.
- To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
- To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region
Note
If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.
After you add the environment variables, you might need to restart any programs that need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.
Diarization from file with conversation transcription
Follow these steps to create a new console application for conversation transcription.
Create a new file named ConversationTranscription.java in the same project root directory.
Copy the following code into ConversationTranscription.java:

import com.microsoft.cognitiveservices.speech.*;
import com.microsoft.cognitiveservices.speech.audio.AudioConfig;
import com.microsoft.cognitiveservices.speech.transcription.*;

import java.util.concurrent.Semaphore;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class ConversationTranscription {
    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
    private static String speechKey = System.getenv("SPEECH_KEY");
    private static String speechRegion = System.getenv("SPEECH_REGION");

    public static void main(String[] args) throws InterruptedException, ExecutionException {

        SpeechConfig speechConfig = SpeechConfig.fromSubscription(speechKey, speechRegion);
        speechConfig.setSpeechRecognitionLanguage("en-US");
        AudioConfig audioInput = AudioConfig.fromWavFileInput("katiesteve.wav");

        Semaphore stopRecognitionSemaphore = new Semaphore(0);

        ConversationTranscriber conversationTranscriber = new ConversationTranscriber(speechConfig, audioInput);
        {
            // Subscribes to events.
            conversationTranscriber.transcribing.addEventListener((s, e) -> {
                System.out.println("TRANSCRIBING: Text=" + e.getResult().getText());
            });

            conversationTranscriber.transcribed.addEventListener((s, e) -> {
                if (e.getResult().getReason() == ResultReason.RecognizedSpeech) {
                    System.out.println("TRANSCRIBED: Text=" + e.getResult().getText() + " Speaker ID=" + e.getResult().getSpeakerId() );
                }
                else if (e.getResult().getReason() == ResultReason.NoMatch) {
                    System.out.println("NOMATCH: Speech could not be transcribed.");
                }
            });

            conversationTranscriber.canceled.addEventListener((s, e) -> {
                System.out.println("CANCELED: Reason=" + e.getReason());

                if (e.getReason() == CancellationReason.Error) {
                    System.out.println("CANCELED: ErrorCode=" + e.getErrorCode());
                    System.out.println("CANCELED: ErrorDetails=" + e.getErrorDetails());
                    System.out.println("CANCELED: Did you update the subscription info?");
                }

                stopRecognitionSemaphore.release();
            });

            conversationTranscriber.sessionStarted.addEventListener((s, e) -> {
                System.out.println("\n Session started event.");
            });

            conversationTranscriber.sessionStopped.addEventListener((s, e) -> {
                System.out.println("\n Session stopped event.");
            });

            conversationTranscriber.startTranscribingAsync().get();

            // Waits for completion.
            stopRecognitionSemaphore.acquire();

            conversationTranscriber.stopTranscribingAsync().get();
        }

        speechConfig.close();
        audioInput.close();
        conversationTranscriber.close();

        System.exit(0);
    }
}
Replace katiesteve.wav with the filepath and filename of your .wav file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the sample audio file provided in the Speech SDK samples repository on GitHub.
Note
The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise, the speaker ID is returned as Unknown.
To change the speech recognition language, replace en-US with another supported language. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.
Run your new console application to start conversation transcription:
javac ConversationTranscription.java -cp ".;target\dependency\*"
java -cp ".;target\dependency\*" ConversationTranscription
Important
Make sure that you set the SPEECH_KEY and SPEECH_REGION environment variables as described above. If you don't set these variables, the sample will fail with an error message.
The transcribed conversation should be output as text:
TRANSCRIBED: Text=Good morning, Steve. Speaker ID=Unknown
TRANSCRIBED: Text=Good morning. Katie. Speaker ID=Unknown
TRANSCRIBED: Text=Have you tried the latest real time diarization in Microsoft Speech Service which can tell you who said what in real time? Speaker ID=Guest-1
TRANSCRIBED: Text=Not yet. I've been using the batch transcription with diarization functionality, but it produces diarization result until whole audio get processed. Speaker ID=Guest-2
TRANSCRIBED: Text=Is the new feature can diarize in real time? Speaker ID=Guest-2
TRANSCRIBED: Text=Absolutely. Speaker ID=Guest-1
TRANSCRIBED: Text=That's exciting. Let me try it right now. Speaker ID=Guest-2
CANCELED: Reason=EndOfStream
Speakers are identified as Guest-1, Guest-2, and so on, depending on the number of speakers in the conversation.
Clean up resources
You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.
Reference documentation | Package (npm) | Additional Samples on GitHub | Library source code
In this quickstart, you run an application for speech to text transcription with real-time diarization. Here, diarization is distinguishing between the different speakers participating in the conversation. The Speech service provides information about which speaker was speaking a particular part of transcribed speech.
Note
Real-time diarization is currently in public preview.
The speaker information is included in the result in the speaker ID field. The speaker ID is a generic identifier that the service assigns to each conversation participant during recognition, as different speakers are identified in the provided audio content.
Tip
You can try real-time speech-to-text in Speech Studio without signing up or writing any code. However, the Speech Studio doesn't yet support diarization.
Prerequisites
- Azure subscription - Create one for free.
- Create a Speech resource in the Azure portal.
- Your Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys. For more information about Azure AI services resources, see Get the keys for your resource.
Set up the environment
Before you can do anything, you need to install the Speech SDK for JavaScript. If you just want the package name to install, run npm install microsoft-cognitiveservices-speech-sdk. For guided installation instructions, see the SDK installation guide.
Set environment variables
Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you get a key for your Speech resource, write it to a new environment variable on the local machine that runs the application.
Tip
Don't include the key directly in your code, and never post it publicly. See Azure AI services security for more authentication options such as Azure Key Vault.
To set the environment variable for your Speech resource key, open a console window, and follow the instructions for your operating system and development environment.
- To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
- To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region
Note
If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.
After you add the environment variables, you might need to restart any programs that need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.
Diarization from file with conversation transcription
Follow these steps to create a new console application for conversation transcription.
Open a command prompt where you want the new project, and create a new file named ConversationTranscription.js.
Install the Speech SDK for JavaScript:
npm install microsoft-cognitiveservices-speech-sdk
Copy the following code into ConversationTranscription.js:

const fs = require("fs");
const sdk = require("microsoft-cognitiveservices-speech-sdk");

// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
const speechConfig = sdk.SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);

function fromFile() {
    const filename = "katiesteve.wav";

    let audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync(filename));
    let conversationTranscriber = new sdk.ConversationTranscriber(speechConfig, audioConfig);

    var pushStream = sdk.AudioInputStream.createPushStream();

    fs.createReadStream(filename).on('data', function(arrayBuffer) {
        pushStream.write(arrayBuffer.slice());
    }).on('end', function() {
        pushStream.close();
    });

    console.log("Transcribing from: " + filename);

    conversationTranscriber.sessionStarted = function(s, e) {
        console.log("SessionStarted event");
        console.log("SessionId:" + e.sessionId);
    };
    conversationTranscriber.sessionStopped = function(s, e) {
        console.log("SessionStopped event");
        console.log("SessionId:" + e.sessionId);
        conversationTranscriber.stopTranscribingAsync();
    };
    conversationTranscriber.canceled = function(s, e) {
        console.log("Canceled event");
        console.log(e.errorDetails);
        conversationTranscriber.stopTranscribingAsync();
    };
    conversationTranscriber.transcribed = function(s, e) {
        console.log("TRANSCRIBED: Text=" + e.result.text + " Speaker ID=" + e.result.speakerId);
    };

    // Start conversation transcription
    conversationTranscriber.startTranscribingAsync(
        function () {},
        function (err) {
            console.trace("err - starting transcription: " + err);
        }
    );

}
fromFile();
Replace katiesteve.wav with the filepath and filename of your .wav file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the sample audio file provided in the Speech SDK samples repository on GitHub.
Note
The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise, the speaker ID is returned as Unknown.
To change the speech recognition language, replace en-US with another supported language. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.
Run your new console application to start speech recognition from a file:
node.exe ConversationTranscription.js
Important
Make sure that you set the SPEECH_KEY and SPEECH_REGION environment variables as described above. If you don't set these variables, the sample will fail with an error message.
The transcribed conversation should be output as text:
SessionStarted event
SessionId:E87AFBA483C2481985F6C9AF719F616B
TRANSCRIBED: Text=Good morning, Steve. Speaker ID=Unknown
TRANSCRIBED: Text=Good morning, Katie. Speaker ID=Unknown
TRANSCRIBED: Text=Have you tried the latest real time diarization in Microsoft Speech Service which can tell you who said what in real time? Speaker ID=Guest-1
TRANSCRIBED: Text=Not yet. I've been using the batch transcription with diarization functionality, but it produces diarization result until whole audio get processed. Speaker ID=Guest-2
TRANSCRIBED: Text=Is the new feature can diarize in real time? Speaker ID=Guest-2
TRANSCRIBED: Text=Absolutely. Speaker ID=Guest-1
TRANSCRIBED: Text=That's exciting. Let me try it right now. Speaker ID=Guest-2
Canceled event
undefined
SessionStopped event
SessionId:E87AFBA483C2481985F6C9AF719F616B
Speakers are identified as Guest-1, Guest-2, and so on, depending on the number of speakers in the conversation.
Clean up resources
You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.
Reference documentation | Package (Download) | Additional Samples on GitHub
The Speech SDK for Objective-C does support conversation transcription, but we haven't yet included a guide here. Please select another programming language to get started and learn about the concepts, or see the Objective-C reference and samples linked from the beginning of this article.
Reference documentation | Package (Download) | Additional Samples on GitHub
The Speech SDK for Swift does support conversation transcription, but we haven't yet included a guide here. Please select another programming language to get started and learn about the concepts, or see the Swift reference and samples linked from the beginning of this article.
Reference documentation | Package (PyPi) | Additional Samples on GitHub
In this quickstart, you run an application for speech to text transcription with real-time diarization. Here, diarization is distinguishing between the different speakers participating in the conversation. The Speech service provides information about which speaker was speaking a particular part of transcribed speech.
Note
Real-time diarization is currently in public preview.
The speaker information is included in the result in the speaker ID field. The speaker ID is a generic identifier that the service assigns to each conversation participant during recognition, as different speakers are identified in the provided audio content.
Tip
You can try real-time speech-to-text in Speech Studio without signing up or writing any code. However, the Speech Studio doesn't yet support diarization.
Prerequisites
- Azure subscription - Create one for free.
- Create a Speech resource in the Azure portal.
- Your Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys. For more information about Azure AI services resources, see Get the keys for your resource.
Set up the environment
The Speech SDK for Python is available as a Python Package Index (PyPI) module. The Speech SDK for Python is compatible with Windows, Linux, and macOS.
- You must install the Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017, 2019, and 2022 for your platform. Installing this package for the first time might require a restart.
- On Linux, you must use the x64 target architecture.
Install Python 3.7 or later. First check the SDK installation guide for any more requirements.
Set environment variables
Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you get a key for your Speech resource, write it to a new environment variable on the local machine that runs the application.
Tip
Don't include the key directly in your code, and never post it publicly. See Azure AI services security for more authentication options such as Azure Key Vault.
To set the environment variable for your Speech resource key, open a console window, and follow the instructions for your operating system and development environment.
- To set the SPEECH_KEY environment variable, replace your-key with one of the keys for your resource.
- To set the SPEECH_REGION environment variable, replace your-region with one of the regions for your resource.
setx SPEECH_KEY your-key
setx SPEECH_REGION your-region
Note
If you only need to access the environment variables in the current console, you can set the environment variable with set instead of setx.
After you add the environment variables, you might need to restart any programs that need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.
Diarization from file with conversation transcription
Follow these steps to create a new console application.
Open a command prompt where you want the new project, and create a new file named conversation_transcription.py.
Run this command to install the Speech SDK:
pip install azure-cognitiveservices-speech
Copy the following code into conversation_transcription.py:

import os
import time
import azure.cognitiveservices.speech as speechsdk

def conversation_transcriber_recognition_canceled_cb(evt: speechsdk.SessionEventArgs):
    print('Canceled event')

def conversation_transcriber_session_stopped_cb(evt: speechsdk.SessionEventArgs):
    print('SessionStopped event')

def conversation_transcriber_transcribed_cb(evt: speechsdk.SpeechRecognitionEventArgs):
    print('TRANSCRIBED:')
    if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print('\tText={}'.format(evt.result.text))
        print('\tSpeaker ID={}'.format(evt.result.speaker_id))
    elif evt.result.reason == speechsdk.ResultReason.NoMatch:
        print('\tNOMATCH: Speech could not be TRANSCRIBED: {}'.format(evt.result.no_match_details))

def conversation_transcriber_session_started_cb(evt: speechsdk.SessionEventArgs):
    print('SessionStarted event')

def recognize_from_file():
    # This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
    speech_config = speechsdk.SpeechConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
    speech_config.speech_recognition_language="en-US"

    audio_config = speechsdk.audio.AudioConfig(filename="katiesteve.wav")
    conversation_transcriber = speechsdk.transcription.ConversationTranscriber(speech_config=speech_config, audio_config=audio_config)

    transcribing_stop = False

    def stop_cb(evt: speechsdk.SessionEventArgs):
        #"""callback that signals to stop continuous recognition upon receiving an event `evt`"""
        print('CLOSING on {}'.format(evt))
        nonlocal transcribing_stop
        transcribing_stop = True

    # Connect callbacks to the events fired by the conversation transcriber
    conversation_transcriber.transcribed.connect(conversation_transcriber_transcribed_cb)
    conversation_transcriber.session_started.connect(conversation_transcriber_session_started_cb)
    conversation_transcriber.session_stopped.connect(conversation_transcriber_session_stopped_cb)
    conversation_transcriber.canceled.connect(conversation_transcriber_recognition_canceled_cb)
    # stop transcribing on either session stopped or canceled events
    conversation_transcriber.session_stopped.connect(stop_cb)
    conversation_transcriber.canceled.connect(stop_cb)

    conversation_transcriber.start_transcribing_async()

    # Waits for completion.
    while not transcribing_stop:
        time.sleep(.5)

    conversation_transcriber.stop_transcribing_async()

# Main
try:
    recognize_from_file()
except Exception as err:
    print("Encountered exception. {}".format(err))
Replace katiesteve.wav with the filepath and filename of your .wav file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the sample audio file provided in the Speech SDK samples repository on GitHub.
Note
The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise, the speaker ID is returned as Unknown.
To change the speech recognition language, replace en-US with another supported language. For example, es-ES for Spanish (Spain). The default language is en-US if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see language identification.
Run your new console application to start conversation transcription:
python conversation_transcription.py
Important
Make sure that you set the SPEECH_KEY and SPEECH_REGION environment variables as described above. If you don't set these variables, the sample will fail with an error message.
The transcribed conversation should be output as text:
SessionStarted event
TRANSCRIBED:
Text=Good morning, Steve.
Speaker ID=Unknown
TRANSCRIBED:
Text=Good morning, Katie.
Speaker ID=Unknown
TRANSCRIBED:
Text=Have you tried the latest real time diarization in Microsoft Speech Service which can tell you who said what in real time?
Speaker ID=Guest-1
TRANSCRIBED:
Text=Not yet. I've been using the batch transcription with diarization functionality, but it produces diarization result until whole audio get processed.
Speaker ID=Guest-2
TRANSCRIBED:
Text=Is the new feature can diarize in real time?
Speaker ID=Guest-2
TRANSCRIBED:
Text=Absolutely.
Speaker ID=Guest-1
TRANSCRIBED:
Text=That's exciting. Let me try it right now.
Speaker ID=Guest-2
Canceled event
CLOSING on ConversationTranscriptionCanceledEventArgs(session_id=92a0abb68636471dac07041b335d9be3, result=ConversationTranscriptionResult(result_id=ad1b1d83b5c742fcacca0692baa8df74, speaker_id=, text=, reason=ResultReason.Canceled))
SessionStopped event
CLOSING on SessionEventArgs(session_id=92a0abb68636471dac07041b335d9be3)
Speakers are identified as Guest-1, Guest-2, and so on, depending on the number of speakers in the conversation.
Clean up resources
You can use the Azure portal or Azure Command Line Interface (CLI) to remove the Speech resource you created.
Speech to text REST API reference | Speech to text REST API for short audio reference | Additional Samples on GitHub
The REST API doesn't support conversation transcription. Please select another programming language or tool from the top of this page.
The Speech CLI doesn't support conversation transcription. Please select another programming language or tool from the top of this page.
Next steps