ConversationTranscriber Class

An object that performs conversation transcription operations. If you need to specify source language information, specify only one of these three parameters: language, source_language_config, or auto_detect_source_language_config.

Inheritance
ConversationTranscriber

Constructor

ConversationTranscriber(speech_config: SpeechConfig, audio_config: AudioConfig = None, language: str = None, source_language_config: SourceLanguageConfig = None, auto_detect_source_language_config: AutoDetectSourceLanguageConfig = None)

Parameters

Name Description
speech_config
Required

The config for the conversation transcriber

audio_config

The config for the audio input

default value: None
language

The source language

default value: None
source_language_config

The source language config

default value: None
auto_detect_source_language_config

The auto detection source language config

default value: None
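A minimal construction sketch, assuming the azure-cognitiveservices-speech package is installed; the subscription key, region, and WAV file name are placeholders, and the helper name build_transcriber is illustrative rather than part of the SDK:

```python
def build_transcriber(key: str, region: str, wav_path: str):
    # Placeholder credentials/paths; import kept local so the sketch
    # stays self-contained.
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription=key, region=region)
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)

    # Specify at most one of language, source_language_config, or
    # auto_detect_source_language_config, as noted above.
    return speechsdk.transcription.ConversationTranscriber(
        speech_config=speech_config,
        audio_config=audio_config,
        language="en-US",
    )
```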

Methods

recognize_once

Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. The task returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.

recognize_once_async

Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.

start_continuous_recognition

Synchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition.

start_continuous_recognition_async

Asynchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition.

start_keyword_recognition

Synchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition() to end the keyword-initiated recognition.

start_keyword_recognition_async

Asynchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition_async() to end the keyword-initiated recognition.

start_transcribing_async

Asynchronously starts conversation transcribing.

stop_continuous_recognition

Synchronously terminates the ongoing continuous recognition operation.

stop_continuous_recognition_async

Asynchronously terminates the ongoing continuous recognition operation.

stop_keyword_recognition

Synchronously ends the keyword-initiated recognition.

stop_keyword_recognition_async

Asynchronously ends the keyword-initiated recognition.

stop_transcribing_async

Asynchronously stops conversation transcribing.

recognize_once

Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. The task returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.

recognize_once() -> SpeechRecognitionResult

Returns

Type Description

The result value of the synchronous recognition.

recognize_once_async

Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.

recognize_once_async() -> ResultFuture

Returns

Type Description

A future containing the result value of the asynchronous recognition.
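The returned ResultFuture is waited on with its get() method. A small sketch of that pattern; the helper name recognize_one_utterance is illustrative, and the transcriber is assumed to have been constructed as described above:

```python
def recognize_one_utterance(transcriber):
    # recognize_once_async returns immediately with a ResultFuture;
    # calling .get() on the future blocks until the single-utterance
    # result is available.
    future = transcriber.recognize_once_async()
    result = future.get()
    return result.text
```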

start_continuous_recognition

Synchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition.

start_continuous_recognition()

start_continuous_recognition_async

Asynchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition.

start_continuous_recognition_async()

Returns

Type Description

A future that is fulfilled once recognition has been initialized.

start_keyword_recognition

Synchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition() to end the keyword-initiated recognition.

start_keyword_recognition(model: KeywordRecognitionModel)

Parameters

Name Description
model
Required

The keyword recognition model that specifies the keyword to be recognized.

start_keyword_recognition_async

Asynchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition_async() to end the keyword-initiated recognition.

start_keyword_recognition_async(model: KeywordRecognitionModel)

Parameters

Name Description
model
Required

The keyword recognition model that specifies the keyword to be recognized.

Returns

Type Description

A future that is fulfilled once recognition has been initialized.

start_transcribing_async

Asynchronously starts conversation transcribing.

start_transcribing_async() -> ResultFuture

Returns

Type Description

A future that is fulfilled once conversation transcription is started.
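A typical transcribing session combines start_transcribing_async, the transcribed and session_stopped signals, and stop_transcribing_async. The sketch below is one way to drive that loop, written against the signals and methods documented on this page; the helper name run_transcription and the use of a threading.Event are illustrative, not part of the SDK:

```python
import threading

def run_transcription(transcriber):
    # Drives a ConversationTranscriber-style object until its session ends.
    done = threading.Event()
    lines = []

    def on_transcribed(evt):
        # evt.result carries the final text and the speaker id
        lines.append(f"{evt.result.speaker_id}: {evt.result.text}")

    transcriber.transcribed.connect(on_transcribed)
    transcriber.session_stopped.connect(lambda evt: done.set())
    transcriber.canceled.connect(lambda evt: done.set())

    # The future returned by start_transcribing_async is fulfilled once
    # transcription has started; .get() blocks until then.
    transcriber.start_transcribing_async().get()
    done.wait()
    transcriber.stop_transcribing_async().get()
    return lines
```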

stop_continuous_recognition

Synchronously terminates the ongoing continuous recognition operation.

stop_continuous_recognition()

stop_continuous_recognition_async

Asynchronously terminates the ongoing continuous recognition operation.

stop_continuous_recognition_async()

Returns

Type Description

A future that is fulfilled once recognition has been stopped.

stop_keyword_recognition

Synchronously ends the keyword-initiated recognition.

stop_keyword_recognition()

stop_keyword_recognition_async

Asynchronously ends the keyword-initiated recognition.

stop_keyword_recognition_async()

Returns

Type Description

A future that is fulfilled once recognition has been stopped.

stop_transcribing_async

Asynchronously stops conversation transcribing.

stop_transcribing_async() -> ResultFuture

Returns

Type Description

A future that is fulfilled once conversation transcription is stopped.

Attributes

authorization_token

The authorization token that will be used for connecting to the service.

Note

The caller needs to ensure that the authorization token is valid. Before the authorization token expires, the caller needs to refresh it by calling this setter with a new valid token. As configuration values are copied when creating a new recognizer, the new token value will not apply to recognizers that have already been created. For recognizers that have been created before, you need to set the authorization token of the corresponding recognizer to refresh the token. Otherwise, the recognizers will encounter errors during transcription.
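In practice this means re-assigning the property on each live transcriber before the token expires. One possible shape, where fetch_token is a hypothetical callable that returns a fresh token from your own token service, and the refresh interval is assumed to be shorter than the token's lifetime:

```python
import threading

def keep_token_fresh(transcriber, fetch_token, interval_seconds):
    # Periodically re-assigns authorization_token on an already-created
    # transcriber. Returns a function that cancels the refresh loop.
    timer = None

    def refresh():
        nonlocal timer
        transcriber.authorization_token = fetch_token()
        timer = threading.Timer(interval_seconds, refresh)
        timer.daemon = True
        timer.start()

    refresh()  # set a valid token immediately, then keep refreshing
    return lambda: timer.cancel()
```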

canceled

Signal for events containing canceled transcription results (indicating a transcription attempt that was canceled as a result of a direct cancellation request or, alternatively, a transport or protocol failure).

Callbacks connected to this signal are called with a ConversationTranscriptionCanceledEventArgs instance as the single argument.

endpoint_id

The endpoint ID of a customized speech model that is used for recognition, or a custom voice model for speech synthesis.

properties

A collection of properties and their values defined for this ConversationTranscriber.

recognized

Signal for events containing final recognition results (indicating a successful recognition attempt).

Callbacks connected to this signal are called with a SpeechRecognitionEventArgs, TranslationRecognitionEventArgs, or IntentRecognitionEventArgs instance as the single argument, depending on the type of recognizer.

recognizing

Signal for events containing intermediate recognition results.

Callbacks connected to this signal are called with a SpeechRecognitionEventArgs, TranslationRecognitionEventArgs, or IntentRecognitionEventArgs instance as the single argument, depending on the type of recognizer.

session_started

Signal for events indicating the start of a recognition session (operation).

Callbacks connected to this signal are called with a SessionEventArgs instance as the single argument.

session_stopped

Signal for events indicating the end of a recognition session (operation).

Callbacks connected to this signal are called with a SessionEventArgs instance as the single argument.

speech_end_detected

Signal for events indicating the end of speech.

Callbacks connected to this signal are called with a RecognitionEventArgs instance as the single argument.

speech_start_detected

Signal for events indicating the start of speech.

Callbacks connected to this signal are called with a RecognitionEventArgs instance as the single argument.

transcribed

Signal for events containing final transcription results (indicating a successful transcription attempt).

Callbacks connected to this signal are called with a ConversationTranscriptionEventArgs instance as the single argument.

transcribing

Signal for events containing intermediate transcription results.

Callbacks connected to this signal are called with a ConversationTranscriptionEventArgs instance as the single argument.
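The difference between the intermediate (transcribing) and final (transcribed) signals can be seen in a small handler sketch. The helper below is illustrative, not part of the SDK, and assumes only the signals documented above:

```python
def attach_progress_handlers(transcriber, emit=print):
    # `transcribing` fires repeatedly with partial hypotheses for the
    # current utterance; `transcribed` fires once per utterance with the
    # final text and the identified speaker.
    transcriber.transcribing.connect(
        lambda evt: emit(f"[partial] {evt.result.text}"))
    transcriber.transcribed.connect(
        lambda evt: emit(f"[final] {evt.result.speaker_id}: {evt.result.text}"))
```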