PropertyId Enum

Definition

Lists speech property IDs.

C#: public enum PropertyId
F#: type PropertyId =
VB: Public Enum PropertyId
Inheritance
PropertyId
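
Most of these IDs are set on a SpeechConfig via SetProperty(PropertyId, String) and read back from a PropertyCollection via GetProperty. A minimal sketch of the pattern (the key and region strings are placeholders):

    using Microsoft.CognitiveServices.Speech;

    // Placeholders: substitute your own subscription key and region.
    var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "westus");

    // Set a property by ID...
    config.SetProperty(PropertyId.SpeechServiceResponse_ProfanityOption, "masked");

    // ...and read one back from any PropertyCollection, e.g. on a recognition result:
    // string json = result.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult);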

Fields

Name Value Description
SpeechServiceConnection_Key 1000

The subscription key used with Speech service endpoints. If you are using an intent recognizer, you need to specify the LUIS endpoint key for your particular LUIS app. Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromSubscription(String, String).
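
For instance, rather than assigning SpeechServiceConnection_Key yourself, the usual pattern is the factory method (key and region below are placeholders):

    var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "westus");
    // This sets both the subscription key and the region for you.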

SpeechServiceConnection_Endpoint 1001

The Speech service endpoint, a URL. Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromEndpoint(Uri, String), or FromEndpoint(Uri). NOTE: This endpoint is not the same as the endpoint used to obtain an access token.

SpeechServiceConnection_Region 1002

The Speech service region associated with the subscription key. Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromSubscription(String, String), FromEndpoint(Uri, String), FromEndpoint(Uri), FromHost(Uri, String), FromHost(Uri), or FromAuthorizationToken(String, String).

SpeechServiceAuthorization_Token 1003

The Speech service authorization token (aka access token). Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromAuthorizationToken(String, String) or the AuthorizationToken property exposed on the configuration and recognizer classes.
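
A sketch of token-based configuration, assuming you obtain tokens from your own token endpoint (the token strings below are placeholders):

    var config = SpeechConfig.FromAuthorizationToken("eyJ0eXAi...", "westus");
    using var recognizer = new SpeechRecognizer(config);

    // Access tokens expire (typically after a short period); refresh by
    // reassigning the recognizer's AuthorizationToken property.
    recognizer.AuthorizationToken = "eyJ0eXAi...refreshed";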

SpeechServiceAuthorization_Type 1004

Unused. The Speech service authorization type.

SpeechServiceConnection_EndpointId 1005

The Custom Speech or Custom Voice Service endpoint id. Under normal circumstances, you shouldn't have to use this property directly. Instead use FromEndpoint(Uri, String), or FromEndpoint(Uri). NOTE: The endpoint id is available in the Custom Speech Portal, listed under Endpoint Details.

SpeechServiceConnection_Host 1006

The Speech service host (url). Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromHost(Uri, String), or FromHost(Uri).

SpeechServiceConnection_ProxyHostName 1100

The host name of the proxy server used to connect to the Speech service. Under normal circumstances, you shouldn't have to use this property directly. Instead use SetProxy(String, Int32, String, String). Added in 1.1.0

SpeechServiceConnection_ProxyPort 1101

The port of the proxy server used to connect to the Speech service. Under normal circumstances, you shouldn't have to use this property directly. Instead use SetProxy(String, Int32, String, String). Added in 1.1.0

SpeechServiceConnection_ProxyUserName 1102

The user name of the proxy server used to connect to the Speech service. Under normal circumstances, you shouldn't have to use this property directly. Instead use SetProxy(String, Int32, String, String). Added in 1.1.0

SpeechServiceConnection_ProxyPassword 1103

The password of the proxy server used to connect to the Speech service. Under normal circumstances, you shouldn't have to use this property directly. Instead use SetProxy(String, Int32, String, String). Added in 1.1.0
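
The four proxy properties above are normally set together in one call; the host name and credentials below are placeholders:

    var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "westus");
    config.SetProxy("proxy.example.com", 8080, "proxyUser", "proxyPassword");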

SpeechServiceConnection_Url 1104

The URL string built from speech configuration. This property is read-only. The SDK uses this value internally. Added in 1.5.0

SpeechServiceConnection_ProxyHostBypass 1105

Specifies the list of hosts for which proxies should not be used. This setting overrides all other configurations. Hostnames are separated by commas and are matched in a case-insensitive manner. Wildcards are not supported.

SpeechServiceConnection_TranslationToLanguages 2000

The list of comma separated languages (in BCP-47 format) used as target translation languages. Under normal circumstances, you shouldn't have to use this property directly. Instead, use AddTargetLanguage(String) and the read-only TargetLanguages collection.
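
A sketch with SpeechTranslationConfig; the language codes are arbitrary examples:

    using Microsoft.CognitiveServices.Speech.Translation;

    var translationConfig = SpeechTranslationConfig.FromSubscription("YourSubscriptionKey", "westus");
    translationConfig.SpeechRecognitionLanguage = "en-US";
    translationConfig.AddTargetLanguage("de");    // fills SpeechServiceConnection_TranslationToLanguages
    translationConfig.AddTargetLanguage("fr-CA");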

SpeechServiceConnection_TranslationVoice 2001

The name of the voice used for Text-to-speech. Under normal circumstances, you shouldn't have to use this property directly. Instead, use VoiceName. For valid voice names, see the Speech service voice support documentation.

SpeechServiceConnection_TranslationFeatures 2002

Translation features. For internal use.

SpeechServiceConnection_IntentRegion 2003

The Language Understanding Service Region. Under normal circumstances, you shouldn't have to use this property directly. Instead use LanguageUnderstandingModel.

SpeechServiceConnection_RecoMode 3000

The Speech service recognition mode. Can be INTERACTIVE, CONVERSATION, DICTATION. This property is read-only. The SDK uses it internally.

SpeechServiceConnection_RecoLanguage 3001

The spoken language to be recognized (in BCP-47 format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechRecognitionLanguage.

Speech_SessionId 3002

The session id. This id is a universally unique identifier (aka UUID) representing a specific binding of an audio input stream and the underlying speech recognition instance to which it is bound. Under normal circumstances, you shouldn't have to use this property directly. Instead use SessionId.
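
The session id is usually read from session events; a minimal sketch (config as in the earlier sketches):

    using var recognizer = new SpeechRecognizer(config);
    recognizer.SessionStarted += (s, e) =>
        Console.WriteLine($"Session started, id: {e.SessionId}");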

SpeechServiceConnection_RecoBackend 3004

The string to specify the backend to be used for speech recognition; allowed options are online and offline. Under normal circumstances, you shouldn't use this property directly. Currently the offline option is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0

SpeechServiceConnection_RecoModelName 3005

The name of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0

SpeechServiceConnection_SynthLanguage 3100

The spoken language to be synthesized (e.g. en-US). Added in 1.4.0

SpeechServiceConnection_SynthVoice 3101

The name of the voice to be used for Text-to-speech. Added in 1.4.0

SpeechServiceConnection_SynthOutputFormat 3102

The string to specify the speech synthesis output audio format (e.g. riff-16khz-16bit-mono-pcm). Added in 1.4.0

SpeechServiceConnection_SynthEnableCompressedAudioTransmission 3103

Indicates whether to use a compressed audio format for speech synthesis audio transmission. This property matters only when SpeechServiceConnection_SynthOutputFormat is set to a PCM format. If this property is not set to false and GStreamer is available, the SDK uses a compressed format for synthesized audio transmission and decodes it. You can set this property to false to use raw PCM format for transmission on the wire. Added in 1.16.0
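
A sketch that forces raw PCM on the wire (config as in the earlier sketches; the output format shown is one of the documented PCM formats):

    config.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff16Khz16BitMonoPcm);
    config.SetProperty(
        PropertyId.SpeechServiceConnection_SynthEnableCompressedAudioTransmission, "false");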

SpeechServiceConnection_SynthBackend 3110

The string to specify the TTS backend; valid options are online and offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromPath(String) or FromPaths(String[]) to set the synthesis backend to offline. Added in version 1.19.0

SpeechServiceConnection_SynthOfflineDataPath 3112

The data file path(s) for offline synthesis engine; only valid when synthesis backend is offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromPath(String) or FromPaths(String[]). Added in version 1.19.0

SpeechServiceConnection_SynthOfflineVoice 3113

The name of the offline TTS voice to be used for speech synthesis. Under normal circumstances, you shouldn't use this property directly. Instead, use SetSpeechSynthesisVoice(String, String). Added in version 1.19.0

SpeechServiceConnection_VoicesListEndpoint 3130

The Cognitive Services Speech Service voices list API endpoint (URL). Under normal circumstances, you don't need to specify this property; the SDK constructs it based on the region/host/endpoint of SpeechConfig. Added in 1.16.0

SpeechServiceConnection_InitialSilenceTimeoutMs 3200

The initial silence timeout value (in milliseconds) used by the service. Added in 1.5.0

SpeechServiceConnection_EndSilenceTimeoutMs 3201

The end silence timeout value (in milliseconds) used by the service. Added in 1.5.0

SpeechServiceConnection_EnableAudioLogging 3202

A boolean value specifying whether audio logging is enabled in the service or not. Audio and content logs are stored either in Microsoft-owned storage, or in your own storage account linked to your Cognitive Services subscription (Bring Your Own Storage (BYOS) enabled Speech resource). Added in 1.5.0.
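
The EnableAudioLogging helper on SpeechConfig is the usual way to turn this on (config as in the earlier sketches):

    config.EnableAudioLogging();
    // Equivalent in effect to:
    // config.SetProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging, "true");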

SpeechServiceConnection_LanguageIdMode 3205

The speech service connection language identifier mode. Can be "AtStart" (the default) or "Continuous". See the language identification documentation. Added in 1.25.0
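
A sketch of continuous language identification (config as in the earlier sketches; the candidate languages are arbitrary examples):

    using Microsoft.CognitiveServices.Speech.Audio;

    config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
    var autoDetect = AutoDetectSourceLanguageConfig.FromLanguages(new[] { "en-US", "de-DE" });
    using var recognizer = new SpeechRecognizer(
        config, autoDetect, AudioConfig.FromDefaultMicrophoneInput());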

SpeechServiceConnection_TranslationCategoryId 3206

The speech service connection translation categoryId.

SpeechServiceConnection_AutoDetectSourceLanguages 3300

The auto detect source languages. Added in 1.9.0

SpeechServiceConnection_AutoDetectSourceLanguageResult 3301

The auto detect source language result. Added in 1.9.0

SpeechServiceResponse_RequestDetailedResultTrueFalse 4000

The requested Speech service response output format (OutputFormat.Simple or OutputFormat.Detailed). Under normal circumstances, you shouldn't have to use this property directly. Instead, use OutputFormat.

SpeechServiceResponse_RequestProfanityFilterTrueFalse 4001

Unused. The requested Speech service response output profanity level.

SpeechServiceResponse_ProfanityOption 4002

The requested Speech service response output profanity setting. Allowed values are masked, removed, and raw. Added in 1.5.0
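
The SetProfanity helper sets this property using the ProfanityOption enum instead of a raw string (config as in the earlier sketches):

    config.SetProfanity(ProfanityOption.Masked);   // or Removed / Raw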

SpeechServiceResponse_PostProcessingOption 4003

A string value specifying which post-processing option should be used by the service. Allowed value: TrueText. Added in 1.5.0

SpeechServiceResponse_RequestWordLevelTimestamps 4004

A boolean value specifying whether to include word-level timestamps in the response result. Added in 1.5.0
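
The RequestWordLevelTimestamps helper sets this property for you; the timing data then appears in the detailed JSON result (config as in the earlier sketches):

    config.RequestWordLevelTimestamps();
    // After recognition, inspect the detailed JSON:
    // result.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult)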

SpeechServiceResponse_StablePartialResultThreshold 4005

The number of times a word has to be in partial results to be returned. Added in 1.5.0

SpeechServiceResponse_OutputFormatOption 4006

A string value specifying the output format option in the response result. Internal use only. Added in 1.5.0

SpeechServiceResponse_RequestSnr 4007

A boolean value specifying whether to include SNR (signal to noise ratio) in the response result. Added in version 1.18.0

SpeechServiceResponse_TranslationRequestStablePartialResult 4100

A boolean value requesting stabilized translation partial results by omitting words at the end. Added in 1.5.0

SpeechServiceResponse_RequestWordBoundary 4200

A boolean value specifying whether to request WordBoundary events. Added in version 1.21.0.

SpeechServiceResponse_RequestPunctuationBoundary 4201

A boolean value specifying whether to request punctuation boundary in WordBoundary Events. Default is true. Added in version 1.21.0.

SpeechServiceResponse_RequestSentenceBoundary 4202

A boolean value specifying whether to request sentence boundary in WordBoundary Events. Default is false. Added in version 1.21.0.

SpeechServiceResponse_SynthesisEventsSyncToAudio 4210

A boolean value specifying whether the SDK should synchronize synthesis metadata events (e.g. word boundary, viseme) with the audio playback. This only takes effect when the audio is played through the SDK. Default is true. If set to false, the SDK fires the events as they arrive from the service, which may be out of sync with the audio playback. Added in version 1.31.0.

SpeechServiceResponse_JsonResult 5000

The Speech service response output (in JSON format). This property is available on recognition result objects only.

SpeechServiceResponse_JsonErrorDetails 5001

The Speech service error details (in JSON format). Under normal circumstances, you shouldn't have to use this property directly. Instead use ErrorDetails.

SpeechServiceResponse_RecognitionLatencyMs 5002

The recognition latency in milliseconds. Read-only, available on final speech/translation/intent results. This measures the latency between when an audio input is received by the SDK, and the moment the final result is received from the service. The SDK computes the time difference between the last audio fragment from the audio input that is contributing to the final result, and the time the final result is received from the speech service. Added in 1.3.0
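
A sketch of reading the latency off a final result (recognizer as in the earlier sketches):

    var result = await recognizer.RecognizeOnceAsync();
    string latencyMs = result.Properties.GetProperty(
        PropertyId.SpeechServiceResponse_RecognitionLatencyMs);
    Console.WriteLine($"Recognition latency: {latencyMs} ms");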

SpeechServiceResponse_RecognitionBackend 5003

The recognition backend. Read-only, available on speech recognition results. This indicates whether cloud (online) or embedded (offline) recognition was used to produce the result.

SpeechServiceResponse_SynthesisFirstByteLatencyMs 5010

The speech synthesis first-byte latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between the moment synthesis processing starts and the moment the first byte of audio is available. Added in version 1.17.0.

SpeechServiceResponse_SynthesisFinishLatencyMs 5011

The speech synthesis all-bytes latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between the moment synthesis processing starts and the moment the whole audio is synthesized. Added in version 1.17.0.

SpeechServiceResponse_SynthesisUnderrunTimeMs 5012

The underrun time for speech synthesis in milliseconds. Read-only, available on results in SynthesisCompleted events. This measures the total underrun time from the moment the playback buffer (see AudioConfig_PlaybackBufferLengthInMs) is filled to the moment synthesis completes. Added in version 1.17.0.

SpeechServiceResponse_SynthesisConnectionLatencyMs 5013

The speech synthesis connection latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between the moment synthesis processing starts and the moment the HTTP/WebSocket connection is established. Added in version 1.26.0.

SpeechServiceResponse_SynthesisNetworkLatencyMs 5014

The speech synthesis network latency in milliseconds. Read-only, available on final speech synthesis results. This measures the network round trip time. Added in version 1.26.0.

SpeechServiceResponse_SynthesisServiceLatencyMs 5015

The speech synthesis service latency in milliseconds. Read-only, available on final speech synthesis results. This measures the service processing time to synthesize the first byte of audio. Added in version 1.26.0.

SpeechServiceResponse_SynthesisBackend 5020

Indicates which backend completed the synthesis. Read-only, available on speech synthesis results, except for the result in the SynthesisStarted event. Added in version 1.19.0.

SpeechServiceResponse_DiarizeIntermediateResults 5025

Determines if intermediate results contain speaker identification.

CancellationDetails_Reason 6000

Unused. The cancellation reason.

CancellationDetails_ReasonText 6001

Unused. The cancellation text.

CancellationDetails_ReasonDetailedText 6002

Unused. The cancellation detailed text.

LanguageUnderstandingServiceResponse_JsonResult 7000

The Language Understanding Service response output (in JSON format). Available via Properties.

AudioConfig_DeviceNameForRender 8005

The device name for audio render. Under normal circumstances, you shouldn't have to use this property directly. Instead, use FromSpeakerOutput(String). Added in version 1.17.0

AudioConfig_PlaybackBufferLengthInMs 8006

Playback buffer length in milliseconds, default is 50 milliseconds. Added in version 1.17.0

Speech_LogFilename 9001

The file name to which logs are written. Added in 1.4.0

Speech_SegmentationSilenceTimeoutMs 9002

A duration of detected silence, measured in milliseconds, after which speech-to-text will determine a spoken phrase has ended and generate a final Recognized result. Configuring this timeout may be helpful in situations where spoken input is significantly faster or slower than usual and default segmentation behavior consistently yields results that are too long or too short. Segmentation timeout values that are inappropriately high or low can negatively affect speech-to-text accuracy; this property should be carefully configured and the resulting behavior should be thoroughly validated as intended.

For more information about timeout configuration that includes discussion of default behaviors, please visit https://aka.ms/csspeech/timeouts.

Speech_SegmentationMaximumTimeMs 9003

The maximum length of a spoken phrase when using the "Time" segmentation strategy.

Speech_SegmentationStrategy 9004

The strategy used to determine when a spoken phrase has ended and a final Recognized result should be generated. Allowed values are "Default", "Time", and "Semantic".
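
All three segmentation properties above take string values; the numbers below are illustrative, not recommendations (config as in the earlier sketches):

    config.SetProperty(PropertyId.Speech_SegmentationStrategy, "Time");
    config.SetProperty(PropertyId.Speech_SegmentationSilenceTimeoutMs, "800");
    config.SetProperty(PropertyId.Speech_SegmentationMaximumTimeMs, "20000");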

Conversation_ApplicationId 10000

Identifier used to connect to the backend service. Added in 1.5.0

Conversation_DialogType 10001

Type of dialog backend to connect to. Added in 1.7.0

Conversation_Initial_Silence_Timeout 10002

Silence timeout for listening. Added in 1.5.0

Conversation_From_Id 10003

The from identifier to add to speech recognition activities. Added in 1.5.0

Conversation_Conversation_Id 10004

ConversationId for the session. Added in 1.8.0

Conversation_Custom_Voice_Deployment_Ids 10005

Comma separated list of custom voice deployment ids. Added in 1.8.0

Conversation_Speech_Activity_Template 10006

Speech activity template; properties from the template are stamped on the activity generated by the service for speech. See SpeechActivityTemplate. Added in 1.10.0

Conversation_ParticipantId 10007

Your participant identifier in the conversation. Added in 1.13.0

Conversation_Request_Bot_Status_Messages 10008

A boolean value that specifies whether or not the client should receive turn status messages and generate corresponding TurnStatusReceived events. Defaults to true. Added in 1.15.0

Conversation_Connection_Id 10009

Additional identifying information, such as a Direct Line token, used to authenticate with the backend service. Added in 1.16.0

ConversationTranscribingService_DataBufferTimeStamp 11001

The time stamp associated with the data buffer written by the client when using Pull/Push audio mode streams. The time stamp is a 64-bit value with a resolution of 90 kHz, the same as the presentation timestamp in an MPEG transport stream. See https://en.wikipedia.org/wiki/Presentation_timestamp. Added in 1.5.0

ConversationTranscribingService_DataBufferUserId 11002

The user identifier associated with the data buffer written by the client when using Pull/Push audio mode streams. Added in 1.5.0

PronunciationAssessment_ReferenceText 12001

The reference text of the audio for pronunciation evaluation. For this and the following pronunciation assessment parameters, see Pronunciation assessment parameters for details. Under normal circumstances, you shouldn't have to use this property directly. Added in 1.14.0

PronunciationAssessment_GradingSystem 12002

The point system for pronunciation score calibration (FivePoint or HundredMark). Under normal circumstances, you shouldn't have to use this property directly. Added in 1.14.0

PronunciationAssessment_Granularity 12003

The pronunciation evaluation granularity (Phoneme, Word, or FullText). Under normal circumstances, you shouldn't have to use this property directly. Added in 1.14.0

PronunciationAssessment_EnableMiscue 12005

Indicates miscue calculation state. When enabled, the pronounced words will be compared to the reference text, and will be marked with omission/insertion based on the comparison. The default setting is false. Under normal circumstances, you shouldn't have to use this property directly. Added in 1.14.0
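
The assessment parameters above are normally supplied through PronunciationAssessmentConfig rather than set individually; the reference text below is a placeholder:

    var pronConfig = new PronunciationAssessmentConfig(
        "Hello world",                 // placeholder reference text
        GradingSystem.HundredMark,
        Granularity.Phoneme,
        enableMiscue: true);
    pronConfig.ApplyTo(recognizer);    // recognizer: a SpeechRecognizer as in earlier sketches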

PronunciationAssessment_PhonemeAlphabet 12006

The pronunciation evaluation phoneme alphabet. The valid values are "SAPI" (default) and "IPA". Under normal circumstances, you shouldn't have to use this property directly. Instead, use PhonemeAlphabet. Added in version 1.20.0

PronunciationAssessment_NBestPhonemeCount 12007

The pronunciation evaluation nbest phoneme count. Under normal circumstances, you shouldn't have to use this property directly. Instead, use NBestPhonemeCount. Added in version 1.20.0

PronunciationAssessment_EnableProsodyAssessment 12008

Whether to enable prosody assessment. Under normal circumstances, you shouldn't have to use this property directly. Instead, use EnableProsodyAssessment(). Added in version 1.33.0

PronunciationAssessment_Json 12009

The JSON string of pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly. Added in 1.14.0

PronunciationAssessment_Params 12010

Pronunciation assessment parameters. This property is read-only. Added in 1.14.0

PronunciationAssessment_ContentTopic 12020

The content topic of the pronunciation assessment. Under normal circumstances, you shouldn't have to use this property directly. Instead, use EnableContentAssessmentWithTopic(String). Added in version 1.33.0

SpeakerRecognition_Api_Version 13001

Speaker recognition API version. Added in 1.18.0

SpeechTranslation_ModelName 13100

The name of a model to be used for speech translation. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used.

KeywordRecognition_ModelName 13200

The name of a model to be used for keyword recognition. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used.

EmbeddedSpeech_EnablePerformanceMetrics 13300

Enable the collection of embedded speech performance metrics which can be used to evaluate the capability of a device to use embedded speech. The collected data is included in results from specific scenarios like speech recognition. The default setting is "false". Note that metrics may not be available from all embedded speech scenarios.

SpeechSynthesisRequest_Pitch 14001

The pitch of the synthesized speech.

SpeechSynthesisRequest_Rate 14002

The rate of the synthesized speech.

SpeechSynthesisRequest_Volume 14003

The volume of the synthesized speech.
