PropertyId Enum

  • java.lang.Object
    • java.lang.Enum
      • com.microsoft.cognitiveservices.speech.PropertyId

public enum PropertyId
extends java.lang.Enum<PropertyId>

Defines property ids. Changed in version 1.8.0.
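
As a minimal sketch (the subscription key, region, and language value below are placeholders), properties can be set and read on a SpeechConfig by their PropertyId:

    import com.microsoft.cognitiveservices.speech.PropertyId;
    import com.microsoft.cognitiveservices.speech.SpeechConfig;

    public class PropertyIdBasics {
        public static void main(String[] args) {
            // Placeholder credentials; substitute your own subscription key and region.
            SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

            // Properties are set and read as strings, keyed by PropertyId.
            config.setProperty(PropertyId.SpeechServiceConnection_RecoLanguage, "en-US");
            System.out.println(config.getProperty(PropertyId.SpeechServiceConnection_RecoLanguage));

            config.close();
        }
    }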

Fields

AudioConfig_AudioProcessingOptions

Audio processing options in JSON format.

AudioConfig_DeviceNameForRender

The device name for audio render.

AudioConfig_PlaybackBufferLengthInMs

Playback buffer length in milliseconds, default is 50 milliseconds.

CancellationDetails_Reason

The cancellation reason.

CancellationDetails_ReasonDetailedText

The cancellation detailed text.

CancellationDetails_ReasonText

The cancellation text.

Conversation_ApplicationId

Identifier used to connect to the backend service.

Conversation_Connection_Id

Additional identifying information, such as a Direct Line token, used to authenticate with the backend service.

Conversation_Conversation_Id

ConversationId for the session.

Conversation_Custom_Voice_Deployment_Ids

Comma separated list of custom voice deployment ids.

Conversation_DialogType

Type of dialog backend to connect to.

Conversation_From_Id

From id to be used on speech recognition activities. Added in version 1.5.0.

Conversation_Initial_Silence_Timeout

Silence timeout for listening. Added in version 1.5.0.

Conversation_Request_Bot_Status_Messages

A boolean value that specifies whether the client should receive status messages and generate corresponding turnStatusReceived events.

Conversation_Speech_Activity_Template

Speech activity template. Properties in the template are stamped onto the activity generated by the service for speech.

DataBuffer_TimeStamp

The time stamp associated with the data buffer written by the client when using Pull/Push audio mode streams.

DataBuffer_UserId

The user id associated with the data buffer written by the client when using Pull/Push audio mode streams.

EmbeddedSpeech_EnablePerformanceMetrics

Enable the collection of embedded speech performance metrics which can be used to evaluate the capability of a device to use embedded speech.

KeywordRecognition_ModelKey

The decryption key of a model to be used for keyword recognition.

KeywordRecognition_ModelName

The name of a model to be used for keyword recognition.

LanguageUnderstandingServiceResponse_JsonResult

The Language Understanding Service response output (in JSON format).

PronunciationAssessment_ContentTopic

The content topic of the pronunciation assessment.

PronunciationAssessment_EnableMiscue

Defines whether to enable miscue calculation.

PronunciationAssessment_EnableProsodyAssessment

Whether to enable prosody assessment.

PronunciationAssessment_GradingSystem

The point system for pronunciation score calibration (FivePoint or HundredMark).

PronunciationAssessment_Granularity

The pronunciation evaluation granularity (Phoneme, Word, or FullText).

PronunciationAssessment_Json

The JSON string of pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly.

PronunciationAssessment_NBestPhonemeCount

The pronunciation evaluation nbest phoneme count.

PronunciationAssessment_Params

Pronunciation assessment parameters.

PronunciationAssessment_PhonemeAlphabet

The pronunciation evaluation phoneme alphabet.

PronunciationAssessment_ReferenceText

The reference text of the audio for pronunciation evaluation.
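
As an illustrative sketch only (placeholder credentials and values; in most applications the SDK's PronunciationAssessmentConfig helper manages these ids for you), the pronunciation assessment ids above are plain string properties:

    import com.microsoft.cognitiveservices.speech.PropertyId;
    import com.microsoft.cognitiveservices.speech.SpeechConfig;

    public class PronunciationAssessmentProperties {
        public static void main(String[] args) {
            SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

            // Values mirror the documented options: FivePoint/HundredMark, Phoneme/Word/FullText.
            config.setProperty(PropertyId.PronunciationAssessment_ReferenceText, "Hello world.");
            config.setProperty(PropertyId.PronunciationAssessment_GradingSystem, "HundredMark");
            config.setProperty(PropertyId.PronunciationAssessment_Granularity, "Phoneme");
            config.setProperty(PropertyId.PronunciationAssessment_EnableMiscue, "true");

            config.close();
        }
    }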

SpeakerRecognition_Api_Version

Version of Speaker Recognition to use.

SpeechServiceAuthorization_Token

The Cognitive Services Speech Service authorization token (aka access token).

SpeechServiceAuthorization_Type

The Cognitive Services Speech Service authorization type.

SpeechServiceConnection_AutoDetectSourceLanguageResult

The auto detect source language result. Added in version 1.8.0.

SpeechServiceConnection_AutoDetectSourceLanguages

The auto detect source languages. Added in version 1.8.0.

SpeechServiceConnection_EnableAudioLogging

A boolean value specifying whether audio logging is enabled in the service or not.

SpeechServiceConnection_EndSilenceTimeoutMs

The end silence timeout value (in milliseconds) used by the service.

SpeechServiceConnection_Endpoint

The Cognitive Services Speech Service endpoint (url).

SpeechServiceConnection_EndpointId

The Cognitive Services Custom Speech or Custom Voice Service endpoint id.

SpeechServiceConnection_Host

The Cognitive Services Speech Service host (url).

SpeechServiceConnection_InitialSilenceTimeoutMs

The initial silence timeout value (in milliseconds) used by the service.
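
A short sketch of the silence timeout ids (credentials and timeout values are placeholders; values are milliseconds passed as strings):

    import com.microsoft.cognitiveservices.speech.PropertyId;
    import com.microsoft.cognitiveservices.speech.SpeechConfig;

    public class SilenceTimeouts {
        public static void main(String[] args) {
            SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

            // How long the service waits for speech to start, and how much trailing
            // silence ends a phrase. Both values are strings in milliseconds.
            config.setProperty(PropertyId.SpeechServiceConnection_InitialSilenceTimeoutMs, "10000");
            config.setProperty(PropertyId.SpeechServiceConnection_EndSilenceTimeoutMs, "1000");

            config.close();
        }
    }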

SpeechServiceConnection_IntentRegion

The Language Understanding Service region.

SpeechServiceConnection_Key

The Cognitive Services Speech Service subscription key.

SpeechServiceConnection_LanguageIdMode

The speech service connection language identifier mode.

SpeechServiceConnection_ProxyHostName

The host name of the proxy server used to connect to the Cognitive Services Speech Service.

SpeechServiceConnection_ProxyPassword

The password of the proxy server used to connect to the Cognitive Services Speech Service.

SpeechServiceConnection_ProxyPort

The port of the proxy server used to connect to the Cognitive Services Speech Service.

SpeechServiceConnection_ProxyUserName

The user name of the proxy server used to connect to the Cognitive Services Speech Service.
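
The four proxy ids are typically set together. A sketch with placeholder proxy details (the port is passed as a string like any other property value); SpeechConfig also exposes a setProxy convenience method covering the same settings:

    import com.microsoft.cognitiveservices.speech.PropertyId;
    import com.microsoft.cognitiveservices.speech.SpeechConfig;

    public class ProxySettings {
        public static void main(String[] args) {
            SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

            // Placeholder proxy details.
            config.setProperty(PropertyId.SpeechServiceConnection_ProxyHostName, "proxy.example.com");
            config.setProperty(PropertyId.SpeechServiceConnection_ProxyPort, "8080");
            config.setProperty(PropertyId.SpeechServiceConnection_ProxyUserName, "proxyUser");
            config.setProperty(PropertyId.SpeechServiceConnection_ProxyPassword, "proxyPassword");

            config.close();
        }
    }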

SpeechServiceConnection_RecoBackend

The string to specify the backend to be used for speech recognition; allowed options are online and offline.

SpeechServiceConnection_RecoLanguage

The spoken language to be recognized (in BCP-47 format).

SpeechServiceConnection_RecoMode

The Cognitive Services Speech Service recognition mode.

SpeechServiceConnection_RecoModelKey

The decryption key of the model to be used for speech recognition.

SpeechServiceConnection_RecoModelName

The name of the model to be used for speech recognition.

SpeechServiceConnection_Region

The Cognitive Services Speech Service region.

SpeechServiceConnection_SynthBackend

The string to specify TTS backend; valid options are online and offline.

SpeechServiceConnection_SynthEnableCompressedAudioTransmission

Indicates whether to use a compressed audio format for speech synthesis audio transmission.

SpeechServiceConnection_SynthLanguage

The spoken language to be synthesized (e.g. en-US).

SpeechServiceConnection_SynthModelKey

The decryption key of the model to be used for speech synthesis.

SpeechServiceConnection_SynthOfflineDataPath

The data file path(s) for offline synthesis engine; only valid when synthesis backend is offline.

SpeechServiceConnection_SynthOfflineVoice

The name of the offline TTS voice to be used for speech synthesis.

SpeechServiceConnection_SynthOutputFormat

The string to specify TTS output audio format (e.g. riff-16khz-16bit-mono-pcm).

SpeechServiceConnection_SynthVoice

The name of the TTS voice to be used for speech synthesis. Added in version 1.7.0.

SpeechServiceConnection_TranslationCategoryId

The speech service connection translation categoryId.

SpeechServiceConnection_TranslationFeatures

Translation features.

SpeechServiceConnection_TranslationToLanguages

The list of comma separated languages (BCP-47 format) used as target translation languages.

SpeechServiceConnection_TranslationVoice

The name of the Cognitive Service Text to Speech Service voice.

SpeechServiceConnection_Url

The URL string built from speech configuration.

SpeechServiceConnection_VoicesListEndpoint

The Cognitive Services Speech Service voices list api endpoint (url).

SpeechServiceResponse_JsonErrorDetails

The Cognitive Services Speech Service error details (in JSON format).

SpeechServiceResponse_JsonResult

The Cognitive Services Speech Service response output (in JSON format).
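
Response ids such as SpeechServiceResponse_JsonResult are read from a result's property collection rather than set on the configuration. A minimal sketch (placeholder credentials, default microphone input):

    import java.util.concurrent.Future;

    import com.microsoft.cognitiveservices.speech.PropertyId;
    import com.microsoft.cognitiveservices.speech.SpeechConfig;
    import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;
    import com.microsoft.cognitiveservices.speech.SpeechRecognizer;
    import com.microsoft.cognitiveservices.speech.audio.AudioConfig;

    public class ReadJsonResult {
        public static void main(String[] args) throws Exception {
            SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
            AudioConfig audio = AudioConfig.fromDefaultMicrophoneInput();
            SpeechRecognizer recognizer = new SpeechRecognizer(config, audio);

            Future<SpeechRecognitionResult> task = recognizer.recognizeOnceAsync();
            SpeechRecognitionResult result = task.get();

            // The raw service response for this result, in JSON format.
            String json = result.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult);
            System.out.println(json);

            recognizer.close();
            audio.close();
            config.close();
        }
    }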

SpeechServiceResponse_OutputFormatOption

A string value specifying the output format option in the response result.

SpeechServiceResponse_PostProcessingOption

A string value specifying which post processing option should be used by the service.

SpeechServiceResponse_ProfanityOption

The requested Cognitive Services Speech Service response output profanity setting.

SpeechServiceResponse_RecognitionBackend

The recognition backend.

SpeechServiceResponse_RecognitionLatencyMs

The recognition latency in milliseconds.

SpeechServiceResponse_RequestDetailedResultTrueFalse

The requested Cognitive Services Speech Service response output format (simple or detailed).

SpeechServiceResponse_RequestProfanityFilterTrueFalse

The requested Cognitive Services Speech Service response output profanity level.

SpeechServiceResponse_RequestPunctuationBoundary

A boolean value specifying whether to request punctuation boundary in WordBoundary Events.

SpeechServiceResponse_RequestSentenceBoundary

A boolean value specifying whether to request sentence boundary in WordBoundary Events.

SpeechServiceResponse_RequestSnr

A boolean value specifying whether to include SNR (signal to noise ratio) in the response result.

SpeechServiceResponse_RequestWordBoundary

A boolean value specifying whether to request WordBoundary events.

SpeechServiceResponse_RequestWordLevelTimestamps

A boolean value specifying whether to include word-level timestamps in the response result.
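
A sketch of requesting richer output via the boolean-valued response ids above (placeholder credentials; booleans are passed as the strings "true" or "false"):

    import com.microsoft.cognitiveservices.speech.PropertyId;
    import com.microsoft.cognitiveservices.speech.SpeechConfig;

    public class DetailedOutputOptions {
        public static void main(String[] args) {
            SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

            // Request word-level timestamps and SNR in the response result.
            config.setProperty(PropertyId.SpeechServiceResponse_RequestWordLevelTimestamps, "true");
            config.setProperty(PropertyId.SpeechServiceResponse_RequestSnr, "true");

            config.close();
        }
    }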

SpeechServiceResponse_StablePartialResultThreshold

The number of times a word has to be in partial results to be returned.

SpeechServiceResponse_SynthesisBackend

Indicates which backend completed the synthesis.

SpeechServiceResponse_SynthesisConnectionLatencyMs

The speech synthesis connection latency in milliseconds.

SpeechServiceResponse_SynthesisEventsSyncToAudio

A boolean value specifying whether the SDK should synchronize synthesis metadata events (e.g. word boundary, viseme) to the audio playback.

SpeechServiceResponse_SynthesisFinishLatencyMs

The speech synthesis all bytes latency in milliseconds.

SpeechServiceResponse_SynthesisFirstByteLatencyMs

The speech synthesis first byte latency in milliseconds.

SpeechServiceResponse_SynthesisNetworkLatencyMs

The speech synthesis network latency in milliseconds.

SpeechServiceResponse_SynthesisServiceLatencyMs

The speech synthesis service latency in milliseconds.

SpeechServiceResponse_SynthesisUnderrunTimeMs

The underrun time for speech synthesis in milliseconds.
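
The synthesis latency ids above are read back from a SpeechSynthesisResult after synthesis completes; a sketch with placeholder credentials:

    import com.microsoft.cognitiveservices.speech.PropertyId;
    import com.microsoft.cognitiveservices.speech.SpeechConfig;
    import com.microsoft.cognitiveservices.speech.SpeechSynthesisResult;
    import com.microsoft.cognitiveservices.speech.SpeechSynthesizer;

    public class SynthesisLatency {
        public static void main(String[] args) throws Exception {
            SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
            SpeechSynthesizer synthesizer = new SpeechSynthesizer(config);

            SpeechSynthesisResult result = synthesizer.SpeakTextAsync("Hello, world.").get();

            // Latency metrics reported by the SDK, in milliseconds (returned as strings).
            System.out.println(result.getProperties().getProperty(PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs));
            System.out.println(result.getProperties().getProperty(PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs));

            result.close();
            synthesizer.close();
            config.close();
        }
    }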

SpeechServiceResponse_TranslationRequestStablePartialResult

A boolean value to request stabilizing translation partial results by omitting words at the end.

SpeechTranslation_ModelKey

The decryption key of a model to be used for speech translation.

SpeechTranslation_ModelName

The name of a model to be used for speech translation.

Speech_LogFilename

The file name to write logs.

Speech_SegmentationSilenceTimeoutMs

A duration of detected silence, measured in milliseconds, after which speech-to-text will determine a spoken phrase has ended and generate a final Recognized result.

Speech_SessionId

The session id.

Methods inherited from java.lang.Enum

java.lang.Enum.<T>valueOf, java.lang.Enum.clone, java.lang.Enum.compareTo, java.lang.Enum.describeConstable, java.lang.Enum.equals, java.lang.Enum.finalize, java.lang.Enum.getDeclaringClass, java.lang.Enum.hashCode, java.lang.Enum.name, java.lang.Enum.ordinal, java.lang.Enum.toString

Methods inherited from java.lang.Object

java.lang.Object.getClass, java.lang.Object.notify, java.lang.Object.notifyAll, java.lang.Object.wait, java.lang.Object.wait, java.lang.Object.wait

Methods

getValue()

public int getValue()

Returns the internal value of the property id.

Returns

int
the speech property id
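
For example, the integer behind a given constant can be printed directly:

    import com.microsoft.cognitiveservices.speech.PropertyId;

    public class GetValueExample {
        public static void main(String[] args) {
            // Prints the internal integer id behind the enum constant.
            System.out.println(PropertyId.Speech_SessionId.getValue());
        }
    }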

valueOf(String name)

public static PropertyId valueOf(String name)

Parameters

name
java.lang.String

Returns

PropertyId
the enum constant with the specified name

values()

public static PropertyId[] values()

Returns

PropertyId[]
an array containing the constants of this enum type, in the order they are declared
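
These are the standard enum methods; a small sketch of both:

    import com.microsoft.cognitiveservices.speech.PropertyId;

    public class EnumMethods {
        public static void main(String[] args) {
            // Iterate over every defined property id.
            for (PropertyId id : PropertyId.values()) {
                System.out.println(id.name() + " = " + id.getValue());
            }

            // Look up a constant by its exact name (throws IllegalArgumentException if unknown).
            PropertyId sessionId = PropertyId.valueOf("Speech_SessionId");
            System.out.println(sessionId);
        }
    }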
