Microsoft.Azure.Search.Models Namespace

Classes

AccessCondition

Additional parameters, such as ETag-based access conditions, for a set of operations.

Analyzer

Abstract base class for analyzers.

AnalyzeRequest

Specifies some text and analysis components used to break that text into tokens.
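
For example, a minimal sketch of testing how an analyzer tokenizes sample text, assuming an existing service and index (the service name, index name, and key below are placeholders):

```csharp
using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var serviceClient = new SearchServiceClient(
    "myservice", new SearchCredentials("<admin-api-key>"));

// Ask the service to show how the standard Lucene analyzer breaks up the text.
AnalyzeResult result = serviceClient.Indexes.Analyze(
    "hotels",
    new AnalyzeRequest
    {
        Text = "The Quick Brown Fox",
        Analyzer = AnalyzerName.StandardLucene
    });

foreach (TokenInfo token in result.Tokens)
{
    Console.WriteLine($"{token.Token} [{token.StartOffset}-{token.EndOffset}]");
}
```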

AnalyzeResult

The result of testing an analyzer on text.

AnalyzerName.AsString

The names of all of the analyzers as plain strings.

AsciiFoldingTokenFilter

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html

AutocompleteItem

A single item in the result of an Autocomplete request.

AutocompleteParameters

Additional parameters for the AutocompleteGet operation.

AutocompleteResult

The result of an Autocomplete query.

CharFilter

Abstract base class for character filters. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search

CjkBigramTokenFilter

Forms bigrams of CJK terms that are generated from StandardTokenizer. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html

ClassicTokenizer

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html

CognitiveServices

Abstract base class for describing any cognitive service resource attached to the skillset.

CognitiveServicesByKey

A cognitive service resource provisioned with a key that is attached to a skillset.

CommonGramTokenFilter

Constructs bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html

ConditionalSkill

A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. https://docs.microsoft.com/azure/search/cognitive-search-skill-conditional

CorsOptions

Defines options to control Cross-Origin Resource Sharing (CORS) for an index. https://docs.microsoft.com/rest/api/searchservice/Create-Index

CustomAnalyzer

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
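
A minimal sketch of an index defining a custom analyzer (standard tokenizer plus lowercase and ASCII-folding token filters); the analyzer, index, and field names are hypothetical:

```csharp
using Microsoft.Azure.Search.Models;

var index = new Index
{
    Name = "hotels",
    Fields = new[]
    {
        new Field("hotelId", DataType.String) { IsKey = true },
        // The field references the custom analyzer by name (assuming this
        // SDK version converts strings to AnalyzerName).
        new Field("description", DataType.String)
        {
            IsSearchable = true,
            Analyzer = "folded_text"
        }
    },
    Analyzers = new Analyzer[]
    {
        new CustomAnalyzer
        {
            Name = "folded_text",
            Tokenizer = TokenizerName.Standard,
            TokenFilters = new[] { TokenFilterName.Lowercase, TokenFilterName.AsciiFolding }
        }
    }
};
```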

DataChangeDetectionPolicy

Abstract base class for data change detection policies.

DataContainer

Represents information about the entity (such as an Azure SQL table or a DocumentDb collection) that will be indexed.

DataDeletionDetectionPolicy

Abstract base class for data deletion detection policies.

DataSource

Represents a datasource definition, which can be used to configure an indexer.
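
For example, a hedged sketch of defining an Azure SQL datasource via a static factory method; the AzureSql helper and all names below are assumptions, not verified against a specific SDK version:

```csharp
using Microsoft.Azure.Search.Models;

// Arguments: datasource name, SQL connection string, table or view name.
// Placeholders only; a real connection string is required.
DataSource dataSource = DataSource.AzureSql(
    "hotels-datasource", "<connection-string>", "Hotels");

// serviceClient.DataSources.CreateOrUpdate(dataSource);
```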

DataSourceCredentials

Represents credentials that can be used to connect to a datasource.

DataSourceListResult

Response from a List Datasources request. If successful, it includes the full definitions of all datasources.

DataType.AsString

The names of all of the data types as plain strings.

DataTypeExtensions

Defines extension methods for DataType.

DefaultCognitiveServices

An empty object that represents the default cognitive service resource for a skillset.

DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/compound/DictionaryCompoundWordTokenFilter.html

DistanceScoringFunction

Defines a function that boosts scores based on distance from a geographic location. https://docs.microsoft.com/rest/api/searchservice/Add-scoring-profiles-to-a-search-index

DistanceScoringParameters

Provides parameter values to a distance scoring function.

Document

Represents a document as a property bag. This is useful for scenarios where the index schema is only known at run-time.
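
A minimal sketch of schema-agnostic document access, with hypothetical field names:

```csharp
using Microsoft.Azure.Search.Models;

// Document implements IDictionary<string, object>, so fields can be
// read and written without defining a model class.
var doc = new Document();
doc["hotelId"] = "1";
doc["baseRate"] = 149.99;

object rate = doc["baseRate"];
```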

DocumentIndexResult

Response containing the status of operations for all documents in the indexing request.

DocumentSearchResult<T>

Response containing search results from an index.

DocumentSuggestResult<T>

Response containing suggestion query results from an index.

EdgeNGramTokenFilter

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html

EdgeNGramTokenFilterV2

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html

EdgeNGramTokenizer

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html

ElisionTokenFilter

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html

EntityRecognitionSkill

Text analytics entity recognition. https://docs.microsoft.com/azure/search/cognitive-search-skill-entity-recognition

FacetResult

A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval.

Field

Represents a field in an index definition, which describes the name, data type, and search behavior of a field. https://docs.microsoft.com/rest/api/searchservice/Create-Index
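
For example, a sketch of hand-written field definitions for a hypothetical "hotels" index (strongly-typed model classes can instead be mapped with FieldBuilder):

```csharp
using Microsoft.Azure.Search.Models;

var fields = new[]
{
    new Field("hotelId", DataType.String)   { IsKey = true, IsFilterable = true },
    new Field("hotelName", DataType.String) { IsSearchable = true, IsSortable = true },
    new Field("baseRate", DataType.Double)  { IsFilterable = true, IsSortable = true, IsFacetable = true },
    new Field("tags", DataType.Collection(DataType.String)) { IsSearchable = true, IsFacetable = true }
};
```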

FieldMapping

Defines a mapping between a field in a data source and a target field in an index. https://docs.microsoft.com/azure/search/search-indexer-field-mappings

FieldMappingFunction

Represents a function that transforms a value from a data source before indexing. https://docs.microsoft.com/azure/search/search-indexer-field-mappings
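
A common sketch: base64-encoding a blob storage path so it can serve as a document key. The source field name follows the usual blob-indexer convention, but treat the names here as assumptions:

```csharp
using Microsoft.Azure.Search.Models;

var mapping = new FieldMapping
{
    SourceFieldName = "metadata_storage_path",
    TargetFieldName = "id",
    MappingFunction = FieldMappingFunction.Base64Encode()
};

// indexer.FieldMappings = new[] { mapping };
```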

FreshnessScoringFunction

Defines a function that boosts scores based on the value of a date-time field. https://docs.microsoft.com/rest/api/searchservice/Add-scoring-profiles-to-a-search-index

FreshnessScoringParameters

Provides parameter values to a freshness scoring function.

HighWaterMarkChangeDetectionPolicy

Defines a data change detection policy that captures changes based on the value of a high water mark column.

ImageAnalysisSkill

A skill that analyzes image files. It extracts a rich set of visual features based on the image content. https://docs.microsoft.com/azure/search/cognitive-search-skill-image-analysis

Index

Represents a search index definition, which describes the fields and search behavior of an index.

IndexAction

Provides factory methods for creating an index action that operates on a document.

IndexAction<T>

Represents an index action that operates on a document.

IndexBatch

Provides factory methods for creating a batch of document write operations to send to the search index.

IndexBatch<T>

Contains a batch of document write actions to send to the index.
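
A minimal sketch combining IndexAction and IndexBatch, assuming a hypothetical Hotel model class and an existing "hotels" index:

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var indexClient = new SearchIndexClient(
    "myservice", "hotels", new SearchCredentials("<admin-api-key>"));

var batch = IndexBatch.New(new[]
{
    IndexAction.Upload(new Hotel { HotelId = "1", HotelName = "Fancy Stay" }),
    IndexAction.MergeOrUpload(new Hotel { HotelId = "2", HotelName = "Roach Motel" }),
    IndexAction.Delete(new Hotel { HotelId = "3" })
});

// Throws IndexBatchException if some documents fail; inspect
// DocumentIndexResult.Results for per-document status.
DocumentIndexResult result = indexClient.Documents.Index(batch);
```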

Indexer

Represents an indexer. https://docs.microsoft.com/rest/api/searchservice/Indexer-operations

IndexerExecutionInfo

Represents the current status and execution history of an indexer.

IndexerExecutionResult

Represents the result of an individual indexer execution.

IndexerLimits

Represents limits that apply to an indexer, such as the maximum run time and the maximum document extraction size.

IndexerListResult

Response from a List Indexers request. If successful, it includes the full definitions of all indexers.

IndexGetStatisticsResult

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

IndexingParameters

Represents parameters for indexer execution.

IndexingParametersExtensions

Defines extension methods for the IndexingParameters class.

IndexingResult

Status of an indexing operation for a single document.

IndexingSchedule

Represents a schedule for indexer execution.
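
For example, a sketch of an indexer that runs hourly; the names are placeholders, and the data source and index must already exist:

```csharp
using System;
using Microsoft.Azure.Search.Models;

var indexer = new Indexer
{
    Name = "hotels-indexer",
    DataSourceName = "hotels-datasource",
    TargetIndexName = "hotels",
    Schedule = new IndexingSchedule { Interval = TimeSpan.FromHours(1) }
};

// serviceClient.Indexers.CreateOrUpdate(indexer);
```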

IndexListResult

Response from a List Indexes request. If successful, it includes the full definitions of all indexes.

InputFieldMappingEntry

Input field mapping for a skill.

ItemError

Represents an item- or document-level indexing error.

ItemWarning

Represents an item-level warning.

KeepTokenFilter

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/KeepWordFilter.html

KeyPhraseExtractionSkill

A skill that uses text analytics for key phrase extraction. https://docs.microsoft.com/azure/search/cognitive-search-skill-keyphrases

KeywordMarkerTokenFilter

Marks terms as keywords. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/KeywordMarkerFilter.html

KeywordTokenizer

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html

KeywordTokenizerV2

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html

LanguageDetectionSkill

A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis. https://docs.microsoft.com/azure/search/cognitive-search-skill-language-detection

LengthTokenFilter

Removes words that are too long or too short. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html

LimitTokenFilter

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html

MagnitudeScoringFunction

Defines a function that boosts scores based on the magnitude of a numeric field. https://docs.microsoft.com/rest/api/searchservice/Add-scoring-profiles-to-a-search-index

MagnitudeScoringParameters

Provides parameter values to a magnitude scoring function.

MappingCharFilter

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene. https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/MappingCharFilter.html

MergeSkill

A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. https://docs.microsoft.com/azure/search/cognitive-search-skill-textmerger

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NamedEntityRecognitionSkill

Text analytics named entity recognition. This skill is deprecated in favor of EntityRecognitionSkill. https://docs.microsoft.com/azure/search/cognitive-search-skill-named-entity-recognition

NGramTokenFilter

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html

NGramTokenFilterV2

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html

NGramTokenizer

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html

OcrSkill

A skill that extracts text from image files. https://docs.microsoft.com/azure/search/cognitive-search-skill-ocr

OutputFieldMappingEntry

Output field mapping for a skill. https://docs.microsoft.com/rest/api/searchservice/naming-rules

PathHierarchyTokenizer

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html

PathHierarchyTokenizerV2

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html

PatternAnalyzer

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html

PatternCaptureTokenFilter

Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternCaptureGroupTokenFilter.html

PatternReplaceCharFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene. https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternReplaceCharFilter.html

PatternReplaceTokenFilter

A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternReplaceFilter.html

PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html

PhoneticTokenFilter

Creates tokens for phonetic matches. This token filter is implemented using Apache Lucene. https://lucene.apache.org/core/4_10_3/analyzers-phonetic/org/apache/lucene/analysis/phonetic/package-tree.html

RangeFacetResult<T>

A single bucket of a range facet query result that reports the number of documents with a field value falling within a particular range.

ResourceCounter

Represents a resource's usage and quota.

ScoringFunction

Abstract base class for functions that can modify document scores during ranking. https://docs.microsoft.com/rest/api/searchservice/Add-scoring-profiles-to-a-search-index

ScoringParameter

Represents a parameter value to be used in scoring functions (for example, referencePointParameter).

ScoringProfile

Defines parameters for a search index that influence scoring in search queries. https://docs.microsoft.com/rest/api/searchservice/Add-scoring-profiles-to-a-search-index
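
A hedged sketch of a profile that doubles the weight of hotelName matches and boosts recently renovated hotels; the field names are hypothetical, and the scoring-function constructor order follows this SDK's generated-model convention, so treat it as an assumption:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.Search.Models;

var profile = new ScoringProfile
{
    Name = "boostRecent",
    TextWeights = new TextWeights(
        new Dictionary<string, double> { ["hotelName"] = 2.0 }),
    Functions = new List<ScoringFunction>
    {
        // Boost up to 5x for documents renovated within the last year.
        new FreshnessScoringFunction(
            "lastRenovationDate", 5,
            new FreshnessScoringParameters(TimeSpan.FromDays(365)))
    }
};

// index.ScoringProfiles = new[] { profile };
```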

SearchContinuationToken

Encapsulates state required to continue fetching search results. This is necessary when Azure Cognitive Search cannot fulfill a search request with a single response.

SearchParameters

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.
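
For example, a sketch of a filtered, sorted, paged query, reusing the hypothetical index client and Hotel class from the IndexBatch sketch above:

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var parameters = new SearchParameters
{
    Filter = "baseRate lt 150",              // OData filter expression
    OrderBy = new[] { "baseRate desc" },
    Select = new[] { "hotelId", "hotelName", "baseRate" },
    Top = 10,
    IncludeTotalResultCount = true
};

DocumentSearchResult<Hotel> results =
    indexClient.Documents.Search<Hotel>("budget", parameters);
```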

SearchRequestOptions

Additional parameters, such as a client request ID, for a set of operations.

SearchResult<T>

Contains a document found by a search query, plus associated metadata.

SentimentSkill

Text analytics positive-negative sentiment analysis, scored as a floating-point value in the range 0 to 1. https://docs.microsoft.com/azure/search/cognitive-search-skill-sentiment

SerializePropertyNamesAsCamelCaseAttribute

Indicates that the public properties of a model type should be serialized as camel-case in order to match the field names of a search index.
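
For example (this is also one possible definition of the hypothetical Hotel class used in the sketches above):

```csharp
using Microsoft.Azure.Search.Models;

// Pascal-case properties serialize as camel-case field names
// (HotelName -> hotelName), matching typical index definitions.
[SerializePropertyNamesAsCamelCase]
public class Hotel
{
    public string HotelId { get; set; }
    public string HotelName { get; set; }
    public double? BaseRate { get; set; }
}
```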

ServiceCounters

Represents service-level resource counters and quotas.

ServiceLimits

Represents various service level limits.

ServiceStatistics

Response from a get service statistics request. If successful, it includes service level counters and limits.

ShaperSkill

A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields). https://docs.microsoft.com/azure/search/cognitive-search-skill-shaper

ShingleTokenFilter

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html

Skill

Abstract base class for skills. https://docs.microsoft.com/azure/search/cognitive-search-predefined-skills

Skillset

A list of skills. https://docs.microsoft.com/azure/search/cognitive-search-tutorial-blob

SkillsetListResult

Response from a List Skillsets request. If successful, it includes the full definitions of all skillsets.

SnowballTokenFilter

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html

SoftDeleteColumnDeletionDetectionPolicy

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.

SplitSkill

A skill to split a string into chunks of text. https://docs.microsoft.com/azure/search/cognitive-search-skill-textsplit

SqlIntegratedChangeTrackingPolicy

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StandardAnalyzer

Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardAnalyzer.html

StandardTokenizer

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html

StandardTokenizerV2

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html

StemmerOverrideTokenFilter

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/StemmerOverrideFilter.html

StemmerTokenFilter

Language-specific stemming filter. This token filter is implemented using Apache Lucene. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters

StopAnalyzer

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html

StopwordsTokenFilter

Removes stop words from a token stream. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html

Suggester

Defines how the Suggest API should apply to a group of fields in the index.
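
A minimal sketch: define a suggester over hypothetical fields, then query it by name (the index, index client, and Hotel class are assumed from the sketches above):

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// In the index definition:
index.Suggesters = new[]
{
    new Suggester { Name = "sg", SourceFields = new[] { "hotelName", "category" } }
};

// At query time, the suggester is referenced by name:
DocumentSuggestResult<Hotel> suggestions = indexClient.Documents.Suggest<Hotel>(
    "fan", "sg", new SuggestParameters { UseFuzzyMatching = true, Top = 5 });
```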

SuggestParameters

Parameters for filtering, sorting, fuzzy matching, and other suggestion query behaviors.

SuggestResult<T>

A result containing a document found by a suggestion query, plus associated metadata.

SynonymMap

Represents a synonym map definition.
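
For example, a hedged sketch using the Solr rule format; the names are placeholders, and some SDK versions also require an explicit Format property (serviceClient as in the AnalyzeRequest sketch above):

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var synonymMap = new SynonymMap
{
    Name = "hotel-synonyms",
    // Equivalence rule on the first line, explicit mapping on the second.
    Synonyms = "hotel, motel, inn\nusa, united states => usa"
};

serviceClient.SynonymMaps.Create(synonymMap);

// A field then opts in by name:
// new Field("hotelName", DataType.String)
//     { IsSearchable = true, SynonymMaps = new[] { "hotel-synonyms" } };
```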

SynonymMapListResult

Response from a List SynonymMaps request. If successful, it includes the full definitions of all synonym maps.

SynonymTokenFilter

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/synonym/SynonymFilter.html

TagScoringFunction

Defines a function that boosts scores of documents with string values matching a given list of tags. https://docs.microsoft.com/rest/api/searchservice/Add-scoring-profiles-to-a-search-index

TagScoringParameters

Provides parameter values to a tag scoring function.

TextTranslationSkill

A skill to translate text from one language to another. https://docs.microsoft.com/azure/search/cognitive-search-skill-text-translation

TextWeights

Defines weights on index fields for which matches should boost scoring in search queries.

TokenFilter

Abstract base class for token filters. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search

TokenInfo

Information about a token returned by an analyzer.

Tokenizer

Abstract base class for tokenizers. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search

TruncateTokenFilter

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html

UaxUrlEmailTokenizer

Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html

UniqueTokenFilter

Filters out tokens with the same text as the previous token. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.html

ValueFacetResult<T>

A single bucket of a simple or interval facet query result that reports the number of documents with a field falling within a particular interval or having a specific value.

WebApiSkill

A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. https://docs.microsoft.com/azure/search/cognitive-search-custom-skill-web-api

WordDelimiterTokenFilter

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/WordDelimiterFilter.html

Structs

AnalyzerName

Defines the names of all text analyzers supported by Azure Cognitive Search. https://docs.microsoft.com/rest/api/searchservice/Language-support

BlobExtractionMode

Defines which parts of a blob will be indexed by the blob storage indexer. https://docs.microsoft.com/azure/search/search-howto-indexing-azure-blob-storage

CharFilterName

Defines the names of all character filters supported by Azure Cognitive Search. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search

DataSourceType

Defines the type of a datasource.

DataType

Defines the data type of a field in a search index.

NamedEntityRecognitionSkillLanguage

Defines the language codes supported by NamedEntityRecognitionSkill.

RegexFlags

Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer. http://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html#field_summary

TokenFilterName

Defines the names of all token filters supported by Azure Cognitive Search. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search

TokenizerName

Defines the names of all tokenizers supported by Azure Cognitive Search. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search

Interfaces

IResourceWithETag

Model classes that implement this interface represent resources that are persisted with an ETag version on the server.

Enums

AutocompleteMode

Defines values for AutocompleteMode.

CjkBigramTokenFilterScripts

Defines values for CjkBigramTokenFilterScripts.

EdgeNGramTokenFilterSide

Defines values for EdgeNGramTokenFilterSide.

EntityCategory

Defines values for EntityCategory.

EntityRecognitionSkillLanguage

Defines values for EntityRecognitionSkillLanguage.

FacetType

Specifies the type of a facet query result.

ImageAnalysisSkillLanguage

Defines values for ImageAnalysisSkillLanguage.

ImageDetail

Defines values for ImageDetail.

IndexActionType

Defines values for IndexActionType.

IndexerExecutionStatus

Defines values for IndexerExecutionStatus.

IndexerStatus

Defines values for IndexerStatus.

KeyPhraseExtractionSkillLanguage

Defines values for KeyPhraseExtractionSkillLanguage.

MicrosoftStemmingTokenizerLanguage

Defines values for MicrosoftStemmingTokenizerLanguage.

MicrosoftTokenizerLanguage

Defines values for MicrosoftTokenizerLanguage.

NamedEntityCategory

Defines values for NamedEntityCategory. This enum is deprecated; use EntityCategory instead.

OcrSkillLanguage

Defines values for OcrSkillLanguage.

PhoneticEncoder

Defines values for PhoneticEncoder.

QueryType

Defines values for QueryType.

ScoringFunctionAggregation

Defines values for ScoringFunctionAggregation.

ScoringFunctionInterpolation

Defines values for ScoringFunctionInterpolation.

SearchMode

Defines values for SearchMode.

SentimentSkillLanguage

Defines values for SentimentSkillLanguage.

SnowballTokenFilterLanguage

Defines values for SnowballTokenFilterLanguage.

SplitSkillLanguage

Defines values for SplitSkillLanguage.

StemmerTokenFilterLanguage

Defines values for StemmerTokenFilterLanguage.

StopwordsList

Defines values for StopwordsList.

TextExtractionAlgorithm

Defines values for TextExtractionAlgorithm.

TextSplitMode

Defines values for TextSplitMode.

TextTranslationSkillLanguage

Defines values for TextTranslationSkillLanguage.

TokenCharacterKind

Defines values for TokenCharacterKind.

VisualFeature

Defines values for VisualFeature.