@azure/search-documents package

Classes

AzureKeyCredential

A static-key-based credential that supports updating the underlying key value.

GeographyPoint

Represents a geographic point in global coordinates.

IndexDocumentsBatch

Class used to perform batch operations with multiple documents to the index.

KnowledgeRetrievalClient

Class used to perform operations against a knowledge base.

SearchClient

Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them.

SearchIndexClient

Class to perform operations to manage (create, update, list, delete) indexes and synonym maps.

SearchIndexerClient

Class to perform operations to manage (create, update, list, delete) indexers, data sources, and skillsets.

SearchIndexingBufferedSender

Class used to perform buffered operations against a search index, including adding, updating, and removing documents.

Interfaces

AIServices

Parameters for AI Services.

AIServicesAccountIdentity

The multi-region account of an Azure AI service resource that's attached to a skillset.

AIServicesAccountKey

The account key of an Azure AI service resource that's attached to a skillset, to be used with the resource's subdomain.

AnalyzeRequest

Specifies some text and analysis components used to break that text into tokens.

AnalyzeResult

The result of testing an analyzer on text.

AnalyzedTokenInfo

Information about a token returned by an analyzer.

AsciiFoldingTokenFilter

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.

AutocompleteItem

The result of Autocomplete requests.

AutocompleteRequest

Parameters for fuzzy matching, and other autocomplete query behaviors.

AutocompleteResult

The result of an Autocomplete query.

AzureActiveDirectoryApplicationCredentials

Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault.

AzureBlobKnowledgeSource

Configuration for Azure Blob Storage knowledge source.

AzureBlobKnowledgeSourceParameters

Parameters for Azure Blob Storage knowledge source.

AzureBlobKnowledgeSourceParams

Specifies runtime parameters for an Azure Blob Storage knowledge source.

AzureMachineLearningVectorizer

Specifies an Azure Machine Learning endpoint deployed via the Azure AI Foundry Model Catalog for generating the vector embedding of a query string.

AzureOpenAIEmbeddingSkill

Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource.

AzureOpenAIParameters

Specifies the parameters for connecting to the Azure OpenAI resource.

AzureOpenAIVectorizer

Contains the parameters specific to using an Azure OpenAI service for vectorization at query time.

BM25Similarity

Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter).
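
The roles of 'k1' and 'b' can be seen in a small standalone sketch of the per-term BM25 formula (an illustration of the algorithm, not the service's implementation; the default parameter values shown are assumptions):

```typescript
// Standalone sketch of the BM25 per-term score, illustrating k1
// (term-frequency saturation) and b (document length normalization).
// Not the service implementation; defaults are illustrative.
function bm25TermScore(
  tf: number,        // term frequency in the document
  docLen: number,    // document length in tokens
  avgDocLen: number, // average document length in the corpus
  idf: number,       // inverse document frequency of the term
  k1 = 1.2,
  b = 0.75
): number {
  const norm = 1 - b + b * (docLen / avgDocLen);
  return (idf * (tf * (k1 + 1))) / (tf + k1 * norm);
}

// Higher term frequency raises the score, but with diminishing returns:
const s1 = bm25TermScore(1, 100, 100, 2.0);
const s2 = bm25TermScore(2, 100, 100, 2.0);
const s4 = bm25TermScore(4, 100, 100, 2.0);
```

Raising 'b' toward 1 penalizes long documents more strongly; raising 'k1' delays the point at which repeated occurrences of a term stop adding score.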

BaseAzureMachineLearningVectorizerParameters

Specifies the properties common between all AML vectorizer auth types.

BaseCharFilter

Base type for character filters.

BaseCognitiveServicesAccount

Base type for describing any Azure AI service resource attached to a skillset.

BaseDataChangeDetectionPolicy

Base type for data change detection policies.

BaseDataDeletionDetectionPolicy

Base type for data deletion detection policies.

BaseKnowledgeBaseActivityRecord

Base type for activity records. Tracks execution details, timing, and errors for knowledge base operations.

BaseKnowledgeBaseMessageContent

Specifies the type of the message content.

BaseKnowledgeBaseModel

Specifies the connection parameters for the model to use for query planning.

BaseKnowledgeBaseReference

Base type for references.

BaseKnowledgeRetrievalReasoningEffort

Base type for reasoning effort.

BaseKnowledgeSource

Represents a knowledge source definition.

BaseKnowledgeSourceParams

Base type for knowledge source runtime parameters.

BaseKnowledgeSourceVectorizer

Specifies the vectorization method to be used for the knowledge source's embedding model.

BaseLexicalAnalyzer

Base type for analyzers.

BaseLexicalNormalizer

Base type for normalizers.

BaseLexicalTokenizer

Base type for tokenizers.

BaseScoringFunction

Base type for functions that can modify document scores during ranking.

BaseSearchIndexerDataIdentity

Abstract base type for data identities.

BaseSearchIndexerSkill

Base type for skills.

BaseSearchRequestOptions

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.
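
As a hedged sketch, a typical set of these options might look like the following (the field names inside the filter, orderBy, and facet strings are illustrative examples, not fields any real index must have):

```typescript
// Illustrative search request options; filter and orderBy use OData syntax.
// "rating" and "category" are example field names.
const options = {
  filter: "rating ge 4 and category eq 'Luxury'", // OData filter expression
  orderBy: ["rating desc"],                       // sort by rating, descending
  facets: ["category"],                           // request facet counts
  top: 10,                                        // page size
  skip: 0,                                        // paging offset
};
```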

BaseSimilarityAlgorithm

Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results.

BaseTokenFilter

Base type for token filters.

BaseVectorQuery

The query parameters for vector and hybrid search queries.

BaseVectorSearchAlgorithmConfiguration

Contains configuration options specific to the algorithm used during indexing and/or querying.

BaseVectorSearchCompression

Contains configuration options specific to the compression method used during indexing or querying.

BaseVectorSearchVectorizer

Contains specific details for a vectorization method to be used during query time.

BinaryQuantizationCompression

Contains configuration options specific to the binary quantization compression method used during indexing and querying.

ChatCompletionResponseFormat

Determines how the language model's response should be serialized. Defaults to 'text'.

ChatCompletionResponseFormatJsonSchemaProperties

Properties for JSON schema response format.

ChatCompletionSchema

Object defining the custom schema the model will use to structure its output.

ChatCompletionSkill

A skill that calls a language model via Azure AI Foundry's Chat Completions endpoint.

CjkBigramTokenFilter

Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene.

ClassicSimilarity

Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries.

ClassicTokenizer

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.

CognitiveServicesAccountKey

The multi-region account key of an Azure AI service resource that's attached to a skillset.

CommonGramTokenFilter

Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.

CommonModelParameters

Common language model parameters for Chat Completions. If omitted, default values are used.

CompletedSynchronizationState

Represents the completed state of the last synchronization.

ComplexField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

ConditionalSkill

A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.

ContentUnderstandingSkill

A skill that leverages Azure AI Content Understanding to process and extract structured insights from documents, enabling enriched, searchable content for enhanced document indexing and retrieval.

ContentUnderstandingSkillChunkingProperties

Controls the cardinality for chunking the content.

CorsOptions

Defines options to control Cross-Origin Resource Sharing (CORS) for an index.

CreateKnowledgeBaseOptions
CreateKnowledgeSourceOptions
CreateOrUpdateAliasOptions

Options for create or update alias operation.

CreateOrUpdateIndexOptions

Options for create/update index operation.

CreateOrUpdateKnowledgeBaseOptions
CreateOrUpdateKnowledgeSourceOptions
CreateOrUpdateSkillsetOptions

Options for create/update skillset operation.

CreateOrUpdateSynonymMapOptions

Options for create/update synonymmap operation.

CreateorUpdateDataSourceConnectionOptions

Options for create/update datasource operation.

CreateorUpdateIndexerOptions

Options for create/update indexer operation.

CustomAnalyzer

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.

CustomEntity

An object that contains information about the matches that were found, and related metadata.

CustomEntityAlias

A complex object that can be used to specify alternative spellings or synonyms to the root entity name.

CustomEntityLookupSkill

A skill that looks for text from a custom, user-defined list of words and phrases.

CustomLexicalNormalizer

Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. This is a user-defined configuration consisting of one or more filters, which modify the token that is stored.

DefaultCognitiveServicesAccount

An empty object that represents the default Azure AI service resource for a skillset.

DeleteAliasOptions

Options for delete alias operation.

DeleteDataSourceConnectionOptions

Options for delete datasource operation.

DeleteIndexOptions

Options for delete index operation.

DeleteIndexerOptions

Options for delete indexer operation.

DeleteKnowledgeBaseOptions
DeleteKnowledgeSourceOptions
DeleteSkillsetOptions

Options for delete skillset operation.

DeleteSynonymMapOptions

Options for delete synonymmap operation.

DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

DistanceScoringFunction

Defines a function that boosts scores based on distance from a geographic location.

DistanceScoringParameters

Provides parameter values to a distance scoring function.

DocumentDebugInfo

Contains debugging information that can be used to further explore your search results.

DocumentExtractionSkill

A skill that extracts content from a file within the enrichment pipeline.

DocumentIntelligenceLayoutSkill

A skill that extracts content and layout information (as markdown), via Azure AI Services, from files within the enrichment pipeline.

DocumentIntelligenceLayoutSkillChunkingProperties

Controls the cardinality for chunking the content.

EdgeNGramTokenFilter

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.
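
The behavior can be illustrated with a short standalone sketch of front-edge n-gram generation (illustrative only, not the Lucene implementation):

```typescript
// Generate front-edge n-grams of a token, from minGram up to maxGram
// characters (capped at the token length).
function edgeNGrams(token: string, minGram: number, maxGram: number): string[] {
  const grams: string[] = [];
  for (let n = minGram; n <= Math.min(maxGram, token.length); n++) {
    grams.push(token.slice(0, n));
  }
  return grams;
}

const grams = edgeNGrams("search", 2, 4); // ["se", "sea", "sear"]
```

Edge n-grams are commonly indexed to support prefix (search-as-you-type) matching.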

EdgeNGramTokenizer

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

ElisionTokenFilter

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.

EntityLinkingSkill

Using the Text Analytics API, extracts linked entities from text.

EntityRecognitionSkill

Text analytics entity recognition.

EntityRecognitionSkillV3

Using the Text Analytics API, extracts entities of different types from text.

ExhaustiveKnnParameters

Contains the parameters specific to exhaustive KNN algorithm.

ExtractiveQueryAnswer

Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language.

ExtractiveQueryCaption

Extracts captions from the matching documents that contain passages relevant to the search query.

FacetResult

A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval.

FieldMapping

Defines a mapping between a field in a data source and a target field in an index.

FieldMappingFunction

Represents a function that transforms a value from a data source before indexing.

FreshnessScoringFunction

Defines a function that boosts scores based on the value of a date-time field.

FreshnessScoringParameters

Provides parameter values to a freshness scoring function.

GenerativeQueryRewrites

Generate alternative query terms to increase the recall of a search request.

GetDocumentOptions

Options for retrieving a single document.

GetKnowledgeBaseOptions
GetKnowledgeSourceOptions
GetKnowledgeSourceStatusOptions
HighWaterMarkChangeDetectionPolicy

Defines a data change detection policy that captures changes based on the value of a high water mark column.

HnswParameters

Contains the parameters specific to hnsw algorithm.

ImageAnalysisSkill

A skill that analyzes image files. It extracts a rich set of visual features based on the image content.

IndexDocumentsClient

Index Documents Client

IndexDocumentsOptions

Options for the modify index batch operation.

IndexDocumentsResult

Response containing the status of operations for all documents in the indexing request.

IndexedOneLakeKnowledgeSource

Configuration for OneLake knowledge source.

IndexedOneLakeKnowledgeSourceParameters

Parameters for OneLake knowledge source.

IndexedOneLakeKnowledgeSourceParams

Specifies runtime parameters for an indexed OneLake knowledge source.

IndexerExecutionResult

Represents the result of an individual indexer execution.

IndexingParameters

Represents parameters for indexer execution.

IndexingParametersConfiguration

A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

IndexingResult

Status of an indexing operation for a single document.

IndexingSchedule

Represents a schedule for indexer execution.

InputFieldMappingEntry

Input field mapping for a skill.

KeepTokenFilter

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

KeyAuthAzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer with an authentication key.

KeyPhraseExtractionSkill

A skill that uses text analytics for key phrase extraction.

KeywordMarkerTokenFilter

Marks terms as keywords. This token filter is implemented using Apache Lucene.

KeywordTokenizer

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

KnowledgeBase
KnowledgeBaseAgenticReasoningActivityRecord

Represents an agentic reasoning activity record.

KnowledgeBaseAzureBlobReference

Represents an Azure Blob Storage document reference.

KnowledgeBaseAzureOpenAIModel

Specifies the Azure OpenAI resource used to do query planning.

KnowledgeBaseErrorAdditionalInfo

The resource management error additional info.

KnowledgeBaseErrorDetail

The error details.

KnowledgeBaseIndexedOneLakeReference

Represents an indexed OneLake document reference.

KnowledgeBaseMessage

The natural language message style object.

KnowledgeBaseMessageImageContent

Image message type.

KnowledgeBaseMessageImageContentImage

Image content.

KnowledgeBaseMessageTextContent

Text message type.

KnowledgeBaseModelWebSummarizationActivityRecord

Represents an LLM web summarization activity record.

KnowledgeBaseRetrievalRequest

The input contract for the retrieval request.

KnowledgeBaseRetrievalResponse

The output contract for the retrieval response.

KnowledgeBaseSearchIndexReference

Represents an Azure Search document reference.

KnowledgeBaseWebReference

Represents a web document reference.

KnowledgeRetrievalClientOptions

Client options used to configure Cognitive Search API requests.

KnowledgeRetrievalIntent

An intended query to execute without model query planning.

KnowledgeRetrievalMinimalReasoningEffort

Run knowledge retrieval with minimal reasoning effort.

KnowledgeRetrievalSemanticIntent

A semantic query intent.

KnowledgeSourceAzureOpenAIVectorizer

Specifies the Azure OpenAI resource used to vectorize a query string.

KnowledgeSourceIngestionParameters

Consolidates all general ingestion settings for knowledge sources.

KnowledgeSourceReference

Reference to a knowledge source.

KnowledgeSourceStatistics

Statistical information about knowledge source synchronization history.

KnowledgeSourceStatus

Represents the status and synchronization history of a knowledge source.

KnowledgeSourceSynchronizationError

Represents a document-level indexing error encountered during a knowledge source synchronization run.

LanguageDetectionSkill

A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.

LengthTokenFilter

Removes words that are too long or too short. This token filter is implemented using Apache Lucene.

LimitTokenFilter

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.

ListKnowledgeBasesOptions
ListKnowledgeSourcesOptions
ListSearchResultsPageSettings

Arguments for retrieving the next page of search results.

LuceneStandardAnalyzer

Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter.

LuceneStandardTokenizer

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

MagnitudeScoringFunction

Defines a function that boosts scores based on the magnitude of a numeric field.

MagnitudeScoringParameters

Provides parameter values to a magnitude scoring function.

MappingCharFilter

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.
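
The greedy (longest-pattern-wins) matching and the empty-string replacement can be illustrated with a standalone sketch (illustrative only; the mapping entries are made up, and this is not the Lucene implementation):

```typescript
// Apply character mappings greedily: at each position the longest matching
// pattern wins, and a replacement may be the empty string.
function applyMappings(input: string, mappings: Record<string, string>): string {
  // Sort patterns longest-first so matching is greedy.
  const patterns = Object.keys(mappings).sort((a, b) => b.length - a.length);
  let out = "";
  let i = 0;
  while (i < input.length) {
    const match = patterns.find((p) => input.startsWith(p, i));
    if (match !== undefined) {
      out += mappings[match]; // may append the empty string
      i += match.length;
    } else {
      out += input.charAt(i);
      i += 1;
    }
  }
  return out;
}

// "ph" wins over "p" because the longest pattern matches first:
const filtered = applyMappings("phone pad", { ph: "f", p: "b", " ": "" });
// → "fonebad"
```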

MergeSkill

A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NGramTokenFilter

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

NGramTokenizer

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

NativeBlobSoftDeleteDeletionDetectionPolicy

Defines a data deletion detection policy utilizing Azure Blob Storage's native soft delete feature for deletion detection.

NoAuthAzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer with no authentication.

OcrSkill

A skill that extracts text from image files.

OutputFieldMappingEntry

Output field mapping for a skill.

PIIDetectionSkill

Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it.

PageSettings

Options for the byPage method.

PagedAsyncIterableIterator

An interface that allows async iterable iteration both to completion and by page.

PathHierarchyTokenizer

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PatternAnalyzer

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

PatternCaptureTokenFilter

Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.

PatternReplaceCharFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.
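
The example from the description can be reproduced directly with JavaScript's regex engine (note that the service uses Java regular-expression syntax, which differs from JavaScript's in some details):

```typescript
// Reproducing the documented example: keep the "(aa)" and "(bb)" capture
// groups and replace the whitespace between them with "#".
const input = "aa bb aa bb";
const replaced = input.replace(/(aa)\s+(bb)/g, "$1#$2");
// replaced === "aa#bb aa#bb"
```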

PatternReplaceTokenFilter

A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.

PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

PhoneticTokenFilter

Create tokens for phonetic matches. This token filter is implemented using Apache Lucene.

QueryAnswerResult

An answer is a text passage extracted from the contents of the most relevant documents that matched the query. Answers are extracted from the top search results. Answer candidates are scored and the top answers are selected.

QueryCaptionResult

Captions are the most representative passages from the document relative to the search query. They are often used as a document summary. Captions are only returned for queries of type semantic.

QueryResultDocumentSemanticField

Description of fields that were sent to the semantic enrichment process, as well as how they were used.

QueryResultDocumentSubscores

The breakdown of subscores between the text and vector query components of the search query for this document. Each vector query is shown as a separate object in the same order they were received.

RescoringOptions

Contains the options for rescoring.

ResourceCounter

Represents a resource's usage and quota.

RetrieveOptions
ScalarQuantizationCompression

Contains configuration options specific to the scalar quantization compression method used during indexing and querying.

ScalarQuantizationParameters

Contains the parameters specific to Scalar Quantization.

ScoringProfile

Defines parameters for a search index that influence scoring in search queries.

SearchAlias

Represents an index alias, which describes a mapping from the alias name to an index. The alias name can be used in place of the index name for supported operations.

SearchClientOptions

Client options used to configure AI Search API requests.

SearchDocumentsPageResult

Response containing search page results from an index.

SearchDocumentsResult

Response containing search results from an index.

SearchDocumentsResultBase

Response containing search results from an index.

SearchIndex

Represents a search index definition, which describes the fields and search behavior of an index.

SearchIndexClientOptions

Client options used to configure AI Search API requests.

SearchIndexFieldReference

Field reference for a search index.

SearchIndexKnowledgeSource

Knowledge Source targeting a search index.

SearchIndexKnowledgeSourceParameters

Parameters for search index knowledge source.

SearchIndexKnowledgeSourceParams

Specifies runtime parameters for a search index knowledge source.

SearchIndexStatistics

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

SearchIndexer

Represents an indexer.

SearchIndexerClientOptions

Client options used to configure AI Search API requests.

SearchIndexerDataContainer

Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed.

SearchIndexerDataNoneIdentity

Clears the identity property of a datasource.

SearchIndexerDataSourceConnection

Represents a datasource definition, which can be used to configure an indexer.

SearchIndexerDataUserAssignedIdentity

Specifies the identity for a datasource to use.

SearchIndexerError

Represents an item- or document-level indexing error.

SearchIndexerIndexProjection

Definition of additional projections to secondary search indexes.

SearchIndexerIndexProjectionParameters

A dictionary of index projection-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

SearchIndexerIndexProjectionSelector

Description for what data to store in the designated search index.

SearchIndexerKnowledgeStore

Definition of additional projections of enriched data to Azure Blob storage, tables, or files.

SearchIndexerKnowledgeStoreBlobProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreFileProjectionSelector

Projection definition for what data to store in Azure Files.

SearchIndexerKnowledgeStoreObjectProjectionSelector

Projection definition for what data to store in Azure Blob.

SearchIndexerKnowledgeStoreParameters

A dictionary of knowledge store-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

SearchIndexerKnowledgeStoreProjection

Container object for various projection selectors.

SearchIndexerKnowledgeStoreProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreTableProjectionSelector

Description for what data to store in Azure Tables.

SearchIndexerLimits

Represents the limits that can be applied to an indexer.

SearchIndexerSkillset

A list of skills.

SearchIndexerStatus

Represents the current status and execution history of an indexer.

SearchIndexerWarning

Represents an item-level warning.

SearchIndexingBufferedSenderOptions

Options for SearchIndexingBufferedSender.

SearchResourceEncryptionKey

A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure AI Search, such as indexes and synonym maps.

SearchServiceStatistics

Response from a get service statistics request. If successful, it includes service level counters and limits.

SemanticConfiguration

Defines a specific configuration to be used in the context of semantic capabilities.

SemanticDebugInfo

Debug options for semantic search queries.

SemanticField

A field that is used as part of the semantic configuration.

SemanticPrioritizedFields

Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers.

SemanticSearch

Defines parameters for a search index that influence semantic capabilities.

SemanticSearchOptions

Defines options for semantic search queries.

SentimentSkill

Text analytics positive-negative sentiment analysis, scored as a floating-point value in the range 0 to 1.

SentimentSkillV3

Using the Text Analytics API, evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level.

ServiceCounters

Represents service-level resource counters and quotas.

ServiceLimits

Represents various service level limits.

ShaperSkill

A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields).

ShingleTokenFilter

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.
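
A standalone sketch of two-word shingling shows the token combinations this filter emits (illustrative only, not the Lucene implementation):

```typescript
// Emit shingles: overlapping runs of `size` adjacent tokens, each joined
// into a single token with `sep`.
function shingles(tokens: string[], size = 2, sep = " "): string[] {
  const out: string[] = [];
  for (let i = 0; i + size <= tokens.length; i++) {
    out.push(tokens.slice(i, i + size).join(sep));
  }
  return out;
}

const pairs = shingles(["please", "divide", "this", "sentence"]);
// → ["please divide", "divide this", "this sentence"]
```

Shingles are often indexed alongside single terms to improve phrase matching and relevance for multi-word queries.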

SimpleField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SingleVectorFieldResult

A single vector field result.

SnowballTokenFilter

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

SoftDeleteColumnDeletionDetectionPolicy

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.
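
As a hedged sketch, such a policy appears in a datasource definition roughly as follows (the column name and marker value are illustrative examples; `odatatype` is the discriminator property this package uses):

```typescript
// Example soft-delete policy: rows where the designated column holds the
// marker value are treated as deleted. "IsDeleted" and "true" are examples.
const deletionPolicy = {
  odatatype:
    "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
  softDeleteColumnName: "IsDeleted",
  softDeleteMarkerValue: "true",
};
```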

SplitSkill

A skill to split a string into chunks of text.

SqlIntegratedChangeTrackingPolicy

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StemmerOverrideTokenFilter

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/StemmerOverrideFilter.html

StemmerTokenFilter

Language specific stemming filter. This token filter is implemented using Apache Lucene. See https://learn.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters

StopAnalyzer

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.

StopwordsTokenFilter

Removes stop words from a token stream. This token filter is implemented using Apache Lucene. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html

SuggestDocumentsResult

Response containing suggestion query results from an index.

SuggestRequest

Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors.

Suggester

Defines how the Suggest API should apply to a group of fields in the index.

SynchronizationState

Represents the current state of an ongoing synchronization that spans multiple indexer runs.

SynonymMap

Represents a synonym map definition.

SynonymTokenFilter

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.

TagScoringFunction

Defines a function that boosts scores of documents with string values matching a given list of tags.

TagScoringParameters

Provides parameter values to a tag scoring function.

TextResult

The BM25 or Classic score for the text portion of the query.

TextTranslationSkill

A skill to translate text from one language to another.

TextWeights

Defines weights on index fields for which matches should boost scoring in search queries.

TokenAuthAzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer with a managed identity.

TruncateTokenFilter

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.

UaxUrlEmailTokenizer

Tokenizes urls and emails as one token. This tokenizer is implemented using Apache Lucene.

UniqueTokenFilter

Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene.

VectorSearch

Contains configuration options related to vector search.

VectorSearchOptions

Defines options for vector search queries.

VectorSearchProfile

Defines a combination of configurations to use with vector search.

VectorizableImageBinaryQuery

The query parameters to use for vector search when a base 64 encoded binary of an image that needs to be vectorized is provided.

VectorizableImageUrlQuery

The query parameters to use for vector search when a URL of an image that needs to be vectorized is provided.

VectorizableTextQuery

The query parameters to use for vector search when a text value that needs to be vectorized is provided.

VectorizedQuery

The query parameters to use for vector search when a raw vector value is provided.

VectorsDebugInfo

Contains debugging information specific to vector and hybrid search.

WebApiParameters

Specifies the properties for connecting to a user-defined vectorizer.

WebApiSkill

A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.

WebApiVectorizer

Specifies a user-defined vectorizer for generating the vector embedding of a query string. Integration of an external vectorizer is achieved using the custom Web API interface of a skillset.

WebKnowledgeSource

Knowledge Source targeting web results.

WebKnowledgeSourceDomain

Configuration for web knowledge source domain.

WebKnowledgeSourceDomains

Domain allow/block configuration for web knowledge source.

WebKnowledgeSourceParameters

Parameters for web knowledge source.

WebKnowledgeSourceParams

Specifies runtime parameters for a web knowledge source.

WordDelimiterTokenFilter

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.

Type Aliases

AIFoundryModelCatalogName

The name of the embedding model from the Azure AI Foundry Catalog that will be called.
KnownAIFoundryModelCatalogName can be used interchangeably with AIFoundryModelCatalogName; this enum contains the known values that the service supports.

Known values supported by the service

OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32: OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32
OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336: OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336
Facebook-DinoV2-Image-Embeddings-ViT-Base: Facebook-DinoV2-Image-Embeddings-ViT-Base
Facebook-DinoV2-Image-Embeddings-ViT-Giant: Facebook-DinoV2-Image-Embeddings-ViT-Giant
Cohere-embed-v3-english: Cohere-embed-v3-english
Cohere-embed-v3-multilingual: Cohere-embed-v3-multilingual
Cohere-embed-v4: Cohere embed v4 model for generating embeddings from both text and images.

AliasIterator

An iterator for listing the aliases that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.
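
The per-item vs. per-page behavior described above can be illustrated with a small synchronous stand-in (the real AliasIterator is asynchronous and issues service requests; only the iteration pattern is mirrored here, and all alias names are invented):

```typescript
// Synchronous mock of the SDK's paged-iterator pattern.
interface Alias { name: string; indexes: string[]; }

function makeAliasIterator(pages: Alias[][]) {
  // Per-item iteration walks every alias across all pages.
  function* items() { for (const page of pages) yield* page; }
  // byPage() yields one whole page per step (one simulated request each).
  function* byPage() { for (const page of pages) yield page; }
  return Object.assign(items(), { byPage });
}

const iterator = makeAliasIterator([
  [{ name: "hotels-alias", indexes: ["hotels-v2"] }],
  [
    { name: "docs-alias", indexes: ["docs-v7"] },
    { name: "blog-alias", indexes: ["blog-v1"] },
  ],
]);

const pageSizes = [...iterator.byPage()].map((page) => page.length);
const names = [...iterator].map((alias) => alias.name);

console.log(pageSizes, names);
```

With the real client you would `for await` over the iterator (or over `.byPage()`) instead of spreading it.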

AnalyzeTextOptions

Options for analyze text operation.

AutocompleteMode

Specifies the mode for Autocomplete. The default is 'oneTerm'. Use 'twoTerms' to get shingles and 'oneTermWithContext' to use the current context in producing autocomplete terms.

AutocompleteOptions

Options for retrieving completion text for a partial searchText.
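
A hedged sketch of an options object for an autocomplete call follows; the property names mirror the REST parameters (`autocompleteMode`, `useFuzzyMatching`, `top`), but check your SDK version's AutocompleteOptions type for the exact shape:

```typescript
// Known autocomplete modes, as listed above.
type AutocompleteModeSketch = "oneTerm" | "twoTerms" | "oneTermWithContext";

const autocompleteOptions = {
  autocompleteMode: "twoTerms" as AutocompleteModeSketch, // return shingles
  useFuzzyMatching: true, // tolerate small typos in the partial searchText
  top: 5,                 // cap the number of completions returned
};

// With the real SDK this would be passed alongside the partial text and a
// suggester name, e.g. client.autocomplete("lux", "sg", autocompleteOptions),
// where "sg" is a hypothetical suggester defined on the index.
console.log(autocompleteOptions.autocompleteMode);
```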

AzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer.

AzureOpenAIModelName

The Azure OpenAI model name that will be called.
KnownAzureOpenAIModelName can be used interchangeably with AzureOpenAIModelName; this enum contains the known values that the service supports.

Known values supported by the service

text-embedding-ada-002: TextEmbeddingAda002 model.
text-embedding-3-large: TextEmbedding3Large model.
text-embedding-3-small: TextEmbedding3Small model.
gpt-5-mini: Gpt5Mini model.
gpt-5-nano: Gpt5Nano model.
gpt-5.4-mini: Gpt54Mini model.
gpt-5.4-nano: Gpt54Nano model.

BaseKnowledgeRetrievalIntent

Alias for KnowledgeRetrievalIntentUnion

BlobIndexerDataToExtract
BlobIndexerImageAction
BlobIndexerPDFTextRotationAlgorithm
BlobIndexerParsingMode
CharFilter

Contains the possible cases for CharFilter.

CharFilterName

Defines the names of all character filters supported by the search engine.
KnownCharFilterName can be used interchangeably with CharFilterName; this enum contains the known values that the service supports.

Known values supported by the service

html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html

ChatCompletionExtraParametersBehavior

Specifies how 'extraParameters' should be handled by Azure AI Foundry. Defaults to 'error'.
KnownChatCompletionExtraParametersBehavior can be used interchangeably with ChatCompletionExtraParametersBehavior; this enum contains the known values that the service supports.

Known values supported by the service

passThrough: Passes any extra parameters directly to the model.
drop: Drops all extra parameters.
error: Raises an error if any extra parameter is present.

ChatCompletionResponseFormatType

Specifies how the LLM should format the response.
KnownChatCompletionResponseFormatType can be used interchangeably with ChatCompletionResponseFormatType; this enum contains the known values that the service supports.

Known values supported by the service

text: Plain text response format.
jsonObject: Arbitrary JSON object response format.
jsonSchema: JSON schema-adhering response format.

CjkBigramTokenFilterScripts

Scripts that can be ignored by CjkBigramTokenFilter.

CognitiveServicesAccount

Contains the possible cases for CognitiveServicesAccount.

ComplexDataType

Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)'

ContentUnderstandingSkillChunkingUnit

Controls the cardinality of the chunk unit. Default is 'characters'.
KnownContentUnderstandingSkillChunkingUnit can be used interchangeably with ContentUnderstandingSkillChunkingUnit; this enum contains the known values that the service supports.

Known values supported by the service

characters: Specifies chunk by characters.

ContentUnderstandingSkillExtractionOptions

Controls the cardinality of the content extracted from the document by the skill.
KnownContentUnderstandingSkillExtractionOptions can be used interchangeably with ContentUnderstandingSkillExtractionOptions; this enum contains the known values that the service supports.

Known values supported by the service

images: Specify that image content should be extracted from the document.
locationMetadata: Specify that location metadata should be extracted from the document.

ContinuablePage

An interface that describes a page of results.

CountDocumentsOptions

Options for performing the count operation on the index.

CreateAliasOptions

Options for create alias operation.

CreateDataSourceConnectionOptions

Options for create datasource operation.

CreateIndexOptions

Options for create index operation.

CreateIndexerOptions

Options for create indexer operation.

CreateSkillsetOptions

Options for create skillset operation.

CreateSynonymMapOptions

Options for create synonymmap operation.

CustomEntityLookupSkillLanguage
DataChangeDetectionPolicy

Contains the possible cases for DataChangeDetectionPolicy.

DataDeletionDetectionPolicy

Contains the possible cases for DataDeletionDetectionPolicy.

DeleteDocumentsOptions

Options for the delete documents operation.

DocumentIntelligenceLayoutSkillChunkingUnit

Controls the cardinality of the chunk unit. Default is 'characters'.
KnownDocumentIntelligenceLayoutSkillChunkingUnit can be used interchangeably with DocumentIntelligenceLayoutSkillChunkingUnit; this enum contains the known values that the service supports.

Known values supported by the service

characters: Specifies chunk by characters.

DocumentIntelligenceLayoutSkillExtractionOptions

Controls the cardinality of the content extracted from the document by the skill.
KnownDocumentIntelligenceLayoutSkillExtractionOptions can be used interchangeably with DocumentIntelligenceLayoutSkillExtractionOptions; this enum contains the known values that the service supports.

Known values supported by the service

images: Specify that image content should be extracted from the document.
locationMetadata: Specify that location metadata should be extracted from the document.

DocumentIntelligenceLayoutSkillMarkdownHeaderDepth

The depth of headers in the markdown output. Default is h6.
KnownDocumentIntelligenceLayoutSkillMarkdownHeaderDepth can be used interchangeably with DocumentIntelligenceLayoutSkillMarkdownHeaderDepth; this enum contains the known values that the service supports.

Known values supported by the service

h1: Header level 1.
h2: Header level 2.
h3: Header level 3.
h4: Header level 4.
h5: Header level 5.
h6: Header level 6.

DocumentIntelligenceLayoutSkillOutputFormat

Controls the cardinality of the output format. Default is 'markdown'.
KnownDocumentIntelligenceLayoutSkillOutputFormat can be used interchangeably with DocumentIntelligenceLayoutSkillOutputFormat; this enum contains the known values that the service supports.

Known values supported by the service

text: Specify the format of the output as text.
markdown: Specify the format of the output as markdown.

DocumentIntelligenceLayoutSkillOutputMode

Controls the cardinality of the output produced by the skill. Default is 'oneToMany'.
KnownDocumentIntelligenceLayoutSkillOutputMode can be used interchangeably with DocumentIntelligenceLayoutSkillOutputMode; this enum contains the known values that the service supports.

Known values supported by the service

oneToMany: Specify that the output should be parsed as 'oneToMany'.

EdgeNGramTokenFilterSide

Specifies which side of the input an n-gram should be generated from.

EntityCategory

A string indicating what entity categories to return.
KnownEntityCategory can be used interchangeably with EntityCategory; this enum contains the known values that the service supports.

Known values supported by the service

location: Entities describing a physical location.
organization: Entities describing an organization.
person: Entities describing a person.
quantity: Entities describing a quantity.
datetime: Entities describing a date and time.
url: Entities describing a URL.
email: Entities describing an email address.

EntityRecognitionSkillLanguage

The language codes supported for input text by EntityRecognitionSkill.
KnownEntityRecognitionSkillLanguage can be used interchangeably with EntityRecognitionSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

ar: Arabic
cs: Czech
zh-Hans: Chinese-Simplified
zh-Hant: Chinese-Traditional
da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
hu: Hungarian
it: Italian
ja: Japanese
ko: Korean
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
pt-BR: Portuguese (Brazil)
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish

ExcludedODataTypes
ExhaustiveKnnAlgorithmConfiguration

Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index.

ExtractDocumentKey
GetAliasOptions

Options for get alias operation.

GetDataSourceConnectionOptions

Options for get datasource operation.

GetIndexOptions

Options for get index operation.

GetIndexStatisticsOptions

Options for get index statistics operation.

GetIndexerOptions

Options for get indexer operation.

GetIndexerStatusOptions

Options for get indexer status operation.

GetServiceStatisticsOptions

Options for get service statistics operation.

GetSkillSetOptions

Options for get skillset operation.

GetSynonymMapsOptions

Options for get synonymmaps operation.

HnswAlgorithmConfiguration

Contains configuration options specific to the hnsw approximate nearest neighbors algorithm used during indexing time.

ImageAnalysisSkillLanguage
ImageDetail
IndexActionType

The operation to perform on a document in an indexing batch.

IndexDocumentsAction

Represents an index action that operates on a document.
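
A hedged sketch of a batch of per-document actions follows. The `__actionType` discriminator mirrors the SDK's IndexDocumentsAction; the hotel document fields are invented for this example:

```typescript
// The four operations an indexing batch can apply to a document.
type IndexActionTypeSketch = "upload" | "merge" | "mergeOrUpload" | "delete";

interface HotelDoc {
  hotelId: string;  // the index's key field (hypothetical name)
  name?: string;
}

// Each action is the document itself plus an action-type discriminator.
type IndexActionSketch = { __actionType: IndexActionTypeSketch } & HotelDoc;

const batch: IndexActionSketch[] = [
  { __actionType: "upload", hotelId: "1", name: "Stay-Kay City Hotel" },
  { __actionType: "merge", hotelId: "2", name: "Updated Name" },
  { __actionType: "delete", hotelId: "3" }, // delete needs only the key
];

console.log(batch.length);
```

With the real client, helpers such as IndexDocumentsBatch build these actions for you rather than being written by hand.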

IndexIterator

An iterator for listing the indexes that exist in the Search service. It will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexNameIterator

An iterator for listing the indexes that exist in the Search service. It will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexProjectionMode

Defines behavior of the index projections in relation to the rest of the indexer.
KnownIndexProjectionMode can be used interchangeably with IndexProjectionMode; this enum contains the known values that the service supports.

Known values supported by the service

skipIndexingParentDocuments: The source document will be skipped from writing into the indexer's target index.
includeIndexingParentDocuments: The source document will be written into the indexer's target index. This is the default pattern.

IndexerExecutionEnvironment
IndexerExecutionStatus

Represents the status of an individual indexer execution.

IndexerResyncOption

Options with various types of permission data to index.
KnownIndexerResyncOption can be used interchangeably with IndexerResyncOption; this enum contains the known values that the service supports.

Known values supported by the service

permissions: Indexer to re-ingest pre-selected permissions data from data source to index.

IndexerStatus

Represents the overall indexer status.

KeyPhraseExtractionSkillLanguage
KnowledgeBaseActivityRecord

Alias for KnowledgeBaseActivityRecordUnion

KnowledgeBaseActivityRecordType

The type of activity record.
KnownKnowledgeBaseActivityRecordType can be used interchangeably with KnowledgeBaseActivityRecordType; this enum contains the known values that the service supports.

Known values supported by the service

searchIndex: Search index retrieval activity.
azureBlob: Azure Blob retrieval activity.
indexedOneLake: Indexed OneLake retrieval activity.
web: Web retrieval activity.
modelWebSummarization: LLM web summarization activity.
agenticReasoning: Agentic reasoning activity.

KnowledgeBaseIterator

An iterator for listing the knowledge bases that exist in the Search service. It will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

KnowledgeBaseMessageContent

Alias for KnowledgeBaseMessageContentUnion

KnowledgeBaseMessageContentType

The type of message content.
KnownKnowledgeBaseMessageContentType can be used interchangeably with KnowledgeBaseMessageContentType; this enum contains the known values that the service supports.

Known values supported by the service

text: Text message content kind.
image: Image message content kind.

KnowledgeBaseModel
KnowledgeBaseModelKind

The AI model to be used for query planning.
KnownKnowledgeBaseModelKind can be used interchangeably with KnowledgeBaseModelKind; this enum contains the known values that the service supports.

Known values supported by the service

azureOpenAI: Use Azure OpenAI models for query planning.

KnowledgeBaseReference

Alias for KnowledgeBaseReferenceUnion

KnowledgeBaseReferenceType

The type of reference.
KnownKnowledgeBaseReferenceType can be used interchangeably with KnowledgeBaseReferenceType; this enum contains the known values that the service supports.

Known values supported by the service

searchIndex: Search index document reference.
azureBlob: Azure Blob document reference.
indexedOneLake: Indexed OneLake document reference.
web: Web document reference.

KnowledgeRetrievalIntentType

The kind of knowledge base configuration to use.
KnownKnowledgeRetrievalIntentType can be used interchangeably with KnowledgeRetrievalIntentType; this enum contains the known values that the service supports.

Known values supported by the service

semantic: A natural language semantic query intent.

KnowledgeRetrievalReasoningEffortKind

The amount of effort to use during retrieval.
KnownKnowledgeRetrievalReasoningEffortKind can be used interchangeably with KnowledgeRetrievalReasoningEffortKind; this enum contains the known values that the service supports.

Known values supported by the service

minimal: Does not perform any source selections, query planning, or iterative search.

KnowledgeRetrievalReasoningEffortUnion

Alias for KnowledgeRetrievalReasoningEffortUnion

KnowledgeSource
KnowledgeSourceContentExtractionMode

Optional content extraction mode. Default is 'minimal'.
KnownKnowledgeSourceContentExtractionMode can be used interchangeably with KnowledgeSourceContentExtractionMode; this enum contains the known values that the service supports.

Known values supported by the service

minimal: Extracts only essential metadata while deferring most content processing.
standard: Performs the full default content extraction pipeline.

KnowledgeSourceIterator

An iterator for listing the knowledge sources that exist in the Search service. It will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

KnowledgeSourceKind

The kind of the knowledge source.
KnownKnowledgeSourceKind can be used interchangeably with KnowledgeSourceKind; this enum contains the known values that the service supports.

Known values supported by the service

searchIndex: A knowledge source that reads data from a Search Index.
azureBlob: A knowledge source that reads and ingests data from Azure Blob Storage into a Search Index.
indexedOneLake: A knowledge source that reads data from indexed OneLake.
web: A knowledge source that reads data from the web.

KnowledgeSourceParams

Alias for KnowledgeSourceParamsUnion

KnowledgeSourceSynchronizationStatus

The current synchronization status of the knowledge source.
KnownKnowledgeSourceSynchronizationStatus can be used interchangeably with KnowledgeSourceSynchronizationStatus; this enum contains the known values that the service supports.

Known values supported by the service

creating: The knowledge source is being provisioned.
active: The knowledge source is active and synchronization runs are occurring.
deleting: The knowledge source is being deleted and synchronization is paused.

KnowledgeSourceVectorizer
LexicalAnalyzer

Contains the possible cases for Analyzer.

LexicalAnalyzerName

Defines the names of all text analyzers supported by the search engine.
KnownLexicalAnalyzerName can be used interchangeably with LexicalAnalyzerName; this enum contains the known values that the service supports.

Known values supported by the service

ar.microsoft: Microsoft analyzer for Arabic.
ar.lucene: Lucene analyzer for Arabic.
hy.lucene: Lucene analyzer for Armenian.
bn.microsoft: Microsoft analyzer for Bangla.
eu.lucene: Lucene analyzer for Basque.
bg.microsoft: Microsoft analyzer for Bulgarian.
bg.lucene: Lucene analyzer for Bulgarian.
ca.microsoft: Microsoft analyzer for Catalan.
ca.lucene: Lucene analyzer for Catalan.
zh-Hans.microsoft: Microsoft analyzer for Chinese (Simplified).
zh-Hans.lucene: Lucene analyzer for Chinese (Simplified).
zh-Hant.microsoft: Microsoft analyzer for Chinese (Traditional).
zh-Hant.lucene: Lucene analyzer for Chinese (Traditional).
hr.microsoft: Microsoft analyzer for Croatian.
cs.microsoft: Microsoft analyzer for Czech.
cs.lucene: Lucene analyzer for Czech.
da.microsoft: Microsoft analyzer for Danish.
da.lucene: Lucene analyzer for Danish.
nl.microsoft: Microsoft analyzer for Dutch.
nl.lucene: Lucene analyzer for Dutch.
en.microsoft: Microsoft analyzer for English.
en.lucene: Lucene analyzer for English.
et.microsoft: Microsoft analyzer for Estonian.
fi.microsoft: Microsoft analyzer for Finnish.
fi.lucene: Lucene analyzer for Finnish.
fr.microsoft: Microsoft analyzer for French.
fr.lucene: Lucene analyzer for French.
gl.lucene: Lucene analyzer for Galician.
de.microsoft: Microsoft analyzer for German.
de.lucene: Lucene analyzer for German.
el.microsoft: Microsoft analyzer for Greek.
el.lucene: Lucene analyzer for Greek.
gu.microsoft: Microsoft analyzer for Gujarati.
he.microsoft: Microsoft analyzer for Hebrew.
hi.microsoft: Microsoft analyzer for Hindi.
hi.lucene: Lucene analyzer for Hindi.
hu.microsoft: Microsoft analyzer for Hungarian.
hu.lucene: Lucene analyzer for Hungarian.
is.microsoft: Microsoft analyzer for Icelandic.
id.microsoft: Microsoft analyzer for Indonesian (Bahasa).
id.lucene: Lucene analyzer for Indonesian.
ga.lucene: Lucene analyzer for Irish.
it.microsoft: Microsoft analyzer for Italian.
it.lucene: Lucene analyzer for Italian.
ja.microsoft: Microsoft analyzer for Japanese.
ja.lucene: Lucene analyzer for Japanese.
kn.microsoft: Microsoft analyzer for Kannada.
ko.microsoft: Microsoft analyzer for Korean.
ko.lucene: Lucene analyzer for Korean.
lv.microsoft: Microsoft analyzer for Latvian.
lv.lucene: Lucene analyzer for Latvian.
lt.microsoft: Microsoft analyzer for Lithuanian.
ml.microsoft: Microsoft analyzer for Malayalam.
ms.microsoft: Microsoft analyzer for Malay (Latin).
mr.microsoft: Microsoft analyzer for Marathi.
nb.microsoft: Microsoft analyzer for Norwegian (Bokmål).
no.lucene: Lucene analyzer for Norwegian.
fa.lucene: Lucene analyzer for Persian.
pl.microsoft: Microsoft analyzer for Polish.
pl.lucene: Lucene analyzer for Polish.
pt-BR.microsoft: Microsoft analyzer for Portuguese (Brazil).
pt-BR.lucene: Lucene analyzer for Portuguese (Brazil).
pt-PT.microsoft: Microsoft analyzer for Portuguese (Portugal).
pt-PT.lucene: Lucene analyzer for Portuguese (Portugal).
pa.microsoft: Microsoft analyzer for Punjabi.
ro.microsoft: Microsoft analyzer for Romanian.
ro.lucene: Lucene analyzer for Romanian.
ru.microsoft: Microsoft analyzer for Russian.
ru.lucene: Lucene analyzer for Russian.
sr-cyrillic.microsoft: Microsoft analyzer for Serbian (Cyrillic).
sr-latin.microsoft: Microsoft analyzer for Serbian (Latin).
sk.microsoft: Microsoft analyzer for Slovak.
sl.microsoft: Microsoft analyzer for Slovenian.
es.microsoft: Microsoft analyzer for Spanish.
es.lucene: Lucene analyzer for Spanish.
sv.microsoft: Microsoft analyzer for Swedish.
sv.lucene: Lucene analyzer for Swedish.
ta.microsoft: Microsoft analyzer for Tamil.
te.microsoft: Microsoft analyzer for Telugu.
th.microsoft: Microsoft analyzer for Thai.
th.lucene: Lucene analyzer for Thai.
tr.microsoft: Microsoft analyzer for Turkish.
tr.lucene: Lucene analyzer for Turkish.
uk.microsoft: Microsoft analyzer for Ukrainian.
ur.microsoft: Microsoft analyzer for Urdu.
vi.microsoft: Microsoft analyzer for Vietnamese.
standard.lucene: Standard Lucene analyzer.
standardasciifolding.lucene: Standard ASCII Folding Lucene analyzer. See https://learn.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#Analyzers
keyword: Treats the entire content of a field as a single token. This is useful for data like zip codes, ids, and some product names. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html
pattern: Flexibly separates text into terms via a regular expression pattern. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html
simple: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/SimpleAnalyzer.html
stop: Divides text at non-letters; applies the lowercase and stopword token filters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html
whitespace: An analyzer that uses the whitespace tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceAnalyzer.html

LexicalNormalizer

Contains the possible cases for LexicalNormalizer.

LexicalNormalizerName

Defines the names of all text normalizers supported by the search engine.
KnownLexicalNormalizerName can be used interchangeably with LexicalNormalizerName; this enum contains the known values that the service supports.

Known values supported by the service

asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
lowercase: Normalizes token text to lowercase. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html
standard: Standard normalizer, which consists of lowercase and asciifolding. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html
uppercase: Normalizes token text to uppercase. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html

LexicalTokenizer

Contains the possible cases for Tokenizer.

LexicalTokenizerName

Defines the names of all tokenizers supported by the search engine.
KnownLexicalTokenizerName can be used interchangeably with LexicalTokenizerName; this enum contains the known values that the service supports.

Known values supported by the service

classic: Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html
edgeNGram: Tokenizes the input from an edge into n-grams of the given size(s). See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html
keyword_v2: Emits the entire input as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html
letter: Divides text at non-letters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html
lowercase: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html
microsoft_language_tokenizer: Divides text using language-specific rules.
microsoft_language_stemming_tokenizer: Divides text using language-specific rules and reduces words to their base forms.
nGram: Tokenizes the input into n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html
path_hierarchy_v2: Tokenizer for path-like hierarchies. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html
pattern: Tokenizer that uses regex pattern matching to construct distinct tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html
standard_v2: Standard Lucene analyzer; composed of the standard tokenizer, lowercase filter and stop filter. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html
uax_url_email: Tokenizes urls and emails as one token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html
whitespace: Divides text at whitespace. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html

ListAliasesOptions

Options for list aliases operation.

ListDataSourceConnectionsOptions

Options for a list data sources operation.

ListIndexersOptions

Options for a list indexers operation.

ListIndexesOptions

Options for a list indexes operation.

ListSkillsetsOptions

Options for a list skillsets operation.

ListSynonymMapsOptions

Options for a list synonymMaps operation.

MarkdownHeaderDepth

Specifies the max header depth that will be considered while grouping markdown content. Default is h6.
KnownMarkdownHeaderDepth can be used interchangeably with MarkdownHeaderDepth; this enum contains the known values that the service supports.

Known values supported by the service

h1: Indicates that headers up to a level of h1 will be considered while grouping markdown content.
h2: Indicates that headers up to a level of h2 will be considered while grouping markdown content.
h3: Indicates that headers up to a level of h3 will be considered while grouping markdown content.
h4: Indicates that headers up to a level of h4 will be considered while grouping markdown content.
h5: Indicates that headers up to a level of h5 will be considered while grouping markdown content.
h6: Indicates that headers up to a level of h6 will be considered while grouping markdown content. This is the default.

MarkdownParsingSubmode

Specifies the submode that will determine whether a markdown file will be parsed into exactly one search document or multiple search documents. Default is oneToMany.
KnownMarkdownParsingSubmode can be used interchangeably with MarkdownParsingSubmode; this enum contains the known values that the service supports.

Known values supported by the service

oneToMany: Indicates that each section of the markdown file (up to a specified depth) will be parsed into individual search documents. This can result in a single markdown file producing multiple search documents. This is the default sub-mode.
oneToOne: Indicates that each markdown file will be parsed into a single search document.

MergeDocumentsOptions

Options for the merge documents operation.

MergeOrUploadDocumentsOptions

Options for the merge or upload documents operation.

MicrosoftStemmingTokenizerLanguage

Lists the languages supported by the Microsoft language stemming tokenizer.

MicrosoftTokenizerLanguage

Lists the languages supported by the Microsoft language tokenizer.

NarrowedModel

Narrows the Model type to include only the selected Fields.

OcrLineEnding

Defines the sequence of characters to use between the lines of text recognized by the OCR skill. The default value is "space".
KnownOcrLineEnding can be used interchangeably with OcrLineEnding; this enum contains the known values that the service supports.

Known values supported by the service

space: Lines are separated by a single space character.
carriageReturn: Lines are separated by a carriage return ('\r') character.
lineFeed: Lines are separated by a single line feed ('\n') character.
carriageReturnLineFeed: Lines are separated by a carriage return and a line feed ('\r\n') character.

OcrSkillLanguage
PIIDetectionSkillMaskingMode
PhoneticEncoder

Identifies the type of phonetic encoder to use with a PhoneticTokenFilter.

QueryAnswer

A value that specifies whether answers should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set to extractive, the query returns answers extracted from key passages in the highest ranked documents.

QueryCaption

A value that specifies whether captions should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set, the query returns captions extracted from key passages in the highest ranked documents. When Captions is 'extractive', highlighting is enabled by default. Defaults to 'none'.

QueryDebugMode

Enables a debugging tool that can be used to further explore your search results. You can enable multiple debug modes simultaneously by separating them with a | character, for example: semantic|queryRewrites.
KnownQueryDebugMode can be used interchangeably with QueryDebugMode; this enum contains the known values that the service supports.

Known values supported by the service

disabled: No query debugging information will be returned.
semantic: Allows the user to further explore their reranked results.
vector: Allows the user to further explore their hybrid and vector query results.
queryRewrites: Allows the user to explore the list of query rewrites generated for their search request.
innerHits: Allows the user to retrieve scoring information regarding vectors matched within a collection of complex types.
all: Turn on all debug options.
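
Since debug modes combine with a | separator, a small illustrative helper (not part of the SDK) can join and sanity-check them against the known values listed above:

```typescript
// The known debug mode values from the listing above.
const knownDebugModes = new Set([
  "disabled", "semantic", "vector", "queryRewrites", "innerHits", "all",
]);

// Joins modes with "|" after validating each against the known set.
function combineDebugModes(...modes: string[]): string {
  for (const mode of modes) {
    if (!knownDebugModes.has(mode)) {
      throw new Error(`unknown debug mode: ${mode}`);
    }
  }
  return modes.join("|");
}

const debug = combineDebugModes("semantic", "queryRewrites");
console.log(debug); // "semantic|queryRewrites"
```

The resulting string would be supplied as the debug value on a search request.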

QueryRewrites

Defines options for query rewrites.

QueryType

Specifies the syntax of the search query. The default is 'simple'. Use 'full' if your query uses the Lucene query syntax and 'semantic' if query syntax is not needed.
KnownQueryType can be used interchangeably with QueryType; this enum contains the known values that the service supports.

Known values supported by the service

simple: Uses the simple query syntax for searches. Search text is interpreted using a simple query language that allows for symbols such as +, * and "". Queries are evaluated across all searchable fields by default, unless the searchFields parameter is specified.
full: Uses the full Lucene query syntax for searches. Search text is interpreted using the Lucene query language which allows field-specific and weighted searches, as well as other advanced features.
semantic: Best suited for queries expressed in natural language as opposed to keywords. Improves precision of search results by re-ranking the top search results using a ranking model trained on the Web corpus.
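The query type is passed alongside the search text in the request options. A sketch of two of the variants, using a local stand-in type for the relevant subset of the options (illustrative only; the real options interface carries many more properties):

```typescript
// Local stand-in for the queryType-related subset of the search options.
interface QueryTypeOptions {
  queryType: "simple" | "full" | "semantic";
  searchFields?: string[];
}

// Full Lucene syntax enables field-scoped and weighted terms such as
// title:hotel^2, which the simple syntax would treat as plain text.
const fullQuery: QueryTypeOptions = {
  queryType: "full",
  searchFields: ["title", "description"],
};

// Semantic queries take natural-language text rather than operator syntax.
const semanticQuery: QueryTypeOptions = { queryType: "semantic" };

console.log(fullQuery.queryType, semanticQuery.queryType); // full semantic
```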

RankingOrder

Represents the score to use for the sort order of documents.
KnownRankingOrder can be used interchangeably with RankingOrder; this enum contains the known values that the service supports.

Known values supported by the service

BoostedRerankerScore: Sets sort order as BoostedRerankerScore
RerankerScore: Sets sort order as RerankerScore

RegexFlags
ResetIndexerOptions

Options for reset indexer operation.

RunIndexerOptions

Options for run indexer operation.

ScoringFunction

Contains the possible cases for ScoringFunction.

ScoringFunctionAggregation

Defines the aggregation function used to combine the results of all the scoring functions in a scoring profile.

ScoringFunctionInterpolation

Defines the function used to interpolate score boosting across a range of documents.

ScoringStatistics

A value that specifies whether we want to calculate scoring statistics (such as document frequency) globally for more consistent scoring, or locally, for lower latency. The default is 'local'. Use 'global' to aggregate scoring statistics globally before scoring. Using global scoring statistics can increase latency of search queries.

SearchField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SearchFieldArray

If TModel is an untyped object, an untyped string array; otherwise, the slash-delimited fields of TModel.

SearchFieldDataType

Defines values for SearchFieldDataType.

Known values supported by the service:

Edm.String: Indicates that a field contains a string.

Edm.Int32: Indicates that a field contains a 32-bit signed integer.

Edm.Int64: Indicates that a field contains a 64-bit signed integer.

Edm.Double: Indicates that a field contains an IEEE double-precision floating point number.

Edm.Boolean: Indicates that a field contains a Boolean value (true or false).

Edm.DateTimeOffset: Indicates that a field contains a date/time value, including timezone information.

Edm.GeographyPoint: Indicates that a field contains a geo-location in terms of longitude and latitude.

Edm.ComplexType: Indicates that a field contains one or more complex objects that in turn have sub-fields of other types.

Edm.Single: Indicates that a field contains a single-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Single).

Edm.Half: Indicates that a field contains a half-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Half).

Edm.Int16: Indicates that a field contains a 16-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Int16).

Edm.SByte: Indicates that a field contains an 8-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.SByte).

Edm.Byte: Indicates that a field contains an 8-bit unsigned integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Byte).
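A field list that exercises several of these EDM types might look like the following sketch; the `Field` shape below is a simplified local stand-in for SearchField, and the field names are invented:

```typescript
// Simplified stand-in for SearchField (the real interface in this package
// also carries searchable/filterable/sortable flags and more).
interface Field {
  name: string;
  type: string;
  key?: boolean;
}

const fields: Field[] = [
  { name: "hotelId", type: "Edm.String", key: true },
  { name: "rating", type: "Edm.Double" },
  { name: "lastRenovated", type: "Edm.DateTimeOffset" },
  { name: "location", type: "Edm.GeographyPoint" },
  // Narrow numeric types such as Edm.Single are only valid inside a collection.
  { name: "embedding", type: "Collection(Edm.Single)" },
];

console.log(fields.map((f) => f.type).join(", "));
```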

SearchIndexAlias

Search Alias object.

SearchIndexerDataIdentity

Contains the possible cases for SearchIndexerDataIdentity.

SearchIndexerDataSourceType
SearchIndexerSkill

Contains the possible cases for Skill.

SearchIndexingBufferedSenderDeleteDocumentsOptions

Options for SearchIndexingBufferedSenderDeleteDocuments.

SearchIndexingBufferedSenderFlushDocumentsOptions

Options for SearchIndexingBufferedSenderFlushDocuments.

SearchIndexingBufferedSenderMergeDocumentsOptions

Options for SearchIndexingBufferedSenderMergeDocuments.

SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions

Options for SearchIndexingBufferedSenderMergeOrUploadDocuments.

SearchIndexingBufferedSenderUploadDocumentsOptions

Options for SearchIndexingBufferedSenderUploadDocuments.

SearchIterator

An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.
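The per-item versus per-page behavior can be sketched with a mock paged iterable; the shape below loosely mirrors the SDK's paged async iterator, and the page data is invented:

```typescript
// Mock paged iterable: iterating the object yields individual results
// (the real iterator issues service requests as pages run out), while
// byPage() yields whole pages, one request per iteration.
function mockSearchIterator(pages: string[][]) {
  return {
    async *[Symbol.asyncIterator]() {
      for (const page of pages) yield* page;
    },
    byPage: async function* () {
      for (const page of pages) yield page;
    },
  };
}

async function main(): Promise<void> {
  const iter = mockSearchIterator([["doc1", "doc2"], ["doc3"]]);

  const items: string[] = [];
  for await (const item of iter) items.push(item);
  console.log(items.join(", ")); // doc1, doc2, doc3

  const pageSizes: number[] = [];
  for await (const page of iter.byPage()) pageSizes.push(page.length);
  console.log(pageSizes.join(", ")); // 2, 1
}

main();
```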

SearchMode

Specifies whether any or all of the search terms must be matched in order to count the document as a match.

SearchOptions

Options for committing a full search request.

SearchPick

Deeply pick fields of T using valid AI Search OData $select paths.

SearchRequestOptions

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

SearchRequestQueryTypeOptions
SearchResult

Contains a document found by a search query, plus associated metadata.

SelectFields

Produces a union of valid AI Search OData $select paths for T using a post-order traversal of the field tree rooted at T.

SemanticErrorMode
SemanticErrorReason
SemanticSearchResultsType

Type of partial response that was returned for a semantic ranking request.
KnownSemanticSearchResultsType can be used interchangeably with SemanticSearchResultsType; this enum contains the known values that the service supports.

Known values supported by the service

baseResults: Results without any semantic enrichment or reranking.
rerankedResults: Results have been reranked with the reranker model and will include semantic captions. They will not include any answers, answer highlights, or caption highlights.

SentimentSkillLanguage

The language codes supported for input text by SentimentSkill.
KnownSentimentSkillLanguage can be used interchangeably with SentimentSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
it: Italian
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish

Similarity

Alias for SimilarityAlgorithmUnion

SimilarityAlgorithm

Contains the possible cases for Similarity.

SnowballTokenFilterLanguage

The language to use for a Snowball token filter.

SplitSkillLanguage
StemmerTokenFilterLanguage

The language to use for a stemmer token filter.

StopwordsList

Identifies a predefined list of language-specific stopwords.

SuggestNarrowedModel
SuggestOptions

Options for retrieving suggestions based on the searchText.

SuggestResult

A result containing a document found by a suggestion query, plus associated metadata.

TextSplitMode
TextTranslationSkillLanguage
TokenCharacterKind

Represents classes of characters on which a token filter can operate.

TokenFilter

Contains the possible cases for TokenFilter.

TokenFilterName

Defines the names of all token filters supported by the search engine.
KnownTokenFilterName can be used interchangeably with TokenFilterName; this enum contains the known values that the service supports.

Known values supported by the service

arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html
apostrophe: Strips all characters after an apostrophe (including the apostrophe itself). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/tr/ApostropheFilter.html
asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
cjk_bigram: Forms bigrams of CJK terms that are generated from the standard tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html
cjk_width: Normalizes CJK width differences. Folds full-width ASCII variants into the equivalent basic Latin, and half-width Katakana variants into the equivalent Kana. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html
classic: Removes English possessives, and dots from acronyms. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicFilter.html
common_grams: Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html
edgeNGram_v2: Generates n-grams of the given size(s) starting from the front or the back of an input token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
german_normalization: Normalizes German characters according to the heuristics of the German2 snowball algorithm. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html
hindi_normalization: Normalizes text in Hindi to remove some differences in spelling variations. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html
indic_normalization: Normalizes the Unicode representation of text in Indian languages. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html
keyword_repeat: Emits each incoming token twice, once as keyword and once as non-keyword. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/KeywordRepeatFilter.html
kstem: A high-performance kstem filter for English. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/en/KStemFilter.html
length: Removes words that are too long or too short. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html
limit: Limits the number of tokens while indexing. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html
lowercase: Normalizes token text to lower case. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html
nGram_v2: Generates n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html
persian_normalization: Applies normalization for Persian. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizationFilter.html
phonetic: Create tokens for phonetic matches. See https://lucene.apache.org/core/4_10_3/analyzers-phonetic/org/apache/lucene/analysis/phonetic/package-tree.html
porter_stem: Uses the Porter stemming algorithm to transform the token stream. See http://tartarus.org/~martin/PorterStemmer
reverse: Reverses the token string. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html
scandinavian_normalization: Normalizes use of the interchangeable Scandinavian characters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html
scandinavian_folding: Folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o. It also discriminates against use of double vowels aa, ae, ao, oe and oo, leaving just the first one. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html
shingle: Creates combinations of tokens as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html
snowball: A filter that stems words using a Snowball-generated stemmer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html
sorani_normalization: Normalizes the Unicode representation of Sorani text. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html
stemmer: Language specific stemming filter. See https://learn.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters
stopwords: Removes stop words from a token stream. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html
trim: Trims leading and trailing whitespace from tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html
truncate: Truncates the terms to a specific length. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html
unique: Filters out tokens with same text as the previous token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.html
uppercase: Normalizes token text to upper case. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html
word_delimiter: Splits words into subwords and performs optional transformations on subword groups.

UnionToIntersection
UploadDocumentsOptions

Options for the upload documents operation.

VectorEncodingFormat

The encoding format for interpreting vector field contents.
KnownVectorEncodingFormat can be used interchangeably with VectorEncodingFormat; this enum contains the known values that the service supports.

Known values supported by the service

packedBit: Encoding format representing bits packed into a wider data type.

VectorFilterMode
VectorQuery

The query parameters for vector and hybrid search queries.

VectorQueryKind
VectorSearchAlgorithmConfiguration

Contains configuration options specific to the algorithm used during indexing and/or querying.

VectorSearchAlgorithmKind
VectorSearchAlgorithmMetric
VectorSearchCompression

Contains configuration options specific to the compression method used during indexing or querying.

VectorSearchCompressionKind

The compression method used for indexing and querying.
KnownVectorSearchCompressionKind can be used interchangeably with VectorSearchCompressionKind; this enum contains the known values that the service supports.

Known values supported by the service

scalarQuantization: Scalar Quantization, a type of compression method. In scalar quantization, the original vector values are compressed to a narrower type by discretizing and representing each component of a vector using a reduced set of quantized values, thereby reducing the overall data size.
binaryQuantization: Binary Quantization, a type of compression method. In binary quantization, the original vector values are compressed to the narrower binary type by discretizing and representing each component of a vector using binary values, thereby reducing the overall data size.
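The idea behind scalar quantization can be illustrated with a minimal sketch that maps each component of a vector onto 256 discrete int8 levels (a simplified illustration of the concept, not the service's actual algorithm):

```typescript
// Map each component from the observed [min, max] range onto the int8
// range [-128, 127]; store min/max so values can be approximately restored.
function quantizeToInt8(vector: number[]): { codes: number[]; min: number; max: number } {
  const min = Math.min(...vector);
  const max = Math.max(...vector);
  const scale = max > min ? 255 / (max - min) : 0;
  const codes = vector.map((v) => Math.round((v - min) * scale) - 128);
  return { codes, min, max };
}

function dequantize(codes: number[], min: number, max: number): number[] {
  const scale = max > min ? (max - min) / 255 : 0;
  return codes.map((c) => (c + 128) * scale + min);
}

const { codes } = quantizeToInt8([0, 0.5, 1]);
console.log(codes.join(", ")); // -128, 0, 127
```

Each stored component shrinks to one byte, and dequantizing restores each value to within half a quantization step of the original.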

VectorSearchCompressionRescoreStorageMethod

The storage method for the original full-precision vectors used for rescoring and internal index operations.
KnownVectorSearchCompressionRescoreStorageMethod can be used interchangeably with VectorSearchCompressionRescoreStorageMethod; this enum contains the known values that the service supports.

Known values supported by the service

preserveOriginals: This option preserves the original full-precision vectors. Choose this option for maximum flexibility and highest quality of compressed search results. This consumes more storage but allows for rescoring and oversampling.
discardOriginals: This option discards the original full-precision vectors. Choose this option for maximum storage savings. Since this option does not allow for rescoring and oversampling, it will often cause slight to moderate reductions in quality.

VectorSearchCompressionTarget

The quantized data type of compressed vector values.
KnownVectorSearchCompressionTarget can be used interchangeably with VectorSearchCompressionTarget; this enum contains the known values that the service supports.

Known values supported by the service

int8: 8-bit signed integer.

VectorSearchVectorizer

Contains configuration options on how to vectorize text vector queries.

VectorSearchVectorizerKind

The vectorization method to be used during query time.
KnownVectorSearchVectorizerKind can be used interchangeably with VectorSearchVectorizerKind; this enum contains the known values that the service supports.

Known values supported by the service

azureOpenAI: Generate embeddings using an Azure OpenAI resource at query time.
customWebApi: Generate embeddings using a custom web endpoint at query time.
aiServicesVision: Generate embeddings for an image or text input at query time using the Azure AI Services Vision Vectorize API.
aml: Generate embeddings using an Azure Machine Learning endpoint deployed via the Azure AI Foundry Model Catalog at query time.

VisualFeature
WebApiSkills

Enums

KnownAIFoundryModelCatalogName

The name of the embedding model from the Azure AI Foundry Catalog that will be called.

KnownAnalyzerNames

Defines values for AnalyzerName. See https://learn.microsoft.com/rest/api/searchservice/Language-support

KnownAzureOpenAIModelName

The Azure OpenAI model name that will be called.

KnownBlobIndexerDataToExtract

Specifies the data to extract from Azure blob storage and tells the indexer which data to extract from image content when "imageAction" is set to a value other than "none". This applies to embedded image content in a .PDF or other application, or image files such as .jpg and .png, in Azure blobs.

KnownBlobIndexerImageAction

Determines how to process embedded images and image files in Azure blob storage. Setting the "imageAction" configuration to any value other than "none" requires that a skillset also be attached to that indexer.

KnownBlobIndexerPDFTextRotationAlgorithm

Determines algorithm for text extraction from PDF files in Azure blob storage.

KnownBlobIndexerParsingMode

Represents the parsing mode for indexing from an Azure blob data source.

KnownCharFilterNames

Defines values for CharFilterName.

KnownChatCompletionExtraParametersBehavior

Specifies how 'extraParameters' should be handled by Azure AI Foundry. Defaults to 'error'.

KnownChatCompletionResponseFormatType

Specifies how the LLM should format the response.

KnownContentUnderstandingSkillChunkingUnit

Controls the cardinality of the chunk unit. Default is 'characters'.

KnownContentUnderstandingSkillExtractionOptions

Controls the cardinality of the content extracted from the document by the skill.

KnownCustomEntityLookupSkillLanguage

The language codes supported for input text by CustomEntityLookupSkill.

KnownDocumentIntelligenceLayoutSkillChunkingUnit

Controls the cardinality of the chunk unit. Default is 'characters'.

KnownDocumentIntelligenceLayoutSkillExtractionOptions

Controls the cardinality of the content extracted from the document by the skill.

KnownDocumentIntelligenceLayoutSkillMarkdownHeaderDepth

The depth of headers in the markdown output. Default is h6.

KnownDocumentIntelligenceLayoutSkillOutputFormat

Controls the cardinality of the output format. Default is 'markdown'.

KnownDocumentIntelligenceLayoutSkillOutputMode

Controls the cardinality of the output produced by the skill. Default is 'oneToMany'.

KnownEntityCategory

A string indicating what entity categories to return.

KnownEntityRecognitionSkillLanguage

The language codes supported for input text by EntityRecognitionSkill.

KnownImageAnalysisSkillLanguage

The language codes supported for input by ImageAnalysisSkill.

KnownImageDetail

A string indicating which domain-specific details to return.

KnownIndexProjectionMode

Defines behavior of the index projections in relation to the rest of the indexer.

KnownIndexerExecutionEnvironment

Specifies the environment in which the indexer should execute.

KnownIndexerResyncOption

Options with various types of permission data to index.

KnownKeyPhraseExtractionSkillLanguage

The language codes supported for input text by KeyPhraseExtractionSkill.

KnownKnowledgeBaseModelKind

The AI model to be used for query planning.

KnownKnowledgeSourceKind

The kind of the knowledge source.

KnownLexicalAnalyzerName

Defines the names of all text analyzers supported by the search engine.

KnownLexicalNormalizerName

Defines the names of all text normalizers supported by the search engine.

KnownMarkdownHeaderDepth

Specifies the max header depth that will be considered while grouping markdown content. Default is h6.

KnownMarkdownParsingSubmode

Specifies the submode that will determine whether a markdown file will be parsed into exactly one search document or multiple search documents. Default is oneToMany.

KnownOcrLineEnding

Defines the sequence of characters to use between the lines of text recognized by the OCR skill. The default value is "space".

KnownOcrSkillLanguage

The language codes supported for input by OcrSkill.

KnownPIIDetectionSkillMaskingMode

A string indicating what maskingMode to use to mask the personal information detected in the input text.

KnownQueryDebugMode

Enables a debugging tool that can be used to further explore your search results. You can enable multiple debug modes simultaneously by separating them with a | character, for example: semantic|queryRewrites.

KnownRankingOrder

Represents the score to use for the sort order of documents.

KnownRegexFlags

Defines a regular expression flag that can be used in the pattern analyzer and pattern tokenizer.

KnownSearchAudience

Known values for Search Audience

KnownSearchFieldDataType

Defines the data type of a field in a search index.

KnownSearchIndexerDataSourceType

Defines the type of a datasource.

KnownSemanticErrorMode

Allows the user to choose whether a semantic call should fail completely or return partial results.

KnownSemanticErrorReason

Reason that a partial response was returned for a semantic ranking request.

KnownSemanticSearchResultsType

Type of partial response that was returned for a semantic ranking request.

KnownSentimentSkillLanguage

The language codes supported for input text by SentimentSkill.

KnownSplitSkillLanguage

The language codes supported for input text by SplitSkill.

KnownTextSplitMode

A value indicating which split mode to perform.

KnownTextTranslationSkillLanguage

The language codes supported for input text by TextTranslationSkill.

KnownTokenFilterNames

Defines values for TokenFilterName.

KnownTokenizerNames

Defines values for TokenizerName.

KnownVectorEncodingFormat

The encoding format for interpreting vector field contents.

KnownVectorFilterMode

Determines whether filters are applied before or after the vector search is performed.

KnownVectorQueryKind

The kind of vector query being performed.

KnownVectorSearchAlgorithmKind

The algorithm used for indexing and querying.

KnownVectorSearchAlgorithmMetric

The similarity metric to use for vector comparisons. It is recommended to choose the same similarity metric as the embedding model was trained on.

KnownVectorSearchCompressionKind

The compression method used for indexing and querying.

KnownVectorSearchCompressionRescoreStorageMethod

The storage method for the original full-precision vectors used for rescoring and internal index operations.

KnownVectorSearchCompressionTarget

The quantized data type of compressed vector values.

KnownVectorSearchVectorizerKind

The vectorization method to be used during query time.

KnownVisualFeature

The strings indicating what visual feature types to return.

Functions

createSynonymMapFromFile(string, string)

Helper method to create a SynonymMap object. This is a Node.js-only method.

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

import { odata } from "@azure/search-documents";

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter

Variables

DEFAULT_BATCH_SIZE

Default Batch Size

DEFAULT_FLUSH_WINDOW

Default flush window interval

DEFAULT_RETRY_COUNT

Default number of times to retry.

Function Details

createSynonymMapFromFile(string, string)

Helper method to create a SynonymMap object. This is a Node.js-only method.

function createSynonymMapFromFile(name: string, filePath: string): Promise<SynonymMap>

Parameters

name

string

Name of the SynonymMap.

filePath

string

Path of the file that contains the synonyms (separated by new lines).

Returns

Promise<SynonymMap>

SynonymMap object
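The expected file format is plain text with one synonym rule per line. What the helper does can be sketched locally (a simplified stand-in operating on a string instead of a file path; not the SDK implementation):

```typescript
// Split the file contents on newlines, drop blank lines, and pair the
// resulting rules with the map name (mirroring SynonymMap's
// name/synonyms properties).
function synonymMapFromText(
  name: string,
  text: string
): { name: string; synonyms: string[] } {
  const synonyms = text
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
  return { name, synonyms };
}

const map = synonymMapFromText(
  "hotel-synonyms",
  "hotel, motel, inn\nfive star => luxury\n"
);
console.log(map.synonyms.length); // 2
```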

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

import { odata } from "@azure/search-documents";

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter

function odata(strings: TemplateStringsArray, values: unknown[]): string

Parameters

strings

TemplateStringsArray

Array of strings for the expression

values

unknown[]

Array of values for the expression

Returns

string
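The escaping behavior can be sketched with a simplified tagged-template function; this is an illustration of the pattern, not the package's exact implementation:

```typescript
// Simplified odata-style tagged template: string values are single-quoted
// with embedded quotes doubled (per OData string-literal rules); numbers
// and booleans are interpolated as-is.
function odataLike(strings: TemplateStringsArray, ...values: unknown[]): string {
  let result = "";
  strings.forEach((s, i) => {
    result += s;
    if (i < values.length) {
      const v = values[i];
      result += typeof v === "string" ? `'${v.replace(/'/g, "''")}'` : String(v);
    }
  });
  return result;
}

const hotelName = "O'Brien's Inn";
console.log(odataLike`HotelName eq ${hotelName} and Rating ge ${4}`);
// HotelName eq 'O''Brien''s Inn' and Rating ge 4
```

The sketch only covers string and primitive interpolation; see the linked OData filter syntax documentation for the full literal rules.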

Variable Details

DEFAULT_BATCH_SIZE

Default Batch Size

DEFAULT_BATCH_SIZE: number

Type

number

DEFAULT_FLUSH_WINDOW

Default flush window interval

DEFAULT_FLUSH_WINDOW: number

Type

number

DEFAULT_RETRY_COUNT

Default number of times to retry.

DEFAULT_RETRY_COUNT: number

Type

number