@azure/search-documents package

Classes

AzureKeyCredential

A static-key-based credential that supports updating the underlying key value.

GeographyPoint

Represents a geographic point in global coordinates.

IndexDocumentsBatch

Class used to perform batch operations with multiple documents to the index.

KnowledgeRetrievalClient

Class used to perform operations against a knowledge base.

SearchClient

Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them.

SearchIndexClient

Class to perform operations to manage (create, update, list/delete) indexes and synonym maps.

SearchIndexerClient

Class to perform operations to manage (create, update, list/delete) indexers, data sources, and skillsets.

SearchIndexingBufferedSender

Class used to perform buffered operations against a search index, including adding, updating, and removing documents.

Interfaces

AIServices

Parameters for Azure Blob Storage knowledge source.

AIServicesAccountIdentity

The multi-region account of an Azure AI service resource that's attached to a skillset.

AIServicesAccountKey

The account key of an Azure AI service resource that's attached to a skillset, to be used with the resource's subdomain.

AIServicesVisionParameters

Specifies the AI Services Vision parameters for vectorizing a query image or text.

AIServicesVisionVectorizer

Specifies the AI Services Vision parameters for vectorizing a query image or text.

AnalyzeRequest

Specifies some text and analysis components used to break that text into tokens.

AnalyzeResult

The result of testing an analyzer on text.

AnalyzedTokenInfo

Information about a token returned by an analyzer.

AsciiFoldingTokenFilter

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.

AutocompleteItem

The result of Autocomplete requests.

AutocompleteRequest

Parameters for fuzzy matching, and other autocomplete query behaviors.

AutocompleteResult

The result of Autocomplete query.

AzureActiveDirectoryApplicationCredentials

Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault.

AzureBlobKnowledgeSource

Configuration for Azure Blob Storage knowledge source.

AzureBlobKnowledgeSourceParameters

Parameters for Azure Blob Storage knowledge source.

AzureBlobKnowledgeSourceParams

Specifies runtime parameters for an Azure Blob knowledge source.

AzureMachineLearningSkill

The AML skill allows you to extend AI enrichment with a custom Azure Machine Learning (AML) model. Once an AML model is trained and deployed, an AML skill integrates it into AI enrichment.

AzureMachineLearningVectorizer

Specifies an Azure Machine Learning endpoint deployed via the Azure AI Foundry Model Catalog for generating the vector embedding of a query string.

AzureOpenAIEmbeddingSkill

Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource.

AzureOpenAIParameters

Specifies the parameters for connecting to the Azure OpenAI resource.

AzureOpenAITokenizerParameters
AzureOpenAIVectorizer

Contains the parameters specific to using an Azure OpenAI service for vectorization at query time.

BM25Similarity

Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter).
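As a reminder of how those two parameters interact, the BM25 score of a document D for query Q has the familiar form (standard Okapi BM25 formulation, not specific to this SDK):

```latex
\operatorname{score}(D, Q) =
  \sum_{q_i \in Q} \operatorname{IDF}(q_i) \cdot
  \frac{f(q_i, D)\,(k_1 + 1)}
       {f(q_i, D) + k_1 \left(1 - b + b \,\frac{|D|}{\mathrm{avgdl}}\right)}
```

where f(q_i, D) is the frequency of term q_i in D, |D| is the document length, and avgdl is the average document length; k1 controls term-frequency saturation and b controls length normalization.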

BaseAzureMachineLearningVectorizerParameters

Specifies the properties common between all AML vectorizer auth types.

BaseCharFilter

Base type for character filters.

BaseCognitiveServicesAccount

Base type for describing any Azure AI service resource attached to a skillset.

BaseDataChangeDetectionPolicy

Base type for data change detection policies.

BaseDataDeletionDetectionPolicy

Base type for data deletion detection policies.

BaseKnowledgeBaseActivityRecord

Base type for activity records.

BaseKnowledgeBaseMessageContent

Specifies the type of the message content.

BaseKnowledgeBaseModel

Specifies the connection parameters for the model to use for query planning.

BaseKnowledgeBaseReference

Base type for references.

BaseKnowledgeBaseRetrievalActivityRecord

Represents a retrieval activity record.

BaseKnowledgeRetrievalReasoningEffort
BaseKnowledgeSource

Represents a knowledge source definition.

BaseKnowledgeSourceParams
BaseKnowledgeSourceVectorizer

Specifies the vectorization method to be used for the knowledge source embedding model, with an optional name.

BaseLexicalAnalyzer

Base type for analyzers.

BaseLexicalNormalizer

Base type for normalizers.

BaseLexicalTokenizer

Base type for tokenizers.

BaseScoringFunction

Base type for functions that can modify document scores during ranking.

BaseSearchIndexerDataIdentity

Abstract base type for data identities.

BaseSearchIndexerSkill

Base type for skills.

BaseSearchRequestOptions

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

BaseTokenFilter

Base type for token filters.

BaseVectorQuery

The query parameters for vector and hybrid search queries.

BaseVectorSearchAlgorithmConfiguration

Contains configuration options specific to the algorithm used during indexing and/or querying.

BaseVectorSearchCompression

Contains configuration options specific to the compression method used during indexing or querying.

BaseVectorSearchVectorizer

Contains specific details for a vectorization method to be used during query time.

BaseVectorThreshold

The threshold used for vector queries.

BinaryQuantizationCompression

Contains configuration options specific to the binary quantization compression method used during indexing and querying.

ChatCompletionResponseFormat

Determines how the language model's response should be serialized. Defaults to 'text'.

ChatCompletionResponseFormatJsonSchemaProperties

An open dictionary for extended properties. Required if 'type' == 'json_schema'.

ChatCompletionSchema

Object defining the custom schema the model will use to structure its output.

ChatCompletionSkill

A skill that calls a language model via Azure AI Foundry's Chat Completions endpoint.

CjkBigramTokenFilter

Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene.

ClassicSimilarity

Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries.

ClassicTokenizer

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.

CognitiveServicesAccountKey

The multi-region account key of an Azure AI service resource that's attached to a skillset.

CommonGramTokenFilter

Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.

CommonModelParameters

Common language model parameters for Chat Completions. If omitted, default values are used.

CompletedSynchronizationState

Represents the completed state of the last synchronization.

ComplexField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

ConditionalSkill

A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.

ContentUnderstandingSkill

A skill that leverages Azure AI Content Understanding to process and extract structured insights from documents, enabling enriched, searchable content for enhanced document indexing and retrieval.

ContentUnderstandingSkillChunkingProperties

Controls the cardinality for chunking the content.

CorsOptions

Defines options to control Cross-Origin Resource Sharing (CORS) for an index.

CreateKnowledgeBaseOptions
CreateKnowledgeSourceOptions
CreateOrUpdateAliasOptions

Options for create or update alias operation.

CreateOrUpdateIndexOptions

Options for create/update index operation.

CreateOrUpdateKnowledgeBaseOptions
CreateOrUpdateKnowledgeSourceOptions
CreateOrUpdateSkillsetOptions

Options for create/update skillset operation.

CreateOrUpdateSynonymMapOptions

Options for create/update synonymmap operation.

CreateorUpdateDataSourceConnectionOptions

Options for create/update datasource operation.

CreateorUpdateIndexerOptions

Options for create/update indexer operation.

CustomAnalyzer

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
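A definition might combine the standard tokenizer with lowercase and ASCII-folding filters (a sketch written as a plain object; the analyzer name is illustrative and the property names assume the current JavaScript SDK shape):

```typescript
// Sketch of a CustomAnalyzer definition to include in a SearchIndex's
// `analyzers` array; names here are illustrative.
const foldingAnalyzer = {
  odatatype: "#Microsoft.Azure.Search.CustomAnalyzer" as const,
  name: "folding_analyzer",
  tokenizerName: "standard_v2",
  tokenFilters: ["lowercase", "asciifolding"],
};

console.log(foldingAnalyzer.tokenFilters.join(" -> "));
```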

CustomEntity

An object that contains information about the matches that were found, and related metadata.

CustomEntityAlias

A complex object that can be used to specify alternative spellings or synonyms to the root entity name.

CustomEntityLookupSkill

A skill that looks for text from a custom, user-defined list of words and phrases.

CustomNormalizer

Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. This is a user-defined configuration consisting of at least one or more filters, which modify the token that is stored.

DebugInfo

Contains debugging information that can be used to further explore your search results.

DefaultCognitiveServicesAccount

An empty object that represents the default Azure AI service resource for a skillset.

DeleteAliasOptions

Options for delete alias operation.

DeleteDataSourceConnectionOptions

Options for delete datasource operation.

DeleteIndexOptions

Options for delete index operation.

DeleteIndexerOptions

Options for delete indexer operation.

DeleteKnowledgeBaseOptions
DeleteKnowledgeSourceOptions
DeleteSkillsetOptions

Options for delete skillset operation.

DeleteSynonymMapOptions

Options for delete synonymmap operation.

DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

DistanceScoringFunction

Defines a function that boosts scores based on distance from a geographic location.

DistanceScoringParameters

Provides parameter values to a distance scoring function.

DocumentDebugInfo

Contains debugging information that can be used to further explore your search results.

DocumentExtractionSkill

A skill that extracts content from a file within the enrichment pipeline.

DocumentIntelligenceLayoutSkill

A skill that extracts content and layout information (as markdown), via Azure AI Services, from files within the enrichment pipeline.

DocumentIntelligenceLayoutSkillChunkingProperties

Controls the cardinality for chunking the content.

EdgeNGramTokenFilter

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

EdgeNGramTokenizer

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

ElisionTokenFilter

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.

EntityLinkingSkill

Using the Text Analytics API, extracts linked entities from text.

EntityRecognitionSkill

Text analytics entity recognition.

EntityRecognitionSkillV3

Using the Text Analytics API, extracts entities of different types from text.

ExhaustiveKnnParameters

Contains the parameters specific to exhaustive KNN algorithm.

ExtractiveQueryAnswer

Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language.

ExtractiveQueryCaption

Extracts captions from the matching documents that contain passages relevant to the search query.

FacetResult

A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval.

FieldMapping

Defines a mapping between a field in a data source and a target field in an index.

FieldMappingFunction

Represents a function that transforms a value from a data source before indexing.

FreshnessScoringFunction

Defines a function that boosts scores based on the value of a date-time field.

FreshnessScoringParameters

Provides parameter values to a freshness scoring function.

GenerativeQueryRewrites

Generate alternative query terms to increase the recall of a search request.

GetDocumentOptions

Options for retrieving a single document.

GetIndexStatsSummaryOptionalParams

Optional parameters.

GetIndexStatsSummaryOptions
GetKnowledgeBaseOptions
GetKnowledgeSourceOptions
GetKnowledgeSourceStatusOptions
HighWaterMarkChangeDetectionPolicy

Defines a data change detection policy that captures changes based on the value of a high water mark column.

HnswParameters

Contains the parameters specific to hnsw algorithm.

HybridSearchOptions

The query parameters to configure hybrid search behaviors.

ImageAnalysisSkill

A skill that analyzes image files. It extracts a rich set of visual features based on the image content.

IndexDocumentsClient

Index Documents Client

IndexDocumentsOptions

Options for the modify index batch operation.

IndexDocumentsResult

Response containing the status of operations for all documents in the indexing request.

IndexStatisticsSummary

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

IndexedOneLakeKnowledgeSource

Configuration for OneLake knowledge source.

IndexedOneLakeKnowledgeSourceParameters

Parameters for OneLake knowledge source.

IndexedOneLakeKnowledgeSourceParams

Specifies runtime parameters for an indexed OneLake knowledge source.

IndexedSharePointKnowledgeSource

Configuration for SharePoint knowledge source.

IndexedSharePointKnowledgeSourceParameters

Parameters for SharePoint knowledge source.

IndexedSharePointKnowledgeSourceParams

Specifies runtime parameters for an indexed SharePoint knowledge source.

IndexerExecutionResult

Represents the result of an individual indexer execution.

IndexerRuntime

Represents the indexer's cumulative runtime consumption in the service.

IndexerState

Represents all of the state that defines and dictates the indexer's current execution.

IndexersResyncOptionalParams

Optional parameters.

IndexingParameters

Represents parameters for indexer execution.

IndexingParametersConfiguration

A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

IndexingResult

Status of an indexing operation for a single document.

IndexingSchedule

Represents a schedule for indexer execution.

InputFieldMappingEntry

Input field mapping for a skill.

KeepTokenFilter

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

KeyAuthAzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer with an authentication key.

KeyPhraseExtractionSkill

A skill that uses text analytics for key phrase extraction.

KeywordMarkerTokenFilter

Marks terms as keywords. This token filter is implemented using Apache Lucene.

KeywordTokenizer

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

KnowledgeBase
KnowledgeBaseAgenticReasoningActivityRecord

Represents an agentic reasoning activity record.

KnowledgeBaseAzureBlobActivityArguments

Represents the arguments the Azure Blob retrieval activity was run with.

KnowledgeBaseAzureBlobActivityRecord

Represents an Azure Blob retrieval activity record.

KnowledgeBaseAzureBlobReference

Represents an Azure Blob Storage document reference.

KnowledgeBaseAzureOpenAIModel

Specifies the Azure OpenAI resource used to do query planning.

KnowledgeBaseErrorAdditionalInfo

The resource management error additional info.

KnowledgeBaseErrorDetail

The error details.

KnowledgeBaseIndexedOneLakeActivityArguments

Represents the arguments the indexed OneLake retrieval activity was run with.

KnowledgeBaseIndexedOneLakeActivityRecord

Represents an indexed OneLake retrieval activity record.

KnowledgeBaseIndexedOneLakeReference

Represents an indexed OneLake document reference.

KnowledgeBaseIndexedSharePointActivityArguments

Represents the arguments the indexed SharePoint retrieval activity was run with.

KnowledgeBaseIndexedSharePointActivityRecord

Represents an indexed SharePoint retrieval activity record.

KnowledgeBaseIndexedSharePointReference

Represents an indexed SharePoint document reference.

KnowledgeBaseMessage

The natural language message style object.

KnowledgeBaseMessageImageContent

Image message type.

KnowledgeBaseMessageImageContentImage
KnowledgeBaseMessageTextContent

Text message type.

KnowledgeBaseModelAnswerSynthesisActivityRecord

Represents an LLM answer synthesis activity record.

KnowledgeBaseModelQueryPlanningActivityRecord

Represents an LLM query planning activity record.

KnowledgeBaseRemoteSharePointActivityArguments

Represents the arguments the remote SharePoint retrieval activity was run with.

KnowledgeBaseRemoteSharePointActivityRecord

Represents a remote SharePoint retrieval activity record.

KnowledgeBaseRemoteSharePointReference

Represents a remote SharePoint document reference.

KnowledgeBaseRetrievalRequest

The input contract for the retrieval request.

KnowledgeBaseRetrievalResponse

The output contract for the retrieval response.

KnowledgeBaseSearchIndexActivityArguments

Represents the arguments the search index retrieval activity was run with.

KnowledgeBaseSearchIndexActivityRecord

Represents a search index retrieval activity record.

KnowledgeBaseSearchIndexFieldReference
KnowledgeBaseSearchIndexReference

Represents an Azure Search document reference.

KnowledgeBaseWebActivityArguments

Represents the arguments the web retrieval activity was run with.

KnowledgeBaseWebActivityRecord

Represents a web retrieval activity record.

KnowledgeBaseWebReference

Represents a web document reference.

KnowledgeRetrievalClientOptions

Client options used to configure Cognitive Search API requests.

KnowledgeRetrievalIntent

An intended query to execute without model query planning.

KnowledgeRetrievalLowReasoningEffort

Run knowledge retrieval with low reasoning effort.

KnowledgeRetrievalMediumReasoningEffort

Run knowledge retrieval with medium reasoning effort.

KnowledgeRetrievalMinimalReasoningEffort

Run knowledge retrieval with minimal reasoning effort.

KnowledgeRetrievalReasoningEffort
KnowledgeRetrievalSemanticIntent

An intended query to execute without model query planning.

KnowledgeSourceAzureOpenAIVectorizer

Specifies the Azure OpenAI resource used to vectorize a query string.

KnowledgeSourceIngestionParameters

Consolidates all general ingestion settings for knowledge sources.

KnowledgeSourceReference
KnowledgeSourceStatistics

Statistical information about knowledge source synchronization history.

KnowledgeSourceStatus

Represents the status and synchronization history of a knowledge source.

LanguageDetectionSkill

A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.

LengthTokenFilter

Removes words that are too long or too short. This token filter is implemented using Apache Lucene.

LimitTokenFilter

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.

ListIndexStatsSummary

Response from a request to retrieve stats summary of all indexes. If successful, it includes the stats of each index in the service.

ListKnowledgeBasesOptions
ListKnowledgeSourcesOptions
ListSearchResultsPageSettings

Arguments for retrieving the next page of search results.

LuceneStandardAnalyzer

Standard Apache Lucene analyzer; Composed of the standard tokenizer, lowercase filter and stop filter.

LuceneStandardTokenizer

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

MagnitudeScoringFunction

Defines a function that boosts scores based on the magnitude of a numeric field.

MagnitudeScoringParameters

Provides parameter values to a magnitude scoring function.

MappingCharFilter

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.

MergeSkill

A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NGramTokenFilter

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

NGramTokenizer

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

NativeBlobSoftDeleteDeletionDetectionPolicy

Defines a data deletion detection policy utilizing Azure Blob Storage's native soft delete feature for deletion detection.

NoAuthAzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer with no authentication.

OcrSkill

A skill that extracts text from image files.

OutputFieldMappingEntry

Output field mapping for a skill.

PIIDetectionSkill

Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it.

PathHierarchyTokenizer

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PatternAnalyzer

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

PatternCaptureTokenFilter

Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.

PatternReplaceCharFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.

PatternReplaceTokenFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.

PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

PhoneticTokenFilter

Create tokens for phonetic matches. This token filter is implemented using Apache Lucene.

QueryAnswerResult

An answer is a text passage extracted from the contents of the most relevant documents that matched the query. Answers are extracted from the top search results. Answer candidates are scored and the top answers are selected.

QueryCaptionResult

Captions are the most representative passages from the document relative to the search query. They are often used as a document summary. Captions are only returned for queries of type semantic.

QueryResultDocumentInnerHit

Detailed scoring information for an individual element of a complex collection.

QueryResultDocumentRerankerInput

The raw concatenated strings that were sent to the semantic enrichment process.

QueryResultDocumentSemanticField

Description of fields that were sent to the semantic enrichment process, as well as how they were used.

QueryResultDocumentSubscores

The breakdown of subscores between the text and vector query components of the search query for this document. Each vector query is shown as a separate object in the same order they were received.

QueryRewritesDebugInfo

Contains debugging information specific to query rewrites.

QueryRewritesValuesDebugInfo

Contains debugging information specific to query rewrites.

RemoteSharePointKnowledgeSource

Configuration for remote SharePoint knowledge source.

RemoteSharePointKnowledgeSourceParameters

Parameters for remote SharePoint knowledge source.

RemoteSharePointKnowledgeSourceParams

Specifies runtime parameters for a remote SharePoint knowledge source.

RescoringOptions

Contains the options for rescoring.

ResetDocumentsOptions

Options for reset docs operation.

ResetSkillsOptions

Options for reset skills operation.

ResourceCounter

Represents a resource's usage and quota.

RetrieveKnowledgeOptions
ScalarQuantizationCompression

Contains configuration options specific to the scalar quantization compression method used during indexing and querying.

ScalarQuantizationParameters

Contains the parameters specific to Scalar Quantization.

ScoringProfile

Defines parameters for a search index that influence scoring in search queries.

SearchAlias

Represents an index alias, which describes a mapping from the alias name to an index. The alias name can be used in place of the index name for supported operations.

SearchClientOptions

Client options used to configure AI Search API requests.

SearchDocumentsPageResult

Response containing search page results from an index.

SearchDocumentsResult

Response containing search results from an index.

SearchDocumentsResultBase

Response containing search results from an index.

SearchIndex

Represents a search index definition, which describes the fields and search behavior of an index.
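An index definition is a name plus a list of fields; the object below sketches the shape that would be passed to SearchIndexClient's createIndex (field names are illustrative):

```typescript
// Sketch of a SearchIndex definition in plain-object form.
const hotelsIndex = {
  name: "hotels",
  fields: [
    { name: "hotelId", type: "Edm.String", key: true },
    { name: "hotelName", type: "Edm.String", searchable: true, sortable: true },
    { name: "location", type: "Edm.GeographyPoint", filterable: true },
  ],
};

// With a live service: await indexClient.createIndex(hotelsIndex);
console.log(hotelsIndex.fields.map((f) => f.name).join(", "));
```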

SearchIndexClientOptions

Client options used to configure AI Search API requests.

SearchIndexFieldReference
SearchIndexKnowledgeSource

Knowledge Source targeting a search index.

SearchIndexKnowledgeSourceParameters

Parameters for search index knowledge source.

SearchIndexKnowledgeSourceParams

Specifies runtime parameters for a search index knowledge source.

SearchIndexStatistics

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

SearchIndexer

Represents an indexer.

SearchIndexerCache
SearchIndexerClientOptions

Client options used to configure AI Search API requests.

SearchIndexerDataContainer

Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed.

SearchIndexerDataNoneIdentity

Clears the identity property of a datasource.

SearchIndexerDataSourceConnection

Represents a datasource definition, which can be used to configure an indexer.

SearchIndexerDataUserAssignedIdentity

Specifies the identity for a datasource to use.

SearchIndexerError

Represents an item- or document-level indexing error.

SearchIndexerIndexProjection

Definition of additional projections to secondary search indexes.

SearchIndexerIndexProjectionParameters

A dictionary of index projection-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

SearchIndexerIndexProjectionSelector

Description for what data to store in the designated search index.

SearchIndexerKnowledgeStore

Definition of additional projections to azure blob, table, or files, of enriched data.

SearchIndexerKnowledgeStoreBlobProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreFileProjectionSelector

Projection definition for what data to store in Azure Files.

SearchIndexerKnowledgeStoreObjectProjectionSelector

Projection definition for what data to store in Azure Blob.

SearchIndexerKnowledgeStoreParameters

A dictionary of knowledge store-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

SearchIndexerKnowledgeStoreProjection

Container object for various projection selectors.

SearchIndexerKnowledgeStoreProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreTableProjectionSelector

Description for what data to store in Azure Tables.

SearchIndexerLimits
SearchIndexerSkillset

A list of skills.

SearchIndexerStatus

Represents the current status and execution history of an indexer.

SearchIndexerWarning

Represents an item-level warning.

SearchIndexingBufferedSenderOptions

Options for SearchIndexingBufferedSender.

SearchResourceEncryptionKey

A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure AI Search, such as indexes and synonym maps.

SearchScoreThreshold

The results of the vector query will filter based on the '@search.score' value. Note this is the @search.score returned as part of the search response. The threshold direction will be chosen for higher @search.score.

SearchServiceStatistics

Response from a get service statistics request. If successful, it includes service level counters and limits.

SearchSuggester

Defines how the Suggest API should apply to a group of fields in the index.

SemanticConfiguration

Defines a specific configuration to be used in the context of semantic capabilities.

SemanticDebugInfo

Debug options for semantic search queries.

SemanticField

A field that is used as part of the semantic configuration.

SemanticPrioritizedFields

Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers.

SemanticSearch

Defines parameters for a search index that influence semantic capabilities.

SemanticSearchOptions

Defines options for semantic search queries.

SentimentSkill

Text analytics positive-negative sentiment analysis, scored as a floating point value in a range of zero to 1.

SentimentSkillV3

Using the Text Analytics API, evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level.

ServiceCounters

Represents service-level resource counters and quotas.

ServiceLimits

Represents various service level limits.

ShaperSkill

A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields).

SharePointSensitivityLabelInfo

Information about the sensitivity label applied to a SharePoint document.

ShingleTokenFilter

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.

Similarity

Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results.

SimpleField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SingleVectorFieldResult

A single vector field result. Both @search.score and vector similarity values are returned. Vector similarity is related to @search.score by an equation.

SnowballTokenFilter

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

SoftDeleteColumnDeletionDetectionPolicy

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.
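A soft-delete policy can be sketched as a plain object literal (no SDK import needed, so this runs standalone); the odatatype string and property names follow the REST model, and the column name and marker value here are illustrative.

```typescript
// Sketch of a SoftDeleteColumnDeletionDetectionPolicy as a plain object.
// "IsDeleted" and "true" are assumed example values, not SDK defaults.
const deletionPolicy = {
  odatatype:
    "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy" as const,
  softDeleteColumnName: "IsDeleted", // column the indexer inspects per row
  softDeleteMarkerValue: "true", // value that marks a row as deleted
};

console.log(deletionPolicy.softDeleteColumnName);
```

In a real definition this object would be assigned to a data source connection's dataDeletionDetectionPolicy before creating it.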

SplitSkill

A skill to split a string into chunks of text.

SqlIntegratedChangeTrackingPolicy

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StemmerOverrideTokenFilter

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.

StemmerTokenFilter

Language specific stemming filter. This token filter is implemented using Apache Lucene.

StopAnalyzer

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.

StopwordsTokenFilter

Removes stop words from a token stream. This token filter is implemented using Apache Lucene.

SuggestDocumentsResult

Response containing suggestion query results from an index.

SuggestRequest

Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors.

SynchronizationState

Represents the current state of an ongoing synchronization that spans multiple indexer runs.

SynonymMap

Represents a synonym map definition.
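A synonym map definition can be sketched as a plain object with a list of Solr-format rules; the map name and rules below are illustrative, and in the real SDK this shape would be passed to a create-synonym-map operation on SearchIndexClient.

```typescript
// Sketch of a SynonymMap definition using Solr-format rules, as a plain
// object so it runs without the SDK installed.
const synonymMap = {
  name: "geo-synonyms",
  // Each entry is one rule: an equivalence set or an explicit mapping.
  synonyms: [
    "USA, United States, United States of America",
    "NYC => New York City",
  ],
};

console.log(synonymMap.synonyms.length);
```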

SynonymTokenFilter

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.

TagScoringFunction

Defines a function that boosts scores of documents with string values matching a given list of tags.

TagScoringParameters

Provides parameter values to a tag scoring function.

TextResult

The BM25 or Classic score for the text portion of the query.

TextTranslationSkill

A skill to translate text from one language to another.

TextWeights

Defines weights on index fields for which matches should boost scoring in search queries.

TokenAuthAzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer with a managed identity.

TruncateTokenFilter

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.

UaxUrlEmailTokenizer

Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene.

UniqueTokenFilter

Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene.

VectorSearch

Contains configuration options related to vector search.

VectorSearchOptions

Defines options for vector search queries.

VectorSearchProfile

Defines a combination of configurations to use with vector search.

VectorSimilarityThreshold

The results of the vector query will be filtered based on the vector similarity metric. Note this is the canonical definition of similarity metric, not the 'distance' version. The threshold direction (larger or smaller) will be chosen automatically according to the metric used by the field.

VectorizableImageBinaryQuery

The query parameters to use for vector search when a base64-encoded binary image that needs to be vectorized is provided.

VectorizableImageUrlQuery

The query parameters to use for vector search when a URL that represents an image that needs to be vectorized is provided.

VectorizableTextQuery

The query parameters to use for vector search when a text value that needs to be vectorized is provided.

VectorizedQuery

The query parameters to use for vector search when a raw vector value is provided.
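The shape of a raw-vector query can be sketched as a plain object (so it runs without the SDK); the field name, embedding values, and k value are illustrative, and in a real call an object like this would be supplied among the vector queries of a search request.

```typescript
// Sketch of a VectorizedQuery payload: a raw embedding plus the vector
// fields to search and how many nearest neighbors to retrieve.
const vectorizedQuery = {
  kind: "vector" as const, // discriminator for a raw-vector query
  vector: [0.013, -0.021, 0.094], // raw embedding; normally much longer
  fields: ["contentVector"], // assumed vector field name
  kNearestNeighborsCount: 3,
};

console.log(vectorizedQuery.kind, vectorizedQuery.vector.length);
```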

VectorsDebugInfo
VisionVectorizeSkill

Allows you to generate a vector embedding for a given image or text input using the Azure AI Services Vision Vectorize API.

WebApiParameters

Specifies the properties for connecting to a user-defined vectorizer.

WebApiSkill

A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.

WebApiVectorizer

Specifies a user-defined vectorizer for generating the vector embedding of a query string. Integration of an external vectorizer is achieved using the custom Web API interface of a skillset.

WebKnowledgeSource

Knowledge Source targeting web results.

WebKnowledgeSourceDomain

Configuration for web knowledge source domain.

WebKnowledgeSourceDomains

Domain allow/block configuration for web knowledge source.

WebKnowledgeSourceParameters

Parameters for web knowledge source.

WebKnowledgeSourceParams

Specifies runtime parameters for a web knowledge source.

WordDelimiterTokenFilter

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.

Type Aliases

AIFoundryModelCatalogName

Defines values for AIFoundryModelCatalogName.
KnownAIFoundryModelCatalogName can be used interchangeably with AIFoundryModelCatalogName; this enum contains the known values that the service supports.

Known values supported by the service

OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32
OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336
Facebook-DinoV2-Image-Embeddings-ViT-Base
Facebook-DinoV2-Image-Embeddings-ViT-Giant
Cohere-embed-v3-english
Cohere-embed-v3-multilingual
Cohere-embed-v4: Cohere embed v4 model for generating embeddings from both text and images.

AliasIterator

An iterator for listing the aliases that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

AnalyzeTextOptions

Options for analyze text operation.

AutocompleteMode

Defines values for AutocompleteMode.

AutocompleteOptions

Options for retrieving completion text for a partial searchText.

AzureMachineLearningVectorizerParameters

Specifies the properties for connecting to an AML vectorizer.

AzureOpenAIModelName

Defines values for AzureOpenAIModelName.
KnownAzureOpenAIModelName can be used interchangeably with AzureOpenAIModelName; this enum contains the known values that the service supports.

Known values supported by the service

text-embedding-ada-002
text-embedding-3-large
text-embedding-3-small
gpt-4o
gpt-4o-mini
gpt-4.1
gpt-4.1-mini
gpt-4.1-nano
gpt-5
gpt-5-mini
gpt-5-nano

BaseKnowledgeRetrievalIntent
BaseKnowledgeRetrievalOutputMode

Defines values for KnowledgeRetrievalOutputMode.
KnownKnowledgeRetrievalOutputMode can be used interchangeably with KnowledgeRetrievalOutputMode; this enum contains the known values that the service supports.

Known values supported by the service

extractiveData: Return data from the knowledge sources directly without generative alteration.
answerSynthesis: Synthesize an answer for the response payload.

BlobIndexerDataToExtract
BlobIndexerImageAction
BlobIndexerPDFTextRotationAlgorithm
BlobIndexerParsingMode
CharFilter

Contains the possible cases for CharFilter.

CharFilterName

Defines values for CharFilterName.
KnownCharFilterName can be used interchangeably with CharFilterName; this enum contains the known values that the service supports.

Known values supported by the service

html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html

ChatCompletionExtraParametersBehavior

Defines values for ChatCompletionExtraParametersBehavior.
KnownChatCompletionExtraParametersBehavior can be used interchangeably with ChatCompletionExtraParametersBehavior; this enum contains the known values that the service supports.

Known values supported by the service

passThrough: Passes any extra parameters directly to the model.
drop: Drops all extra parameters.
error: Raises an error if any extra parameter is present.

ChatCompletionResponseFormatType

Defines values for ChatCompletionResponseFormatType.
KnownChatCompletionResponseFormatType can be used interchangeably with ChatCompletionResponseFormatType; this enum contains the known values that the service supports.

Known values supported by the service

text
jsonObject
jsonSchema

CjkBigramTokenFilterScripts

Defines values for CjkBigramTokenFilterScripts.

CognitiveServicesAccount

Contains the possible cases for CognitiveServicesAccount.

ComplexDataType

Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)'

ContentUnderstandingSkillChunkingUnit

Defines values for ContentUnderstandingSkillChunkingUnit.
KnownContentUnderstandingSkillChunkingUnit can be used interchangeably with ContentUnderstandingSkillChunkingUnit; this enum contains the known values that the service supports.

Known values supported by the service

characters: Specifies chunk by characters.

ContentUnderstandingSkillExtractionOptions

Defines values for ContentUnderstandingSkillExtractionOptions.
KnownContentUnderstandingSkillExtractionOptions can be used interchangeably with ContentUnderstandingSkillExtractionOptions; this enum contains the known values that the service supports.

Known values supported by the service

images: Specify that image content should be extracted from the document.
locationMetadata: Specify that location metadata should be extracted from the document.

CountDocumentsOptions

Options for performing the count operation on the index.

CreateAliasOptions

Options for create alias operation.

CreateDataSourceConnectionOptions

Options for create datasource operation.

CreateIndexOptions

Options for create index operation.

CreateIndexerOptions

Options for create indexer operation.

CreateSkillsetOptions

Options for create skillset operation.

CreateSynonymMapOptions

Options for create synonymmap operation.

CustomEntityLookupSkillLanguage
DataChangeDetectionPolicy

Contains the possible cases for DataChangeDetectionPolicy.

DataDeletionDetectionPolicy

Contains the possible cases for DataDeletionDetectionPolicy.

DeleteDocumentsOptions

Options for the delete documents operation.

DocumentIntelligenceLayoutSkillChunkingUnit

Defines values for DocumentIntelligenceLayoutSkillChunkingUnit.
KnownDocumentIntelligenceLayoutSkillChunkingUnit can be used interchangeably with DocumentIntelligenceLayoutSkillChunkingUnit; this enum contains the known values that the service supports.

Known values supported by the service

characters: Specifies chunk by characters.

DocumentIntelligenceLayoutSkillExtractionOptions

Defines values for DocumentIntelligenceLayoutSkillExtractionOptions.
KnownDocumentIntelligenceLayoutSkillExtractionOptions can be used interchangeably with DocumentIntelligenceLayoutSkillExtractionOptions; this enum contains the known values that the service supports.

Known values supported by the service

images: Specify that image content should be extracted from the document.
locationMetadata: Specify that location metadata should be extracted from the document.

DocumentIntelligenceLayoutSkillMarkdownHeaderDepth

Defines values for DocumentIntelligenceLayoutSkillMarkdownHeaderDepth.
KnownDocumentIntelligenceLayoutSkillMarkdownHeaderDepth can be used interchangeably with DocumentIntelligenceLayoutSkillMarkdownHeaderDepth; this enum contains the known values that the service supports.

Known values supported by the service

h1: Header level 1.
h2: Header level 2.
h3: Header level 3.
h4: Header level 4.
h5: Header level 5.
h6: Header level 6.

DocumentIntelligenceLayoutSkillOutputFormat

Defines values for DocumentIntelligenceLayoutSkillOutputFormat.
KnownDocumentIntelligenceLayoutSkillOutputFormat can be used interchangeably with DocumentIntelligenceLayoutSkillOutputFormat; this enum contains the known values that the service supports.

Known values supported by the service

text: Specify the format of the output as text.
markdown: Specify the format of the output as markdown.

DocumentIntelligenceLayoutSkillOutputMode

Defines values for DocumentIntelligenceLayoutSkillOutputMode.
KnownDocumentIntelligenceLayoutSkillOutputMode can be used interchangeably with DocumentIntelligenceLayoutSkillOutputMode; this enum contains the known values that the service supports.

Known values supported by the service

oneToMany: Specify that the output should be parsed as 'oneToMany'.

EdgeNGramTokenFilterSide

Defines values for EdgeNGramTokenFilterSide.

EntityCategory
EntityRecognitionSkillLanguage
ExcludedODataTypes
ExhaustiveKnnAlgorithmConfiguration

Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index.

ExtractDocumentKey
GetAliasOptions

Options for get alias operation.

GetDataSourceConnectionOptions

Options for get datasource operation.

GetIndexOptions

Options for get index operation.

GetIndexStatisticsOptions

Options for get index statistics operation.

GetIndexStatsSummaryResponse

Contains response data for the getIndexStatsSummary operation.

GetIndexerOptions

Options for get indexer operation.

GetIndexerStatusOptions

Options for get indexer status operation.

GetServiceStatisticsOptions

Options for get service statistics operation.

GetSkillSetOptions

Options for get skillset operation.

GetSynonymMapsOptions

Options for get synonymmaps operation.

HnswAlgorithmConfiguration

Contains configuration options specific to the HNSW approximate nearest neighbors algorithm used during indexing time.

HybridCountAndFacetMode

Defines values for HybridCountAndFacetMode.
KnownHybridCountAndFacetMode can be used interchangeably with HybridCountAndFacetMode; this enum contains the known values that the service supports.

Known values supported by the service

countRetrievableResults: Only include documents that were matched within the 'maxTextRecallSize' retrieval window when computing 'count' and 'facets'.
countAllResults: Include all documents that were matched by the search query when computing 'count' and 'facets', regardless of whether or not those documents are within the 'maxTextRecallSize' retrieval window.

ImageAnalysisSkillLanguage
ImageDetail
IndexActionType

Defines values for IndexActionType.

IndexDocumentsAction

Represents an index action that operates on a document.

IndexIterator

An iterator for listing the indexes that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.
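The .byPage() consumption pattern described above can be sketched with a local stand-in async generator instead of a live SearchIndexClient, so no service calls are made; the index names are invented for illustration.

```typescript
// Stand-in for an iterator's byPage(): each yield simulates one request
// to the service returning a page of results.
async function* byPage(): AsyncIterableIterator<string[]> {
  yield ["hotels-index", "reviews-index"];
  yield ["logs-index"];
}

// Consume the pages and flatten them into a single list of names.
async function listAllIndexNames(): Promise<string[]> {
  const names: string[] = [];
  for await (const page of byPage()) {
    names.push(...page);
  }
  return names;
}

listAllIndexNames().then((names) => console.log(names.join(",")));
```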

IndexNameIterator

An iterator for listing the indexes that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexProjectionMode

Defines values for IndexProjectionMode.
KnownIndexProjectionMode can be used interchangeably with IndexProjectionMode; this enum contains the known values that the service supports.

Known values supported by the service

skipIndexingParentDocuments: The source document will be skipped from writing into the indexer's target index.
includeIndexingParentDocuments: The source document will be written into the indexer's target index. This is the default pattern.

IndexStatisticsSummaryIterator

An iterator for statistics summaries for each index in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexedSharePointContainerName

Defines values for IndexedSharePointContainerName.
KnownIndexedSharePointContainerName can be used interchangeably with IndexedSharePointContainerName; this enum contains the known values that the service supports.

Known values supported by the service

defaultSiteLibrary: Index content from the site's default document library.
allSiteLibraries: Index content from every document library in the site.
useQuery: Index only content that matches the query specified in the knowledge source.

IndexerExecutionEnvironment
IndexerExecutionStatus

Defines values for IndexerExecutionStatus.

IndexerExecutionStatusDetail

Defines values for IndexerExecutionStatusDetail.
KnownIndexerExecutionStatusDetail can be used interchangeably with IndexerExecutionStatusDetail; this enum contains the known values that the service supports.

Known values supported by the service

resetDocs: Indicates that the reset that occurred was for a call to ResetDocs.
resync: Indicates a selective resync based on options from the data source.

IndexerPermissionOption

Defines values for IndexerPermissionOption.
KnownIndexerPermissionOption can be used interchangeably with IndexerPermissionOption; this enum contains the known values that the service supports.

Known values supported by the service

userIds: The indexer ingests ACL userIds from the data source into the index.
groupIds: The indexer ingests ACL groupIds from the data source into the index.
rbacScope: The indexer ingests the Azure RBAC scope from the data source into the index.

IndexerResyncOption

Defines values for IndexerResyncOption.
KnownIndexerResyncOption can be used interchangeably with IndexerResyncOption; this enum contains the known values that the service supports.

Known values supported by the service

permissions: The indexer re-ingests pre-selected permissions data from the data source into the index.

IndexerStatus

Defines values for IndexerStatus.

IndexingMode

Defines values for IndexingMode.
KnownIndexingMode can be used interchangeably with IndexingMode; this enum contains the known values that the service supports.

Known values supported by the service

indexingAllDocs: The indexer is indexing all documents in the data source.
indexingResetDocs: The indexer is indexing only the selected, reset documents in the data source. The documents being indexed are defined on the indexer status.
indexingResync: The indexer is resyncing and indexing selected options from the data source.

KeyPhraseExtractionSkillLanguage
KnowledgeBaseActivityRecord
KnowledgeBaseIterator

An iterator for listing the knowledge bases that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

KnowledgeBaseMessageContent
KnowledgeBaseModel
KnowledgeBaseReference
KnowledgeBaseRetrievalActivityRecord
KnowledgeRetrievalOutputMode

Defines values for KnowledgeRetrievalOutputMode.
KnownKnowledgeRetrievalOutputMode can be used interchangeably with KnowledgeRetrievalOutputMode; this enum contains the known values that the service supports.

Known values supported by the service

extractiveData: Return data from the knowledge sources directly without generative alteration.
answerSynthesis: Synthesize an answer for the response payload.

KnowledgeRetrievalReasoningEffortUnion
KnowledgeSource
KnowledgeSourceContentExtractionMode

Defines values for KnowledgeSourceContentExtractionMode.
KnownKnowledgeSourceContentExtractionMode can be used interchangeably with KnowledgeSourceContentExtractionMode; this enum contains the known values that the service supports.

Known values supported by the service

minimal: Extracts only essential metadata while deferring most content processing.
standard: Performs the full default content extraction pipeline.

KnowledgeSourceIngestionPermissionOption

Defines values for KnowledgeSourceIngestionPermissionOption.
KnownKnowledgeSourceIngestionPermissionOption can be used interchangeably with KnowledgeSourceIngestionPermissionOption; this enum contains the known values that the service supports.

Known values supported by the service

userIds: Ingest explicit user identifiers alongside document content.
groupIds: Ingest group identifiers alongside document content.
rbacScope: Ingest RBAC scope information alongside document content.

KnowledgeSourceIterator

An iterator for listing the knowledge sources that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

KnowledgeSourceKind

Defines values for KnowledgeSourceKind.
KnownKnowledgeSourceKind can be used interchangeably with KnowledgeSourceKind; this enum contains the known values that the service supports.

Known values supported by the service

searchIndex: A knowledge source that retrieves data from a Search Index.
azureBlob: A knowledge source that retrieves and ingests data from Azure Blob Storage to a Search Index.
web: A knowledge source that retrieves data from the web.
remoteSharePoint: A knowledge source that retrieves data from a remote SharePoint endpoint.
indexedSharePoint: A knowledge source that retrieves and ingests data from SharePoint to a Search Index.
indexedOneLake: A knowledge source that retrieves and ingests data from OneLake to a Search Index.

KnowledgeSourceParams
KnowledgeSourceSynchronizationStatus

Defines values for KnowledgeSourceSynchronizationStatus.
KnownKnowledgeSourceSynchronizationStatus can be used interchangeably with KnowledgeSourceSynchronizationStatus; this enum contains the known values that the service supports.

Known values supported by the service

creating: The knowledge source is being provisioned.
active: The knowledge source is active and synchronization runs are occurring.
deleting: The knowledge source is being deleted and synchronization is paused.

KnowledgeSourceVectorizer
LexicalAnalyzer

Contains the possible cases for LexicalAnalyzer.

LexicalAnalyzerName

Defines values for LexicalAnalyzerName.
KnownLexicalAnalyzerName can be used interchangeably with LexicalAnalyzerName; this enum contains the known values that the service supports.

Known values supported by the service

ar.microsoft: Microsoft analyzer for Arabic.
ar.lucene: Lucene analyzer for Arabic.
hy.lucene: Lucene analyzer for Armenian.
bn.microsoft: Microsoft analyzer for Bangla.
eu.lucene: Lucene analyzer for Basque.
bg.microsoft: Microsoft analyzer for Bulgarian.
bg.lucene: Lucene analyzer for Bulgarian.
ca.microsoft: Microsoft analyzer for Catalan.
ca.lucene: Lucene analyzer for Catalan.
zh-Hans.microsoft: Microsoft analyzer for Chinese (Simplified).
zh-Hans.lucene: Lucene analyzer for Chinese (Simplified).
zh-Hant.microsoft: Microsoft analyzer for Chinese (Traditional).
zh-Hant.lucene: Lucene analyzer for Chinese (Traditional).
hr.microsoft: Microsoft analyzer for Croatian.
cs.microsoft: Microsoft analyzer for Czech.
cs.lucene: Lucene analyzer for Czech.
da.microsoft: Microsoft analyzer for Danish.
da.lucene: Lucene analyzer for Danish.
nl.microsoft: Microsoft analyzer for Dutch.
nl.lucene: Lucene analyzer for Dutch.
en.microsoft: Microsoft analyzer for English.
en.lucene: Lucene analyzer for English.
et.microsoft: Microsoft analyzer for Estonian.
fi.microsoft: Microsoft analyzer for Finnish.
fi.lucene: Lucene analyzer for Finnish.
fr.microsoft: Microsoft analyzer for French.
fr.lucene: Lucene analyzer for French.
gl.lucene: Lucene analyzer for Galician.
de.microsoft: Microsoft analyzer for German.
de.lucene: Lucene analyzer for German.
el.microsoft: Microsoft analyzer for Greek.
el.lucene: Lucene analyzer for Greek.
gu.microsoft: Microsoft analyzer for Gujarati.
he.microsoft: Microsoft analyzer for Hebrew.
hi.microsoft: Microsoft analyzer for Hindi.
hi.lucene: Lucene analyzer for Hindi.
hu.microsoft: Microsoft analyzer for Hungarian.
hu.lucene: Lucene analyzer for Hungarian.
is.microsoft: Microsoft analyzer for Icelandic.
id.microsoft: Microsoft analyzer for Indonesian (Bahasa).
id.lucene: Lucene analyzer for Indonesian.
ga.lucene: Lucene analyzer for Irish.
it.microsoft: Microsoft analyzer for Italian.
it.lucene: Lucene analyzer for Italian.
ja.microsoft: Microsoft analyzer for Japanese.
ja.lucene: Lucene analyzer for Japanese.
kn.microsoft: Microsoft analyzer for Kannada.
ko.microsoft: Microsoft analyzer for Korean.
ko.lucene: Lucene analyzer for Korean.
lv.microsoft: Microsoft analyzer for Latvian.
lv.lucene: Lucene analyzer for Latvian.
lt.microsoft: Microsoft analyzer for Lithuanian.
ml.microsoft: Microsoft analyzer for Malayalam.
ms.microsoft: Microsoft analyzer for Malay (Latin).
mr.microsoft: Microsoft analyzer for Marathi.
nb.microsoft: Microsoft analyzer for Norwegian (Bokmål).
no.lucene: Lucene analyzer for Norwegian.
fa.lucene: Lucene analyzer for Persian.
pl.microsoft: Microsoft analyzer for Polish.
pl.lucene: Lucene analyzer for Polish.
pt-BR.microsoft: Microsoft analyzer for Portuguese (Brazil).
pt-BR.lucene: Lucene analyzer for Portuguese (Brazil).
pt-PT.microsoft: Microsoft analyzer for Portuguese (Portugal).
pt-PT.lucene: Lucene analyzer for Portuguese (Portugal).
pa.microsoft: Microsoft analyzer for Punjabi.
ro.microsoft: Microsoft analyzer for Romanian.
ro.lucene: Lucene analyzer for Romanian.
ru.microsoft: Microsoft analyzer for Russian.
ru.lucene: Lucene analyzer for Russian.
sr-cyrillic.microsoft: Microsoft analyzer for Serbian (Cyrillic).
sr-latin.microsoft: Microsoft analyzer for Serbian (Latin).
sk.microsoft: Microsoft analyzer for Slovak.
sl.microsoft: Microsoft analyzer for Slovenian.
es.microsoft: Microsoft analyzer for Spanish.
es.lucene: Lucene analyzer for Spanish.
sv.microsoft: Microsoft analyzer for Swedish.
sv.lucene: Lucene analyzer for Swedish.
ta.microsoft: Microsoft analyzer for Tamil.
te.microsoft: Microsoft analyzer for Telugu.
th.microsoft: Microsoft analyzer for Thai.
th.lucene: Lucene analyzer for Thai.
tr.microsoft: Microsoft analyzer for Turkish.
tr.lucene: Lucene analyzer for Turkish.
uk.microsoft: Microsoft analyzer for Ukrainian.
ur.microsoft: Microsoft analyzer for Urdu.
vi.microsoft: Microsoft analyzer for Vietnamese.
standard.lucene: Standard Lucene analyzer.
standardasciifolding.lucene: Standard ASCII Folding Lucene analyzer. See https://learn.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#Analyzers
keyword: Treats the entire content of a field as a single token. This is useful for data like zip codes, IDs, and some product names. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html
pattern: Flexibly separates text into terms via a regular expression pattern. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html
simple: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/SimpleAnalyzer.html
stop: Divides text at non-letters; applies the lowercase and stopword token filters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html
whitespace: An analyzer that uses the whitespace tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceAnalyzer.html

LexicalNormalizer

Contains the possible cases for LexicalNormalizer.

LexicalNormalizerName

Defines values for LexicalNormalizerName.
KnownLexicalNormalizerName can be used interchangeably with LexicalNormalizerName; this enum contains the known values that the service supports.

Known values supported by the service

asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
lowercase: Normalizes token text to lowercase. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html
standard: Standard normalizer, which consists of lowercase and asciifolding.
uppercase: Normalizes token text to uppercase. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html

LexicalTokenizer

Contains the possible cases for LexicalTokenizer.

LexicalTokenizerName

Defines values for LexicalTokenizerName.
KnownLexicalTokenizerName can be used interchangeably with LexicalTokenizerName; this enum contains the known values that the service supports.

Known values supported by the service

classic: Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html
edgeNGram: Tokenizes the input from an edge into n-grams of the given size(s). See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html
keyword_v2: Emits the entire input as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html
letter: Divides text at non-letters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html
lowercase: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html
microsoft_language_tokenizer: Divides text using language-specific rules.
microsoft_language_stemming_tokenizer: Divides text using language-specific rules and reduces words to their base forms.
nGram: Tokenizes the input into n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html
path_hierarchy_v2: Tokenizer for path-like hierarchies. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html
pattern: Tokenizer that uses regex pattern matching to construct distinct tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html
standard_v2: Standard Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html
uax_url_email: Tokenizes URLs and emails as one token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html
whitespace: Divides text at whitespace. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html

ListAliasesOptions

Options for list aliases operation.

ListDataSourceConnectionsOptions

Options for a list data sources operation.

ListIndexersOptions

Options for a list indexers operation.

ListIndexesOptions

Options for a list indexes operation.

ListSkillsetsOptions

Options for a list skillsets operation.

ListSynonymMapsOptions

Options for a list synonymMaps operation.

MarkdownHeaderDepth

Defines values for MarkdownHeaderDepth.
KnownMarkdownHeaderDepth can be used interchangeably with MarkdownHeaderDepth; this enum contains the known values that the service supports.

Known values supported by the service

h1: Indicates that headers up to a level of h1 will be considered while grouping markdown content.
h2: Indicates that headers up to a level of h2 will be considered while grouping markdown content.
h3: Indicates that headers up to a level of h3 will be considered while grouping markdown content.
h4: Indicates that headers up to a level of h4 will be considered while grouping markdown content.
h5: Indicates that headers up to a level of h5 will be considered while grouping markdown content.
h6: Indicates that headers up to a level of h6 will be considered while grouping markdown content. This is the default.

MarkdownParsingSubmode

Defines values for MarkdownParsingSubmode.
KnownMarkdownParsingSubmode can be used interchangeably with MarkdownParsingSubmode, this enum contains the known values that the service supports.

Known values supported by the service

oneToMany: Indicates that each section of the markdown file (up to a specified depth) will be parsed into individual search documents. This can result in a single markdown file producing multiple search documents. This is the default sub-mode.
oneToOne: Indicates that each markdown file will be parsed into a single search document.
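For illustration, the sub-mode is selected through the indexer configuration together with a header depth. This is a sketch only: the property names below (parsingMode, markdownParsingSubmode, markdownHeaderDepth) are assumptions based on the blob indexer configuration surface, not verified against a specific API version.

```typescript
// Illustrative indexer parameters combining the markdown parsing sub-mode
// with a header depth. Property names are assumptions, not guaranteed.
const indexerParameters = {
  configuration: {
    parsingMode: "markdown",
    markdownParsingSubmode: "oneToMany", // one search document per markdown section
    markdownHeaderDepth: "h3",           // group content under headers up to h3
  },
};
```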

MergeDocumentsOptions

Options for the merge documents operation.

MergeOrUploadDocumentsOptions

Options for the merge or upload documents operation.

MicrosoftStemmingTokenizerLanguage

Defines values for MicrosoftStemmingTokenizerLanguage.

MicrosoftTokenizerLanguage

Defines values for MicrosoftTokenizerLanguage.

NarrowedModel

Narrows the Model type to include only the selected Fields

OcrLineEnding

Defines values for OcrLineEnding.
KnownOcrLineEnding can be used interchangeably with OcrLineEnding, this enum contains the known values that the service supports.

Known values supported by the service

space: Lines are separated by a single space character.
carriageReturn: Lines are separated by a carriage return ('\r') character.
lineFeed: Lines are separated by a single line feed ('\n') character.
carriageReturnLineFeed: Lines are separated by carriage return and line feed ('\r\n') characters.

OcrSkillLanguage
PIIDetectionSkillMaskingMode
PermissionFilter

Defines values for PermissionFilter.
KnownPermissionFilter can be used interchangeably with PermissionFilter, this enum contains the known values that the service supports.

Known values supported by the service

userIds: Field represents user IDs that should be used to filter document access on queries.
groupIds: Field represents group IDs that should be used to filter document access on queries.
rbacScope: Field represents an RBAC scope that should be used to filter document access on queries.

PhoneticEncoder

Defines values for PhoneticEncoder.

QueryAnswer

A value that specifies whether answers should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set to extractive, the query returns answers extracted from key passages in the highest ranked documents.

QueryCaption

A value that specifies whether captions should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set, the query returns captions extracted from key passages in the highest ranked documents. When Captions is 'extractive', highlighting is enabled by default. Defaults to 'none'.
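As a hedged illustration, the search options for a semantic query that requests both answers and captions might be shaped like this (the option shape is assumed from the v12 options surface, and "my-semantic-config" is a placeholder semantic configuration name):

```typescript
// Illustrative options object for a semantic query with extractive
// answers and captions. Shape assumed; configuration name is a placeholder.
const searchOptions = {
  queryType: "semantic",
  semanticSearchOptions: {
    configurationName: "my-semantic-config",
    answers: { answerType: "extractive", count: 3 },   // top answers from key passages
    captions: { captionType: "extractive", highlight: true }, // highlighting on by default
  },
};
```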

QueryDebugMode

Defines values for QueryDebugMode.
KnownQueryDebugMode can be used interchangeably with QueryDebugMode, this enum contains the known values that the service supports.

Known values supported by the service

disabled: No query debugging information will be returned.
semantic: Allows the user to further explore their reranked results.
vector: Allows the user to further explore their hybrid and vector query results.
queryRewrites: Allows the user to explore the list of query rewrites generated for their search request.
innerHits: Allows the user to retrieve scoring information regarding vectors matched within a collection of complex types.
all: Turn on all debug options.

QueryLanguage

Defines values for QueryLanguage.
KnownQueryLanguage can be used interchangeably with QueryLanguage, this enum contains the known values that the service supports.

Known values supported by the service

none: Query language not specified.
en-us: Query language value for English (United States).
en-gb: Query language value for English (Great Britain).
en-in: Query language value for English (India).
en-ca: Query language value for English (Canada).
en-au: Query language value for English (Australia).
fr-fr: Query language value for French (France).
fr-ca: Query language value for French (Canada).
de-de: Query language value for German (Germany).
es-es: Query language value for Spanish (Spain).
es-mx: Query language value for Spanish (Mexico).
zh-cn: Query language value for Chinese (China).
zh-tw: Query language value for Chinese (Taiwan).
pt-br: Query language value for Portuguese (Brazil).
pt-pt: Query language value for Portuguese (Portugal).
it-it: Query language value for Italian (Italy).
ja-jp: Query language value for Japanese (Japan).
ko-kr: Query language value for Korean (Korea).
ru-ru: Query language value for Russian (Russia).
cs-cz: Query language value for Czech (Czech Republic).
nl-be: Query language value for Dutch (Belgium).
nl-nl: Query language value for Dutch (Netherlands).
hu-hu: Query language value for Hungarian (Hungary).
pl-pl: Query language value for Polish (Poland).
sv-se: Query language value for Swedish (Sweden).
tr-tr: Query language value for Turkish (Turkey).
hi-in: Query language value for Hindi (India).
ar-sa: Query language value for Arabic (Saudi Arabia).
ar-eg: Query language value for Arabic (Egypt).
ar-ma: Query language value for Arabic (Morocco).
ar-kw: Query language value for Arabic (Kuwait).
ar-jo: Query language value for Arabic (Jordan).
da-dk: Query language value for Danish (Denmark).
no-no: Query language value for Norwegian (Norway).
bg-bg: Query language value for Bulgarian (Bulgaria).
hr-hr: Query language value for Croatian (Croatia).
hr-ba: Query language value for Croatian (Bosnia and Herzegovina).
ms-my: Query language value for Malay (Malaysia).
ms-bn: Query language value for Malay (Brunei Darussalam).
sl-sl: Query language value for Slovenian (Slovenia).
ta-in: Query language value for Tamil (India).
vi-vn: Query language value for Vietnamese (Viet Nam).
el-gr: Query language value for Greek (Greece).
ro-ro: Query language value for Romanian (Romania).
is-is: Query language value for Icelandic (Iceland).
id-id: Query language value for Indonesian (Indonesia).
th-th: Query language value for Thai (Thailand).
lt-lt: Query language value for Lithuanian (Lithuania).
uk-ua: Query language value for Ukrainian (Ukraine).
lv-lv: Query language value for Latvian (Latvia).
et-ee: Query language value for Estonian (Estonia).
ca-es: Query language value for Catalan.
fi-fi: Query language value for Finnish (Finland).
sr-ba: Query language value for Serbian (Bosnia and Herzegovina).
sr-me: Query language value for Serbian (Montenegro).
sr-rs: Query language value for Serbian (Serbia).
sk-sk: Query language value for Slovak (Slovakia).
nb-no: Query language value for Norwegian (Norway).
hy-am: Query language value for Armenian (Armenia).
bn-in: Query language value for Bengali (India).
eu-es: Query language value for Basque.
gl-es: Query language value for Galician.
gu-in: Query language value for Gujarati (India).
he-il: Query language value for Hebrew (Israel).
ga-ie: Query language value for Irish (Ireland).
kn-in: Query language value for Kannada (India).
ml-in: Query language value for Malayalam (India).
mr-in: Query language value for Marathi (India).
fa-ae: Query language value for Persian (U.A.E.).
pa-in: Query language value for Punjabi (India).
te-in: Query language value for Telugu (India).
ur-pk: Query language value for Urdu (Pakistan).

QueryRewrites

Defines options for query rewrites.

QuerySpeller

Defines values for QuerySpellerType.
KnownQuerySpellerType can be used interchangeably with QuerySpellerType, this enum contains the known values that the service supports.

Known values supported by the service

none: Speller not enabled.
lexicon: Speller corrects individual query terms using a static lexicon for the language specified by the queryLanguage parameter.

QueryType

Defines values for QueryType.

RankingOrder

Defines values for RankingOrder.
KnownRankingOrder can be used interchangeably with RankingOrder, this enum contains the known values that the service supports.

Known values supported by the service

BoostedRerankerScore: Sets sort order as BoostedRerankerScore
RerankerScore: Sets sort order as RerankerScore

RegexFlags
ResetIndexerOptions

Options for reset indexer operation.

RunIndexerOptions

Options for run indexer operation.

ScoringFunction

Contains the possible cases for ScoringFunction.

ScoringFunctionAggregation

Defines values for ScoringFunctionAggregation.

ScoringFunctionInterpolation

Defines values for ScoringFunctionInterpolation.

ScoringStatistics

Defines values for ScoringStatistics.

SearchField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SearchFieldArray

If TModel is an untyped object, an untyped string array; otherwise, the slash-delimited fields of TModel.

SearchFieldDataType

Defines values for SearchFieldDataType.

Known values supported by the service:

Edm.String: Indicates that a field contains a string.

Edm.Int32: Indicates that a field contains a 32-bit signed integer.

Edm.Int64: Indicates that a field contains a 64-bit signed integer.

Edm.Double: Indicates that a field contains an IEEE double-precision floating point number.

Edm.Boolean: Indicates that a field contains a Boolean value (true or false).

Edm.DateTimeOffset: Indicates that a field contains a date/time value, including timezone information.

Edm.GeographyPoint: Indicates that a field contains a geo-location in terms of longitude and latitude.

Edm.ComplexType: Indicates that a field contains one or more complex objects that in turn have sub-fields of other types.

Edm.Single: Indicates that a field contains a single-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Single).

Edm.Half: Indicates that a field contains a half-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Half).

Edm.Int16: Indicates that a field contains a 16-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Int16).

Edm.SByte: Indicates that a field contains an 8-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.SByte).

Edm.Byte: Indicates that a field contains an 8-bit unsigned integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Byte).
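A sketch of how these EDM types appear in field definitions (field names and the vector dimension count are illustrative only; property names follow the SearchField shape):

```typescript
// Illustrative field definitions using the EDM data types listed above.
const fields = [
  { name: "hotelId", type: "Edm.String", key: true, filterable: true },
  { name: "rating", type: "Edm.Double", sortable: true },
  { name: "lastRenovated", type: "Edm.DateTimeOffset", filterable: true },
  { name: "location", type: "Edm.GeographyPoint" },
  { name: "tags", type: "Collection(Edm.String)", searchable: true },
  // Narrow numeric types such as Edm.Single are only valid inside collections,
  // which is how vector fields are typically declared.
  { name: "embedding", type: "Collection(Edm.Single)", searchable: true, vectorSearchDimensions: 1536 },
];
```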

SearchIndexAlias

Search Alias object.

SearchIndexPermissionFilterOption

Defines values for SearchIndexPermissionFilterOption.
KnownSearchIndexPermissionFilterOption can be used interchangeably with SearchIndexPermissionFilterOption, this enum contains the known values that the service supports.

Known values supported by the service

enabled
disabled

SearchIndexerDataIdentity

Contains the possible cases for SearchIndexerDataIdentity.

SearchIndexerDataSourceType
SearchIndexerSkill

Contains the possible cases for Skill.

SearchIndexingBufferedSenderDeleteDocumentsOptions

Options for SearchIndexingBufferedSenderDeleteDocuments.

SearchIndexingBufferedSenderFlushDocumentsOptions

Options for SearchIndexingBufferedSenderFlushDocuments.

SearchIndexingBufferedSenderMergeDocumentsOptions

Options for SearchIndexingBufferedSenderMergeDocuments.

SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions

Options for SearchIndexingBufferedSenderMergeOrUploadDocuments.

SearchIndexingBufferedSenderUploadDocumentsOptions

Options for SearchIndexingBufferedSenderUploadDocuments.

SearchIterator

An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

SearchMode

Defines values for SearchMode.

SearchOptions

Options for committing a full search request.

SearchPick

Deeply pick fields of T using valid AI Search OData $select paths.

SearchRequestOptions

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

SearchRequestQueryTypeOptions
SearchResult

Contains a document found by a search query, plus associated metadata.

SelectArray

If TFields is never, an untyped string array; otherwise, a narrowed Fields[] type to be used elsewhere in the consuming type.

SelectFields

Produces a union of valid AI Search OData $select paths for T using a post-order traversal of the field tree rooted at T.

SemanticErrorMode
SemanticErrorReason
SemanticFieldState

Defines values for SemanticFieldState.
KnownSemanticFieldState can be used interchangeably with SemanticFieldState, this enum contains the known values that the service supports.

Known values supported by the service

used: The field was fully used for semantic enrichment.
unused: The field was not used for semantic enrichment.
partial: The field was partially used for semantic enrichment.

SemanticQueryRewritesResultType

Defines values for SemanticQueryRewritesResultType.
KnownSemanticQueryRewritesResultType can be used interchangeably with SemanticQueryRewritesResultType, this enum contains the known values that the service supports.

Known values supported by the service

originalQueryOnly: Query rewrites were not successfully generated for this request. Only the original query was used to retrieve the results.

SemanticSearchResultsType
SentimentSkillLanguage
SimilarityAlgorithm

Contains the possible cases for Similarity.

SnowballTokenFilterLanguage

Defines values for SnowballTokenFilterLanguage.

SplitSkillEncoderModelName

Defines values for SplitSkillEncoderModelName.
KnownSplitSkillEncoderModelName can be used interchangeably with SplitSkillEncoderModelName, this enum contains the known values that the service supports.

Known values supported by the service

r50k_base: Refers to a base model trained with a 50,000 token vocabulary, often used in general natural language processing tasks.
p50k_base: A base model with a 50,000 token vocabulary, optimized for prompt-based tasks.
p50k_edit: Similar to p50k_base but fine-tuned for editing or rephrasing tasks with a 50,000 token vocabulary.
cl100k_base: A base model with a 100,000 token vocabulary.

SplitSkillLanguage
SplitSkillUnit

Defines values for SplitSkillUnit.
KnownSplitSkillUnit can be used interchangeably with SplitSkillUnit, this enum contains the known values that the service supports.

Known values supported by the service

characters: The length will be measured by character.
azureOpenAITokens: The length will be measured by an AzureOpenAI tokenizer from the tiktoken library.

StemmerTokenFilterLanguage

Defines values for StemmerTokenFilterLanguage.

StopwordsList

Defines values for StopwordsList.

SuggestNarrowedModel
SuggestOptions

Options for retrieving suggestions based on the searchText.

SuggestResult

A result containing a document found by a suggestion query, plus associated metadata.

TextSplitMode
TextTranslationSkillLanguage
TokenCharacterKind

Defines values for TokenCharacterKind.

TokenFilter

Contains the possible cases for TokenFilter.

TokenFilterName

Defines values for TokenFilterName.
KnownTokenFilterName can be used interchangeably with TokenFilterName, this enum contains the known values that the service supports.

Known values supported by the service

arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html
apostrophe: Strips all characters after an apostrophe (including the apostrophe itself). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/tr/ApostropheFilter.html
asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
cjk_bigram: Forms bigrams of CJK terms that are generated from the standard tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html
cjk_width: Normalizes CJK width differences. Folds fullwidth ASCII variants into the equivalent basic Latin, and half-width Katakana variants into the equivalent Kana. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html
classic: Removes English possessives, and dots from acronyms. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicFilter.html
common_grams: Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html
edgeNGram_v2: Generates n-grams of the given size(s) starting from the front or the back of an input token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
german_normalization: Normalizes German characters according to the heuristics of the German2 snowball algorithm. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html
hindi_normalization: Normalizes text in Hindi to remove some differences in spelling variations. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html
indic_normalization: Normalizes the Unicode representation of text in Indian languages. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html
keyword_repeat: Emits each incoming token twice, once as keyword and once as non-keyword. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/KeywordRepeatFilter.html
kstem: A high-performance kstem filter for English. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/en/KStemFilter.html
length: Removes words that are too long or too short. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html
limit: Limits the number of tokens while indexing. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html
lowercase: Normalizes token text to lower case. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html
nGram_v2: Generates n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html
persian_normalization: Applies normalization for Persian. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizationFilter.html
phonetic: Create tokens for phonetic matches. See https://lucene.apache.org/core/4_10_3/analyzers-phonetic/org/apache/lucene/analysis/phonetic/package-tree.html
porter_stem: Uses the Porter stemming algorithm to transform the token stream. See http://tartarus.org/~martin/PorterStemmer
reverse: Reverses the token string. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html
scandinavian_normalization: Normalizes use of the interchangeable Scandinavian characters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html
scandinavian_folding: Folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o. It also discriminates against use of double vowels aa, ae, ao, oe and oo, leaving just the first one. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html
shingle: Creates combinations of tokens as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html
snowball: A filter that stems words using a Snowball-generated stemmer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html
sorani_normalization: Normalizes the Unicode representation of Sorani text. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html
stemmer: Language specific stemming filter. See https://learn.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters
stopwords: Removes stop words from a token stream. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html
trim: Trims leading and trailing whitespace from tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html
truncate: Truncates the terms to a specific length. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html
unique: Filters out tokens with same text as the previous token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.html
uppercase: Normalizes token text to upper case. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html
word_delimiter: Splits words into subwords and performs optional transformations on subword groups.

UnionToIntersection
UploadDocumentsOptions

Options for the upload documents operation.

VectorEncodingFormat

Defines values for VectorEncodingFormat.
KnownVectorEncodingFormat can be used interchangeably with VectorEncodingFormat, this enum contains the known values that the service supports.

Known values supported by the service

packedBit: Encoding format representing bits packed into a wider data type.

VectorFilterMode
VectorQuery

The query parameters for vector and hybrid search queries.

VectorQueryKind
VectorSearchAlgorithmConfiguration

Contains configuration options specific to the algorithm used during indexing and/or querying.

VectorSearchAlgorithmKind
VectorSearchAlgorithmMetric
VectorSearchCompression

Contains configuration options specific to the compression method used during indexing or querying.

VectorSearchCompressionKind

Defines values for VectorSearchCompressionKind.
KnownVectorSearchCompressionKind can be used interchangeably with VectorSearchCompressionKind, this enum contains the known values that the service supports.

Known values supported by the service

scalarQuantization: Scalar Quantization, a type of compression method. In scalar quantization, the original vector values are compressed to a narrower type by discretizing and representing each component of a vector using a reduced set of quantized values, thereby reducing the overall data size.
binaryQuantization: Binary Quantization, a type of compression method. In binary quantization, the original vector values are compressed to the narrower binary type by discretizing and representing each component of a vector using binary values, thereby reducing the overall data size.

VectorSearchCompressionRescoreStorageMethod

Defines values for VectorSearchCompressionRescoreStorageMethod.
KnownVectorSearchCompressionRescoreStorageMethod can be used interchangeably with VectorSearchCompressionRescoreStorageMethod, this enum contains the known values that the service supports.

Known values supported by the service

preserveOriginals: This option preserves the original full-precision vectors. Choose this option for maximum flexibility and highest quality of compressed search results. This consumes more storage but allows for rescoring and oversampling.
discardOriginals: This option discards the original full-precision vectors. Choose this option for maximum storage savings. Since this option does not allow for rescoring and oversampling, it will often cause slight to moderate reductions in quality.

VectorSearchCompressionTarget

Defines values for VectorSearchCompressionTarget.
KnownVectorSearchCompressionTarget can be used interchangeably with VectorSearchCompressionTarget, this enum contains the known values that the service supports.

Known values supported by the service

int8

VectorSearchVectorizer

Contains configuration options on how to vectorize text vector queries.

VectorSearchVectorizerKind

Defines values for VectorSearchVectorizerKind.
KnownVectorSearchVectorizerKind can be used interchangeably with VectorSearchVectorizerKind, this enum contains the known values that the service supports.

Known values supported by the service

azureOpenAI: Generate embeddings using an Azure OpenAI resource at query time.
customWebApi: Generate embeddings using a custom web endpoint at query time.
aiServicesVision: Generate embeddings for an image or text input at query time using the Azure AI Services Vision Vectorize API.
aml: Generate embeddings using an Azure Machine Learning endpoint deployed via the Azure AI Foundry Model Catalog at query time.

VectorThreshold

The threshold used for vector queries.

VisualFeature
WebApiSkills

Enums

KnownAIFoundryModelCatalogName

Known values of AIFoundryModelCatalogName that the service accepts.

KnownAnalyzerNames

Defines values for AnalyzerName. See https://learn.microsoft.com/rest/api/searchservice/Language-support

KnownAzureOpenAIModelName

Known values of AzureOpenAIModelName that the service accepts.

KnownBlobIndexerDataToExtract

Known values of BlobIndexerDataToExtract that the service accepts.

KnownBlobIndexerImageAction

Known values of BlobIndexerImageAction that the service accepts.

KnownBlobIndexerPDFTextRotationAlgorithm

Known values of BlobIndexerPDFTextRotationAlgorithm that the service accepts.

KnownBlobIndexerParsingMode

Known values of BlobIndexerParsingMode that the service accepts.

KnownCharFilterNames

Defines values for CharFilterName.

KnownChatCompletionExtraParametersBehavior

Known values of ChatCompletionExtraParametersBehavior that the service accepts.

KnownChatCompletionResponseFormatType

Known values of ChatCompletionResponseFormatType that the service accepts.

KnownCustomEntityLookupSkillLanguage

Known values of CustomEntityLookupSkillLanguage that the service accepts.

KnownDocumentIntelligenceLayoutSkillChunkingUnit

Known values of DocumentIntelligenceLayoutSkillChunkingUnit that the service accepts.

KnownDocumentIntelligenceLayoutSkillExtractionOptions

Known values of DocumentIntelligenceLayoutSkillExtractionOptions that the service accepts.

KnownDocumentIntelligenceLayoutSkillMarkdownHeaderDepth

Known values of DocumentIntelligenceLayoutSkillMarkdownHeaderDepth that the service accepts.

KnownDocumentIntelligenceLayoutSkillOutputFormat

Known values of DocumentIntelligenceLayoutSkillOutputFormat that the service accepts.

KnownDocumentIntelligenceLayoutSkillOutputMode

Known values of DocumentIntelligenceLayoutSkillOutputMode that the service accepts.

KnownEntityCategory

Known values of EntityCategory that the service accepts.

KnownEntityRecognitionSkillLanguage

Known values of EntityRecognitionSkillLanguage that the service accepts.

KnownHybridCountAndFacetMode

Known values of HybridCountAndFacetMode that the service accepts.

KnownImageAnalysisSkillLanguage

Known values of ImageAnalysisSkillLanguage that the service accepts.

KnownImageDetail

Known values of ImageDetail that the service accepts.

KnownIndexProjectionMode

Known values of IndexProjectionMode that the service accepts.

KnownIndexerExecutionEnvironment

Known values of IndexerExecutionEnvironment that the service accepts.

KnownIndexerExecutionStatusDetail

Known values of IndexerExecutionStatusDetail that the service accepts.

KnownIndexerPermissionOption

Known values of IndexerPermissionOption that the service accepts.

KnownIndexerResyncOption

Known values of IndexerResyncOption that the service accepts.

KnownIndexingMode

Known values of IndexingMode that the service accepts.

KnownKeyPhraseExtractionSkillLanguage

Known values of KeyPhraseExtractionSkillLanguage that the service accepts.

KnownKnowledgeBaseModelKind

Known values of KnowledgeBaseModelKind that the service accepts.

KnownKnowledgeRetrievalOutputMode

Known values of KnowledgeRetrievalOutputMode that the service accepts.

KnownKnowledgeSourceKind

Known values of KnowledgeSourceKind that the service accepts.

KnownLexicalAnalyzerName

Known values of LexicalAnalyzerName that the service accepts.

KnownLexicalNormalizerName

Known values of LexicalNormalizerName that the service accepts.

KnownMarkdownHeaderDepth

Known values of MarkdownHeaderDepth that the service accepts.

KnownMarkdownParsingSubmode

Known values of MarkdownParsingSubmode that the service accepts.

KnownOcrLineEnding

Known values of OcrLineEnding that the service accepts.

KnownOcrSkillLanguage

Known values of OcrSkillLanguage that the service accepts.

KnownPIIDetectionSkillMaskingMode

Known values of PIIDetectionSkillMaskingMode that the service accepts.

KnownPermissionFilter

Known values of PermissionFilter that the service accepts.

KnownQueryDebugMode

Known values of QueryDebugMode that the service accepts.

KnownQueryLanguage

Known values of QueryLanguage that the service accepts.

KnownQuerySpeller

Known values of QuerySpellerType that the service accepts.

KnownRankingOrder

Known values of RankingOrder that the service accepts.

KnownRegexFlags

Known values of RegexFlags that the service accepts.

KnownSearchAudience

Known values for Search Audience

KnownSearchFieldDataType

Known values of SearchFieldDataType that the service accepts.

KnownSearchIndexPermissionFilterOption

Known values of SearchIndexPermissionFilterOption that the service accepts.

KnownSearchIndexerDataSourceType

Known values of SearchIndexerDataSourceType that the service accepts.

KnownSemanticErrorMode

Known values of SemanticErrorMode that the service accepts.

KnownSemanticErrorReason

Known values of SemanticErrorReason that the service accepts.

KnownSemanticFieldState

Known values of SemanticFieldState that the service accepts.

KnownSemanticQueryRewritesResultType

Known values of SemanticQueryRewritesResultType that the service accepts.

KnownSemanticSearchResultsType

Known values of SemanticSearchResultsType that the service accepts.

KnownSentimentSkillLanguage

Known values of SentimentSkillLanguage that the service accepts.

KnownSplitSkillEncoderModelName

Known values of SplitSkillEncoderModelName that the service accepts.

KnownSplitSkillLanguage

Known values of SplitSkillLanguage that the service accepts.

KnownSplitSkillUnit

Known values of SplitSkillUnit that the service accepts.

KnownTextSplitMode

Known values of TextSplitMode that the service accepts.

KnownTextTranslationSkillLanguage

Known values of TextTranslationSkillLanguage that the service accepts.

KnownTokenFilterNames

Defines values for TokenFilterName.

KnownTokenizerNames

Defines values for TokenizerName.

KnownVectorEncodingFormat

Known values of VectorEncodingFormat that the service accepts.

KnownVectorFilterMode

Known values of VectorFilterMode that the service accepts.

KnownVectorQueryKind

Known values of VectorQueryKind that the service accepts.

KnownVectorSearchAlgorithmKind

Known values of VectorSearchAlgorithmKind that the service accepts.

KnownVectorSearchAlgorithmMetric

Known values of VectorSearchAlgorithmMetric that the service accepts.

KnownVectorSearchCompressionKind

Known values of VectorSearchCompressionKind that the service accepts.

KnownVectorSearchCompressionRescoreStorageMethod

Known values of VectorSearchCompressionRescoreStorageMethod that the service accepts.

KnownVectorSearchCompressionTarget

Known values of VectorSearchCompressionTarget that the service accepts.

KnownVectorSearchVectorizerKind

Known values of VectorSearchVectorizerKind that the service accepts.

KnownVectorThresholdKind

Known values of VectorThresholdKind that the service accepts.

KnownVisualFeature

Known values of VisualFeature that the service accepts.

Functions

createSynonymMapFromFile(string, string)

Helper method to create a SynonymMap object. This is a NodeJS only method.

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

import { odata } from "@azure/search-documents";

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter

Variables

DEFAULT_BATCH_SIZE

Default Batch Size

DEFAULT_FLUSH_WINDOW

Default flush window interval

DEFAULT_RETRY_COUNT

Default number of times to retry.

Function Details

createSynonymMapFromFile(string, string)

Helper method to create a SynonymMap object. This is a NodeJS only method.

function createSynonymMapFromFile(name: string, filePath: string): Promise<SynonymMap>

Parameters

name

string

Name of the SynonymMap.

filePath

string

Path of the file that contains the synonyms (separated by new lines).

Returns

Promise<SynonymMap>

SynonymMap object

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

import { odata } from "@azure/search-documents";

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter

function odata(strings: TemplateStringsArray, values: unknown[]): string

Parameters

strings

TemplateStringsArray

Array of strings for the expression

values

unknown[]

Array of values for the expression

Returns

string
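To show the kind of escaping a tagged template like `odata` performs, here is a simplified sketch: interpolated string values have embedded single quotes doubled (the OData escape rule) and are wrapped in quotes, while other values are interpolated as-is. This is an illustration of the technique, not the package's actual implementation.

```typescript
// Sketch of an OData tagged-template escaper (illustrative only).
function odataSketch(
  strings: TemplateStringsArray,
  ...values: unknown[]
): string {
  let result = strings[0];
  for (let i = 0; i < values.length; i++) {
    const value = values[i];
    if (typeof value === "string") {
      // Double embedded single quotes, then wrap the literal in quotes.
      result += `'${value.replace(/'/g, "''")}'`;
    } else {
      // Numbers, booleans, etc. are interpolated unquoted.
      result += String(value);
    }
    result += strings[i + 1];
  }
  return result;
}
```

With this sketch, `` odataSketch`Owner eq ${"O'Brien"}` `` produces `Owner eq 'O''Brien'`, so a user-supplied string cannot break out of its quoted literal.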

Variable Details

DEFAULT_BATCH_SIZE

Default Batch Size

DEFAULT_BATCH_SIZE: number

Type

number

DEFAULT_FLUSH_WINDOW

Default flush window interval

DEFAULT_FLUSH_WINDOW: number

Type

number

DEFAULT_RETRY_COUNT

Default number of times to retry.

DEFAULT_RETRY_COUNT: number

Type

number