@azure/search-documents package
Classes
| AzureKeyCredential |
A static-key-based credential that supports updating the underlying key value. |
| GeographyPoint |
Represents a geographic point in global coordinates. |
| IndexDocumentsBatch |
Class used to perform batch operations with multiple documents to the index. |
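Under the hood, a batch is an array of per-document actions. A minimal sketch of that shape, using plain objects rather than the SDK class (the hotelId field and the documents are invented for illustration; `__actionType` mirrors the SDK's action discriminator):

```typescript
// Each action pairs a document with one of the four action types the
// service understands: upload, merge, mergeOrUpload, or delete.
type Action<T> = { __actionType: "upload" | "merge" | "mergeOrUpload" | "delete" } & T;

// Hypothetical documents for a hotels index (key field: hotelId).
const actions: Action<{ hotelId: string; rating?: number }>[] = [
  { __actionType: "upload", hotelId: "1", rating: 4 },        // create or replace
  { __actionType: "mergeOrUpload", hotelId: "2", rating: 5 }, // update if present, else create
  { __actionType: "delete", hotelId: "3" },                   // remove by key
];
```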
| KnowledgeRetrievalClient |
Class used to perform operations against a knowledge base. |
| SearchClient |
Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them. |
| SearchIndexClient |
Class to perform operations to manage (create, update, list, delete) indexes and synonym maps. |

| SearchIndexerClient |
Class to perform operations to manage (create, update, list, delete) indexers, data sources, and skillsets. |
| SearchIndexingBufferedSender |
Class used to perform buffered operations against a search index, including adding, updating, and removing documents. |
Interfaces
| AIServices |
Parameters for AI Services. |
| AIServicesAccountIdentity |
The multi-region account of an Azure AI service resource that's attached to a skillset. |
| AIServicesAccountKey |
The account key of an Azure AI service resource that's attached to a skillset, to be used with the resource's subdomain. |
| AnalyzeRequest |
Specifies some text and analysis components used to break that text into tokens. |
| AnalyzeResult |
The result of testing an analyzer on text. |
| AnalyzedTokenInfo |
Information about a token returned by an analyzer. |
| AsciiFoldingTokenFilter |
Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene. |
| AutocompleteItem |
The result of Autocomplete requests. |
| AutocompleteRequest |
Parameters for fuzzy matching, and other autocomplete query behaviors. |
| AutocompleteResult |
The result of an Autocomplete query. |
| AzureActiveDirectoryApplicationCredentials |
Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault. |
| AzureBlobKnowledgeSource |
Configuration for Azure Blob Storage knowledge source. |
| AzureBlobKnowledgeSourceParameters |
Parameters for Azure Blob Storage knowledge source. |
| AzureBlobKnowledgeSourceParams |
Specifies runtime parameters for an Azure Blob knowledge source. |
| AzureMachineLearningVectorizer |
Specifies an Azure Machine Learning endpoint deployed via the Azure AI Foundry Model Catalog for generating the vector embedding of a query string. |
| AzureOpenAIEmbeddingSkill |
Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource. |
| AzureOpenAIParameters |
Specifies the parameters for connecting to the Azure OpenAI resource. |
| AzureOpenAIVectorizer |
Contains the parameters specific to using an Azure OpenAI service for vectorization at query time. |
| BM25Similarity |
Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter). |
| BaseAzureMachineLearningVectorizerParameters |
Specifies the properties common between all AML vectorizer auth types. |
| BaseCharFilter |
Base type for character filters. |
| BaseCognitiveServicesAccount |
Base type for describing any Azure AI service resource attached to a skillset. |
| BaseDataChangeDetectionPolicy |
Base type for data change detection policies. |
| BaseDataDeletionDetectionPolicy |
Base type for data deletion detection policies. |
| BaseKnowledgeBaseActivityRecord |
Base type for activity records. Tracks execution details, timing, and errors for knowledge base operations. |
| BaseKnowledgeBaseMessageContent |
Specifies the type of the message content. |
| BaseKnowledgeBaseModel |
Specifies the connection parameters for the model to use for query planning. |
| BaseKnowledgeBaseReference |
Base type for references. |
| BaseKnowledgeRetrievalReasoningEffort |
Base type for reasoning effort. |
| BaseKnowledgeSource |
Represents a knowledge source definition. |
| BaseKnowledgeSourceParams |
Base type for knowledge source runtime parameters. |
| BaseKnowledgeSourceVectorizer |
Specifies the vectorization method to be used for knowledge source embedding model. |
| BaseLexicalAnalyzer |
Base type for analyzers. |
| BaseLexicalNormalizer |
Base type for normalizers. |
| BaseLexicalTokenizer |
Base type for tokenizers. |
| BaseScoringFunction |
Base type for functions that can modify document scores during ranking. |
| BaseSearchIndexerDataIdentity |
Abstract base type for data identities. |
| BaseSearchIndexerSkill |
Base type for skills. |
| BaseSearchRequestOptions |
Parameters for filtering, sorting, faceting, paging, and other search query behaviors. |
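The filtering, sorting, faceting, and paging parameters described above combine into one options object on a search request. A rough sketch as a plain object (the 'rating' and 'category' field names are hypothetical; the filter string uses the OData syntax the service expects):

```typescript
// A hypothetical query: top 10 highly rated documents, faceted by category.
const searchOptions = {
  filter: "rating ge 4",     // OData filter expression
  orderBy: ["rating desc"],  // sort clauses
  facets: ["category"],      // request facet counts for this field
  top: 10,                   // page size
  skip: 0,                   // offset for paging
  includeTotalCount: true,   // ask the service for a total match count
};
```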
| BaseSimilarityAlgorithm |
Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results. |
| BaseTokenFilter |
Base type for token filters. |
| BaseVectorQuery |
The query parameters for vector and hybrid search queries. |
| BaseVectorSearchAlgorithmConfiguration |
Contains configuration options specific to the algorithm used during indexing and/or querying. |
| BaseVectorSearchCompression |
Contains configuration options specific to the compression method used during indexing or querying. |
| BaseVectorSearchVectorizer |
Contains specific details for a vectorization method to be used during query time. |
| BinaryQuantizationCompression |
Contains configuration options specific to the binary quantization compression method used during indexing and querying. |
| ChatCompletionResponseFormat |
Determines how the language model's response should be serialized. Defaults to 'text'. |
| ChatCompletionResponseFormatJsonSchemaProperties |
Properties for JSON schema response format. |
| ChatCompletionSchema |
Object defining the custom schema the model will use to structure its output. |
| ChatCompletionSkill |
A skill that calls a language model via Azure AI Foundry's Chat Completions endpoint. |
| CjkBigramTokenFilter |
Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene. |
| ClassicSimilarity |
Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries. |
| ClassicTokenizer |
Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene. |
| CognitiveServicesAccountKey |
The multi-region account key of an Azure AI service resource that's attached to a skillset. |
| CommonGramTokenFilter |
Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene. |
| CommonModelParameters |
Common language model parameters for Chat Completions. If omitted, default values are used. |
| CompletedSynchronizationState |
Represents the completed state of the last synchronization. |
| ComplexField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
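A complex field nests simple sub-fields under a single parent. A hedged sketch of that shape as a plain object (the 'address' field and its sub-fields are invented for illustration):

```typescript
// A hypothetical 'address' complex field containing simple sub-fields.
const addressField = {
  name: "address",
  type: "Edm.ComplexType",
  fields: [
    { name: "street", type: "Edm.String", searchable: true },
    { name: "city", type: "Edm.String", searchable: true, filterable: true },
    { name: "postalCode", type: "Edm.String", filterable: true },
  ],
};
```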
| ConditionalSkill |
A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. |
| ContentUnderstandingSkill |
A skill that leverages Azure AI Content Understanding to process and extract structured insights from documents, enabling enriched, searchable content for enhanced document indexing and retrieval. |
| ContentUnderstandingSkillChunkingProperties |
Controls the cardinality for chunking the content. |
| CorsOptions |
Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
| CreateKnowledgeBaseOptions | |
| CreateKnowledgeSourceOptions | |
| CreateOrUpdateAliasOptions |
Options for create or update alias operation. |
| CreateOrUpdateIndexOptions |
Options for create/update index operation. |
| CreateOrUpdateKnowledgeBaseOptions | |
| CreateOrUpdateKnowledgeSourceOptions | |
| CreateOrUpdateSkillsetOptions |
Options for create/update skillset operation. |
| CreateOrUpdateSynonymMapOptions |
Options for create/update synonymmap operation. |
| CreateorUpdateDataSourceConnectionOptions |
Options for create/update datasource operation. |
| CreateorUpdateIndexerOptions |
Options for create/update indexer operation. |
| CustomAnalyzer |
Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer. |
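A sketch of what such a user-defined configuration can look like as a plain object, assuming the SDK's flattened property names (the analyzer name is hypothetical; "standard_v2", "lowercase", and "asciifolding" are built-in tokenizer and token filter names):

```typescript
// Hedged sketch: a custom analyzer that tokenizes with the standard
// tokenizer, then lowercases and ASCII-folds each token.
const foldingAnalyzer = {
  odatatype: "#Microsoft.Azure.Search.CustomAnalyzer",
  name: "my_folding_analyzer",   // hypothetical name
  tokenizerName: "standard_v2",  // the Lucene standard tokenizer
  tokenFilters: ["lowercase", "asciifolding"],
  charFilters: [],
};
```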
| CustomEntity |
An object that contains information about the matches that were found, and related metadata. |
| CustomEntityAlias |
A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
| CustomEntityLookupSkill |
A skill that looks for text from a custom, user-defined list of words and phrases. |
| CustomLexicalNormalizer |
Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. This is a user-defined configuration consisting of at least one or more filters, which modify the token that is stored. |
| DefaultCognitiveServicesAccount |
An empty object that represents the default Azure AI service resource for a skillset. |
| DeleteAliasOptions |
Options for delete alias operation. |
| DeleteDataSourceConnectionOptions |
Options for delete datasource operation. |
| DeleteIndexOptions |
Options for delete index operation. |
| DeleteIndexerOptions |
Options for delete indexer operation. |
| DeleteKnowledgeBaseOptions | |
| DeleteKnowledgeSourceOptions | |
| DeleteSkillsetOptions |
Options for delete skillset operation. |
| DeleteSynonymMapOptions |
Options for delete synonymmap operation. |
| DictionaryDecompounderTokenFilter |
Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene. |
| DistanceScoringFunction |
Defines a function that boosts scores based on distance from a geographic location. |
| DistanceScoringParameters |
Provides parameter values to a distance scoring function. |
| DocumentDebugInfo |
Contains debugging information that can be used to further explore your search results. |
| DocumentExtractionSkill |
A skill that extracts content from a file within the enrichment pipeline. |
| DocumentIntelligenceLayoutSkill |
A skill that extracts content and layout information (as markdown), via Azure AI Services, from files within the enrichment pipeline. |
| DocumentIntelligenceLayoutSkillChunkingProperties |
Controls the cardinality for chunking the content. |
| EdgeNGramTokenFilter |
Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene. |
| EdgeNGramTokenizer |
Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
| ElisionTokenFilter |
Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene. |
| EntityLinkingSkill |
Using the Text Analytics API, extracts linked entities from text. |
| EntityRecognitionSkill |
Text analytics entity recognition. |
| EntityRecognitionSkillV3 |
Using the Text Analytics API, extracts entities of different types from text. |
| ExhaustiveKnnParameters |
Contains the parameters specific to exhaustive KNN algorithm. |
| ExtractiveQueryAnswer |
Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language. |
| ExtractiveQueryCaption |
Extracts captions from the matching documents that contain passages relevant to the search query. |
| FacetResult |
A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval. |
| FieldMapping |
Defines a mapping between a field in a data source and a target field in an index. |
| FieldMappingFunction |
Represents a function that transforms a value from a data source before indexing. |
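A mapping with a transformation function can be written as a plain object. A hedged sketch (the 'Id' and 'key' field names are invented; base64Encode is one of the service's built-in mapping functions):

```typescript
// Hedged sketch: map a source 'Id' column onto the index key field,
// base64-encoding it so it contains only characters valid in a key.
const fieldMapping = {
  sourceFieldName: "Id",
  targetFieldName: "key",
  mappingFunction: { name: "base64Encode" }, // built-in mapping function
};
```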
| FreshnessScoringFunction |
Defines a function that boosts scores based on the value of a date-time field. |
| FreshnessScoringParameters |
Provides parameter values to a freshness scoring function. |
| GenerativeQueryRewrites |
Generates alternative query terms to increase the recall of a search request. |
| GetDocumentOptions |
Options for retrieving a single document. |
| GetKnowledgeBaseOptions | |
| GetKnowledgeSourceOptions | |
| GetKnowledgeSourceStatusOptions | |
| HighWaterMarkChangeDetectionPolicy |
Defines a data change detection policy that captures changes based on the value of a high water mark column. |
| HnswParameters |
Contains the parameters specific to the HNSW algorithm. |
| ImageAnalysisSkill |
A skill that analyzes image files. It extracts a rich set of visual features based on the image content. |
| IndexDocumentsClient |
Index Documents Client |
| IndexDocumentsOptions |
Options for the modify index batch operation. |
| IndexDocumentsResult |
Response containing the status of operations for all documents in the indexing request. |
| IndexedOneLakeKnowledgeSource |
Configuration for OneLake knowledge source. |
| IndexedOneLakeKnowledgeSourceParameters |
Parameters for OneLake knowledge source. |
| IndexedOneLakeKnowledgeSourceParams |
Specifies runtime parameters for an indexed OneLake knowledge source. |
| IndexerExecutionResult |
Represents the result of an individual indexer execution. |
| IndexingParameters |
Represents parameters for indexer execution. |
| IndexingParametersConfiguration |
A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
| IndexingResult |
Status of an indexing operation for a single document. |
| IndexingSchedule |
Represents a schedule for indexer execution. |
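A schedule pairs an interval with an optional start time. A hedged sketch as a plain object, assuming the interval is an ISO-8601 duration string as in the service REST contract:

```typescript
// Hedged sketch: run the indexer every two hours, starting at a fixed time.
const schedule = {
  interval: "PT2H",                            // ISO-8601 duration
  startTime: new Date("2024-01-01T00:00:00Z"), // optional anchor for the first run
};
```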
| InputFieldMappingEntry |
Input field mapping for a skill. |
| KeepTokenFilter |
A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene. |
| KeyAuthAzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer with an authentication key. |
| KeyPhraseExtractionSkill |
A skill that uses text analytics for key phrase extraction. |
| KeywordMarkerTokenFilter |
Marks terms as keywords. This token filter is implemented using Apache Lucene. |
| KeywordTokenizer |
Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. |
| KnowledgeBase | |
| KnowledgeBaseAgenticReasoningActivityRecord |
Represents an agentic reasoning activity record. |
| KnowledgeBaseAzureBlobReference |
Represents an Azure Blob Storage document reference. |
| KnowledgeBaseAzureOpenAIModel |
Specifies the Azure OpenAI resource used to do query planning. |
| KnowledgeBaseErrorAdditionalInfo |
The resource management error additional info. |
| KnowledgeBaseErrorDetail |
The error details. |
| KnowledgeBaseIndexedOneLakeReference |
Represents an indexed OneLake document reference. |
| KnowledgeBaseMessage |
The natural language message style object. |
| KnowledgeBaseMessageImageContent |
Image message type. |
| KnowledgeBaseMessageImageContentImage |
Image content. |
| KnowledgeBaseMessageTextContent |
Text message type. |
| KnowledgeBaseModelWebSummarizationActivityRecord |
Represents an LLM web summarization activity record. |
| KnowledgeBaseRetrievalRequest |
The input contract for the retrieval request. |
| KnowledgeBaseRetrievalResponse |
The output contract for the retrieval response. |
| KnowledgeBaseSearchIndexReference |
Represents an Azure Search document reference. |
| KnowledgeBaseWebReference |
Represents a web document reference. |
| KnowledgeRetrievalClientOptions |
Client options used to configure Cognitive Search API requests. |
| KnowledgeRetrievalIntent |
An intended query to execute without model query planning. |
| KnowledgeRetrievalMinimalReasoningEffort |
Run knowledge retrieval with minimal reasoning effort. |
| KnowledgeRetrievalSemanticIntent |
A semantic query intent. |
| KnowledgeSourceAzureOpenAIVectorizer |
Specifies the Azure OpenAI resource used to vectorize a query string. |
| KnowledgeSourceIngestionParameters |
Consolidates all general ingestion settings for knowledge sources. |
| KnowledgeSourceReference |
Reference to a knowledge source. |
| KnowledgeSourceStatistics |
Statistical information about knowledge source synchronization history. |
| KnowledgeSourceStatus |
Represents the status and synchronization history of a knowledge source. |
| KnowledgeSourceSynchronizationError |
Represents a document-level indexing error encountered during a knowledge source synchronization run. |
| LanguageDetectionSkill |
A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis. |
| LengthTokenFilter |
Removes words that are too long or too short. This token filter is implemented using Apache Lucene. |
| LimitTokenFilter |
Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene. |
| ListKnowledgeBasesOptions | |
| ListKnowledgeSourcesOptions | |
| ListSearchResultsPageSettings |
Arguments for retrieving the next page of search results. |
| LuceneStandardAnalyzer |
Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter. |
| LuceneStandardTokenizer |
Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene. |
| MagnitudeScoringFunction |
Defines a function that boosts scores based on the magnitude of a numeric field. |
| MagnitudeScoringParameters |
Provides parameter values to a magnitude scoring function. |
| MappingCharFilter |
A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene. |
| MergeSkill |
A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. |
| MicrosoftLanguageStemmingTokenizer |
Divides text using language-specific rules and reduces words to their base forms. |
| MicrosoftLanguageTokenizer |
Divides text using language-specific rules. |
| NGramTokenFilter |
Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene. |
| NGramTokenizer |
Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
| NativeBlobSoftDeleteDeletionDetectionPolicy |
Defines a data deletion detection policy utilizing Azure Blob Storage's native soft delete feature for deletion detection. |
| NoAuthAzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer with no authentication. |
| OcrSkill |
A skill that extracts text from image files. |
| OutputFieldMappingEntry |
Output field mapping for a skill. |
| PIIDetectionSkill |
Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it. |
| PageSettings |
Options for the byPage method. |
| PagedAsyncIterableIterator |
An interface that allows async iterable iteration both to completion and by page. |
| PathHierarchyTokenizer |
Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene. |
| PatternAnalyzer |
Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene. |
| PatternCaptureTokenFilter |
Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene. |
| PatternReplaceCharFilter |
A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene. |
| PatternReplaceTokenFilter |
A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene. |
| PatternTokenizer |
Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene. |
| PhoneticTokenFilter |
Create tokens for phonetic matches. This token filter is implemented using Apache Lucene. |
| QueryAnswerResult |
An answer is a text passage extracted from the contents of the most relevant documents that matched the query. Answers are extracted from the top search results. Answer candidates are scored and the top answers are selected. |
| QueryCaptionResult |
Captions are the most representative passages from the document relative to the search query.
They are often used as a document summary. Captions are only returned for queries of type
'semantic'. |
| QueryResultDocumentSemanticField |
Description of the fields that were sent to the semantic enrichment process, as well as how they were used. |
| QueryResultDocumentSubscores |
The breakdown of subscores between the text and vector query components of the search query for this document. Each vector query is shown as a separate object in the same order they were received. |
| RescoringOptions |
Contains the options for rescoring. |
| ResourceCounter |
Represents a resource's usage and quota. |
| RetrieveOptions | |
| ScalarQuantizationCompression |
Contains configuration options specific to the scalar quantization compression method used during indexing and querying. |
| ScalarQuantizationParameters |
Contains the parameters specific to Scalar Quantization. |
| ScoringProfile |
Defines parameters for a search index that influence scoring in search queries. |
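A scoring profile combines text weights with scoring functions such as the freshness function listed above. A hedged sketch as a plain object (the profile name and the 'title' and 'lastUpdated' fields are invented; boostingDuration is an ISO-8601 duration):

```typescript
// Hedged sketch: boost recently updated documents and weight matches
// in the title field more heavily than matches elsewhere.
const scoringProfile = {
  name: "boostRecent",                     // hypothetical profile name
  textWeights: { weights: { title: 2 } },  // 'title' is a hypothetical field
  functions: [
    {
      type: "freshness",
      fieldName: "lastUpdated",            // a DateTimeOffset field
      boost: 5,
      interpolation: "quadratic",
      parameters: { boostingDuration: "P7D" }, // favor documents newer than 7 days
    },
  ],
  functionAggregation: "sum",
};
```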
| SearchAlias |
Represents an index alias, which describes a mapping from the alias name to an index. The alias name can be used in place of the index name for supported operations. |
| SearchClientOptions |
Client options used to configure AI Search API requests. |
| SearchDocumentsPageResult |
Response containing search page results from an index. |
| SearchDocumentsResult |
Response containing search results from an index. |
| SearchDocumentsResultBase |
Response containing search results from an index. |
| SearchIndex |
Represents a search index definition, which describes the fields and search behavior of an index. |
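The smallest useful index definition is a name plus a field list with exactly one key field. A hedged sketch as a plain object (the index and field names are invented for illustration):

```typescript
// Minimal field shape used in this sketch; the real interface has more options.
type Field = {
  name: string;
  type: string;
  key?: boolean;
  searchable?: boolean;
  filterable?: boolean;
  sortable?: boolean;
};

const index: { name: string; fields: Field[] } = {
  name: "hotels", // hypothetical index name
  fields: [
    { name: "hotelId", type: "Edm.String", key: true },
    { name: "description", type: "Edm.String", searchable: true },
    { name: "rating", type: "Edm.Int32", filterable: true, sortable: true },
  ],
};
```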
| SearchIndexClientOptions |
Client options used to configure AI Search API requests. |
| SearchIndexFieldReference |
Field reference for a search index. |
| SearchIndexKnowledgeSource |
Knowledge Source targeting a search index. |
| SearchIndexKnowledgeSourceParameters |
Parameters for search index knowledge source. |
| SearchIndexKnowledgeSourceParams |
Specifies runtime parameters for a search index knowledge source. |
| SearchIndexStatistics |
Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date. |
| SearchIndexer |
Represents an indexer. |
| SearchIndexerClientOptions |
Client options used to configure AI Search API requests. |
| SearchIndexerDataContainer |
Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed. |
| SearchIndexerDataNoneIdentity |
Clears the identity property of a datasource. |
| SearchIndexerDataSourceConnection |
Represents a datasource definition, which can be used to configure an indexer. |
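A data source definition names the external store, points at a container, and can attach a change detection policy. A hedged sketch as a plain object (the names and connection string are placeholders; the high-water-mark policy is one of the policies listed above):

```typescript
// Hedged sketch: a blob data source with a high-water-mark change
// detection policy keyed on the blob's last-modified metadata.
const dataSource = {
  name: "hotel-blobs",                     // hypothetical data source name
  type: "azureblob",
  connectionString: "<connection-string>", // placeholder, not a real secret
  container: { name: "hotel-docs" },
  dataChangeDetectionPolicy: {
    odatatype: "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
    highWaterMarkColumnName: "metadata_storage_last_modified",
  },
};
```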
| SearchIndexerDataUserAssignedIdentity |
Specifies the identity for a datasource to use. |
| SearchIndexerError |
Represents an item- or document-level indexing error. |
| SearchIndexerIndexProjection |
Definition of additional projections to secondary search indexes. |
| SearchIndexerIndexProjectionParameters |
A dictionary of index projection-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
| SearchIndexerIndexProjectionSelector |
Description for what data to store in the designated search index. |
| SearchIndexerKnowledgeStore |
Definition of additional projections to azure blob, table, or files, of enriched data. |
| SearchIndexerKnowledgeStoreBlobProjectionSelector |
Abstract class to share properties between concrete selectors. |
| SearchIndexerKnowledgeStoreFileProjectionSelector |
Projection definition for what data to store in Azure Files. |
| SearchIndexerKnowledgeStoreObjectProjectionSelector |
Projection definition for what data to store in Azure Blob. |
| SearchIndexerKnowledgeStoreParameters |
A dictionary of knowledge store-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
| SearchIndexerKnowledgeStoreProjection |
Container object for various projection selectors. |
| SearchIndexerKnowledgeStoreProjectionSelector |
Abstract class to share properties between concrete selectors. |
| SearchIndexerKnowledgeStoreTableProjectionSelector |
Description for what data to store in Azure Tables. |
| SearchIndexerLimits |
Represents the limits that can be applied to an indexer. |
| SearchIndexerSkillset |
A list of skills. |
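Each skill in the list wires inputs from the enrichment tree to named outputs. A hedged sketch of a one-skill skillset as a plain object, using the split skill listed below (the skillset name and page length are illustrative):

```typescript
// Hedged sketch: a skillset with a single split skill that chunks
// document content into pages of at most 2000 characters.
const skillset = {
  name: "chunking-skillset", // hypothetical skillset name
  skills: [
    {
      odatatype: "#Microsoft.Azure.Search.Text.SplitSkill",
      textSplitMode: "pages",
      maximumPageLength: 2000,
      inputs: [{ name: "text", source: "/document/content" }],
      outputs: [{ name: "textItems", targetName: "pages" }],
    },
  ],
};
```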
| SearchIndexerStatus |
Represents the current status and execution history of an indexer. |
| SearchIndexerWarning |
Represents an item-level warning. |
| SearchIndexingBufferedSenderOptions |
Options for SearchIndexingBufferedSender. |
| SearchResourceEncryptionKey |
A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure AI Search, such as indexes and synonym maps. |
| SearchServiceStatistics |
Response from a get service statistics request. If successful, it includes service level counters and limits. |
| SemanticConfiguration |
Defines a specific configuration to be used in the context of semantic capabilities. |
| SemanticDebugInfo |
Debug options for semantic search queries. |
| SemanticField |
A field that is used as part of the semantic configuration. |
| SemanticPrioritizedFields |
Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers. |
| SemanticSearch |
Defines parameters for a search index that influence semantic capabilities. |
| SemanticSearchOptions |
Defines options for semantic search queries. |
| SentimentSkill |
Text analytics positive-negative sentiment analysis, scored as a floating point value in a range of zero to 1. |
| SentimentSkillV3 |
Using the Text Analytics API, evaluates unstructured text and, for each record, provides sentiment labels (such as "negative", "neutral", and "positive") based on the highest confidence score found by the service at a sentence and document level. |
| ServiceCounters |
Represents service-level resource counters and quotas. |
| ServiceLimits |
Represents various service level limits. |
| ShaperSkill |
A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields). |
| ShingleTokenFilter |
Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene. |
| SimpleField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
| SingleVectorFieldResult |
A single vector field result. |
| SnowballTokenFilter |
A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene. |
| SoftDeleteColumnDeletionDetectionPolicy |
Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column. |
| SplitSkill |
A skill to split a string into chunks of text. |
| SqlIntegratedChangeTrackingPolicy |
Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database. |
| StemmerOverrideTokenFilter |
Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/StemmerOverrideFilter.html |
| StemmerTokenFilter |
Language specific stemming filter. This token filter is implemented using Apache Lucene. See https://learn.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters |
| StopAnalyzer |
Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene. |
| StopwordsTokenFilter |
Removes stop words from a token stream. This token filter is implemented using Apache Lucene. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html |
| SuggestDocumentsResult |
Response containing suggestion query results from an index. |
| SuggestRequest |
Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors. |
| Suggester |
Defines how the Suggest API should apply to a group of fields in the index. |
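A suggester names the fields that feed suggestions. A hedged sketch as a plain object (the suggester and field names are invented; analyzingInfixMatching is the search mode the service supports):

```typescript
// Hedged sketch: a suggester over two hypothetical fields.
const suggester = {
  name: "sg",
  searchMode: "analyzingInfixMatching",
  sourceFields: ["hotelName", "city"],
};
```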
| SynchronizationState |
Represents the current state of an ongoing synchronization that spans multiple indexer runs. |
| SynonymMap |
Represents a synonym map definition. |
| SynonymTokenFilter |
Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene. |
| TagScoringFunction |
Defines a function that boosts scores of documents with string values matching a given list of tags. |
| TagScoringParameters |
Provides parameter values to a tag scoring function. |
| TextResult |
The BM25 or Classic score for the text portion of the query. |
| TextTranslationSkill |
A skill to translate text from one language to another. |
| TextWeights |
Defines weights on index fields for which matches should boost scoring in search queries. |
| TokenAuthAzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer with a managed identity. |
| TruncateTokenFilter |
Truncates the terms to a specific length. This token filter is implemented using Apache Lucene. |
| UaxUrlEmailTokenizer |
Tokenizes urls and emails as one token. This tokenizer is implemented using Apache Lucene. |
| UniqueTokenFilter |
Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene. |
| VectorSearch |
Contains configuration options related to vector search. |
| VectorSearchOptions |
Defines options for vector search queries. |
| VectorSearchProfile |
Defines a combination of configurations to use with vector search. |
| VectorizableImageBinaryQuery |
The query parameters to use for vector search when a base64-encoded binary of an image that needs to be vectorized is provided. |
| VectorizableImageUrlQuery |
The query parameters to use for vector search when a URL that represents an image value that needs to be vectorized is provided. |
| VectorizableTextQuery |
The query parameters to use for vector search when a text value that needs to be vectorized is provided. |
| VectorizedQuery |
The query parameters to use for vector search when a raw vector value is provided. |
| VectorsDebugInfo |
Contains debugging information specific to vector and hybrid search. |
| WebApiParameters |
Specifies the properties for connecting to a user-defined vectorizer. |
| WebApiSkill |
A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. |
| WebApiVectorizer |
Specifies a user-defined vectorizer for generating the vector embedding of a query string. Integration of an external vectorizer is achieved using the custom Web API interface of a skillset. |
| WebKnowledgeSource |
Knowledge Source targeting web results. |
| WebKnowledgeSourceDomain |
Configuration for web knowledge source domain. |
| WebKnowledgeSourceDomains |
Domain allow/block configuration for web knowledge source. |
| WebKnowledgeSourceParameters |
Parameters for web knowledge source. |
| WebKnowledgeSourceParams |
Specifies runtime parameters for a web knowledge source. |
| WordDelimiterTokenFilter |
Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene. |
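Several of the interfaces above (VectorizedQuery, VectorSearchOptions) are typically combined when issuing a vector query through SearchClient. The following is a minimal sketch only; the property names (kind, vector, fields, kNearestNeighborsCount, vectorSearchOptions, queries) are assumptions inferred from the interface names above and are not verified against the package's typings.

```typescript
// Sketch only: a VectorizedQuery-shaped object and the options wrapper
// passed to a search call. Property names are assumptions, not verified
// against the SDK.
const vectorQuery = {
  kind: "vector" as const,          // VectorizedQuery supplies a raw vector
  vector: [0.013, -0.201, 0.077],   // embedding produced elsewhere
  fields: ["contentVector"],        // vector field(s) in the index
  kNearestNeighborsCount: 3,        // how many nearest neighbors to return
};

const searchOptions = {
  vectorSearchOptions: {
    queries: [vectorQuery],         // VectorSearchOptions carries the queries
  },
};
```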
Type Aliases
| AIFoundryModelCatalogName |
The name of the embedding model from the Azure AI Foundry Catalog that will be called. Known values supported by the service include OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32. |
| AliasIterator |
An iterator for listing the aliases that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| AnalyzeTextOptions |
Options for analyze text operation. |
| AutocompleteMode |
Specifies the mode for Autocomplete. The default is 'oneTerm'. Use 'twoTerms' to get shingles and 'oneTermWithContext' to use the current context in producing autocomplete terms. |
| AutocompleteOptions |
Options for retrieving completion text for a partial searchText. |
| AzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer. |
| AzureOpenAIModelName |
The Azure Open AI model name that will be called. Known values supported by the service include text-embedding-ada-002: the TextEmbeddingAda002 model. |
| BaseKnowledgeRetrievalIntent |
Alias for KnowledgeRetrievalIntentUnion |
| BlobIndexerDataToExtract | |
| BlobIndexerImageAction | |
| BlobIndexerPDFTextRotationAlgorithm | |
| BlobIndexerParsingMode | |
| CharFilter |
Contains the possible cases for CharFilter. |
| CharFilterName |
Defines the names of all character filters supported by the search engine. Known values supported by the service include html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html |
| ChatCompletionExtraParametersBehavior |
Specifies how 'extraParameters' should be handled by Azure AI Foundry. Defaults to 'error'. Known values supported by the service include passThrough: Passes any extra parameters directly to the model. |
| ChatCompletionResponseFormatType |
Specifies how the LLM should format the response. Known values supported by the service include text: Plain text response format. |
| CjkBigramTokenFilterScripts |
Scripts that can be ignored by CjkBigramTokenFilter. |
| CognitiveServicesAccount |
Contains the possible cases for CognitiveServicesAccount. |
| ComplexDataType |
Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)' |
| ContentUnderstandingSkillChunkingUnit |
Controls the cardinality of the chunk unit. Default is 'characters'. Known values supported by the service include characters: Specifies chunking by characters. |
| ContentUnderstandingSkillExtractionOptions |
Controls the cardinality of the content extracted from the document by the skill. Known values supported by the service include images: Specifies that image content should be extracted from the document. |
| ContinuablePage |
An interface that describes a page of results. |
| CountDocumentsOptions |
Options for performing the count operation on the index. |
| CreateAliasOptions |
Options for create alias operation. |
| CreateDataSourceConnectionOptions |
Options for create datasource operation. |
| CreateIndexOptions |
Options for create index operation. |
| CreateIndexerOptions |
Options for create indexer operation. |
| CreateSkillsetOptions |
Options for create skillset operation. |
| CreateSynonymMapOptions |
Options for create synonymmap operation. |
| CustomEntityLookupSkillLanguage | |
| DataChangeDetectionPolicy |
Contains the possible cases for DataChangeDetectionPolicy. |
| DataDeletionDetectionPolicy |
Contains the possible cases for DataDeletionDetectionPolicy. |
| DeleteDocumentsOptions |
Options for the delete documents operation. |
| DocumentIntelligenceLayoutSkillChunkingUnit |
Controls the cardinality of the chunk unit. Default is 'characters'. Known values supported by the service include characters: Specifies chunking by characters. |
| DocumentIntelligenceLayoutSkillExtractionOptions |
Controls the cardinality of the content extracted from the document by the skill. Known values supported by the service include images: Specifies that image content should be extracted from the document. |
| DocumentIntelligenceLayoutSkillMarkdownHeaderDepth |
The depth of headers in the markdown output. Default is h6. Known values supported by the service include h1: Header level 1. |
| DocumentIntelligenceLayoutSkillOutputFormat |
Controls the cardinality of the output format. Default is 'markdown'. Known values supported by the service include text: Specifies the format of the output as text. |
| DocumentIntelligenceLayoutSkillOutputMode |
Controls the cardinality of the output produced by the skill. Default is 'oneToMany'. Known values supported by the service include oneToMany: Specifies that the output should be parsed as 'oneToMany'. |
| EdgeNGramTokenFilterSide |
Specifies which side of the input an n-gram should be generated from. |
| EntityCategory |
A string indicating what entity categories to return. Known values supported by the service include location: Entities describing a physical location. |
| EntityRecognitionSkillLanguage |
The language codes supported for input text by EntityRecognitionSkill. Known values supported by the service include ar: Arabic. |
| ExcludedODataTypes | |
| ExhaustiveKnnAlgorithmConfiguration |
Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index. |
| ExtractDocumentKey | |
| GetAliasOptions |
Options for get alias operation. |
| GetDataSourceConnectionOptions |
Options for get datasource operation. |
| GetIndexOptions |
Options for get index operation. |
| GetIndexStatisticsOptions |
Options for get index statistics operation. |
| GetIndexerOptions |
Options for get indexer operation. |
| GetIndexerStatusOptions |
Options for get indexer status operation. |
| GetServiceStatisticsOptions |
Options for get service statistics operation. |
| GetSkillSetOptions |
Options for get skillset operation. |
| GetSynonymMapsOptions |
Options for get synonymmaps operation. |
| HnswAlgorithmConfiguration |
Contains configuration options specific to the hnsw approximate nearest neighbors algorithm used during indexing time. |
| ImageAnalysisSkillLanguage | |
| ImageDetail | |
| IndexActionType |
The operation to perform on a document in an indexing batch. |
| IndexDocumentsAction |
Represents an index action that operates on a document. |
| IndexIterator |
An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| IndexNameIterator |
An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| IndexProjectionMode |
Defines behavior of the index projections in relation to the rest of the indexer. Known values supported by the service include skipIndexingParentDocuments: The source document will be skipped from writing into the indexer's target index. |
| IndexerExecutionEnvironment | |
| IndexerExecutionStatus |
Represents the status of an individual indexer execution. |
| IndexerResyncOption |
Options with various types of permission data to index. Known values supported by the service include permissions: Indexer re-ingests pre-selected permissions data from the data source to the index. |
| IndexerStatus |
Represents the overall indexer status. |
| KeyPhraseExtractionSkillLanguage | |
| KnowledgeBaseActivityRecord |
Alias for KnowledgeBaseActivityRecordUnion |
| KnowledgeBaseActivityRecordType |
The type of activity record. Known values supported by the service include searchIndex: Search index retrieval activity. |
| KnowledgeBaseIterator |
An iterator for listing the knowledge bases that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| KnowledgeBaseMessageContent |
Alias for KnowledgeBaseMessageContentUnion |
| KnowledgeBaseMessageContentType |
The type of message content. Known values supported by the service include text: Text message content kind. |
| KnowledgeBaseModel | |
| KnowledgeBaseModelKind |
The AI model to be used for query planning. Known values supported by the service include azureOpenAI: Use Azure Open AI models for query planning. |
| KnowledgeBaseReference |
Alias for KnowledgeBaseReferenceUnion |
| KnowledgeBaseReferenceType |
The type of reference. Known values supported by the service include searchIndex: Search index document reference. |
| KnowledgeRetrievalIntentType |
The kind of knowledge base configuration to use. Known values supported by the service include semantic: A natural language semantic query intent. |
| KnowledgeRetrievalReasoningEffortKind |
The amount of effort to use during retrieval. Known values supported by the service include minimal: Does not perform any source selections, query planning, or iterative search. |
| KnowledgeRetrievalReasoningEffortUnion |
Alias for KnowledgeRetrievalReasoningEffortUnion |
| KnowledgeSource | |
| KnowledgeSourceContentExtractionMode |
Optional content extraction mode. Default is 'minimal'. Known values supported by the service include minimal: Extracts only essential metadata while deferring most content processing. |
| KnowledgeSourceIterator |
An iterator for listing the knowledge sources that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| KnowledgeSourceKind |
The kind of the knowledge source. Known values supported by the service include searchIndex: A knowledge source that reads data from a Search Index. |
| KnowledgeSourceParams |
Alias for KnowledgeSourceParamsUnion |
| KnowledgeSourceSynchronizationStatus |
The current synchronization status of the knowledge source. Known values supported by the service include creating: The knowledge source is being provisioned. |
| KnowledgeSourceVectorizer | |
| LexicalAnalyzer |
Contains the possible cases for Analyzer. |
| LexicalAnalyzerName |
Defines the names of all text analyzers supported by the search engine. Known values supported by the service include ar.microsoft: Microsoft analyzer for Arabic. |
| LexicalNormalizer |
Contains the possible cases for LexicalNormalizer. |
| LexicalNormalizerName |
Defines the names of all text normalizers supported by the search engine. Known values supported by the service include asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html |
| LexicalTokenizer |
Contains the possible cases for Tokenizer. |
| LexicalTokenizerName |
Defines the names of all tokenizers supported by the search engine. Known values supported by the service include classic: Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html |
| ListAliasesOptions |
Options for list aliases operation. |
| ListDataSourceConnectionsOptions |
Options for a list data sources operation. |
| ListIndexersOptions |
Options for a list indexers operation. |
| ListIndexesOptions |
Options for a list indexes operation. |
| ListSkillsetsOptions |
Options for a list skillsets operation. |
| ListSynonymMapsOptions |
Options for a list synonymMaps operation. |
| MarkdownHeaderDepth |
Specifies the max header depth that will be considered while grouping markdown content. Known values supported by the service include h1: Indicates that headers up to a level of h1 will be considered while grouping markdown content. |
| MarkdownParsingSubmode |
Specifies the submode that will determine whether a markdown file will be parsed into exactly one search document or multiple search documents. Known values supported by the service include oneToMany: Indicates that each section of the markdown file (up to a specified depth) will be parsed into individual search documents. This can result in a single markdown file producing multiple search documents. This is the default sub-mode. |
| MergeDocumentsOptions |
Options for the merge documents operation. |
| MergeOrUploadDocumentsOptions |
Options for the merge or upload documents operation. |
| MicrosoftStemmingTokenizerLanguage |
Lists the languages supported by the Microsoft language stemming tokenizer. |
| MicrosoftTokenizerLanguage |
Lists the languages supported by the Microsoft language tokenizer. |
| NarrowedModel |
Narrows the Model type to include only the selected Fields |
| OcrLineEnding |
Defines the sequence of characters to use between the lines of text recognized by the OCR skill. The default value is "space". Known values supported by the service include space: Lines are separated by a single space character. |
| OcrSkillLanguage | |
| PIIDetectionSkillMaskingMode | |
| PhoneticEncoder |
Identifies the type of phonetic encoder to use with a PhoneticTokenFilter. |
| QueryAnswer |
A value that specifies whether answers should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. |
| QueryCaption |
A value that specifies whether captions should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set, the query returns captions extracted from key passages in the highest ranked documents. When Captions is 'extractive', highlighting is enabled by default. Defaults to 'none'. |
| QueryDebugMode |
Enables a debugging tool that can be used to further explore your search results. You can enable multiple debug modes simultaneously by separating them with a | character, for example: semantic|queryRewrites. Known values supported by the service include disabled: No query debugging information will be returned. |
| QueryRewrites |
Defines options for query rewrites. |
| QueryType |
Specifies the syntax of the search query. The default is 'simple'. Use 'full' if your query uses the Lucene query syntax and 'semantic' if query syntax is not needed. Known values supported by the service include simple: Uses the simple query syntax for searches. Search text is interpreted using a simple query language that allows for symbols such as +, * and "". Queries are evaluated across all searchable fields by default, unless the searchFields parameter is specified. |
| RankingOrder |
Represents the score to use for the sort order of documents. Known values supported by the service include BoostedRerankerScore: Sets the sort order to BoostedRerankerScore. |
| RegexFlags | |
| ResetIndexerOptions |
Options for reset indexer operation. |
| RunIndexerOptions |
Options for run indexer operation. |
| ScoringFunction |
Contains the possible cases for ScoringFunction. |
| ScoringFunctionAggregation |
Defines the aggregation function used to combine the results of all the scoring functions in a scoring profile. |
| ScoringFunctionInterpolation |
Defines the function used to interpolate score boosting across a range of documents. |
| ScoringStatistics |
A value that specifies whether we want to calculate scoring statistics (such as document frequency) globally for more consistent scoring, or locally, for lower latency. The default is 'local'. Use 'global' to aggregate scoring statistics globally before scoring. Using global scoring statistics can increase latency of search queries. |
| SearchField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
| SearchFieldArray | |
| SearchFieldDataType |
Defines values for SearchFieldDataType. Known values supported by the service include Edm.String: Indicates that a field contains a string. Edm.Int32: Indicates that a field contains a 32-bit signed integer. Edm.Int64: Indicates that a field contains a 64-bit signed integer. Edm.Double: Indicates that a field contains an IEEE double-precision floating point number. Edm.Boolean: Indicates that a field contains a Boolean value (true or false). Edm.DateTimeOffset: Indicates that a field contains a date/time value, including timezone information. Edm.GeographyPoint: Indicates that a field contains a geo-location in terms of longitude and latitude. Edm.ComplexType: Indicates that a field contains one or more complex objects that in turn have sub-fields of other types. Edm.Single: Indicates that a field contains a single-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Single). Edm.Half: Indicates that a field contains a half-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Half). Edm.Int16: Indicates that a field contains a 16-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Int16). Edm.SByte: Indicates that a field contains an 8-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.SByte). Edm.Byte: Indicates that a field contains an 8-bit unsigned integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Byte). |
| SearchIndexAlias |
Search Alias object. |
| SearchIndexerDataIdentity |
Contains the possible cases for SearchIndexerDataIdentity. |
| SearchIndexerDataSourceType | |
| SearchIndexerSkill |
Contains the possible cases for Skill. |
| SearchIndexingBufferedSenderDeleteDocumentsOptions |
Options for SearchIndexingBufferedSenderDeleteDocuments. |
| SearchIndexingBufferedSenderFlushDocumentsOptions |
Options for SearchIndexingBufferedSenderFlushDocuments. |
| SearchIndexingBufferedSenderMergeDocumentsOptions |
Options for SearchIndexingBufferedSenderMergeDocuments. |
| SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions |
Options for SearchIndexingBufferedSenderMergeOrUploadDocuments. |
| SearchIndexingBufferedSenderUploadDocumentsOptions |
Options for SearchIndexingBufferedSenderUploadDocuments. |
| SearchIterator |
An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| SearchMode |
Specifies whether any or all of the search terms must be matched in order to count the document as a match. |
| SearchOptions |
Options for committing a full search request. |
| SearchPick |
Deeply pick fields of T using valid AI Search OData $select paths. |
| SearchRequestOptions |
Parameters for filtering, sorting, faceting, paging, and other search query behaviors. |
| SearchRequestQueryTypeOptions | |
| SearchResult |
Contains a document found by a search query, plus associated metadata. |
| SelectFields |
Produces a union of valid AI Search OData $select paths for T using a post-order traversal of the field tree rooted at T. |
| SemanticErrorMode | |
| SemanticErrorReason | |
| SemanticSearchResultsType |
Type of partial response that was returned for a semantic ranking request. Known values supported by the service include baseResults: Results without any semantic enrichment or reranking. |
| SentimentSkillLanguage |
The language codes supported for input text by SentimentSkill. Known values supported by the service include da: Danish. |
| Similarity |
Alias for SimilarityAlgorithmUnion |
| SimilarityAlgorithm |
Contains the possible cases for Similarity. |
| SnowballTokenFilterLanguage |
The language to use for a Snowball token filter. |
| SplitSkillLanguage | |
| StemmerTokenFilterLanguage |
The language to use for a stemmer token filter. |
| StopwordsList |
Identifies a predefined list of language-specific stopwords. |
| SuggestNarrowedModel | |
| SuggestOptions |
Options for retrieving suggestions based on the searchText. |
| SuggestResult |
A result containing a document found by a suggestion query, plus associated metadata. |
| TextSplitMode | |
| TextTranslationSkillLanguage | |
| TokenCharacterKind |
Represents classes of characters on which a token filter can operate. |
| TokenFilter |
Contains the possible cases for TokenFilter. |
| TokenFilterName |
Defines the names of all token filters supported by the search engine. Known values supported by the service include arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html |
| UnionToIntersection | |
| UploadDocumentsOptions |
Options for the upload documents operation. |
| VectorEncodingFormat |
The encoding format for interpreting vector field contents. Known values supported by the service include packedBit: Encoding format representing bits packed into a wider data type. |
| VectorFilterMode | |
| VectorQuery |
The query parameters for vector and hybrid search queries. |
| VectorQueryKind | |
| VectorSearchAlgorithmConfiguration |
Contains configuration options specific to the algorithm used during indexing and/or querying. |
| VectorSearchAlgorithmKind | |
| VectorSearchAlgorithmMetric | |
| VectorSearchCompression |
Contains configuration options specific to the compression method used during indexing or querying. |
| VectorSearchCompressionKind |
The compression method used for indexing and querying. Known values supported by the service include scalarQuantization: Scalar quantization, a type of compression method. In scalar quantization, the original vector values are compressed to a narrower type by discretizing and representing each component of a vector using a reduced set of quantized values, thereby reducing the overall data size. |
| VectorSearchCompressionRescoreStorageMethod |
The storage method for the original full-precision vectors used for rescoring and internal index operations. Known values supported by the service include preserveOriginals: This option preserves the original full-precision vectors. Choose this option for maximum flexibility and highest quality of compressed search results. This consumes more storage but allows for rescoring and oversampling. |
| VectorSearchCompressionTarget |
The quantized data type of compressed vector values. Known values supported by the service include int8: 8-bit signed integer. |
| VectorSearchVectorizer |
Contains configuration options on how to vectorize text vector queries. |
| VectorSearchVectorizerKind |
The vectorization method to be used during query time. Known values supported by the service include azureOpenAI: Generate embeddings using an Azure OpenAI resource at query time. |
| VisualFeature | |
| WebApiSkills | |
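Many of the type aliases above (QueryType, SearchMode, and so on) are string unions consumed through a SearchOptions-shaped object. The following is a sketch only; the property names are assumptions based on the alias names above and are not verified against the package's typings.

```typescript
// Sketch only: combining several string-union aliases in a search options
// object. Property names are assumptions, not verified against the SDK.
const options = {
  queryType: "full",                  // QueryType: 'simple' | 'full' | 'semantic'
  searchMode: "all",                  // SearchMode: all terms must match
  top: 10,                            // page size
  select: ["hotelId", "description"], // fields to return per document
};
```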
Enums
| KnownAIFoundryModelCatalogName |
The name of the embedding model from the Azure AI Foundry Catalog that will be called. |
| KnownAnalyzerNames |
Defines values for AnalyzerName. See https://learn.microsoft.com/rest/api/searchservice/Language-support |
| KnownAzureOpenAIModelName |
The Azure Open AI model name that will be called. |
| KnownBlobIndexerDataToExtract |
Specifies the data to extract from Azure blob storage and tells the indexer which data to extract from image content when "imageAction" is set to a value other than "none". This applies to embedded image content in a .PDF or other application, or image files such as .jpg and .png, in Azure blobs. |
| KnownBlobIndexerImageAction |
Determines how to process embedded images and image files in Azure blob storage. Setting the "imageAction" configuration to any value other than "none" requires that a skillset also be attached to that indexer. |
| KnownBlobIndexerPDFTextRotationAlgorithm |
Determines algorithm for text extraction from PDF files in Azure blob storage. |
| KnownBlobIndexerParsingMode |
Represents the parsing mode for indexing from an Azure blob data source. |
| KnownCharFilterNames |
Defines values for CharFilterName. |
| KnownChatCompletionExtraParametersBehavior |
Specifies how 'extraParameters' should be handled by Azure AI Foundry. Defaults to 'error'. |
| KnownChatCompletionResponseFormatType |
Specifies how the LLM should format the response. |
| KnownContentUnderstandingSkillChunkingUnit |
Controls the cardinality of the chunk unit. Default is 'characters'. |
| KnownContentUnderstandingSkillExtractionOptions |
Controls the cardinality of the content extracted from the document by the skill. |
| KnownCustomEntityLookupSkillLanguage |
The language codes supported for input text by CustomEntityLookupSkill. |
| KnownDocumentIntelligenceLayoutSkillChunkingUnit |
Controls the cardinality of the chunk unit. Default is 'characters'. |
| KnownDocumentIntelligenceLayoutSkillExtractionOptions |
Controls the cardinality of the content extracted from the document by the skill. |
| KnownDocumentIntelligenceLayoutSkillMarkdownHeaderDepth |
The depth of headers in the markdown output. Default is h6. |
| KnownDocumentIntelligenceLayoutSkillOutputFormat |
Controls the cardinality of the output format. Default is 'markdown'. |
| KnownDocumentIntelligenceLayoutSkillOutputMode |
Controls the cardinality of the output produced by the skill. Default is 'oneToMany'. |
| KnownEntityCategory |
A string indicating what entity categories to return. |
| KnownEntityRecognitionSkillLanguage |
The language codes supported for input text by EntityRecognitionSkill. |
| KnownImageAnalysisSkillLanguage |
The language codes supported for input by ImageAnalysisSkill. |
| KnownImageDetail |
A string indicating which domain-specific details to return. |
| KnownIndexProjectionMode |
Defines behavior of the index projections in relation to the rest of the indexer. |
| KnownIndexerExecutionEnvironment |
Specifies the environment in which the indexer should execute. |
| KnownIndexerResyncOption |
Options with various types of permission data to index. |
| KnownKeyPhraseExtractionSkillLanguage |
The language codes supported for input text by KeyPhraseExtractionSkill. |
| KnownKnowledgeBaseModelKind |
The AI model to be used for query planning. |
| KnownKnowledgeSourceKind |
The kind of the knowledge source. |
| KnownLexicalAnalyzerName |
Defines the names of all text analyzers supported by the search engine. |
| KnownLexicalNormalizerName |
Defines the names of all text normalizers supported by the search engine. |
| KnownMarkdownHeaderDepth |
Specifies the max header depth that will be considered while grouping markdown content. |
| KnownMarkdownParsingSubmode |
Specifies the submode that will determine whether a markdown file will be parsed into exactly one search document or multiple search documents. |
| KnownOcrLineEnding |
Defines the sequence of characters to use between the lines of text recognized by the OCR skill. The default value is "space". |
| KnownOcrSkillLanguage |
The language codes supported for input by OcrSkill. |
| KnownPIIDetectionSkillMaskingMode |
A string indicating what maskingMode to use to mask the personal information detected in the input text. |
| KnownQueryDebugMode |
Enables a debugging tool that can be used to further explore your search results. You can enable multiple debug modes simultaneously by separating them with a | character, for example: semantic|queryRewrites. |
| KnownRankingOrder |
Represents score to use for sort order of documents. |
| KnownRegexFlags |
Defines a regular expression flag that can be used in the pattern analyzer and pattern tokenizer. |
| KnownSearchAudience |
Known values for Search Audience |
| KnownSearchFieldDataType |
Defines the data type of a field in a search index. |
| KnownSearchIndexerDataSourceType |
Defines the type of a datasource. |
| KnownSemanticErrorMode |
Allows the user to choose whether a semantic call should fail completely, or to return partial results. |
| KnownSemanticErrorReason |
Reason that a partial response was returned for a semantic ranking request. |
| KnownSemanticSearchResultsType |
Type of partial response that was returned for a semantic ranking request. |
| KnownSentimentSkillLanguage |
The language codes supported for input text by SentimentSkill. |
| KnownSplitSkillLanguage |
The language codes supported for input text by SplitSkill. |
| KnownTextSplitMode |
A value indicating which split mode to perform. |
| KnownTextTranslationSkillLanguage |
The language codes supported for input text by TextTranslationSkill. |
| KnownTokenFilterNames |
Defines values for TokenFilterName. |
| KnownTokenizerNames |
Defines values for TokenizerName. |
| KnownVectorEncodingFormat |
The encoding format for interpreting vector field contents. |
| KnownVectorFilterMode |
Determines whether or not filters are applied before or after the vector search is performed. |
| KnownVectorQueryKind |
The kind of vector query being performed. |
| KnownVectorSearchAlgorithmKind |
The algorithm used for indexing and querying. |
| KnownVectorSearchAlgorithmMetric |
The similarity metric to use for vector comparisons. It is recommended to choose the same similarity metric as the embedding model was trained on. |
| KnownVectorSearchCompressionKind |
The compression method used for indexing and querying. |
| KnownVectorSearchCompressionRescoreStorageMethod |
The storage method for the original full-precision vectors used for rescoring and internal index operations. |
| KnownVectorSearchCompressionTarget |
The quantized data type of compressed vector values. |
| KnownVectorSearchVectorizerKind |
The vectorization method to be used during query time. |
| KnownVisualFeature |
The strings indicating what visual feature types to return. |
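The Known* enums above expose the string constants behind the corresponding string-union type aliases. A hedged sketch of their use when declaring an index field follows; the field shape is an assumption based on the SearchField entry above, not a verified SDK signature.

```typescript
// Sketch only: the string value behind a Known* enum member used when
// declaring an index field. Field shape is an assumption, not verified
// against the SDK.
const ratingField = {
  name: "rating",
  type: "Edm.Int32", // i.e. the value behind KnownSearchFieldDataType.Int32
  filterable: true,
  sortable: true,
};
```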
Functions
| createSynonymMapFromFile |
Helper method to create a SynonymMap object. This is a NodeJS only method. |
| odata(TemplateStringsArray, unknown[]) |
Escapes an odata filter expression to avoid errors with quoting string literals. Example usage:
For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter |
Variables
| DEFAULT_BATCH_SIZE | Default Batch Size |
| DEFAULT_FLUSH_WINDOW | Default flush window interval |
| DEFAULT_RETRY_COUNT | Default number of times to retry. |
Function Details
createSynonymMapFromFile(string, string)
Helper method to create a SynonymMap object. This is a NodeJS only method.
function createSynonymMapFromFile(name: string, filePath: string): Promise<SynonymMap>
Parameters
- name
-
string
Name of the SynonymMap.
- filePath
-
string
Path of the file that contains the synonyms (separated by newlines)
Returns
Promise<SynonymMap>
SynonymMap object
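For reference, the SynonymMap object that this helper builds can also be constructed by hand. The sketch below assumes a SynonymMap with `name` and `synonyms` (an array of rule strings, one per file line, in Apache Solr synonym format); verify the shape against the SynonymMap interface before relying on it.

```typescript
// Sketch only: a hand-built SynonymMap-shaped object mirroring what
// createSynonymMapFromFile would produce from a two-line file. The
// property names are assumptions based on the SynonymMap entry above.
const synonymMap = {
  name: "hotel-synonyms",
  synonyms: [
    "USA, United States, United States of America", // equivalent terms
    "Washington, Wash. => WA",                      // explicit one-way mapping
  ],
};
```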
odata(TemplateStringsArray, unknown[])
Escapes an odata filter expression to avoid errors with quoting string literals. Example usage:
import { odata } from "@azure/search-documents";
const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;
For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter
function odata(strings: TemplateStringsArray, values: unknown[]): string
Parameters
- strings
-
TemplateStringsArray
Array of strings for the expression
- values
-
unknown[]
Array of values for the expression
Returns
string
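The escaping behavior can be illustrated with a simplified tagged-template sketch. This is not the package's implementation; it only demonstrates the mechanics of quoting string values and doubling embedded single quotes, which is how OData string literals are escaped.

```typescript
// Simplified sketch, NOT the library's implementation: quote string values
// and double embedded single quotes (OData escaping); render other values
// with String().
function odataLike(strings: TemplateStringsArray, ...values: unknown[]): string {
  let out = strings[0];
  for (let i = 0; i < values.length; i++) {
    const v = values[i];
    const rendered =
      typeof v === "string" ? `'${v.replace(/'/g, "''")}'` : String(v);
    out += rendered + strings[i + 1];
  }
  return out;
}

const filter = odataLike`Rating ge ${4} and HotelName eq ${"O'Reilly Inn"}`;
// "Rating ge 4 and HotelName eq 'O''Reilly Inn'"
```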
Variable Details
DEFAULT_BATCH_SIZE
Default Batch Size
DEFAULT_BATCH_SIZE: number
Type
number
DEFAULT_FLUSH_WINDOW
Default flush window interval
DEFAULT_FLUSH_WINDOW: number
Type
number
DEFAULT_RETRY_COUNT
Default number of times to retry.
DEFAULT_RETRY_COUNT: number
Type
number
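These DEFAULT_* constants act as fallbacks for the buffered sender's tuning options. The option names in the sketch below (autoFlush, initialBatchActionCount, flushWindowInMs, maxRetriesPerAction) are assumptions about the SearchIndexingBufferedSender options and are not verified against the package.

```typescript
// Sketch only: overriding the buffered sender's flush behavior. Option
// names are assumptions; when omitted, the exported DEFAULT_* constants
// supply the values.
const bufferedSenderOptions = {
  autoFlush: true,              // flush automatically on size/time triggers
  initialBatchActionCount: 512, // plays the role of DEFAULT_BATCH_SIZE
  flushWindowInMs: 60_000,      // plays the role of DEFAULT_FLUSH_WINDOW
  maxRetriesPerAction: 3,       // plays the role of DEFAULT_RETRY_COUNT
};
```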