@azure/search-documents package
Classes
| AzureKeyCredential |
A static-key-based credential that supports updating the underlying key value. |
| GeographyPoint |
Represents a geographic point in global coordinates. |
| IndexDocumentsBatch |
Class used to perform batch operations with multiple documents to the index. |
| KnowledgeRetrievalClient |
Class used to perform operations against a knowledge base. |
| SearchClient |
Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them. |
| SearchIndexClient |
Class to perform operations to manage (create, update, list/delete) indexes and synonym maps. |
| SearchIndexerClient |
Class to perform operations to manage (create, update, list/delete) indexers, data sources, and skillsets. |
| SearchIndexingBufferedSender |
Class used to perform buffered operations against a search index, including adding, updating, and removing documents. |
Interfaces
| AIServices | |
| AIServicesAccountIdentity |
The multi-region account of an Azure AI service resource that's attached to a skillset. |
| AIServicesAccountKey |
The account key of an Azure AI service resource that's attached to a skillset, to be used with the resource's subdomain. |
| AIServicesVisionParameters |
Specifies the AI Services Vision parameters for vectorizing a query image or text. |
| AIServicesVisionVectorizer |
Specifies the AI Services Vision parameters for vectorizing a query image or text. |
| AnalyzeRequest |
Specifies some text and analysis components used to break that text into tokens. |
| AnalyzeResult |
The result of testing an analyzer on text. |
| AnalyzedTokenInfo |
Information about a token returned by an analyzer. |
| AsciiFoldingTokenFilter |
Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene. |
| AutocompleteItem |
The result of Autocomplete requests. |
| AutocompleteRequest |
Parameters for fuzzy matching, and other autocomplete query behaviors. |
| AutocompleteResult |
The result of an Autocomplete query. |
| AzureActiveDirectoryApplicationCredentials |
Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault. |
| AzureBlobKnowledgeSource |
Configuration for Azure Blob Storage knowledge source. |
| AzureBlobKnowledgeSourceParameters |
Parameters for Azure Blob Storage knowledge source. |
| AzureBlobKnowledgeSourceParams |
Specifies runtime parameters for an Azure Blob knowledge source. |
| AzureMachineLearningSkill |
The AML skill allows you to extend AI enrichment with a custom Azure Machine Learning (AML) model. Once an AML model is trained and deployed, an AML skill integrates it into AI enrichment. |
| AzureMachineLearningVectorizer |
Specifies an Azure Machine Learning endpoint deployed via the Azure AI Foundry Model Catalog for generating the vector embedding of a query string. |
| AzureOpenAIEmbeddingSkill |
Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource. |
| AzureOpenAIParameters |
Specifies the parameters for connecting to the Azure OpenAI resource. |
| AzureOpenAITokenizerParameters | |
| AzureOpenAIVectorizer |
Contains the parameters specific to using an Azure OpenAI service for vectorization at query time. |
| BM25Similarity |
Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter). |
| BaseAzureMachineLearningVectorizerParameters |
Specifies the properties common between all AML vectorizer auth types. |
| BaseCharFilter |
Base type for character filters. |
| BaseCognitiveServicesAccount |
Base type for describing any Azure AI service resource attached to a skillset. |
| BaseDataChangeDetectionPolicy |
Base type for data change detection policies. |
| BaseDataDeletionDetectionPolicy |
Base type for data deletion detection policies. |
| BaseKnowledgeBaseActivityRecord |
Base type for activity records. |
| BaseKnowledgeBaseMessageContent |
Specifies the type of the message content. |
| BaseKnowledgeBaseModel |
Specifies the connection parameters for the model to use for query planning. |
| BaseKnowledgeBaseReference |
Base type for references. |
| BaseKnowledgeBaseRetrievalActivityRecord |
Represents a retrieval activity record. |
| BaseKnowledgeRetrievalReasoningEffort | |
| BaseKnowledgeSource |
Represents a knowledge source definition. |
| BaseKnowledgeSourceParams | |
| BaseKnowledgeSourceVectorizer |
Specifies the vectorization method to be used for the knowledge source embedding model, with an optional name. |
| BaseLexicalAnalyzer |
Base type for analyzers. |
| BaseLexicalNormalizer |
Base type for normalizers. |
| BaseLexicalTokenizer |
Base type for tokenizers. |
| BaseScoringFunction |
Base type for functions that can modify document scores during ranking. |
| BaseSearchIndexerDataIdentity |
Abstract base type for data identities. |
| BaseSearchIndexerSkill |
Base type for skills. |
| BaseSearchRequestOptions |
Parameters for filtering, sorting, faceting, paging, and other search query behaviors. |
| BaseTokenFilter |
Base type for token filters. |
| BaseVectorQuery |
The query parameters for vector and hybrid search queries. |
| BaseVectorSearchAlgorithmConfiguration |
Contains configuration options specific to the algorithm used during indexing and/or querying. |
| BaseVectorSearchCompression |
Contains configuration options specific to the compression method used during indexing or querying. |
| BaseVectorSearchVectorizer |
Contains specific details for a vectorization method to be used during query time. |
| BaseVectorThreshold |
The threshold used for vector queries. |
| BinaryQuantizationCompression |
Contains configuration options specific to the binary quantization compression method used during indexing and querying. |
| ChatCompletionResponseFormat |
Determines how the language model's response should be serialized. Defaults to 'text'. |
| ChatCompletionResponseFormatJsonSchemaProperties |
An open dictionary for extended properties. Required if 'type' == 'json_schema'. |
| ChatCompletionSchema |
Object defining the custom schema the model will use to structure its output. |
| ChatCompletionSkill |
A skill that calls a language model via Azure AI Foundry's Chat Completions endpoint. |
| CjkBigramTokenFilter |
Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene. |
| ClassicSimilarity |
Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries. |
| ClassicTokenizer |
Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene. |
| CognitiveServicesAccountKey |
The multi-region account key of an Azure AI service resource that's attached to a skillset. |
| CommonGramTokenFilter |
Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene. |
| CommonModelParameters |
Common language model parameters for Chat Completions. If omitted, default values are used. |
| CompletedSynchronizationState |
Represents the completed state of the last synchronization. |
| ComplexField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
| ConditionalSkill |
A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. |
| ContentUnderstandingSkill |
A skill that leverages Azure AI Content Understanding to process and extract structured insights from documents, enabling enriched, searchable content for enhanced document indexing and retrieval. |
| ContentUnderstandingSkillChunkingProperties |
Controls the cardinality for chunking the content. |
| CorsOptions |
Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
| CreateKnowledgeBaseOptions | |
| CreateKnowledgeSourceOptions | |
| CreateOrUpdateAliasOptions |
Options for create or update alias operation. |
| CreateOrUpdateIndexOptions |
Options for create/update index operation. |
| CreateOrUpdateKnowledgeBaseOptions | |
| CreateOrUpdateKnowledgeSourceOptions | |
| CreateOrUpdateSkillsetOptions |
Options for create/update skillset operation. |
| CreateOrUpdateSynonymMapOptions |
Options for create/update synonymmap operation. |
| CreateorUpdateDataSourceConnectionOptions |
Options for create/update datasource operation. |
| CreateorUpdateIndexerOptions |
Options for create/update indexer operation. |
| CustomAnalyzer |
Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer. |
| CustomEntity |
An object that contains information about the matches that were found, and related metadata. |
| CustomEntityAlias |
A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
| CustomEntityLookupSkill |
A skill that looks for text from a custom, user-defined list of words and phrases. |
| CustomNormalizer |
Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. This is a user-defined configuration consisting of at least one or more filters, which modify the token that is stored. |
| DebugInfo |
Contains debugging information that can be used to further explore your search results. |
| DefaultCognitiveServicesAccount |
An empty object that represents the default Azure AI service resource for a skillset. |
| DeleteAliasOptions |
Options for delete alias operation. |
| DeleteDataSourceConnectionOptions |
Options for delete datasource operation. |
| DeleteIndexOptions |
Options for delete index operation. |
| DeleteIndexerOptions |
Options for delete indexer operation. |
| DeleteKnowledgeBaseOptions | |
| DeleteKnowledgeSourceOptions | |
| DeleteSkillsetOptions |
Options for delete skillset operation. |
| DeleteSynonymMapOptions |
Options for delete synonymmap operation. |
| DictionaryDecompounderTokenFilter |
Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene. |
| DistanceScoringFunction |
Defines a function that boosts scores based on distance from a geographic location. |
| DistanceScoringParameters |
Provides parameter values to a distance scoring function. |
| DocumentDebugInfo |
Contains debugging information that can be used to further explore your search results. |
| DocumentExtractionSkill |
A skill that extracts content from a file within the enrichment pipeline. |
| DocumentIntelligenceLayoutSkill |
A skill that extracts content and layout information (as markdown), via Azure AI Services, from files within the enrichment pipeline. |
| DocumentIntelligenceLayoutSkillChunkingProperties |
Controls the cardinality for chunking the content. |
| EdgeNGramTokenFilter |
Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene. |
| EdgeNGramTokenizer |
Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
| ElisionTokenFilter |
Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene. |
| EntityLinkingSkill |
Using the Text Analytics API, extracts linked entities from text. |
| EntityRecognitionSkill |
Text analytics entity recognition. |
| EntityRecognitionSkillV3 |
Using the Text Analytics API, extracts entities of different types from text. |
| ExhaustiveKnnParameters |
Contains the parameters specific to exhaustive KNN algorithm. |
| ExtractiveQueryAnswer |
Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language. |
| ExtractiveQueryCaption |
Extracts captions from the matching documents that contain passages relevant to the search query. |
| FacetResult |
A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval. |
| FieldMapping |
Defines a mapping between a field in a data source and a target field in an index. |
| FieldMappingFunction |
Represents a function that transforms a value from a data source before indexing. |
| FreshnessScoringFunction |
Defines a function that boosts scores based on the value of a date-time field. |
| FreshnessScoringParameters |
Provides parameter values to a freshness scoring function. |
| GenerativeQueryRewrites |
Generate alternative query terms to increase the recall of a search request. |
| GetDocumentOptions |
Options for retrieving a single document. |
| GetIndexStatsSummaryOptionalParams |
Optional parameters. |
| GetIndexStatsSummaryOptions | |
| GetKnowledgeBaseOptions | |
| GetKnowledgeSourceOptions | |
| GetKnowledgeSourceStatusOptions | |
| HighWaterMarkChangeDetectionPolicy |
Defines a data change detection policy that captures changes based on the value of a high water mark column. |
| HnswParameters |
Contains the parameters specific to hnsw algorithm. |
| HybridSearchOptions |
The query parameters to configure hybrid search behaviors. |
| ImageAnalysisSkill |
A skill that analyzes image files. It extracts a rich set of visual features based on the image content. |
| IndexDocumentsClient |
Index Documents Client |
| IndexDocumentsOptions |
Options for the modify index batch operation. |
| IndexDocumentsResult |
Response containing the status of operations for all documents in the indexing request. |
| IndexStatisticsSummary |
Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date. |
| IndexedOneLakeKnowledgeSource |
Configuration for OneLake knowledge source. |
| IndexedOneLakeKnowledgeSourceParameters |
Parameters for OneLake knowledge source. |
| IndexedOneLakeKnowledgeSourceParams |
Specifies runtime parameters for an indexed OneLake knowledge source. |
| IndexedSharePointKnowledgeSource |
Configuration for SharePoint knowledge source. |
| IndexedSharePointKnowledgeSourceParameters |
Parameters for SharePoint knowledge source. |
| IndexedSharePointKnowledgeSourceParams |
Specifies runtime parameters for an indexed SharePoint knowledge source. |
| IndexerExecutionResult |
Represents the result of an individual indexer execution. |
| IndexerRuntime |
Represents the indexer's cumulative runtime consumption in the service. |
| IndexerState |
Represents all of the state that defines and dictates the indexer's current execution. |
| IndexersResyncOptionalParams |
Optional parameters. |
| IndexingParameters |
Represents parameters for indexer execution. |
| IndexingParametersConfiguration |
A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
| IndexingResult |
Status of an indexing operation for a single document. |
| IndexingSchedule |
Represents a schedule for indexer execution. |
| InputFieldMappingEntry |
Input field mapping for a skill. |
| KeepTokenFilter |
A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene. |
| KeyAuthAzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer with an authentication key. |
| KeyPhraseExtractionSkill |
A skill that uses text analytics for key phrase extraction. |
| KeywordMarkerTokenFilter |
Marks terms as keywords. This token filter is implemented using Apache Lucene. |
| KeywordTokenizer |
Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. |
| KnowledgeBase | |
| KnowledgeBaseAgenticReasoningActivityRecord |
Represents an agentic reasoning activity record. |
| KnowledgeBaseAzureBlobActivityArguments |
Represents the arguments the Azure Blob retrieval activity was run with. |
| KnowledgeBaseAzureBlobActivityRecord |
Represents an Azure Blob retrieval activity record. |
| KnowledgeBaseAzureBlobReference |
Represents an Azure Blob Storage document reference. |
| KnowledgeBaseAzureOpenAIModel |
Specifies the Azure OpenAI resource used to do query planning. |
| KnowledgeBaseErrorAdditionalInfo |
The resource management error additional info. |
| KnowledgeBaseErrorDetail |
The error details. |
| KnowledgeBaseIndexedOneLakeActivityArguments |
Represents the arguments the indexed OneLake retrieval activity was run with. |
| KnowledgeBaseIndexedOneLakeActivityRecord |
Represents an indexed OneLake retrieval activity record. |
| KnowledgeBaseIndexedOneLakeReference |
Represents an indexed OneLake document reference. |
| KnowledgeBaseIndexedSharePointActivityArguments |
Represents the arguments the indexed SharePoint retrieval activity was run with. |
| KnowledgeBaseIndexedSharePointActivityRecord |
Represents an indexed SharePoint retrieval activity record. |
| KnowledgeBaseIndexedSharePointReference |
Represents an indexed SharePoint document reference. |
| KnowledgeBaseMessage |
The natural language message style object. |
| KnowledgeBaseMessageImageContent |
Image message type. |
| KnowledgeBaseMessageImageContentImage | |
| KnowledgeBaseMessageTextContent |
Text message type. |
| KnowledgeBaseModelAnswerSynthesisActivityRecord |
Represents an LLM answer synthesis activity record. |
| KnowledgeBaseModelQueryPlanningActivityRecord |
Represents an LLM query planning activity record. |
| KnowledgeBaseRemoteSharePointActivityArguments |
Represents the arguments the remote SharePoint retrieval activity was run with. |
| KnowledgeBaseRemoteSharePointActivityRecord |
Represents a remote SharePoint retrieval activity record. |
| KnowledgeBaseRemoteSharePointReference |
Represents a remote SharePoint document reference. |
| KnowledgeBaseRetrievalRequest |
The input contract for the retrieval request. |
| KnowledgeBaseRetrievalResponse |
The output contract for the retrieval response. |
| KnowledgeBaseSearchIndexActivityArguments |
Represents the arguments the search index retrieval activity was run with. |
| KnowledgeBaseSearchIndexActivityRecord |
Represents a search index retrieval activity record. |
| KnowledgeBaseSearchIndexFieldReference | |
| KnowledgeBaseSearchIndexReference |
Represents an Azure Search document reference. |
| KnowledgeBaseWebActivityArguments |
Represents the arguments the web retrieval activity was run with. |
| KnowledgeBaseWebActivityRecord |
Represents a web retrieval activity record. |
| KnowledgeBaseWebReference |
Represents a web document reference. |
| KnowledgeRetrievalClientOptions |
Client options used to configure Cognitive Search API requests. |
| KnowledgeRetrievalIntent |
An intended query to execute without model query planning. |
| KnowledgeRetrievalLowReasoningEffort |
Run knowledge retrieval with low reasoning effort. |
| KnowledgeRetrievalMediumReasoningEffort |
Run knowledge retrieval with medium reasoning effort. |
| KnowledgeRetrievalMinimalReasoningEffort |
Run knowledge retrieval with minimal reasoning effort. |
| KnowledgeRetrievalReasoningEffort | |
| KnowledgeRetrievalSemanticIntent |
An intended query to execute without model query planning. |
| KnowledgeSourceAzureOpenAIVectorizer |
Specifies the Azure OpenAI resource used to vectorize a query string. |
| KnowledgeSourceIngestionParameters |
Consolidates all general ingestion settings for knowledge sources. |
| KnowledgeSourceReference | |
| KnowledgeSourceStatistics |
Statistical information about knowledge source synchronization history. |
| KnowledgeSourceStatus |
Represents the status and synchronization history of a knowledge source. |
| LanguageDetectionSkill |
A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis. |
| LengthTokenFilter |
Removes words that are too long or too short. This token filter is implemented using Apache Lucene. |
| LimitTokenFilter |
Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene. |
| ListIndexStatsSummary |
Response from a request to retrieve stats summary of all indexes. If successful, it includes the stats of each index in the service. |
| ListKnowledgeBasesOptions | |
| ListKnowledgeSourcesOptions | |
| ListSearchResultsPageSettings |
Arguments for retrieving the next page of search results. |
| LuceneStandardAnalyzer |
Standard Apache Lucene analyzer; Composed of the standard tokenizer, lowercase filter and stop filter. |
| LuceneStandardTokenizer |
Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene. |
| MagnitudeScoringFunction |
Defines a function that boosts scores based on the magnitude of a numeric field. |
| MagnitudeScoringParameters |
Provides parameter values to a magnitude scoring function. |
| MappingCharFilter |
A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene. |
| MergeSkill |
A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. |
| MicrosoftLanguageStemmingTokenizer |
Divides text using language-specific rules and reduces words to their base forms. |
| MicrosoftLanguageTokenizer |
Divides text using language-specific rules. |
| NGramTokenFilter |
Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene. |
| NGramTokenizer |
Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
| NativeBlobSoftDeleteDeletionDetectionPolicy |
Defines a data deletion detection policy utilizing Azure Blob Storage's native soft delete feature for deletion detection. |
| NoAuthAzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer with no authentication. |
| OcrSkill |
A skill that extracts text from image files. |
| OutputFieldMappingEntry |
Output field mapping for a skill. |
| PIIDetectionSkill |
Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it. |
| PathHierarchyTokenizer |
Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene. |
| PatternAnalyzer |
Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene. |
| PatternCaptureTokenFilter |
Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene. |
| PatternReplaceCharFilter |
A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene. |
| PatternReplaceTokenFilter |
A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene. |
| PatternTokenizer |
Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene. |
| PhoneticTokenFilter |
Create tokens for phonetic matches. This token filter is implemented using Apache Lucene. |
| QueryAnswerResult |
An answer is a text passage extracted from the contents of the most relevant documents that matched the query. Answers are extracted from the top search results. Answer candidates are scored and the top answers are selected. |
| QueryCaptionResult |
Captions are the most representative passages from the document relative to the search query. They are often used as a document summary. Captions are only returned for queries of type 'semantic'. |
| QueryResultDocumentInnerHit |
Detailed scoring information for an individual element of a complex collection. |
| QueryResultDocumentRerankerInput |
The raw concatenated strings that were sent to the semantic enrichment process. |
| QueryResultDocumentSemanticField |
Description of fields that were sent to the semantic enrichment process, as well as how they were used. |
| QueryResultDocumentSubscores |
The breakdown of subscores between the text and vector query components of the search query for this document. Each vector query is shown as a separate object in the same order they were received. |
| QueryRewritesDebugInfo |
Contains debugging information specific to query rewrites. |
| QueryRewritesValuesDebugInfo |
Contains debugging information specific to query rewrites. |
| RemoteSharePointKnowledgeSource |
Configuration for remote SharePoint knowledge source. |
| RemoteSharePointKnowledgeSourceParameters |
Parameters for remote SharePoint knowledge source. |
| RemoteSharePointKnowledgeSourceParams |
Specifies runtime parameters for a remote SharePoint knowledge source. |
| RescoringOptions |
Contains the options for rescoring. |
| ResetDocumentsOptions |
Options for reset docs operation. |
| ResetSkillsOptions |
Options for reset skills operation. |
| ResourceCounter |
Represents a resource's usage and quota. |
| RetrieveKnowledgeOptions | |
| ScalarQuantizationCompression |
Contains configuration options specific to the scalar quantization compression method used during indexing and querying. |
| ScalarQuantizationParameters |
Contains the parameters specific to Scalar Quantization. |
| ScoringProfile |
Defines parameters for a search index that influence scoring in search queries. |
| SearchAlias |
Represents an index alias, which describes a mapping from the alias name to an index. The alias name can be used in place of the index name for supported operations. |
| SearchClientOptions |
Client options used to configure AI Search API requests. |
| SearchDocumentsPageResult |
Response containing search page results from an index. |
| SearchDocumentsResult |
Response containing search results from an index. |
| SearchDocumentsResultBase |
Response containing search results from an index. |
| SearchIndex |
Represents a search index definition, which describes the fields and search behavior of an index. |
| SearchIndexClientOptions |
Client options used to configure AI Search API requests. |
| SearchIndexFieldReference | |
| SearchIndexKnowledgeSource |
Knowledge Source targeting a search index. |
| SearchIndexKnowledgeSourceParameters |
Parameters for search index knowledge source. |
| SearchIndexKnowledgeSourceParams |
Specifies runtime parameters for a search index knowledge source. |
| SearchIndexStatistics |
Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date. |
| SearchIndexer |
Represents an indexer. |
| SearchIndexerCache | |
| SearchIndexerClientOptions |
Client options used to configure AI Search API requests. |
| SearchIndexerDataContainer |
Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed. |
| SearchIndexerDataNoneIdentity |
Clears the identity property of a datasource. |
| SearchIndexerDataSourceConnection |
Represents a datasource definition, which can be used to configure an indexer. |
| SearchIndexerDataUserAssignedIdentity |
Specifies the identity for a datasource to use. |
| SearchIndexerError |
Represents an item- or document-level indexing error. |
| SearchIndexerIndexProjection |
Definition of additional projections to secondary search indexes. |
| SearchIndexerIndexProjectionParameters |
A dictionary of index projection-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
| SearchIndexerIndexProjectionSelector |
Description for what data to store in the designated search index. |
| SearchIndexerKnowledgeStore |
Definition of additional projections of enriched data to Azure Blob storage, tables, or files. |
| SearchIndexerKnowledgeStoreBlobProjectionSelector |
Abstract class to share properties between concrete selectors. |
| SearchIndexerKnowledgeStoreFileProjectionSelector |
Projection definition for what data to store in Azure Files. |
| SearchIndexerKnowledgeStoreObjectProjectionSelector |
Projection definition for what data to store in Azure Blob. |
| SearchIndexerKnowledgeStoreParameters |
A dictionary of knowledge store-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
| SearchIndexerKnowledgeStoreProjection |
Container object for various projection selectors. |
| SearchIndexerKnowledgeStoreProjectionSelector |
Abstract class to share properties between concrete selectors. |
| SearchIndexerKnowledgeStoreTableProjectionSelector |
Description for what data to store in Azure Tables. |
| SearchIndexerLimits | |
| SearchIndexerSkillset |
A list of skills. |
| SearchIndexerStatus |
Represents the current status and execution history of an indexer. |
| SearchIndexerWarning |
Represents an item-level warning. |
| SearchIndexingBufferedSenderOptions |
Options for SearchIndexingBufferedSender. |
| SearchResourceEncryptionKey |
A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure AI Search, such as indexes and synonym maps. |
| SearchScoreThreshold |
The results of the vector query will be filtered based on the '@search.score' value. Note this is the @search.score returned as part of the search response. The threshold direction will be chosen for higher @search.score. |
| SearchServiceStatistics |
Response from a get service statistics request. If successful, it includes service level counters and limits. |
| SearchSuggester |
Defines how the Suggest API should apply to a group of fields in the index. |
| SemanticConfiguration |
Defines a specific configuration to be used in the context of semantic capabilities. |
| SemanticDebugInfo |
Debug options for semantic search queries. |
| SemanticField |
A field that is used as part of the semantic configuration. |
| SemanticPrioritizedFields |
Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers. |
| SemanticSearch |
Defines parameters for a search index that influence semantic capabilities. |
| SemanticSearchOptions |
Defines options for semantic search queries. |
| SentimentSkill |
Text analytics positive-negative sentiment analysis, scored as a floating point value in a range of zero to 1. |
| SentimentSkillV3 |
Using the Text Analytics API, evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level. |
| ServiceCounters |
Represents service-level resource counters and quotas. |
| ServiceLimits |
Represents various service level limits. |
| ShaperSkill |
A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields). |
| SharePointSensitivityLabelInfo |
Information about the sensitivity label applied to a SharePoint document. |
| ShingleTokenFilter |
Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene. |
| Similarity |
Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results. |
| SimpleField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
| SingleVectorFieldResult |
A single vector field result. Both @search.score and vector similarity values are returned. Vector similarity is related to @search.score by an equation. |
| SnowballTokenFilter |
A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene. |
| SoftDeleteColumnDeletionDetectionPolicy |
Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column. |
| SplitSkill |
A skill to split a string into chunks of text. |
| SqlIntegratedChangeTrackingPolicy |
Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database. |
| StemmerOverrideTokenFilter |
Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene. |
| StemmerTokenFilter |
Language specific stemming filter. This token filter is implemented using Apache Lucene. |
| StopAnalyzer |
Divides text at non-letters; Applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene. |
| StopwordsTokenFilter |
Removes stop words from a token stream. This token filter is implemented using Apache Lucene. |
| SuggestDocumentsResult |
Response containing suggestion query results from an index. |
| SuggestRequest |
Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors. |
| SynchronizationState |
Represents the current state of an ongoing synchronization that spans multiple indexer runs. |
| SynonymMap |
Represents a synonym map definition. |
| SynonymTokenFilter |
Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene. |
| TagScoringFunction |
Defines a function that boosts scores of documents with string values matching a given list of tags. |
| TagScoringParameters |
Provides parameter values to a tag scoring function. |
| TextResult |
The BM25 or Classic score for the text portion of the query. |
| TextTranslationSkill |
A skill to translate text from one language to another. |
| TextWeights |
Defines weights on index fields for which matches should boost scoring in search queries. |
| TokenAuthAzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer with a managed identity. |
| TruncateTokenFilter |
Truncates the terms to a specific length. This token filter is implemented using Apache Lucene. |
| UaxUrlEmailTokenizer |
Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene. |
| UniqueTokenFilter |
Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene. |
| VectorSearch |
Contains configuration options related to vector search. |
| VectorSearchOptions |
Defines options for vector search queries. |
| VectorSearchProfile |
Defines a combination of configurations to use with vector search. |
| VectorSimilarityThreshold |
The results of the vector query will be filtered based on the vector similarity metric. Note this is the canonical definition of similarity metric, not the 'distance' version. The threshold direction (larger or smaller) will be chosen automatically according to the metric used by the field. |
| VectorizableImageBinaryQuery |
The query parameters to use for vector search when a base64-encoded binary image that needs to be vectorized is provided. |
| VectorizableImageUrlQuery |
The query parameters to use for vector search when a URL that represents an image value that needs to be vectorized is provided. |
| VectorizableTextQuery |
The query parameters to use for vector search when a text value that needs to be vectorized is provided. |
| VectorizedQuery |
The query parameters to use for vector search when a raw vector value is provided. |
| VectorsDebugInfo | |
| VisionVectorizeSkill |
Allows you to generate a vector embedding for a given image or text input using the Azure AI Services Vision Vectorize API. |
| WebApiParameters |
Specifies the properties for connecting to a user-defined vectorizer. |
| WebApiSkill |
A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. |
| WebApiVectorizer |
Specifies a user-defined vectorizer for generating the vector embedding of a query string. Integration of an external vectorizer is achieved using the custom Web API interface of a skillset. |
| WebKnowledgeSource |
Knowledge Source targeting web results. |
| WebKnowledgeSourceDomain |
Configuration for web knowledge source domain. |
| WebKnowledgeSourceDomains |
Domain allow/block configuration for web knowledge source. |
| WebKnowledgeSourceParameters |
Parameters for web knowledge source. |
| WebKnowledgeSourceParams |
Specifies runtime parameters for a web knowledge source. |
| WordDelimiterTokenFilter |
Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene. |
Type Aliases
| AIFoundryModelCatalogName |
Defines values for AIFoundryModelCatalogName. Known values supported by the service: OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32 |
| AliasIterator |
An iterator for listing the aliases that exist in the Search service. This will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| AnalyzeTextOptions |
Options for analyze text operation. |
| AutocompleteMode |
Defines values for AutocompleteMode. |
| AutocompleteOptions |
Options for retrieving completion text for a partial searchText. |
| AzureMachineLearningVectorizerParameters |
Specifies the properties for connecting to an AML vectorizer. |
| AzureOpenAIModelName |
Defines values for AzureOpenAIModelName. Known values supported by the service: text-embedding-ada-002 |
| BaseKnowledgeRetrievalIntent | |
| BaseKnowledgeRetrievalOutputMode |
Defines values for KnowledgeRetrievalOutputMode. Known values supported by the service: extractiveData: Return data from the knowledge sources directly without generative alteration. |
| BlobIndexerDataToExtract | |
| BlobIndexerImageAction | |
| BlobIndexerPDFTextRotationAlgorithm | |
| BlobIndexerParsingMode | |
| CharFilter |
Contains the possible cases for CharFilter. |
| CharFilterName |
Defines values for CharFilterName. Known values supported by the service: html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html |
| ChatCompletionExtraParametersBehavior |
Defines values for ChatCompletionExtraParametersBehavior. Known values supported by the service: passThrough: Passes any extra parameters directly to the model. |
| ChatCompletionResponseFormatType |
Defines values for ChatCompletionResponseFormatType. Known values supported by the service: text |
| CjkBigramTokenFilterScripts |
Defines values for CjkBigramTokenFilterScripts. |
| CognitiveServicesAccount |
Contains the possible cases for CognitiveServicesAccount. |
| ComplexDataType |
Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)' |
| ContentUnderstandingSkillChunkingUnit |
Defines values for ContentUnderstandingSkillChunkingUnit. Known values supported by the service: characters: Specifies chunk by characters. |
| ContentUnderstandingSkillExtractionOptions |
Defines values for ContentUnderstandingSkillExtractionOptions. Known values supported by the service: images: Specify that image content should be extracted from the document. |
| CountDocumentsOptions |
Options for performing the count operation on the index. |
| CreateAliasOptions |
Options for create alias operation. |
| CreateDataSourceConnectionOptions |
Options for create datasource operation. |
| CreateIndexOptions |
Options for create index operation. |
| CreateIndexerOptions |
Options for create indexer operation. |
| CreateSkillsetOptions |
Options for create skillset operation. |
| CreateSynonymMapOptions |
Options for create synonymmap operation. |
| CustomEntityLookupSkillLanguage | |
| DataChangeDetectionPolicy |
Contains the possible cases for DataChangeDetectionPolicy. |
| DataDeletionDetectionPolicy |
Contains the possible cases for DataDeletionDetectionPolicy. |
| DeleteDocumentsOptions |
Options for the delete documents operation. |
| DocumentIntelligenceLayoutSkillChunkingUnit |
Defines values for DocumentIntelligenceLayoutSkillChunkingUnit. Known values supported by the service: characters: Specifies chunk by characters. |
| DocumentIntelligenceLayoutSkillExtractionOptions |
Defines values for DocumentIntelligenceLayoutSkillExtractionOptions. Known values supported by the service: images: Specify that image content should be extracted from the document. |
| DocumentIntelligenceLayoutSkillMarkdownHeaderDepth |
Defines values for DocumentIntelligenceLayoutSkillMarkdownHeaderDepth. Known values supported by the service: h1: Header level 1. |
| DocumentIntelligenceLayoutSkillOutputFormat |
Defines values for DocumentIntelligenceLayoutSkillOutputFormat. Known values supported by the service: text: Specify the format of the output as text. |
| DocumentIntelligenceLayoutSkillOutputMode |
Defines values for DocumentIntelligenceLayoutSkillOutputMode. Known values supported by the service: oneToMany: Specify that the output should be parsed as 'oneToMany'. |
| EdgeNGramTokenFilterSide |
Defines values for EdgeNGramTokenFilterSide. |
| EntityCategory | |
| EntityRecognitionSkillLanguage | |
| ExcludedODataTypes | |
| ExhaustiveKnnAlgorithmConfiguration |
Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index. |
| ExtractDocumentKey | |
| GetAliasOptions |
Options for get alias operation. |
| GetDataSourceConnectionOptions |
Options for get datasource operation. |
| GetIndexOptions |
Options for get index operation. |
| GetIndexStatisticsOptions |
Options for get index statistics operation. |
| GetIndexStatsSummaryResponse |
Contains response data for the getIndexStatsSummary operation. |
| GetIndexerOptions |
Options for get indexer operation. |
| GetIndexerStatusOptions |
Options for get indexer status operation. |
| GetServiceStatisticsOptions |
Options for get service statistics operation. |
| GetSkillSetOptions |
Options for get skillset operation. |
| GetSynonymMapsOptions |
Options for get synonymmaps operation. |
| HnswAlgorithmConfiguration |
Contains configuration options specific to the hnsw approximate nearest neighbors algorithm used during indexing time. |
| HybridCountAndFacetMode |
Defines values for HybridCountAndFacetMode. Known values supported by the service: countRetrievableResults: Only include documents that were matched within the 'maxTextRecallSize' retrieval window when computing 'count' and 'facets'. |
| ImageAnalysisSkillLanguage | |
| ImageDetail | |
| IndexActionType |
Defines values for IndexActionType. |
| IndexDocumentsAction |
Represents an index action that operates on a document. |
| IndexIterator |
An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| IndexNameIterator |
An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| IndexProjectionMode |
Defines values for IndexProjectionMode. Known values supported by the service: skipIndexingParentDocuments: The source document will be skipped from writing into the indexer's target index. |
| IndexStatisticsSummaryIterator |
An iterator for statistics summaries for each index in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| IndexedSharePointContainerName |
Defines values for IndexedSharePointContainerName. Known values supported by the service: defaultSiteLibrary: Index content from the site's default document library. |
| IndexerExecutionEnvironment | |
| IndexerExecutionStatus |
Defines values for IndexerExecutionStatus. |
| IndexerExecutionStatusDetail |
Defines values for IndexerExecutionStatusDetail. Known values supported by the service: resetDocs: Indicates that the reset that occurred was for a call to ResetDocs. |
| IndexerPermissionOption |
Defines values for IndexerPermissionOption. Known values supported by the service: userIds: Indexer to ingest ACL userIds from data source to index. |
| IndexerResyncOption |
Defines values for IndexerResyncOption. Known values supported by the service: permissions: Indexer to re-ingest pre-selected permissions data from data source to index. |
| IndexerStatus |
Defines values for IndexerStatus. |
| IndexingMode |
Defines values for IndexingMode. Known values supported by the service: indexingAllDocs: The indexer is indexing all documents in the datasource. |
| KeyPhraseExtractionSkillLanguage | |
| KnowledgeBaseActivityRecord | |
| KnowledgeBaseIterator |
An iterator for listing the knowledge bases that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| KnowledgeBaseMessageContent | |
| KnowledgeBaseModel | |
| KnowledgeBaseReference | |
| KnowledgeBaseRetrievalActivityRecord | |
| KnowledgeRetrievalOutputMode |
Defines values for KnowledgeRetrievalOutputMode. Known values supported by the service: extractiveData: Return data from the knowledge sources directly without generative alteration. |
| KnowledgeRetrievalReasoningEffortUnion | |
| KnowledgeSource | |
| KnowledgeSourceContentExtractionMode |
Defines values for KnowledgeSourceContentExtractionMode. Known values supported by the service: minimal: Extracts only essential metadata while deferring most content processing. |
| KnowledgeSourceIngestionPermissionOption |
Defines values for KnowledgeSourceIngestionPermissionOption. Known values supported by the service: userIds: Ingest explicit user identifiers alongside document content. |
| KnowledgeSourceIterator |
An iterator for listing the knowledge sources that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| KnowledgeSourceKind |
Defines values for KnowledgeSourceKind. Known values supported by the service: searchIndex: A knowledge source that retrieves data from a Search Index. |
| KnowledgeSourceParams | |
| KnowledgeSourceSynchronizationStatus |
Defines values for KnowledgeSourceSynchronizationStatus. Known values supported by the service: creating: The knowledge source is being provisioned. |
| KnowledgeSourceVectorizer | |
| LexicalAnalyzer |
Contains the possible cases for Analyzer. |
| LexicalAnalyzerName |
Defines values for LexicalAnalyzerName. Known values supported by the service: ar.microsoft: Microsoft analyzer for Arabic. |
| LexicalNormalizer |
Contains the possible cases for LexicalNormalizer. |
| LexicalNormalizerName |
Defines values for LexicalNormalizerName. Known values supported by the service: asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html |
| LexicalTokenizer |
Contains the possible cases for Tokenizer. |
| LexicalTokenizerName |
Defines values for LexicalTokenizerName. Known values supported by the service: classic: Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html |
| ListAliasesOptions |
Options for list aliases operation. |
| ListDataSourceConnectionsOptions |
Options for a list data sources operation. |
| ListIndexersOptions |
Options for a list indexers operation. |
| ListIndexesOptions |
Options for a list indexes operation. |
| ListSkillsetsOptions |
Options for a list skillsets operation. |
| ListSynonymMapsOptions |
Options for a list synonymMaps operation. |
| MarkdownHeaderDepth |
Defines values for MarkdownHeaderDepth. Known values supported by the service: h1: Indicates that headers up to a level of h1 will be considered while grouping markdown content. |
| MarkdownParsingSubmode |
Defines values for MarkdownParsingSubmode. Known values supported by the service: oneToMany: Indicates that each section of the markdown file (up to a specified depth) will be parsed into individual search documents. This can result in a single markdown file producing multiple search documents. This is the default sub-mode. |
| MergeDocumentsOptions |
Options for the merge documents operation. |
| MergeOrUploadDocumentsOptions |
Options for the merge or upload documents operation. |
| MicrosoftStemmingTokenizerLanguage |
Defines values for MicrosoftStemmingTokenizerLanguage. |
| MicrosoftTokenizerLanguage |
Defines values for MicrosoftTokenizerLanguage. |
| NarrowedModel |
Narrows the Model type to include only the selected Fields. |
| OcrLineEnding |
Defines values for OcrLineEnding. Known values supported by the service: space: Lines are separated by a single space character. |
| OcrSkillLanguage | |
| PIIDetectionSkillMaskingMode | |
| PermissionFilter |
Defines values for PermissionFilter. Known values supported by the service: userIds: Field represents user IDs that should be used to filter document access on queries. |
| PhoneticEncoder |
Defines values for PhoneticEncoder. |
| QueryAnswer |
A value that specifies whether answers should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set to 'extractive', the query returns answers extracted from key passages in the highest ranked documents. |
| QueryCaption |
A value that specifies whether captions should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set, the query returns captions extracted from key passages in the highest ranked documents. When Captions is 'extractive', highlighting is enabled by default. Defaults to 'none'. |
| QueryDebugMode |
Defines values for QueryDebugMode. Known values supported by the service: disabled: No query debugging information will be returned. |
| QueryLanguage |
Defines values for QueryLanguage. Known values supported by the service: none: Query language not specified. |
| QueryRewrites |
Defines options for query rewrites. |
| QuerySpeller |
Defines values for QuerySpellerType. Known values supported by the service: none: Speller not enabled. |
| QueryType |
Defines values for QueryType. |
| RankingOrder |
Defines values for RankingOrder. Known values supported by the service: BoostedRerankerScore: Sets sort order as BoostedRerankerScore |
| RegexFlags | |
| ResetIndexerOptions |
Options for reset indexer operation. |
| RunIndexerOptions |
Options for run indexer operation. |
| ScoringFunction |
Contains the possible cases for ScoringFunction. |
| ScoringFunctionAggregation |
Defines values for ScoringFunctionAggregation. |
| ScoringFunctionInterpolation |
Defines values for ScoringFunctionInterpolation. |
| ScoringStatistics |
Defines values for ScoringStatistics. |
| SearchField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
| SearchFieldArray |
If TModel is an untyped object, an untyped string array. Otherwise, the slash-delimited fields of TModel. |
| SearchFieldDataType |
Defines values for SearchFieldDataType. Known values supported by the service: Edm.String: Indicates that a field contains a string. Edm.Int32: Indicates that a field contains a 32-bit signed integer. Edm.Int64: Indicates that a field contains a 64-bit signed integer. Edm.Double: Indicates that a field contains an IEEE double-precision floating point number. Edm.Boolean: Indicates that a field contains a Boolean value (true or false). Edm.DateTimeOffset: Indicates that a field contains a date/time value, including timezone information. Edm.GeographyPoint: Indicates that a field contains a geo-location in terms of longitude and latitude. Edm.ComplexType: Indicates that a field contains one or more complex objects that in turn have sub-fields of other types. Edm.Single: Indicates that a field contains a single-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Single). Edm.Half: Indicates that a field contains a half-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Half). Edm.Int16: Indicates that a field contains a 16-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Int16). Edm.SByte: Indicates that a field contains an 8-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.SByte). Edm.Byte: Indicates that a field contains an 8-bit unsigned integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Byte). |
| SearchIndexAlias |
Search Alias object. |
| SearchIndexPermissionFilterOption |
Defines values for SearchIndexPermissionFilterOption. Known values supported by the service: enabled |
| SearchIndexerDataIdentity |
Contains the possible cases for SearchIndexerDataIdentity. |
| SearchIndexerDataSourceType | |
| SearchIndexerSkill |
Contains the possible cases for Skill. |
| SearchIndexingBufferedSenderDeleteDocumentsOptions |
Options for SearchIndexingBufferedSenderDeleteDocuments. |
| SearchIndexingBufferedSenderFlushDocumentsOptions |
Options for SearchIndexingBufferedSenderFlushDocuments. |
| SearchIndexingBufferedSenderMergeDocumentsOptions |
Options for SearchIndexingBufferedSenderMergeDocuments. |
| SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions |
Options for SearchIndexingBufferedSenderMergeOrUploadDocuments. |
| SearchIndexingBufferedSenderUploadDocumentsOptions |
Options for SearchIndexingBufferedSenderUploadDocuments. |
| SearchIterator |
An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
| SearchMode |
Defines values for SearchMode. |
| SearchOptions |
Options for committing a full search request. |
| SearchPick |
Deeply pick fields of T using valid AI Search OData $select paths. |
| SearchRequestOptions |
Parameters for filtering, sorting, faceting, paging, and other search query behaviors. |
| SearchRequestQueryTypeOptions | |
| SearchResult |
Contains a document found by a search query, plus associated metadata. |
| SelectArray |
If TFields is never, an untyped string array. Otherwise, a narrowed Fields[] type to be used elsewhere in the consuming type. |
| SelectFields |
Produces a union of valid AI Search OData $select paths for T using a post-order traversal of the field tree rooted at T. |
| SemanticErrorMode | |
| SemanticErrorReason | |
| SemanticFieldState |
Defines values for SemanticFieldState. Known values supported by the service: used: The field was fully used for semantic enrichment. |
| SemanticQueryRewritesResultType |
Defines values for SemanticQueryRewritesResultType. Known values supported by the service: originalQueryOnly: Query rewrites were not successfully generated for this request. Only the original query was used to retrieve the results. |
| SemanticSearchResultsType | |
| SentimentSkillLanguage | |
| SimilarityAlgorithm |
Contains the possible cases for Similarity. |
| SnowballTokenFilterLanguage |
Defines values for SnowballTokenFilterLanguage. |
| SplitSkillEncoderModelName |
Defines values for SplitSkillEncoderModelName. Known values supported by the service: r50k_base: Refers to a base model trained with a 50,000 token vocabulary, often used in general natural language processing tasks. |
| SplitSkillLanguage | |
| SplitSkillUnit |
Defines values for SplitSkillUnit. Known values supported by the service: characters: The length will be measured by character. |
| StemmerTokenFilterLanguage |
Defines values for StemmerTokenFilterLanguage. |
| StopwordsList |
Defines values for StopwordsList. |
| SuggestNarrowedModel | |
| SuggestOptions |
Options for retrieving suggestions based on the searchText. |
| SuggestResult |
A result containing a document found by a suggestion query, plus associated metadata. |
| TextSplitMode | |
| TextTranslationSkillLanguage | |
| TokenCharacterKind |
Defines values for TokenCharacterKind. |
| TokenFilter |
Contains the possible cases for TokenFilter. |
| TokenFilterName |
Defines values for TokenFilterName. Known values supported by the service: arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html |
| UnionToIntersection | |
| UploadDocumentsOptions |
Options for the upload documents operation. |
| VectorEncodingFormat |
Defines values for VectorEncodingFormat. Known values supported by the service: packedBit: Encoding format representing bits packed into a wider data type. |
| VectorFilterMode | |
| VectorQuery |
The query parameters for vector and hybrid search queries. |
| VectorQueryKind | |
| VectorSearchAlgorithmConfiguration |
Contains configuration options specific to the algorithm used during indexing and/or querying. |
| VectorSearchAlgorithmKind | |
| VectorSearchAlgorithmMetric | |
| VectorSearchCompression |
Contains configuration options specific to the compression method used during indexing or querying. |
| VectorSearchCompressionKind |
Defines values for VectorSearchCompressionKind. Known values supported by the service: scalarQuantization: Scalar Quantization, a type of compression method. In scalar quantization, the original vector values are compressed to a narrower type by discretizing and representing each component of a vector using a reduced set of quantized values, thereby reducing the overall data size. |
| VectorSearchCompressionRescoreStorageMethod |
Defines values for VectorSearchCompressionRescoreStorageMethod. Known values supported by the service: preserveOriginals: This option preserves the original full-precision vectors. Choose this option for maximum flexibility and highest quality of compressed search results. This consumes more storage but allows for rescoring and oversampling. |
| VectorSearchCompressionTarget |
Defines values for VectorSearchCompressionTarget. Known values supported by the service: int8 |
| VectorSearchVectorizer |
Contains configuration options on how to vectorize text vector queries. |
| VectorSearchVectorizerKind |
Defines values for VectorSearchVectorizerKind. Known values supported by the service: azureOpenAI: Generate embeddings using an Azure OpenAI resource at query time. |
| VectorThreshold |
The threshold used for vector queries. |
| VisualFeature | |
| WebApiSkills | |
Enums
Functions
| createSynonymMapFromFile(string, string) |
Helper method to create a SynonymMap object. This is a NodeJS only method. |
| odata(TemplateStringsArray, unknown[]) |
Escapes an odata filter expression to avoid errors with quoting string literals.
For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter |
Variables
| DEFAULT_BATCH_SIZE | Default Batch Size |
| DEFAULT_FLUSH_WINDOW | Default window flush interval |
| DEFAULT_RETRY_COUNT | Default number of times to retry. |
Function Details
createSynonymMapFromFile(string, string)
Helper method to create a SynonymMap object. This is a NodeJS only method.
function createSynonymMapFromFile(name: string, filePath: string): Promise<SynonymMap>
Parameters
- name
-
string
Name of the SynonymMap.
- filePath
-
string
Path of the file that contains the synonyms (separated by new lines).
Returns
Promise<SynonymMap>
SynonymMap object
odata(TemplateStringsArray, unknown[])
Escapes an odata filter expression to avoid errors with quoting string literals. Example usage:
import { odata } from "@azure/search-documents";
const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;
For more information on supported syntax see: https://learn.microsoft.com/azure/search/search-query-odata-filter
function odata(strings: TemplateStringsArray, values: unknown[]): string
Parameters
- strings
-
TemplateStringsArray
Array of strings for the expression
- values
-
unknown[]
Array of values for the expression
Returns
string
Variable Details
DEFAULT_BATCH_SIZE
Default Batch Size
DEFAULT_BATCH_SIZE: number
Type
number
DEFAULT_FLUSH_WINDOW
Default window flush interval
DEFAULT_FLUSH_WINDOW: number
Type
number
DEFAULT_RETRY_COUNT
Default number of times to retry.
DEFAULT_RETRY_COUNT: number
Type
number