com.azure.search.documents.indexes.models
Package containing the data models for SearchServiceClient, a client that can be used to manage and query indexes and documents, as well as manage other resources, on a search service.
Classes
| AnalyzeTextOptions | Specifies some text and analysis components used to break that text into tokens. |
| AnalyzedTokenInfo | Information about a token returned by an analyzer. |
| AsciiFoldingTokenFilter | Converts alphabetic, numeric, and symbolic Unicode characters that are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. |
| AzureOpenAIEmbeddingSkill | Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource. |
| AzureOpenAIModelName | The Azure OpenAI model name that will be called. |
| AzureOpenAIVectorizer | Specifies the Azure OpenAI resource used to vectorize a query string. |
| AzureOpenAIVectorizerParameters | Specifies the parameters for connecting to the Azure OpenAI resource. |
| BM25SimilarityAlgorithm | Ranking function based on the Okapi BM25 similarity algorithm. |
| BinaryQuantizationCompression | Contains configuration options specific to the binary quantization compression method used during indexing and querying. |
| BlobIndexerDataToExtract | Specifies the data to extract from Azure blob storage and tells the indexer which data to extract from image content when "imageAction" is set to a value other than "none". |
| BlobIndexerImageAction | Determines how to process embedded images and image files in Azure blob storage. |
| BlobIndexerParsingMode | Represents the parsing mode for indexing from an Azure blob data source. |
| BlobIndexerPdfTextRotationAlgorithm | Determines the algorithm for text extraction from PDF files in Azure blob storage. |
| CharFilter | Base type for character filters. |
| CharFilterName | Defines the names of all character filters supported by the search engine. |
| CjkBigramTokenFilter | Forms bigrams of CJK terms that are generated from the standard tokenizer. |
| ClassicSimilarityAlgorithm | Legacy similarity algorithm that uses the Lucene TFIDFSimilarity implementation of TF-IDF. |
| ClassicTokenizer | Grammar-based tokenizer that is suitable for processing most European-language documents. |
| CognitiveServicesAccount | Base type for describing any Azure AI service resource attached to a skillset. |
| CognitiveServicesAccountKey | The multi-region account key of an Azure AI service resource that's attached to a skillset. |
| CommonGramTokenFilter | Constructs bigrams for frequently occurring terms while indexing. |
| ConditionalSkill | A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. |
| CorsOptions | Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
| CustomAnalyzer | Allows you to take control over the process of converting text into indexable/searchable tokens. |
| CustomEntity | An object that contains information about the matches that were found, and related metadata. |
| CustomEntityAlias | A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
| CustomEntityLookupSkill | A skill that looks for text from a custom, user-defined list of words and phrases. |
| CustomEntityLookupSkillLanguage | The language codes supported for input text by CustomEntityLookupSkill. |
| CustomNormalizer | Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. |
| DataChangeDetectionPolicy | Base type for data change detection policies. |
| DataDeletionDetectionPolicy | Base type for data deletion detection policies. |
| DefaultCognitiveServicesAccount | An empty object that represents the default Azure AI service resource for a skillset. |
| DictionaryDecompounderTokenFilter | Decomposes compound words found in many Germanic languages. |
| DistanceScoringFunction | Defines a function that boosts scores based on distance from a geographic location. |
| DistanceScoringParameters | Provides parameter values to a distance scoring function. |
| DocumentExtractionSkill | A skill that extracts content from a file within the enrichment pipeline. |
| DocumentIntelligenceLayoutSkill | A skill that extracts content and layout information, via Azure AI Services, from files within the enrichment pipeline. |
| DocumentIntelligenceLayoutSkillChunkingProperties | Controls the cardinality for chunking the content. |
| DocumentIntelligenceLayoutSkillChunkingUnit | Controls the cardinality of the chunk unit. |
| DocumentIntelligenceLayoutSkillExtractionOptions | Controls the cardinality of the content extracted from the document by the skill. |
| DocumentIntelligenceLayoutSkillMarkdownHeaderDepth | The depth of headers in the markdown output. |
| DocumentIntelligenceLayoutSkillOutputFormat | Controls the cardinality of the output format. |
| DocumentIntelligenceLayoutSkillOutputMode | Controls the cardinality of the output produced by the skill. |
| EdgeNGramTokenFilter | Generates n-grams of the given size(s) starting from the front or the back of an input token. |
| EdgeNGramTokenizer | Tokenizes the input from an edge into n-grams of the given size(s). |
| ElisionTokenFilter | Removes elisions. |
| EntityCategory | A string indicating what entity categories to return. |
| EntityLinkingSkill | Using the Text Analytics API, extracts linked entities from text. |
| EntityRecognitionSkill | Text analytics entity recognition. |
| EntityRecognitionSkillLanguage | Deprecated. |
| ExhaustiveKnnAlgorithmConfiguration | Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index. |
| ExhaustiveKnnParameters | Contains the parameters specific to the exhaustive KNN algorithm. |
| FieldBuilderOptions | Additional parameters to build SearchField. |
| FieldMapping | Defines a mapping between a field in a data source and a target field in an index. |
| FieldMappingFunction | Represents a function that transforms a value from a data source before indexing. |
| FreshnessScoringFunction | Defines a function that boosts scores based on the value of a date-time field. |
| FreshnessScoringParameters | Provides parameter values to a freshness scoring function. |
| HighWaterMarkChangeDetectionPolicy | Defines a data change detection policy that captures changes based on the value of a high water mark column. |
| HnswAlgorithmConfiguration | Contains configuration options specific to the HNSW approximate nearest neighbors algorithm used during indexing and querying. |
| HnswParameters | Contains the parameters specific to the HNSW algorithm. |
| ImageAnalysisSkill | A skill that analyzes image files. |
| ImageAnalysisSkillLanguage | The language codes supported for input by ImageAnalysisSkill. |
| ImageDetail | A string indicating which domain-specific details to return. |
| IndexDocumentsBatch<T> | Contains a batch of document write actions to send to the index. |
| IndexProjectionMode | Defines behavior of the index projections in relation to the rest of the indexer. |
| IndexerExecutionEnvironment | Specifies the environment in which the indexer should execute. |
| IndexerExecutionResult | Represents the result of an individual indexer execution. |
| IndexingParameters | Represents parameters for indexer execution. |
| IndexingParametersConfiguration | A dictionary of indexer-specific configuration properties. |
| IndexingSchedule | Represents a schedule for indexer execution. |
| InputFieldMappingEntry | Input field mapping for a skill. |
| KeepTokenFilter | A token filter that only keeps tokens with text contained in a specified list of words. |
| KeyPhraseExtractionSkill | A skill that uses text analytics for key phrase extraction. |
| KeyPhraseExtractionSkillLanguage | The language codes supported for input text by KeyPhraseExtractionSkill. |
| KeywordMarkerTokenFilter | Marks terms as keywords. |
| KeywordTokenizer | Emits the entire input as a single token. |
| LanguageDetectionSkill | A skill that detects the language of input text and reports a single language code for every document submitted on the request. |
| LengthTokenFilter | Removes words that are too long or too short. |
| LexicalAnalyzer | Base type for analyzers. |
| LexicalAnalyzerName | Defines the names of all text analyzers supported by the search engine. |
| LexicalNormalizer | Base type for normalizers. |
| LexicalNormalizerName | Defines the names of all text normalizers supported by the search engine. |
| LexicalTokenizer | Base type for tokenizers. |
| LexicalTokenizerName | Defines the names of all tokenizers supported by the search engine. |
| LimitTokenFilter | Limits the number of tokens while indexing. |
| LuceneStandardAnalyzer | Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter. |
| LuceneStandardTokenizer | Breaks text following the Unicode Text Segmentation rules. |
| MagnitudeScoringFunction | Defines a function that boosts scores based on the magnitude of a numeric field. |
| MagnitudeScoringParameters | Provides parameter values to a magnitude scoring function. |
| MappingCharFilter | A character filter that applies mappings defined with the mappings option. |
| MergeSkill | A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. |
| MicrosoftLanguageStemmingTokenizer | Divides text using language-specific rules and reduces words to their base forms. |
| MicrosoftLanguageTokenizer | Divides text using language-specific rules. |
| NGramTokenFilter | Generates n-grams of the given size(s). |
| NGramTokenizer | Tokenizes the input into n-grams of the given size(s). |
| OcrLineEnding | Defines the sequence of characters to use between the lines of text recognized by the OCR skill. |
| OcrSkill | A skill that extracts text from image files. |
| OcrSkillLanguage | The language codes supported for input by OcrSkill. |
| OutputFieldMappingEntry | Output field mapping for a skill. |
| PathHierarchyTokenizer | Tokenizer for path-like hierarchies. |
| PatternAnalyzer | Flexibly separates text into terms via a regular expression pattern. |
| PatternCaptureTokenFilter | Uses Java regexes to emit multiple tokens, one for each capture group in one or more patterns. |
| PatternReplaceCharFilter | A character filter that replaces characters in the input string. |
| PatternReplaceTokenFilter | A token filter that replaces characters in the input string. |
| PatternTokenizer | Tokenizer that uses regex pattern matching to construct distinct tokens. |
| PhoneticTokenFilter | Creates tokens for phonetic matches. |
| PiiDetectionSkill | Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it. |
| PiiDetectionSkillMaskingMode | A string indicating what maskingMode to use to mask the personal information detected in the input text. |
| RankingOrder | Represents the score to use for the sort order of documents. |
| RegexFlags | Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer. |
| RescoringOptions | Contains the options for rescoring. |
| ResourceCounter | Represents a resource's usage and quota. |
| ScalarQuantizationCompression | Contains configuration options specific to the scalar quantization compression method used during indexing and querying. |
| ScalarQuantizationParameters | Contains the parameters specific to scalar quantization. |
| ScoringFunction | Base type for functions that can modify document scores during ranking. |
| ScoringProfile | Defines parameters for a search index that influence scoring in search queries. |
| SearchField | Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
| SearchFieldDataType | Defines the data type of a field in a search index. |
| SearchIndex | Represents a search index definition, which describes the fields and search behavior of an index. |
| SearchIndexStatistics | Statistics for a given index. |
| SearchIndexer | Represents an indexer. |
| SearchIndexerDataContainer | Represents information about the entity (such as an Azure SQL table or CosmosDB collection) that will be indexed. |
| SearchIndexerDataIdentity | Abstract base type for data identities. |
| SearchIndexerDataNoneIdentity | Clears the identity property of a datasource. |
| SearchIndexerDataSourceConnection | Represents a datasource definition, which can be used to configure an indexer. |
| SearchIndexerDataSourceType | Defines the type of a datasource. |
| SearchIndexerDataUserAssignedIdentity | Specifies the identity for a datasource to use. |
| SearchIndexerError | Represents an item- or document-level indexing error. |
| SearchIndexerIndexProjection | Definition of additional projections to secondary search indexes. |
| SearchIndexerIndexProjectionSelector | Description of what data to store in the designated search index. |
| SearchIndexerIndexProjectionsParameters | A dictionary of index projection-specific configuration properties. |
| SearchIndexerKnowledgeStore | Definition of additional projections of enriched data to Azure blob, table, or file storage. |
| SearchIndexerKnowledgeStoreBlobProjectionSelector | Abstract class to share properties between concrete selectors. |
| SearchIndexerKnowledgeStoreFileProjectionSelector | Projection definition for what data to store in Azure Files. |
| SearchIndexerKnowledgeStoreObjectProjectionSelector | Projection definition for what data to store in Azure Blob. |
| SearchIndexerKnowledgeStoreParameters | A dictionary of knowledge store-specific configuration properties. |
| SearchIndexerKnowledgeStoreProjection | Container object for various projection selectors. |
| SearchIndexerKnowledgeStoreProjectionSelector | Abstract class to share properties between concrete selectors. |
| SearchIndexerKnowledgeStoreTableProjectionSelector | Description of what data to store in Azure Tables. |
| SearchIndexerLimits | Represents the limits that apply to an indexer. |
| SearchIndexerSkill | Base type for skills. |
| SearchIndexerSkillset | A list of skills. |
| SearchIndexerStatus | Represents the current status and execution history of an indexer. |
| SearchIndexerWarning | Represents an item-level warning. |
| SearchResourceEncryptionKey | A customer-managed encryption key in Azure Key Vault. |
| SearchServiceCounters | Represents service-level resource counters and quotas. |
| SearchServiceLimits | Represents various service-level limits. |
| SearchServiceStatistics | Response from a get service statistics request. |
| SearchSuggester | Defines how the Suggest API should apply to a group of fields in the index. |
| SemanticConfiguration | Defines a specific configuration to be used in the context of semantic capabilities. |
| SemanticField | A field that is used as part of the semantic configuration. |
| SemanticPrioritizedFields | Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers. |
| SemanticSearch | Defines parameters for a search index that influence semantic capabilities. |
| SentimentSkill | Text analytics positive-negative sentiment analysis, scored as a floating-point value in a range of zero to 1. |
| SentimentSkillLanguage | Deprecated. |
| ShaperSkill | A skill for reshaping the outputs. |
| ShingleTokenFilter | Creates combinations of tokens as a single token. |
| SimilarityAlgorithm | Base type for similarity algorithms. |
| SnowballTokenFilter | A filter that stems words using a Snowball-generated stemmer. |
| SoftDeleteColumnDeletionDetectionPolicy | Defines a data deletion detection policy that implements a soft-deletion strategy. |
| SplitSkill | A skill to split a string into chunks of text. |
| SplitSkillLanguage | The language codes supported for input text by SplitSkill. |
| SqlIntegratedChangeTrackingPolicy | Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database. |
| StemmerOverrideTokenFilter | Provides the ability to override other stemming filters with custom dictionary-based stemming. |
| StemmerTokenFilter | Language-specific stemming filter. |
| StopAnalyzer | Divides text at non-letters; applies the lowercase and stopword token filters. |
| StopwordsTokenFilter | Removes stop words from a token stream. |
| SynonymMap | Represents a synonym map definition. |
| SynonymTokenFilter | Matches single- or multi-word synonyms in a token stream. |
| TagScoringFunction | Defines a function that boosts scores of documents with string values matching a given list of tags. |
| TagScoringParameters | Provides parameter values to a tag scoring function. |
| TextSplitMode | A value indicating which split mode to perform. |
| TextTranslationSkill | A skill to translate text from one language to another. |
| TextTranslationSkillLanguage | The language codes supported for input text by TextTranslationSkill. |
| TextWeights | Defines weights on index fields for which matches should boost scoring in search queries. |
| TokenFilter | Base type for token filters. |
| TokenFilterName | Defines the names of all token filters supported by the search engine. |
| TruncateTokenFilter | Truncates the terms to a specific length. |
| UaxUrlEmailTokenizer | Tokenizes URLs and emails as one token. |
| UniqueTokenFilter | Filters out tokens with the same text as the previous token. |
| VectorEncodingFormat | The encoding format for interpreting vector field contents. |
| VectorSearch | Contains configuration options related to vector search. |
| VectorSearchAlgorithmConfiguration | Contains configuration options specific to the algorithm used during indexing or querying. |
| VectorSearchAlgorithmKind | The algorithm used for indexing and querying. |
| VectorSearchAlgorithmMetric | The similarity metric to use for vector comparisons. |
| VectorSearchCompression | Contains configuration options specific to the compression method used during indexing or querying. |
| VectorSearchCompressionKind | The compression method used for indexing and querying. |
| VectorSearchCompressionRescoreStorageMethod | The storage method for the original full-precision vectors used for rescoring and internal index operations. |
| VectorSearchCompressionTarget | The quantized data type of compressed vector values. |
| VectorSearchProfile | Defines a combination of configurations to use with vector search. |
| VectorSearchVectorizer | Specifies the vectorization method to be used during query time. |
| VectorSearchVectorizerKind | The vectorization method to be used during query time. |
| VisualFeature | The strings indicating what visual feature types to return. |
| WebApiSkill | A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. |
| WebApiVectorizer | Specifies a user-defined vectorizer for generating the vector embedding of a query string. |
| WebApiVectorizerParameters | Specifies the properties for connecting to a user-defined vectorizer. |
| WordDelimiterTokenFilter | Splits words into subwords and performs optional transformations on subword groups. |
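Several of these model classes compose into a single index definition. As a brief, hedged sketch (the index name, field names, and analyzer choice below are illustrative, not prescribed by the SDK), a SearchIndex is built from SearchField instances, each tagged with a SearchFieldDataType and optional behaviors such as a LexicalAnalyzerName:

```java
import java.util.Arrays;

import com.azure.search.documents.indexes.models.LexicalAnalyzerName;
import com.azure.search.documents.indexes.models.SearchField;
import com.azure.search.documents.indexes.models.SearchFieldDataType;
import com.azure.search.documents.indexes.models.SearchIndex;

public class IndexDefinitionSketch {
    public static void main(String[] args) {
        // A minimal index definition: one key field plus one full-text
        // searchable field using a built-in English Lucene analyzer.
        SearchIndex index = new SearchIndex("hotels-sample")
            .setFields(Arrays.asList(
                new SearchField("hotelId", SearchFieldDataType.STRING)
                    .setKey(true)
                    .setFilterable(true),
                new SearchField("description", SearchFieldDataType.STRING)
                    .setSearchable(true)
                    .setAnalyzerName(LexicalAnalyzerName.EN_LUCENE)));

        System.out.println(index.getName() + ": " + index.getFields().size() + " fields");
    }
}
```

A definition like this is typically passed to `SearchIndexClient.createOrUpdateIndex` (in the parent `com.azure.search.documents.indexes` package) to materialize the index on the service.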
Enums
| CjkBigramTokenFilterScripts | Scripts that can be ignored by CjkBigramTokenFilter. |
| EdgeNGramTokenFilterSide | Specifies which side of the input an n-gram should be generated from. |
| EntityRecognitionSkillVersion | Represents the version of EntityRecognitionSkill. |
| IndexerExecutionStatus | Represents the status of an individual indexer execution. |
| IndexerStatus | Represents the overall indexer status. |
| MicrosoftStemmingTokenizerLanguage | Lists the languages supported by the Microsoft language stemming tokenizer. |
| MicrosoftTokenizerLanguage | Lists the languages supported by the Microsoft language tokenizer. |
| PhoneticEncoder | Identifies the type of phonetic encoder to use with a PhoneticTokenFilter. |
| ScoringFunctionAggregation | Defines the aggregation function used to combine the results of all the scoring functions in a scoring profile. |
| ScoringFunctionInterpolation | Defines the function used to interpolate score boosting across a range of documents. |
| SentimentSkillVersion | Represents the version of SentimentSkill. |
| SnowballTokenFilterLanguage | The language to use for a Snowball token filter. |
| StemmerTokenFilterLanguage | The language to use for a stemmer token filter. |
| StopwordsList | Identifies a predefined list of language-specific stopwords. |
| TokenCharacterKind | Represents classes of characters on which a token filter can operate. |
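Most of these enums parameterize the analysis classes listed above. A hedged sketch (the filter and analyzer names are hypothetical) showing PhoneticEncoder configuring a PhoneticTokenFilter, which a CustomAnalyzer then references by name alongside a built-in filter:

```java
import java.util.Arrays;

import com.azure.search.documents.indexes.models.CustomAnalyzer;
import com.azure.search.documents.indexes.models.LexicalTokenizerName;
import com.azure.search.documents.indexes.models.PhoneticEncoder;
import com.azure.search.documents.indexes.models.PhoneticTokenFilter;
import com.azure.search.documents.indexes.models.TokenFilterName;

public class AnalyzerSketch {
    public static void main(String[] args) {
        // A phonetic token filter using the double-metaphone encoder.
        PhoneticTokenFilter phonetic = new PhoneticTokenFilter("my-phonetic")
            .setEncoder(PhoneticEncoder.DOUBLE_METAPHONE);

        // A custom analyzer: standard tokenizer, then lowercasing, then the
        // phonetic filter declared above, referenced by its name.
        CustomAnalyzer analyzer = new CustomAnalyzer("my-phonetic-analyzer",
                LexicalTokenizerName.STANDARD)
            .setTokenFilters(Arrays.asList(
                TokenFilterName.LOWERCASE,
                TokenFilterName.fromString("my-phonetic")));

        System.out.println(analyzer.getName());
    }
}
```

Both objects would be attached to the same SearchIndex (via its analyzers and token-filters collections) so that fields can select the analyzer by name.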