com.azure.search.documents.indexes.models
Package containing the data models for SearchServiceClient, a client that can be used to manage and query indexes and documents, as well as manage other resources, on a search service.
Analyze |
Specifies some text and analysis components used to break that text into tokens. |
Analyzed |
Information about a token returned by an analyzer. |
Ascii |
Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. |
Azure |
Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource. |
Azure |
The Azure OpenAI model name that will be called. |
Azure |
Specifies the Azure OpenAI resource used to vectorize a query string. |
Azure |
Specifies the parameters for connecting to the Azure OpenAI resource. |
BM25Similarity |
Ranking function based on the Okapi BM25 similarity algorithm. |
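As a rough illustration of how the BM25 function combines term frequency with document-length normalization, here is a sketch of the standard Okapi BM25 per-term score. This is not the service's internal implementation; the method and parameter names are hypothetical.

```java
// Illustrative sketch of the Okapi BM25 term score; not the SDK's internals.
// k1 controls term-frequency saturation, b controls document-length normalization.
public class Bm25Sketch {
    public static double termScore(double idf, double tf, double docLen,
                                   double avgDocLen, double k1, double b) {
        double lengthNorm = 1 - b + b * (docLen / avgDocLen);
        return idf * (tf * (k1 + 1)) / (tf + k1 * lengthNorm);
    }

    public static void main(String[] args) {
        // An average-length document with default-like k1 = 1.2, b = 0.75.
        System.out.println(termScore(1.0, 2.0, 100, 100, 1.2, 0.75)); // → 1.375
    }
}
```

Setting `b` to 0 disables length normalization entirely, which is why longer documents stop being penalized.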
Binary |
Contains configuration options specific to the binary quantization compression method used during indexing and querying. |
Blob |
Specifies the data to extract from Azure blob storage and tells the indexer which data to extract from image content when "imageAction" is set to a value other than "none". |
Blob |
Determines how to process embedded images and image files in Azure blob storage. |
Blob |
Represents the parsing mode for indexing from an Azure blob data source. |
Blob |
Determines the algorithm for text extraction from PDF files in Azure blob storage. |
Char |
Base type for character filters. |
Char |
Defines the names of all character filters supported by the search engine. |
Cjk |
Forms bigrams of CJK terms that are generated from the standard tokenizer. |
Classic |
Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. |
Classic |
Grammar-based tokenizer that is suitable for processing most European-language documents. |
Cognitive |
Base type for describing any Azure AI service resource attached to a skillset. |
Cognitive |
The multi-region account key of an Azure AI service resource that's attached to a skillset. |
Common |
Construct bigrams for frequently occurring terms while indexing. |
Conditional |
A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. |
Cors |
Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
Custom |
Allows you to take control over the process of converting text into indexable/searchable tokens. |
Custom |
An object that contains information about the matches that were found, and related metadata. |
Custom |
A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
Custom |
A skill that looks for text from a custom, user-defined list of words and phrases. |
Custom |
The language codes supported for input text by the custom entity lookup skill. |
Data |
Base type for data change detection policies. |
Data |
Base type for data deletion detection policies. |
Default |
An empty object that represents the default Azure AI service resource for a skillset. |
Dictionary |
Decomposes compound words found in many Germanic languages. |
Distance |
Defines a function that boosts scores based on distance from a geographic location. |
Distance |
Provides parameter values to a distance scoring function. |
Document |
A skill that extracts content from a file within the enrichment pipeline. |
Edge |
Generates n-grams of the given size(s) starting from the front or the back of an input token. |
Edge |
Tokenizes the input from an edge into n-grams of the given size(s). |
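To illustrate what the two edge n-gram components above produce, the following is a minimal sketch of front-side edge n-gram generation (the real filter also supports generating from the back of the token). This is not the Lucene implementation used by the service; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of edge n-gram generation from the front of a token.
public class EdgeNGramSketch {
    public static List<String> fromFront(String token, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        // Emit every prefix whose length falls in [minGram, maxGram].
        for (int len = minGram; len <= Math.min(maxGram, token.length()); len++) {
            grams.add(token.substring(0, len));
        }
        return grams;
    }

    public static void main(String[] args) {
        // Prefixes like these are what make search-as-you-type matching work.
        System.out.println(fromFront("search", 2, 4)); // → [se, sea, sear]
    }
}
```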
Elision |
Removes elisions. |
Entity |
A string indicating what entity categories to return. |
Entity |
Using the Text Analytics API, extracts linked entities from text. |
Entity |
Text analytics entity recognition. |
Entity |
Deprecated. |
Exhaustive |
Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index. |
Exhaustive |
Contains the parameters specific to exhaustive KNN algorithm. |
Field |
Additional parameters to build SearchField. |
Field |
Defines a mapping between a field in a data source and a target field in an index. |
Field |
Represents a function that transforms a value from a data source before indexing. |
Freshness |
Defines a function that boosts scores based on the value of a date-time field. |
Freshness |
Provides parameter values to a freshness scoring function. |
High |
Defines a data change detection policy that captures changes based on the value of a high water mark column. |
Hnsw |
Contains configuration options specific to the HNSW approximate nearest neighbors algorithm used during indexing and querying. |
Hnsw |
Contains the parameters specific to the HNSW algorithm. |
Image |
A skill that analyzes image files. |
Image |
The language codes supported for input by the image analysis skill. |
Image |
A string indicating which domain-specific details to return. |
Index |
Contains a batch of document write actions to send to the index. |
Index |
Defines behavior of the index projections in relation to the rest of the indexer. |
Indexer |
Specifies the environment in which the indexer should execute. |
Indexer |
Represents the result of an individual indexer execution. |
Indexing |
Represents parameters for indexer execution. |
Indexing |
A dictionary of indexer-specific configuration properties. |
Indexing |
Represents a schedule for indexer execution. |
Input |
Input field mapping for a skill. |
Keep |
A token filter that only keeps tokens with text contained in a specified list of words. |
Key |
A skill that uses text analytics for key phrase extraction. |
Key |
The language codes supported for input text by the key phrase extraction skill. |
Keyword |
Marks terms as keywords. |
Keyword |
Emits the entire input as a single token. |
Language |
A skill that detects the language of input text and reports a single language code for every document submitted on the request. |
Length |
Removes words that are too long or too short. |
Lexical |
Base type for analyzers. |
Lexical |
Defines the names of all text analyzers supported by the search engine. |
Lexical |
Base type for tokenizers. |
Lexical |
Defines the names of all tokenizers supported by the search engine. |
Limit |
Limits the number of tokens while indexing. |
Lucene |
Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter. |
Lucene |
Breaks text following the Unicode Text Segmentation rules. |
Magnitude |
Defines a function that boosts scores based on the magnitude of a numeric field. |
Magnitude |
Provides parameter values to a magnitude scoring function. |
Mapping |
A character filter that applies mappings defined with the mappings option. |
Merge |
A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. |
Microsoft |
Divides text using language-specific rules and reduces words to their base forms. |
Microsoft |
Divides text using language-specific rules. |
NGram |
Generates n-grams of the given size(s). |
NGram |
Tokenizes the input into n-grams of the given size(s). |
Ocr |
Defines the sequence of characters to use between the lines of text recognized by the OCR skill. |
Ocr |
A skill that extracts text from image files. |
Ocr |
The language codes supported for input by the OCR skill. |
Output |
Output field mapping for a skill. |
Path |
Tokenizer for path-like hierarchies. |
Pattern |
Flexibly separates text into terms via a regular expression pattern. |
Pattern |
Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. |
Pattern |
A character filter that replaces characters in the input string. |
Pattern |
A character filter that replaces characters in the input string. |
Pattern |
Tokenizer that uses regex pattern matching to construct distinct tokens. |
Phonetic |
Create tokens for phonetic matches. |
Pii |
Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it. |
Pii |
A string indicating what masking mode to use to mask the personal information detected in the input text. |
Regex |
Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer. |
Resource |
Represents a resource's usage and quota. |
Scalar |
Contains configuration options specific to the scalar quantization compression method used during indexing and querying. |
Scalar |
Contains the parameters specific to Scalar Quantization. |
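To give a sense of what scalar quantization does to vector values, here is a minimal sketch of mapping a float in a known range onto a signed 8-bit integer. The service's actual quantization parameters and implementation may differ; everything named here is hypothetical.

```java
// Illustrative sketch of scalar quantization: compressing floats in
// [min, max] to int8 values, trading precision for index size.
public class ScalarQuantSketch {
    public static int quantize(double v, double min, double max) {
        double clamped = Math.max(min, Math.min(max, v));
        // Map [min, max] onto the 256 representable int8 values [-128, 127].
        return (int) Math.round((clamped - min) / (max - min) * 255.0) - 128;
    }

    public static void main(String[] args) {
        System.out.println(quantize(0.0, -1.0, 1.0)); // midpoint of the range → 0
    }
}
```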
Scoring |
Base type for functions that can modify document scores during ranking. |
Scoring |
Defines parameters for a search index that influence scoring in search queries. |
Search |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
Search |
Defines the data type of a field in a search index. |
Search |
Represents a search index definition, which describes the fields and search behavior of an index. |
Search |
Statistics for a given index. |
Search |
Represents an indexer. |
Search |
Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed. |
Search |
Abstract base type for data identities. |
Search |
Clears the identity property of a datasource. |
Search |
Represents a datasource definition, which can be used to configure an indexer. |
Search |
Defines the type of a datasource. |
Search |
Specifies the identity for a datasource to use. |
Search |
Represents an item- or document-level indexing error. |
Search |
Definition of additional projections to secondary search indexes. |
Search |
Description for what data to store in the designated search index. |
Search |
A dictionary of index projection-specific configuration properties. |
Search |
Definition of additional projections of enriched data to Azure Blob, Table, or File storage. |
Search |
Abstract class to share properties between concrete selectors. |
Search |
Projection definition for what data to store in Azure Files. |
Search |
Projection definition for what data to store in Azure Blob. |
Search |
A dictionary of knowledge store-specific configuration properties. |
Search |
Container object for various projection selectors. |
Search |
Abstract class to share properties between concrete selectors. |
Search |
Description for what data to store in Azure Tables. |
Search |
The Search |
Search |
Base type for skills. |
Search |
A list of skills. |
Search |
Represents the current status and execution history of an indexer. |
Search |
Represents an item-level warning. |
Search |
A customer-managed encryption key in Azure Key Vault. |
Search |
Represents service-level resource counters and quotas. |
Search |
Represents various service level limits. |
Search |
Response from a get service statistics request. |
Search |
Defines how the Suggest API should apply to a group of fields in the index. |
Semantic |
Defines a specific configuration to be used in the context of semantic capabilities. |
Semantic |
A field that is used as part of the semantic configuration. |
Semantic |
Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers. |
Semantic |
Defines parameters for a search index that influence semantic capabilities. |
Sentiment |
Text analytics positive-negative sentiment analysis, scored as a floating-point value in the range 0 to 1. |
Sentiment |
Deprecated. |
Shaper |
A skill for reshaping the outputs. |
Shingle |
Creates combinations of tokens as a single token. |
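The shingle filter's token combinations are essentially word-level n-grams. As an illustration (not the Lucene filter itself; names here are hypothetical), a minimal two-token shingle sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of 2-token shingles ("word bigrams").
public class ShingleSketch {
    public static List<String> bigrams(List<String> tokens) {
        List<String> out = new ArrayList<>();
        // Join each adjacent pair of tokens into a single shingle token.
        for (int i = 0; i + 1 < tokens.size(); i++) {
            out.add(tokens.get(i) + " " + tokens.get(i + 1));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(bigrams(List.of("quick", "brown", "fox"))); // → [quick brown, brown fox]
    }
}
```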
Similarity |
Base type for similarity algorithms. |
Snowball |
A filter that stems words using a Snowball-generated stemmer. |
Soft |
Defines a data deletion detection policy that implements a soft-deletion strategy. |
Split |
A skill to split a string into chunks of text. |
Split |
The language codes supported for input text by the split skill. |
Sql |
Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database. |
Stemmer |
Provides the ability to override other stemming filters with custom dictionary-based stemming. |
Stemmer |
Language specific stemming filter. |
Stop |
Divides text at non-letters; applies the lowercase and stopword token filters. |
Stopwords |
Removes stop words from a token stream. |
Synonym |
Represents a synonym map definition. |
Synonym |
Matches single or multi-word synonyms in a token stream. |
Tag |
Defines a function that boosts scores of documents with string values matching a given list of tags. |
Tag |
Provides parameter values to a tag scoring function. |
Text |
A value indicating which split mode to perform. |
Text |
A skill to translate text from one language to another. |
Text |
The language codes supported for input text by the text translation skill. |
Text |
Defines weights on index fields for which matches should boost scoring in search queries. |
Token |
Base type for token filters. |
Token |
Defines the names of all token filters supported by the search engine. |
Truncate |
Truncates the terms to a specific length. |
Uax |
Tokenizes URLs and emails as one token. |
Unique |
Filters out tokens with same text as the previous token. |
Vector |
The encoding format for interpreting vector field contents. |
Vector |
Contains configuration options related to vector search. |
Vector |
Contains configuration options specific to the algorithm used during indexing or querying. |
Vector |
The algorithm used for indexing and querying. |
Vector |
The similarity metric to use for vector comparisons. |
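Cosine similarity is one of the metrics typically offered for vector comparisons. As a rough illustration of what such a metric computes (the service evaluates this internally over vector fields; this sketch and its names are hypothetical):

```java
// Illustrative sketch of the cosine similarity metric between two vectors.
public class CosineSketch {
    public static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        // 1.0 for vectors pointing the same way, 0.0 for orthogonal vectors.
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        System.out.println(cosine(new double[]{1, 0}, new double[]{1, 0})); // → 1.0
    }
}
```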
Vector |
Contains configuration options specific to the compression method used during indexing or querying. |
Vector |
The compression method used for indexing and querying. |
Vector |
The quantized data type of compressed vector values. |
Vector |
Defines a combination of configurations to use with vector search. |
Vector |
Specifies the vectorization method to be used during query time. |
Vector |
The vectorization method to be used during query time. |
Visual |
The strings indicating what visual feature types to return. |
Web |
A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. |
Web |
Specifies a user-defined vectorizer for generating the vector embedding of a query string. |
Web |
Specifies the properties for connecting to a user-defined vectorizer. |
Word |
Splits words into subwords and performs optional transformations on subword groups. |
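One of the transformations the word-delimiter filter can apply is splitting on case transitions (e.g. "PowerShot" into "Power" and "Shot"). A minimal sketch of just that rule follows; the real filter supports many more options, and these names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of one word-delimiter rule: split where a lowercase
// letter is followed by an uppercase letter.
public class WordDelimiterSketch {
    public static List<String> splitOnCase(String token) {
        List<String> parts = new ArrayList<>();
        int start = 0;
        for (int i = 1; i < token.length(); i++) {
            if (Character.isUpperCase(token.charAt(i))
                    && Character.isLowerCase(token.charAt(i - 1))) {
                parts.add(token.substring(start, i));
                start = i;
            }
        }
        parts.add(token.substring(start));
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(splitOnCase("PowerShot")); // → [Power, Shot]
    }
}
```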
Cjk |
Scripts that can be ignored by the CJK bigram token filter. |
Edge |
Specifies which side of the input an n-gram should be generated from. |
Entity |
Represents the version of EntityRecognitionSkill. |
Indexer |
Represents the status of an individual indexer execution. |
Indexer |
Represents the overall indexer status. |
Microsoft |
Lists the languages supported by the Microsoft language stemming tokenizer. |
Microsoft |
Lists the languages supported by the Microsoft language tokenizer. |
Phonetic |
Identifies the type of phonetic encoder to use with a phonetic token filter. |
Scoring |
Defines the aggregation function used to combine the results of all the scoring functions in a scoring profile. |
Scoring |
Defines the function used to interpolate score boosting across a range of documents. |
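To make the idea of interpolated score boosting concrete, here is a sketch of a linear interpolation of a boost across a value range, roughly the behavior a scoring function with linear interpolation produces. The service also supports constant, quadratic, and logarithmic interpolation; this sketch and its names are hypothetical, not the service's formula.

```java
// Illustrative sketch of linearly interpolating a boost factor across a range:
// no boost (1.0) at rangeStart, the full boost at rangeEnd.
public class BoostInterpolationSketch {
    public static double linearBoost(double value, double rangeStart,
                                     double rangeEnd, double boost) {
        double t = (value - rangeStart) / (rangeEnd - rangeStart);
        t = Math.max(0.0, Math.min(1.0, t)); // clamp values outside the range
        return 1.0 + (boost - 1.0) * t;
    }

    public static void main(String[] args) {
        // Halfway through the range, halfway between 1.0 and the full boost of 3.0.
        System.out.println(linearBoost(5.0, 0.0, 10.0, 3.0)); // → 2.0
    }
}
```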
Sentiment |
Represents the version of SentimentSkill. |
Snowball |
The language to use for a Snowball token filter. |
Stemmer |
The language to use for a stemmer token filter. |
Stopwords |
Identifies a predefined list of language-specific stopwords. |
Token |
Represents classes of characters on which a token filter can operate. |