TokenizerName Struct

Definition

Defines the names of all tokenizers supported by Azure Cognitive Search. https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search

C#

[Newtonsoft.Json.JsonConverter(typeof(Microsoft.Azure.Search.Serialization.ExtensibleEnumConverter<Microsoft.Azure.Search.Models.TokenizerName>))]
public struct TokenizerName : IEquatable<Microsoft.Azure.Search.Models.TokenizerName>

F#

[<Newtonsoft.Json.JsonConverter(typeof(Microsoft.Azure.Search.Serialization.ExtensibleEnumConverter<Microsoft.Azure.Search.Models.TokenizerName>))>]
type TokenizerName = struct

VB

Public Structure TokenizerName
Implements IEquatable(Of TokenizerName)
Inheritance
TokenizerName
Attributes
Newtonsoft.Json.JsonConverterAttribute
Implements
IEquatable<TokenizerName>
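
TokenizerName is an extensible enum rather than a true enum: it can hold one of the predefined field values listed below, or any other string the service accepts, via the implicit conversion from String. A minimal sketch; the name "my_future_tokenizer" is a hypothetical placeholder, not a real tokenizer name:

using System;
using Microsoft.Azure.Search.Models;

// Use a predefined tokenizer name...
TokenizerName keyword = TokenizerName.Keyword;

// ...or wrap an arbitrary string, for example a tokenizer the service
// supports but this SDK version does not yet expose as a field.
TokenizerName custom = "my_future_tokenizer";

Console.WriteLine(keyword); // writes the underlying name string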

Fields

Classic

Grammar-based tokenizer that is suitable for processing most European-language documents. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html

EdgeNGram

Tokenizes the input from an edge into n-grams of the given size(s). https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html

Keyword

Emits the entire input as a single token. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html

Letter

Divides text at non-letters. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html

Lowercase

Divides text at non-letters and converts them to lower case. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NGram

Tokenizes the input into n-grams of the given size(s). http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html

PathHierarchy

Tokenizer for path-like hierarchies. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html

Pattern

Tokenizer that uses regex pattern matching to construct distinct tokens. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html

Standard

Breaks text following the Unicode Text Segmentation rules. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html

UaxUrlEmail

Tokenizes URLs and emails as one token. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html

Whitespace

Divides text at whitespace. http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html
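
These field values are typically assigned to the Tokenizer property of a CustomAnalyzer in an index definition. A hedged sketch, assuming the Microsoft.Azure.Search SDK's Index, Field, and CustomAnalyzer model classes; the index and analyzer names are placeholders:

using Microsoft.Azure.Search.Models;

var index = new Index
{
    Name = "hotels",
    Fields = new[]
    {
        new Field("id", DataType.String) { IsKey = true },
        new Field("description", DataType.String)
        {
            IsSearchable = true,
            Analyzer = "my_ngram_analyzer" // refers to the custom analyzer defined below
        }
    },
    Analyzers = new Analyzer[]
    {
        new CustomAnalyzer
        {
            Name = "my_ngram_analyzer",
            Tokenizer = TokenizerName.EdgeNGram,            // a predefined TokenizerName field
            TokenFilters = new[] { TokenFilterName.Lowercase }
        }
    }
};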

Methods

Equals(Object)

Determines whether the specified object is equal to the current object.

Equals(TokenizerName)

Compares the TokenizerName for equality with another TokenizerName.

GetHashCode()

Serves as the default hash function.

ToString()

Returns a string representation of the TokenizerName.
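
Because Equals and GetHashCode operate on the underlying name, TokenizerName values can be used directly in sets and dictionary keys. A small sketch:

using System.Collections.Generic;
using Microsoft.Azure.Search.Models;

var seen = new HashSet<TokenizerName> { TokenizerName.Keyword, TokenizerName.Whitespace };
bool containsKeyword = seen.Contains(TokenizerName.Keyword); // true
string text = TokenizerName.Keyword.ToString();              // the underlying name string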

Operators

Equality(TokenizerName, TokenizerName)

Compares two TokenizerName values for equality.

Explicit(TokenizerName to String)

Defines explicit conversion from TokenizerName to string.

Implicit(String to TokenizerName)

Defines implicit conversion from string to TokenizerName.

Inequality(TokenizerName, TokenizerName)

Compares two TokenizerName values for inequality.
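
A short sketch of the conversion and comparison operators; it assumes "whitespace" is the underlying service name for the Whitespace tokenizer:

using Microsoft.Azure.Search.Models;

// Implicit conversion from string to TokenizerName.
TokenizerName fromString = "whitespace";

// Explicit conversion from TokenizerName back to string.
string name = (string)TokenizerName.Classic;

// Equality and inequality compare the underlying name strings.
bool same = fromString == TokenizerName.Whitespace;   // true if the names match
bool different = fromString != TokenizerName.Classic; // true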
