ClassicTokenizer Class

Definition

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene; see http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html for details.

C#
[Newtonsoft.Json.JsonObject("#Microsoft.Azure.Search.ClassicTokenizer")]
public class ClassicTokenizer : Microsoft.Azure.Search.Models.Tokenizer

F#
[<Newtonsoft.Json.JsonObject("#Microsoft.Azure.Search.ClassicTokenizer")>]
type ClassicTokenizer = class
    inherit Tokenizer

Visual Basic
Public Class ClassicTokenizer
    Inherits Tokenizer
Inheritance
Tokenizer → ClassicTokenizer
Attributes
Newtonsoft.Json.JsonObjectAttribute
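
A ClassicTokenizer is typically defined as part of a search index and then referenced by name from a custom analyzer. The following is a minimal sketch, assuming the Microsoft.Azure.Search.Models.Index type exposes a Tokenizers collection and that the two-argument constructor takes the name followed by the maximum token length; the index and tokenizer names are placeholders.

using System.Collections.Generic;
using Microsoft.Azure.Search.Models;

// Sketch: register a ClassicTokenizer on an index definition.
// "hotels" and "my_classic" are placeholder names.
var index = new Index
{
    Name = "hotels",
    Tokenizers = new List<Tokenizer>
    {
        // Second argument assumed to be the maximum token length;
        // tokens longer than 100 characters will be split.
        new ClassicTokenizer("my_classic", 100)
    }
};

Once registered on the index, the tokenizer can be referenced by its name from a custom analyzer definition.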

Constructors

ClassicTokenizer()

Initializes a new instance of the ClassicTokenizer class.

ClassicTokenizer(String, Nullable<Int32>)

Initializes a new instance of the ClassicTokenizer class.
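
A brief construction sketch, assuming the two-argument overload takes the tokenizer name followed by the optional maximum token length (the parameter names are not shown on this page):

// Parameterless constructor; properties are set afterward.
var tokenizer = new ClassicTokenizer
{
    Name = "my_classic",
    MaxTokenLength = 100
};

// Overload assumed to take (name, maxTokenLength).
// Passing null for the second argument keeps the default of 255.
var boundedTokenizer = new ClassicTokenizer("my_classic_bounded", 150);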

Properties

MaxTokenLength

Gets or sets the maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.

Name

Gets or sets the name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

(Inherited from Tokenizer)

Methods

Validate()

Validates the object.
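
A hedged sketch of calling Validate(). The page does not document the exception type, so catching Microsoft.Rest.ValidationException here is an assumption based on the SDK's client runtime:

using System;
using Microsoft.Azure.Search.Models;
using Microsoft.Rest;

// Name is deliberately left unset, so validation is expected to fail.
var tokenizer = new ClassicTokenizer { MaxTokenLength = 100 };

try
{
    tokenizer.Validate();
}
catch (ValidationException ex)
{
    // Assumed exception type; adjust if the SDK surfaces a different one.
    Console.WriteLine($"Invalid tokenizer definition: {ex.Message}");
}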
