MicrosoftLanguageTokenizer Constructors

Definition

Overloads

MicrosoftLanguageTokenizer()

Initializes a new instance of the MicrosoftLanguageTokenizer class.

MicrosoftLanguageTokenizer(String, Nullable<Int32>, Nullable<Boolean>, Nullable<MicrosoftTokenizerLanguage>)

Initializes a new instance of the MicrosoftLanguageTokenizer class.

MicrosoftLanguageTokenizer()

Source:
MicrosoftLanguageTokenizer.cs

Initializes a new instance of the MicrosoftLanguageTokenizer class.

public MicrosoftLanguageTokenizer ();
Public Sub New ()
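
For example, the following sketch uses the parameterless constructor and then assigns the corresponding properties. The property names (Name, MaxTokenLength, IsSearchTokenizer, Language) are assumed to mirror the constructor parameters and to be settable on this model type, and the tokenizer name is illustrative only.

using Microsoft.Azure.Search.Models;

// A minimal sketch: create the tokenizer with the parameterless constructor
// and configure it through its properties (assumed settable).
var tokenizer = new MicrosoftLanguageTokenizer
{
    Name = "my_ms_tokenizer",        // hypothetical tokenizer name
    MaxTokenLength = 200,            // tokens longer than this are split
    IsSearchTokenizer = false,       // used at indexing time
    Language = MicrosoftTokenizerLanguage.English
};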

Applies to

MicrosoftLanguageTokenizer(String, Nullable<Int32>, Nullable<Boolean>, Nullable<MicrosoftTokenizerLanguage>)

Source:
MicrosoftLanguageTokenizer.cs

Initializes a new instance of the MicrosoftLanguageTokenizer class.

public MicrosoftLanguageTokenizer (string name, int? maxTokenLength = default, bool? isSearchTokenizer = default, Microsoft.Azure.Search.Models.MicrosoftTokenizerLanguage? language = default);
new Microsoft.Azure.Search.Models.MicrosoftLanguageTokenizer : string * Nullable<int> * Nullable<bool> * Nullable<Microsoft.Azure.Search.Models.MicrosoftTokenizerLanguage> -> Microsoft.Azure.Search.Models.MicrosoftLanguageTokenizer
Public Sub New (name As String, Optional maxTokenLength As Nullable(Of Integer) = Nothing, Optional isSearchTokenizer As Nullable(Of Boolean) = Nothing, Optional language As Nullable(Of MicrosoftTokenizerLanguage) = Nothing)

Parameters

name
String

The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores, can start and end only with alphanumeric characters, and is limited to 128 characters.

maxTokenLength
Nullable<Int32>

The maximum token length. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300, and then each of those tokens is split based on the maximum token length set. Default is 255.

isSearchTokenizer
Nullable<Boolean>

A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.

language
Nullable<MicrosoftTokenizerLanguage>

The language to use. The default is English. Possible values include: 'bangla', 'bulgarian', 'catalan', 'chineseSimplified', 'chineseTraditional', 'croatian', 'czech', 'danish', 'dutch', 'english', 'french', 'german', 'greek', 'gujarati', 'hindi', 'icelandic', 'indonesian', 'italian', 'japanese', 'kannada', 'korean', 'malay', 'malayalam', 'marathi', 'norwegianBokmaal', 'polish', 'portuguese', 'portugueseBrazilian', 'punjabi', 'romanian', 'russian', 'serbianCyrillic', 'serbianLatin', 'slovenian', 'spanish', 'swedish', 'tamil', 'telugu', 'thai', 'ukrainian', 'urdu', 'vietnamese'
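
For example, the following sketch constructs a tokenizer with this overload using named arguments; the tokenizer name "my_ms_tokenizer" and the chosen parameter values are illustrative only.

using Microsoft.Azure.Search.Models;

// A minimal sketch of calling this constructor. Only name is required;
// the remaining parameters are optional and default to null.
var tokenizer = new MicrosoftLanguageTokenizer(
    name: "my_ms_tokenizer",
    maxTokenLength: 200,
    isSearchTokenizer: false,
    language: MicrosoftTokenizerLanguage.French);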

Applies to