@Mehboob Ahmad Thanks for asking the question!
You should also consider language analyzers when content consists of non-Western language strings. While the default analyzer (Standard Lucene) is language-agnostic, the concept of using spaces and special characters (hyphens and slashes) to separate strings is more applicable to Western languages than non-Western ones.
For example, in Chinese, Japanese, Korean (CJK), and other Asian languages, a space isn't necessarily a word delimiter.
Consider the following Japanese string. Because it has no spaces, a language-agnostic analyzer would likely treat the entire string as one token, when in fact it is a phrase made up of several words.
これは私たちの銀河系の中ではもっとも重く明るいクラスの球状星団です。(This is the heaviest and brightest group of spherical stars in our galaxy.)
For the example above, a successful query would have to include the full token, or a partial token using a suffix wildcard, resulting in an unnatural and limiting search experience.
A better experience is to search for individual words: 明るい (Bright), 私たちの (Our), 銀河系 (Galaxy).
Using one of the Japanese analyzers available in Azure AI Search is more likely to unlock this behavior, because those analyzers are better equipped to split the text into meaningful words in the target language.
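As a rough sketch, an analyzer is assigned per field in the index definition. The snippet below builds the JSON body you would send when creating the index; the index and field names here are hypothetical, and `ja.microsoft` (with `ja.lucene` as the Lucene-based alternative) is the Japanese analyzer name:

```python
import json

# Minimal index definition sketch (hypothetical index/field names).
# The "analyzer" property applies at both indexing and query time;
# "ja.microsoft" and "ja.lucene" are the Japanese analyzer options.
index_definition = {
    "name": "articles-ja",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {
            "name": "body_ja",
            "type": "Edm.String",
            "searchable": True,
            "analyzer": "ja.microsoft",
        },
    ],
}

print(json.dumps(index_definition, ensure_ascii=False, indent=2))
```

With this in place, a query such as 銀河系 can match documents containing the full sentence above, because the analyzer breaks the text into word-level tokens at indexing time.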
For more details, refer to Add language analyzers to string fields in an Azure AI Search index.
Hope this helps! If you have further questions, please let us know; we're happy to assist.