Language support for conversational language understanding
Use this article to learn about the languages currently supported by the conversational language understanding (CLU) feature.
Multi-lingual option
Tip
See How to train a model for information on which training mode you should use for multilingual projects.
With conversational language understanding, you can train a model in one language and use it to predict intents and entities from utterances in another language. This feature saves time and effort: instead of building separate projects for every language, you can handle a multilingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multi-lingual option for your project while creating it, or later in project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
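If you work with your project as exported JSON rather than through the Language Studio settings page, the multi-lingual option typically surfaces as a flag in the project metadata. The following is a minimal, hypothetical sketch of such a project definition; the exact field names and file version depend on your API version, the project name is a placeholder, and the assets section is omitted:

    {
      "projectFileVersion": "2022-05-01",
      "stringIndexType": "Utf16CodeUnit",
      "metadata": {
        "projectKind": "Conversation",
        "projectName": "<your-project-name>",
        "language": "en-us",
        "multilingual": true
      }
    }

Treat this only as an illustration of where the option lives; the authoritative way to enable it remains the project creation flow or project settings in Language Studio.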
You can train your project entirely with English utterances and query it in French, German, Mandarin, Japanese, Korean, and other languages. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
Whenever you identify that a particular language isn't performing as well as other languages, you can add utterances for that language in your project. In the tag utterances page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples in that language, the model is exposed to more of its syntax and learns to predict it better.
You aren't expected to add the same number of utterances for every language. You should build the majority of your project in one language and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding German utterances equal to about 5% of your original English examples (for example, about 50 German utterances if you trained with 1,000 English ones), train a new model, and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are to improve.
When you add data in another language, you shouldn't expect it to negatively affect other languages.
List and prebuilt components in multiple languages
Projects with the multi-lingual option enabled allow you to specify synonyms per language for every list key. Depending on the language you query your project in, you only get matches for list component synonyms in that language. When you query your project, you can specify the language in the request body:
"query": "{query}"
"language": "{language code}"
If you do not provide a language, it will fall back to the default language of your project.
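For example, a request body that carries a French utterance against an English-trained project might look like the following. This is a minimal sketch showing only the two fields discussed here; any other fields your API version requires, such as project and deployment identifiers, are omitted, and the utterance text is an invented placeholder:

    {
      "query": "Réserver un vol pour Paris",
      "language": "fr"
    }

Because this request specifies fr, list component matches come only from the synonyms you defined for French.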
Prebuilt components are similar: you only get predictions for prebuilt components that are available in a specific language. Again, the language of the request determines which components are attempted for prediction.
Languages supported by conversational language understanding
Conversational language understanding supports utterances in the following languages: