Greetings!
When using a Custom Speech model built on a Thai (th-TH) base model in Azure Speech Service to recognize mixed Thai and English sentences, you may encounter inaccuracies with the English portions. This happens because the base model is not designed to support multiple languages. Although a Phrase List might slightly improve recognition, it is not intended for this purpose and will generally favour Thai phrases.
To improve recognition accuracy for mixed-language audio, you should use Azure Speech's continuous language identification feature, which lets the service detect the spoken language dynamically during continuous recognition and switch between the candidate languages you specify. Note that it works best with full sentences; isolated words give the service too little context to switch languages reliably.
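As a rough sketch, here is how continuous language identification can be configured with the Speech SDK for Python, using Thai and English as the candidate languages. The subscription key, region, and audio file name are placeholders you must replace with your own values.

```python
# Sketch: continuous language identification with the Azure Speech SDK.
# Requires: pip install azure-cognitiveservices-speech
# "YourSubscriptionKey", "YourServiceRegion", and the .wav file name
# are placeholders, not real values.
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")

# Switch language identification from the default at-start mode to
# "Continuous" so the detected language can change mid-stream.
speech_config.set_property(
    property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode,
    value="Continuous")

# Candidate languages the service may switch between.
auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["th-TH", "en-US"])

audio_config = speechsdk.audio.AudioConfig(filename="mixed-thai-english.wav")
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_config,
    audio_config=audio_config)

def on_recognized(evt):
    # Each recognized phrase carries the language that was detected for it.
    detected = speechsdk.AutoDetectSourceLanguageResult(evt.result).language
    print(f"[{detected}] {evt.result.text}")

recognizer.recognized.connect(on_recognized)

# Run continuous recognition until the audio file has been processed.
done = False
def stop(evt):
    global done
    done = True

recognizer.session_stopped.connect(stop)
recognizer.canceled.connect(stop)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```

If you need your custom Thai model to be used when Thai is detected, the SDK also lets you attach a custom endpoint ID per language via `speechsdk.languageconfig.SourceLanguageConfig`, passing those configs to `AutoDetectSourceLanguageConfig` instead of a plain language list.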
For more details on language identification and how to implement it, refer to the following documentation:
Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.
Please do not forget to "up-vote" wherever the information provided helps you, as this can be beneficial to other community members.