As per the language support documentation at https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-support?tabs=stt: "The table in this section summarizes the locales supported for speech to text (real-time and batch transcription)."
Regarding your comments: real-time transcription uses the Universal model, which is designed to cover a broad range of languages. It may support sentence detection even when full text output for a script isn't available, which is why some Indic languages appear to handle sentence endings without producing the expected transcription text. Sentence-ending recognition (punctuation insertion or pause detection) relies on acoustic and prosodic cues rather than on text encoding, so it can still work in those cases.
Fast transcription does indeed use a separate, optimized model for specific languages. Its language support tends to be more limited, but it is faster and more efficient for bulk transcription.
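If you want to rule out locale auto-detection as a factor, you can set the recognition language explicitly. A minimal sketch with the Python Speech SDK (assuming the `azure-cognitiveservices-speech` package is installed; the key, region, and the `hi-IN` locale below are placeholders, not values from your setup):

```python
# Illustrative sketch only: forces a specific locale for real-time
# recognition instead of relying on language identification.
try:
    import azure.cognitiveservices.speech as speechsdk
except ImportError:
    speechsdk = None  # SDK not installed; this remains a sketch


def recognize_once(key: str, region: str, locale: str = "hi-IN") -> str:
    """Run a single real-time recognition pass in the given locale."""
    if speechsdk is None:
        raise RuntimeError("azure-cognitiveservices-speech is not installed")
    config = speechsdk.SpeechConfig(subscription=key, region=region)
    # Explicit locale: the service picks the model for this language
    # rather than auto-detecting it.
    config.speech_recognition_language = locale
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)
    result = recognizer.recognize_once()
    return result.text
```

If the text still comes back empty for a locale listed in the real-time column of the table linked above, that points at a service-side limitation rather than a configuration issue.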
If the above response helps answer your question, remember to "Accept Answer" so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.
hth
Marcin