Hi @Hasan Ali
Currently, Azure Speech to Text supports many languages, but the documentation may not specify which of them are applicable to semantic endpointing. I recommend reviewing the latest Language and voice support for the Speech service page to confirm whether your target languages are supported for semantic endpointing.

To reduce latency, consider the following best practices:
- Ensure streaming is properly configured in your implementation so you get earlier access to interim results.
- If possible, batch multiple requests together to improve overall throughput.
- Avoid mixing different workloads on a single endpoint, as queuing can introduce delays.
- Check your SDK configuration settings to make sure they are optimized for low latency (for example, by setting the appropriate output format).

Further details are available in the Performance and latency documentation.
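As an illustration, a minimal streaming setup with the Python Speech SDK might look like the sketch below. It assumes the azure-cognitiveservices-speech package is installed and uses placeholder key/region values, so it needs real credentials before it will run; treat it as a starting point, not a complete implementation.

```python
# Sketch only: assumes the azure-cognitiveservices-speech package and
# real credentials in place of the placeholder key/region below.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# The simple output format keeps responses small when detailed results
# (word timings, N-best lists) are not needed.
speech_config.output_format = speechsdk.OutputFormat.Simple

# A push stream lets you feed audio as it arrives instead of waiting for
# the whole utterance to be captured.
push_stream = speechsdk.audio.PushAudioInputStream()
audio_config = speechsdk.audio.AudioConfig(stream=push_stream)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

# Interim ("recognizing") results arrive before the final ("recognized")
# result, which is what reduces perceived latency.
recognizer.recognizing.connect(lambda evt: print("interim:", evt.result.text))
recognizer.recognized.connect(lambda evt: print("final:", evt.result.text))

recognizer.start_continuous_recognition()
# Call push_stream.write(chunk) for each audio chunk as it arrives, then:
# push_stream.close()
# recognizer.stop_continuous_recognition()
```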
Short utterances can be problematic because of the inherent processing time needed to interpret and respond to them. Stream audio starting from the first received chunk so the interaction feels more immediate, and consider using the Speech SDK's stream capabilities for more efficient buffering and streaming of audio data.
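To show the idea of streaming from the first chunk, here is a small self-contained Python sketch that forwards audio chunk by chunk rather than buffering the whole utterance first. The chunk size and the simulated audio buffer are illustrative assumptions; with the Speech SDK, the send callback would be something like push_stream.write.

```python
import io

CHUNK_SIZE = 3200  # roughly 100 ms of 16 kHz, 16-bit mono audio (assumed format)

def stream_in_chunks(source, send):
    """Forward audio to the recognizer chunk by chunk, starting with the
    very first chunk instead of waiting for the full utterance."""
    total = 0
    while True:
        chunk = source.read(CHUNK_SIZE)
        if not chunk:
            break
        send(chunk)  # e.g. push_stream.write(chunk) with the Speech SDK
        total += len(chunk)
    return total

# Simulate a short utterance (~0.5 s of silence) arriving as one buffer.
audio = io.BytesIO(b"\x00" * 16000)
received = []
total = stream_in_chunks(audio, received.append)
print(total, len(received))  # 16000 bytes forwarded in 5 chunks
```

The recognizer starts working as soon as the first chunk is pushed, which is what makes short utterances feel more responsive.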
Hope this helps. Do let us know if you have any further queries.
If this answers your query, do click Accept Answer and Yes for "Was this answer helpful".