Why is the OpenAI service provided by Microsoft so slow?

hongwei xiao 0 Reputation points
2023-11-12T08:53:03.8133333+00:00

Why is the Azure OpenAI service interface so slow? Are Microsoft's cloud computing resources insufficient? Or are there different pricing tiers corresponding to different response speeds?

Compared with the ChatGPT 3.5 services provided by other platforms that I have tried, this one is much slower; the others basically start streaming output within a second or so. It feels like I am paying the same money for a lesser service; the current speed is not much different from a student trial version. I hope someone can explain why this is.

Edit: translated to English.

Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.

1 answer

  1. AshokPeddakotla-MSFT 29,906 Reputation points
    2023-11-13T05:32:58.1133333+00:00

    hongwei xiao, greetings and welcome to the Microsoft Q&A forum!

    Why is the Azure OpenAI service interface so slow? Are Microsoft's cloud computing resources insufficient? Or are there different price levels corresponding to different response speeds?

    Azure OpenAI Service runs on Microsoft's cloud computing resources, which are highly scalable and can handle large workloads. Therefore, it is unlikely that the slow response time is due to insufficient resources.

    Compared with the ChatGPT3.5 service provided by other platforms that I have experienced, the service speed is much slower. Others basically start responding to the output content in seconds.

    If you're consistently experiencing slow response times, it may be worth investigating the specific factors causing the delay. You could also consider upgrading to a higher pricing tier to see whether performance improves.

    You can also try a different model deployment or adjust the request parameters to see if that improves the response time.
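    As a minimal sketch of the "adjust the parameters" suggestion: two request settings that commonly reduce perceived latency are capping `max_tokens` (shorter completions finish sooner) and enabling `stream` (tokens arrive as they are generated, so output starts appearing before the full reply is done). The resource name, deployment name, and API key below are placeholders, not real values; this builds the chat-completions request with only the standard library and does not send it.

```python
import json
from urllib import request

# Placeholders (assumptions): substitute your own Azure resource name,
# deployment name, and API key before sending a real request.
ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"
DEPLOYMENT = "YOUR-DEPLOYMENT"
API_VERSION = "2023-05-15"

def build_chat_request(prompt: str, api_key: str) -> request.Request:
    """Build an Azure OpenAI chat-completions request tuned for lower latency."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,  # cap completion length: shorter replies finish sooner
        "stream": True,     # stream tokens as generated: output starts sooner
    }
    return request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

req = build_chat_request("Hello", api_key="YOUR-KEY")
# Sending is omitted here; urllib.request.urlopen(req) would issue the call.
```

    With `stream` enabled, the response is a sequence of chunks rather than one final payload, which is what makes other ChatGPT frontends feel like they respond "in seconds": the time to the first token is much shorter than the time to the full completion.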

    Do let us know if that helps or if you have any further questions.