
Why Azure OpenAI is dumber than OpenAI

Alejandro Flores 0 reputation points
2025-03-20T00:32:01.6833333+00:00

I'm conducting a comparison between OpenAI and Azure OpenAI using identical models (specifically gpt-4o-mini), and I'm noticing significant performance differences. For example, when processing the text 'My birthday is the first Monday of January', the standard OpenAI service correctly extracts the date as 'January 6th', while Azure OpenAI responds with 'I don't know the answer.' Has anyone else experienced similar performance discrepancies between these services when using the same models? What might explain these differences despite using identical model versions?

Community Center | Not Monitored

2 answers

  1. Alejandro Flores 0 reputation points
    2025-03-25T16:21:04.02+00:00

    I didn't find this useful. I have compared both, and Azure OpenAI still seems less capable. How can I find out which filters it has?


  2. Gao Chen 10,780 reputation points Microsoft External Staff Moderator
    2025-03-20T22:03:31.4666667+00:00

    Hello Alejandro Flores,

    Welcome to Microsoft Q&A!

    I see you’re noticing some performance differences between OpenAI and Azure OpenAI, even though you're using the same model (GPT-4o-mini). There are several factors that could contribute to these differences:

    • OpenAI's models are hosted directly by OpenAI, while Azure OpenAI integrates these models within Microsoft's Azure ecosystem. This integration can introduce variations in latency, resource allocation, and overall performance.
    • The way each service implements the API can affect performance. Differences in request handling, rate limiting, and error management might lead to variations in how responses are generated.
    • Azure allows users to select specific regions for their deployments, which can impact performance based on geographical proximity and network conditions. OpenAI's standard endpoint does not offer this flexibility, potentially leading to different latency experiences.
    • Custom system prompts and configurations can influence how models interpret and respond to queries. If the system prompts or configurations differ between the two services, this could explain the discrepancies in responses.
    • The quality and preprocessing of data fed into the models can vary. Azure's integration with other Azure services might introduce additional preprocessing steps that affect the model's output.
    • During peak times, the load on servers can impact performance. OpenAI's API might experience different load patterns compared to Azure's, affecting response times and accuracy.

    It's worth running controlled experiments that hold these variables constant (same system prompt, same sampling parameters, same region where possible) so you can isolate their individual impact on performance.
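
    A minimal sketch of such a controlled comparison using the `openai` Python SDK (v1.x), which provides both `OpenAI` and `AzureOpenAI` clients. The environment-variable names, the `api_version` string, and the assumption that the Azure deployment is named `gpt-4o-mini` are all illustrative; substitute your own values. The point is that both services receive a byte-identical request body, so any remaining difference comes from the platform, not the prompt:

    ```python
    import os

    SYSTEM_PROMPT = "You are a helpful assistant."

    def build_request(question: str) -> dict:
        """One request body reused verbatim for both services, so only the platform varies."""
        return {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
            "temperature": 0,  # minimize sampling variance between runs
            "seed": 42,        # best-effort reproducibility (supported by both endpoints)
        }

    def compare(question: str) -> None:
        # Import deferred so the sketch can be imported/read without the SDK installed.
        from openai import OpenAI, AzureOpenAI

        openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
        azure_client = AzureOpenAI(
            # Env-var names and api_version below are illustrative assumptions.
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_version="2024-06-01",
        )
        req = build_request(question)
        a = openai_client.chat.completions.create(model="gpt-4o-mini", **req)
        # On Azure, "model" is the *deployment name*; this assumes the
        # deployment was named after the underlying model.
        b = azure_client.chat.completions.create(model="gpt-4o-mini", **req)
        print("OpenAI:", a.choices[0].message.content)
        print("Azure :", b.choices[0].message.content)
    ```

    Running `compare("My birthday is the first Monday of January. What date is that in 2025?")` several times on each platform, then varying one factor at a time (region, API version, content-filter configuration), is a reasonable way to narrow down which of the factors above is responsible.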

    Regards,

    Gao


    If the answer is the right solution, please click "Accept Answer" and kindly upvote it. If you have extra questions about this answer, please click "Comment".

