Hello all,
I am using the Chat Completions API (tried both gpt-35-turbo and gpt-35-turbo-16k) to process some large texts (>1000 words), somewhat like a sentiment analyzer. I have set top_p=0.0001 to make sure I don't get any variance in the output. I am noticing that for the same model and the same prompt, the OpenAI version of the Chat Completions API gives me better and more reliable results than the Azure version (all API parameters being the same). Is there a reason for this? Is the Azure model in some way fundamentally different from the model served directly by OpenAI? Is there something I can do to make the Azure model more reliable? Thanks in advance to anyone who can help.
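For reference, here is a minimal sketch of how I'm keeping the two requests identical. Endpoint names, keys, and the deployment name are placeholders, and the helper function is just for illustration; the two services accept the same request body, differing mainly in the URL, auth header, and where the model name goes (OpenAI puts it in the body, Azure encodes the deployment in the URL):

```python
# Hypothetical comparison setup: build byte-for-byte identical sampling
# parameters for both providers. <...> values are placeholders.

def build_request(provider: str, text: str) -> dict:
    """Return URL, headers, and JSON body for one provider (illustrative helper)."""
    body = {
        "messages": [
            {"role": "system", "content": "You are a sentiment analyzer."},
            {"role": "user", "content": text},
        ],
        "temperature": 0,   # pin temperature as well, not just top_p
        "top_p": 0.0001,
    }
    if provider == "openai":
        return {
            "url": "https://api.openai.com/v1/chat/completions",
            "headers": {"Authorization": "Bearer <OPENAI_KEY>"},
            "json": {"model": "gpt-3.5-turbo", **body},  # model named in the body
        }
    # Azure: the deployment name in the URL selects the model, not the body
    return {
        "url": "https://<resource>.openai.azure.com/openai/deployments/"
               "<deployment-name>/chat/completions?api-version=2024-02-01",
        "headers": {"api-key": "<AZURE_KEY>"},
        "json": body,
    }

openai_req = build_request("openai", "some large text...")
azure_req = build_request("azure", "some large text...")

# Sanity check: the sampling parameters match exactly across the two requests.
assert openai_req["json"]["top_p"] == azure_req["json"]["top_p"] == 0.0001
assert openai_req["json"]["temperature"] == azure_req["json"]["temperature"] == 0
```

So as far as I can tell, the only differences between the two calls are the routing details above, which is why the difference in output quality surprises me.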