Why doesn't Azure OpenAI support same token input and output limits as OpenAI?
Sam Cohan · 5 Reputation points
I am trying to fine-tune a gpt-35-turbo model (version 0613) using training and validation files that worked fine with the same model version on OpenAI, but the job fails on Azure because the Azure version has an input token limit of 4,096. Why does the Azure version have lower limits? Does Microsoft plan to bring its capabilities on par with OpenAI's, or do I need to use another model type?
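For context, here is a rough sketch of the pre-flight check I run to find which training examples exceed the 4,096-token limit. It uses a crude characters-per-token estimate rather than a real tokenizer (the function names and the 4-characters-per-token heuristic are my own assumptions; for exact counts you would use the `tiktoken` library with the `cl100k_base` encoding):

```python
import json

MAX_TOKENS = 4096  # Azure OpenAI gpt-35-turbo (0613) fine-tuning input limit


def rough_token_count(text: str) -> int:
    # Crude estimate: roughly 1 token per 4 characters of English text.
    # This is only a heuristic; tiktoken gives exact counts.
    return max(1, len(text) // 4)


def oversized_examples(jsonl_lines):
    """Yield (line_number, estimated_tokens) for chat-format training
    examples whose combined message content likely exceeds MAX_TOKENS."""
    for i, line in enumerate(jsonl_lines, start=1):
        record = json.loads(line)
        total = sum(
            rough_token_count(m.get("content", ""))
            for m in record.get("messages", [])
        )
        if total > MAX_TOKENS:
            yield i, total
```

Running this over my training JSONL flags several examples, which is why the same file succeeds on OpenAI (higher limit) but fails on Azure.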