How to switch between OpenAI and Azure OpenAI endpoints with Python
While OpenAI and Azure OpenAI Service rely on a common Python client library, there are small changes you need to make to your code in order to swap back and forth between endpoints. This article walks you through the common changes and differences you'll experience when working across OpenAI and Azure OpenAI.
This article only shows examples with the new OpenAI Python 1.x API library. For information on migrating from 0.28.1 to 1.x, refer to our migration guide.
Authentication
We recommend using environment variables. If you haven't done this before, our Python quickstarts walk you through this configuration.
API key
OpenAI | Azure OpenAI
Microsoft Entra authentication
OpenAI | Azure OpenAI
Keyword argument for model
OpenAI uses the model keyword argument to specify which model to use. Azure OpenAI has the concept of unique model deployments: when using Azure OpenAI, model should refer to the underlying deployment name you chose when you deployed the model.
OpenAI | Azure OpenAI
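To illustrate, the two hypothetical helpers below make the same chat completions call; only the value passed to model changes ("my-gpt-4-deployment" is a made-up deployment name):

```python
def chat_openai(client, prompt):
    # OpenAI: model is the published model name.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def chat_azure(client, prompt):
    # Azure OpenAI: model is the deployment name you chose when deploying
    # the model ("my-gpt-4-deployment" here is hypothetical).
    response = client.chat.completions.create(
        model="my-gpt-4-deployment",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```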
Azure OpenAI embeddings multiple input support
OpenAI currently allows a larger number of array inputs with text-embedding-ada-002. Azure OpenAI currently supports input arrays of up to 16 items for text-embedding-ada-002 (Version 2). In both cases, the total input tokens per API request must remain under 8,191 for this model.
OpenAI | Azure OpenAI
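One way to stay within the 16-item limit on Azure OpenAI is to split larger input lists into batches. A sketch assuming an already-constructed client; the chunked helper is illustrative, and on Azure the model value would be your deployment name:

```python
def chunked(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def embed_all(client, texts, batch_size=16):
    # Azure OpenAI accepts at most 16 inputs per request for
    # text-embedding-ada-002 Version 2, so send the list in chunks.
    vectors = []
    for batch in chunked(texts, batch_size):
        response = client.embeddings.create(
            model="text-embedding-ada-002",  # on Azure: your deployment name
            input=batch,
        )
        vectors.extend(item.embedding for item in response.data)
    return vectors
```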
Next steps
- Learn more about how to work with GPT-35-Turbo and the GPT-4 models with our how-to guide.
- For more examples, check out the Azure OpenAI Samples GitHub repository