This article shows you how to use the v1 Azure OpenAI API. The v1 API simplifies authentication, removes the need for dated api-version parameters, and supports cross-provider model calls.
Note
New response objects might be added to the API response at any time. We recommend that you parse only the response objects you require.
Prerequisites
- An Azure subscription - Create one for free
- A Foundry resource or Azure OpenAI resource deployed in a supported region
- At least one model deployment
- For Microsoft Entra ID authentication: the `Cognitive Services OpenAI User` role assigned to your identity. For more information, see Role-based access control for Azure OpenAI.
API evolution
Previously, Azure OpenAI received monthly updates of new API versions. Taking advantage of new features required constantly updating code and environment variables with each new API release. Azure OpenAI also required the extra step of using Azure-specific clients, which created overhead when migrating code between OpenAI and Azure OpenAI.
Starting in August 2025, you can opt in to the next generation v1 Azure OpenAI APIs which add support for:
- Ongoing access to the latest features with no need to specify a new `api-version` each month.
- Faster API release cycle with new features launching more frequently.
- OpenAI client support with minimal code changes to swap between OpenAI and Azure OpenAI when using key-based authentication.
- OpenAI client support for token based authentication and automatic token refresh without the need to take a dependency on a separate Azure OpenAI client.
- Chat completions calls with models from other providers, like DeepSeek and Grok, that support the v1 chat completions syntax.
Access to new API calls that are still in preview is controlled by passing feature-specific preview headers, allowing you to opt in to the features you want without having to swap API versions. Alternatively, some features indicate preview status through their API path and don't require an additional header.
Examples:
- `/openai/v1/evals` is in preview and requires passing an `"aoai-evals": "preview"` header.
- `/openai/v1/fine_tuning/alpha/graders/` is in preview and requires no custom header due to the presence of `alpha` in the API path.
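As a sketch of the opt-in mechanism, the preview flag travels as an ordinary HTTP request header alongside your key. The resource name and key below are placeholders, and the request is only constructed, not sent:

```python
import urllib.request

# Placeholders; substitute your resource name and key.
base = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1"

# Calling the evals API requires the feature-specific opt-in header.
req = urllib.request.Request(
    f"{base}/evals",
    headers={
        "api-key": "YOUR-API-KEY",
        "aoai-evals": "preview",
    },
)

# urllib stores header names capitalized; the request isn't sent here.
print(req.get_header("Aoai-evals"))  # preview
```

When you use the OpenAI client instead of raw HTTP, the same header can be supplied once at client construction time rather than per request.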
For the initial v1 Generally Available (GA) API launch, only a subset of the inference and authoring API capabilities are supported. All GA features are supported for use in production. Support for more capabilities is being added rapidly.
Code changes
v1 API
API Key:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/"
)

response = client.responses.create(
    model="gpt-4.1-nano",  # Replace with your model deployment name
    input="This is a test.",
)

print(response.model_dump_json(indent=2))
```
Key differences from the previous API:
- The `OpenAI()` client is used instead of `AzureOpenAI()`.
- `base_url` passes the Azure OpenAI endpoint with `/openai/v1` appended to the endpoint address.
- `api-version` is no longer a required parameter with the v1 GA API.
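As a quick sanity check on the endpoint shape, a hypothetical helper can build the v1 `base_url` from a resource name (the helper name and resource name are illustrative, not part of any SDK):

```python
def v1_base_url(resource_name: str) -> str:
    # The v1 API expects the resource endpoint with /openai/v1/ appended;
    # no api-version query parameter is needed.
    return f"https://{resource_name}.openai.azure.com/openai/v1/"

print(v1_base_url("YOUR-RESOURCE-NAME"))
# https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/
```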
API Key with environment variables:
Set the following environment variables before running the code:
| Variable | Value |
|---|---|
| `OPENAI_BASE_URL` | `https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/` |
| `OPENAI_API_KEY` | Your Azure OpenAI API key |
Then create the client without parameters:
```python
client = OpenAI()
```
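The OpenAI client resolves these variables itself. A minimal stand-in for that lookup (the values here are placeholders; set the real ones in your shell) can help verify your configuration before creating the client:

```python
import os

# Placeholder values; in practice these come from your shell environment.
os.environ.setdefault("OPENAI_BASE_URL", "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/")
os.environ.setdefault("OPENAI_API_KEY", "YOUR-API-KEY")

def check_env() -> str:
    # Mirrors the client's lookup: both variables must be present, and the
    # base URL should end with /openai/v1/ for the v1 API.
    base_url = os.environ["OPENAI_BASE_URL"]
    api_key = os.environ["OPENAI_API_KEY"]
    if not base_url.endswith("/openai/v1/"):
        raise ValueError("OPENAI_BASE_URL should end with /openai/v1/")
    if not api_key:
        raise ValueError("OPENAI_API_KEY is empty")
    return base_url

print(check_env())
```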
Microsoft Entra ID:
Important
Automatic token refresh was previously handled through the AzureOpenAI() client. The v1 API removes this dependency by adding automatic token refresh support to the OpenAI() client.
```python
from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=token_provider
)

response = client.responses.create(
    model="gpt-4.1-nano",  # Replace with your model deployment name
    input="This is a test"
)

print(response.model_dump_json(indent=2))
```
- `base_url` passes the Azure OpenAI endpoint with `/openai/v1` appended to the endpoint address.
- The `api_key` parameter is set to `token_provider`, enabling automatic retrieval and refresh of an authentication token instead of using a static API key.
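The token provider passed as `api_key` is simply a zero-argument callable that returns a current bearer token; `get_bearer_token_provider` wraps `DefaultAzureCredential` this way. A stand-in provider with a fake token (everything below is illustrative, not the azure-identity implementation) sketches the refresh contract:

```python
import time

def make_token_provider(lifetime_seconds: float = 3600.0):
    # Stand-in for get_bearer_token_provider: caches a (fake) token and
    # mints a new one once the old one expires.
    state = {"token": None, "expires_at": 0.0}

    def provider() -> str:
        if time.monotonic() >= state["expires_at"]:
            state["token"] = f"fake-token-{time.monotonic_ns()}"
            state["expires_at"] = time.monotonic() + lifetime_seconds
        return state["token"]

    return provider

token_provider = make_token_provider()
# Within the token lifetime, repeated calls return the cached token.
print(token_provider() == token_provider())  # True
```

The client calls the provider before requests, so tokens are refreshed transparently without you managing expiry yourself.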
Model support
For Azure OpenAI models, we recommend using the Responses API. However, the v1 API also allows you to make chat completions calls with models from other providers, like DeepSeek and Grok, that support the OpenAI v1 chat completions syntax.
`base_url` accepts both the `https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/` and `https://YOUR-RESOURCE-NAME.services.ai.azure.com/openai/v1/` formats.
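A hypothetical check (the function name is illustrative) encodes the two accepted endpoint shapes, which can be handy when validating configuration values:

```python
from urllib.parse import urlparse

def is_valid_v1_base_url(url: str) -> bool:
    # Accepts either the *.openai.azure.com or *.services.ai.azure.com
    # host, as long as the scheme is https and the path is /openai/v1/.
    parsed = urlparse(url)
    host_ok = parsed.hostname is not None and (
        parsed.hostname.endswith(".openai.azure.com")
        or parsed.hostname.endswith(".services.ai.azure.com")
    )
    return parsed.scheme == "https" and host_ok and parsed.path == "/openai/v1/"

print(is_valid_v1_base_url("https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/"))      # True
print(is_valid_v1_base_url("https://YOUR-RESOURCE-NAME.services.ai.azure.com/openai/v1/")) # True
```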
Note
Responses API also works with Foundry Models sold directly by Azure, such as Microsoft AI, DeepSeek, and Grok models. To learn how to use the Responses API with these models, see How to generate text responses with Microsoft Foundry Models.
```python
from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=token_provider,
)

completion = client.chat.completions.create(
    model="MAI-DS-R1",  # Replace with your model deployment name.
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me about the attention is all you need paper"}
    ]
)

# print(completion.choices[0].message)
print(completion.model_dump_json(indent=2))
```
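The printed JSON contains a `choices` array; this sketch parses a trimmed-down, hypothetical sample payload the same way you'd read the assistant's reply from `model_dump_json()` output:

```python
import json

# A trimmed, hypothetical chat completions payload.
sample = json.dumps({
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The paper introduced the Transformer architecture."
            }
        }
    ]
})

payload = json.loads(sample)
# The assistant's reply lives at choices[0].message.content.
print(payload["choices"][0]["message"]["content"])
```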
v1 API support
Status
Generally Available features are supported for use in production.
| API Path | Status |
|---|---|
| `/openai/v1/chat/completions` | Generally Available |
| `/openai/v1/embeddings` | Generally Available |
| `/openai/v1/evals` | Preview |
| `/openai/v1/files` | Generally Available |
| `/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy` | Preview |
| `/openai/v1/fine_tuning/alpha/graders/` | Preview |
| `/openai/v1/fine_tuning/` | Generally Available |
| `/openai/v1/models` | Generally Available |
| `/openai/v1/responses` | Generally Available |
| `/openai/v1/vector_stores` | Generally Available |
Preview headers
| API Path | Header |
|---|---|
| `/openai/v1/evals` | `"aoai-evals": "preview"` |
| `/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy` | `"aoai-copy-ft-checkpoints": "preview"` |
API version changelog
The following sections summarize changes between API versions.
Changes between v1 preview release and 2025-04-01-preview
- v1 preview API
- Video generation support
- NEW Responses API features:
- Remote Model Context Protocol (MCP) servers tool integration
- Support for asynchronous background tasks
- Encrypted reasoning items
- Image generation
Changes between 2025-04-01-preview and 2025-03-01-preview
Changes between 2025-03-01-preview and 2025-02-01-preview
- Responses API
- Computer use
Changes between 2025-02-01-preview and 2025-01-01-preview
- Stored completions (distillation API support).
Changes between 2025-01-01-preview and 2024-12-01-preview
- `prediction` parameter added for predicted outputs support.
- `gpt-4o-audio-preview` model support.
Changes between 2024-12-01-preview and 2024-10-01-preview
- `store` and `metadata` parameters added for stored completions support.
- `reasoning_effort` added for latest reasoning models.
- `user_security_context` added for Microsoft Defender for Cloud integration.
Changes between 2024-09-01-preview and 2024-08-01-preview
- `max_completion_tokens` added to support `o1-preview` and `o1-mini` models. `max_tokens` doesn't work with the o1 series models.
- `parallel_tool_calls` added.
- `completion_tokens_details` & `reasoning_tokens` added.
- `stream_options` & `include_usage` added.
Changes between 2024-08-01-preview and 2024-07-01-preview API specification
- Structured outputs support.
- Large file upload API added.
- On your data changes:
    - Mongo DB integration.
    - `role_information` parameter removed.
    - `rerank_score` added to citation object.
    - AML datasource removed.
- AI Search vectorization integration improvements.
Changes between 2024-05-01-preview and 2024-07-01-preview API specification
- Batch API support added
- Vector store chunking strategy parameters.
- `max_num_results` that the file search tool should output.
Changes between 2024-04-01-preview and 2024-05-01-preview API specification
- Assistants v2 support - File search tool and vector storage
- Fine-tuning checkpoints, seed, events
- On your data updates
- DALL-E 2 now supports model deployment and can be used with the latest preview API.
- Content filtering updates
Changes between 2024-03-01-preview and 2024-04-01-preview API specification
- Breaking change: Enhancements parameters removed. This impacts the `gpt-4` Version: `vision-preview` model.
- `timestamp_granularities` parameter added.
- `audioWord` object added.
- Additional TTS `response_formats`: wav & pcm.
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| `404 Not Found` when calling the v1 API | Incorrect `base_url` format | Verify the URL ends with `/openai/v1/`. Both `https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/` and `https://YOUR-RESOURCE-NAME.services.ai.azure.com/openai/v1/` are valid. |
| `401 Unauthorized` with Microsoft Entra ID | Missing or incorrect role assignment | Assign the `Cognitive Services OpenAI User` role to your identity. Role assignments can take up to 5 minutes to propagate. |
| `AzureOpenAI()` client doesn't work with v1 | The v1 API uses the `OpenAI()` client | Replace `AzureOpenAI()` with `OpenAI()` and set `base_url` to your Azure endpoint with `/openai/v1/` appended. |
| `api-version` parameter rejected | The v1 API doesn't use `api-version` | Remove any `api-version` query parameters from your requests. The v1 API doesn't require or accept them. |
| Preview features not available | Missing preview header | For preview APIs like `/openai/v1/evals`, pass the required preview header (for example, `"aoai-evals": "preview"`). See Preview headers. |
Known issues
- The `2025-04-01-preview` Azure OpenAI spec uses OpenAPI 3.1. It's a known issue that this version isn't fully supported by Azure API Management.