Note
This document refers to the Microsoft Foundry (classic) portal.
View the Microsoft Foundry (new) documentation to learn about the new portal.
This article provides guidance on migrating your applications from the Azure AI Inference SDK to the OpenAI SDK. The OpenAI SDK offers broader compatibility, access to the latest OpenAI features, and simplified code with unified patterns across Azure OpenAI and Foundry Models.
Note
The OpenAI SDK refers to the client libraries (such as the Python openai package or JavaScript openai npm package) that connect to OpenAI v1 API endpoints. These SDKs have their own versioning separate from the API version - for example, the Go OpenAI SDK is currently at v3, but it still connects to the OpenAI v1 API endpoints with /openai/v1/ in the URL path.
Benefits of migrating
Migrating to the OpenAI SDK provides several advantages:
- Broader model support: Works with Azure OpenAI in Foundry Models and other Foundry Models from providers like DeepSeek and Grok
- Unified API: Uses the same SDK libraries and clients for both OpenAI and Azure OpenAI endpoints
- Latest features: Access to the newest OpenAI features without waiting for Azure-specific updates
- Simplified authentication: Built-in support for both API key and Microsoft Entra ID authentication
- Implicit API versioning: The v1 API eliminates the need to frequently update `api-version` parameters (see the sketch after this list)
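To see what implicit versioning means in practice, here's a minimal Python sketch contrasting the version-pinned `AzureOpenAI` client with the v1 `OpenAI` client; the resource name and the `api-version` value are illustrative placeholders:

```python
import os
from openai import AzureOpenAI, OpenAI

# Before: the Azure-specific client pins an explicit api-version
# that has to be bumped as the service evolves.
legacy_client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint="https://<resource>.openai.azure.com",
    api_version="2024-10-21",  # illustrative version string
)

# After: the v1 endpoint is versioned implicitly, so no
# api-version parameter is needed.
client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://<resource>.openai.azure.com/openai/v1/",
)
```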
Key differences
The following table shows the main differences between the two SDKs:
| Aspect | Azure AI Inference SDK | OpenAI SDK |
|---|---|---|
| Client class | `ChatCompletionsClient` | `OpenAI` |
| Endpoint format | `https://<resource>.services.ai.azure.com/models` | `https://<resource>.openai.azure.com/openai/v1/` |
| API version | Required in URL or parameter | Not required (uses v1 API) |
| Model parameter | Optional (for multi-model endpoints) | Required (deployment name) |
| Authentication | Azure credentials only | API key or Azure credentials |
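To make the table concrete, here's a minimal before/after sketch in Python; the resource name, key, and `DeepSeek-V3.1` deployment name are placeholders, and both snippets send the same chat request:

```python
# Before: Azure AI Inference SDK
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

inference_client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<api-key>"),
)
response = inference_client.complete(
    messages=[
        SystemMessage("You are a helpful assistant."),
        UserMessage("What is Azure AI?"),
    ],
    model="DeepSeek-V3.1",  # optional on single-model endpoints
)

# After: OpenAI SDK
from openai import OpenAI

client = OpenAI(
    base_url="https://<resource>.openai.azure.com/openai/v1/",
    api_key="<api-key>",
)
completion = client.chat.completions.create(
    model="DeepSeek-V3.1",  # deployment name, always required
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Azure AI?"},
    ],
)
```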
Setup
Install the OpenAI SDK:
pip install openai
For Microsoft Entra ID authentication, also install:
pip install azure-identity
Client configuration
With API key authentication:
import os
from openai import OpenAI
client = OpenAI(
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
base_url="https://<resource>.openai.azure.com/openai/v1/",
)
With Microsoft Entra ID authentication:
from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(),
"https://cognitiveservices.azure.com/.default"
)
client = OpenAI(
base_url="https://<resource>.openai.azure.com/openai/v1/",
api_key=token_provider,
)
Chat completions
completion = client.chat.completions.create(
model="DeepSeek-V3.1", # Required: your deployment name
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is Azure AI?"}
]
)
print(completion.choices[0].message.content)
Streaming
stream = client.chat.completions.create(
model="DeepSeek-V3.1",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Write a poem about Azure."}
],
stream=True
)
for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="")
Embeddings
from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(DefaultAzureCredential(),
"https://cognitiveservices.azure.com/.default")
client = OpenAI(
base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
api_key = token_provider,
)
response = client.embeddings.create(
input = "How do I use Python in VS Code?",
model = "text-embedding-3-large" // Use the name of your deployment
)
print(response.data[0].embedding)
Setup
Install the OpenAI SDK:
dotnet add package OpenAI
For Microsoft Entra ID authentication, also install:
dotnet add package Azure.Identity
Client configuration
With API key authentication:
using OpenAI;
using OpenAI.Chat;
using System.ClientModel;
ChatClient client = new(
model: "gpt-4o-mini", // Your deployment name
credential: new ApiKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")),
options: new OpenAIClientOptions() {
Endpoint = new Uri("https://<resource>.openai.azure.com/openai/v1/")
}
);
With Microsoft Entra ID authentication:
using Azure.Identity;
using OpenAI;
using OpenAI.Chat;
using System.ClientModel.Primitives;
#pragma warning disable OPENAI001
BearerTokenPolicy tokenPolicy = new(
new DefaultAzureCredential(),
"https://cognitiveservices.azure.com/.default"
);
ChatClient client = new(
model: "gpt-4o-mini", // Your deployment name
authenticationPolicy: tokenPolicy,
options: new OpenAIClientOptions() {
Endpoint = new Uri("https://<resource>.openai.azure.com/openai/v1/")
}
);
Chat completions
using OpenAI.Chat;
ChatCompletion completion = client.CompleteChat(
new SystemChatMessage("You are a helpful assistant."),
new UserChatMessage("What is Azure AI?")
);
Console.WriteLine(completion.Content[0].Text);
Streaming
using OpenAI.Chat;
using System.ClientModel;
CollectionResult<StreamingChatCompletionUpdate> updates = client.CompleteChatStreaming(
new SystemChatMessage("You are a helpful assistant."),
new UserChatMessage("Write a poem about Azure.")
);
foreach (StreamingChatCompletionUpdate update in updates)
{
foreach (ChatMessageContentPart part in update.ContentUpdate)
{
Console.Write(part.Text);
}
}
Embeddings
using OpenAI;
using OpenAI.Embeddings;
using System.ClientModel;
EmbeddingClient client = new(
"text-embedding-3-small",
credential: new ApiKeyCredential("API-KEY"),
options: new OpenAIClientOptions()
{
Endpoint = new Uri("https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1")
}
);
string input = "This is a test";
OpenAIEmbedding embedding = client.GenerateEmbedding(input);
ReadOnlyMemory<float> vector = embedding.ToFloats();
Console.WriteLine($"Embeddings: [{string.Join(", ", vector.ToArray())}]");
Setup
Install the OpenAI SDK:
npm install openai
For Microsoft Entra ID authentication, also install:
npm install @azure/identity
Client configuration
With API key authentication:
import { OpenAI } from "openai";
const client = new OpenAI({
baseURL: "https://<resource>.openai.azure.com/openai/v1/",
apiKey: process.env.AZURE_OPENAI_API_KEY
});
With Microsoft Entra ID authentication:
import { DefaultAzureCredential, getBearerTokenProvider } from "@azure/identity";
import { OpenAI } from "openai";
const tokenProvider = getBearerTokenProvider(
new DefaultAzureCredential(),
'https://cognitiveservices.azure.com/.default'
);
const client = new OpenAI({
baseURL: "https://<resource>.openai.azure.com/openai/v1/",
apiKey: tokenProvider
});
Chat completions
const completion = await client.chat.completions.create({
model: "DeepSeek-V3.1", // Required: your deployment name
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What is Azure AI?" }
]
});
console.log(completion.choices[0].message.content);
Streaming
const stream = await client.chat.completions.create({
model: "DeepSeek-V3.1",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Write a poem about Azure." }
],
stream: true
});
for await (const chunk of stream) {
if (chunk.choices[0]?.delta?.content) {
process.stdout.write(chunk.choices[0].delta.content);
}
}
Embeddings
import OpenAI from "openai";
import { getBearerTokenProvider, DefaultAzureCredential } from "@azure/identity";
const tokenProvider = getBearerTokenProvider(
new DefaultAzureCredential(),
'https://cognitiveservices.azure.com/.default');
const client = new OpenAI({
baseURL: "https://<resource>.openai.azure.com/openai/v1/",
apiKey: tokenProvider
});
const embedding = await client.embeddings.create({
model: "text-embedding-3-large", // Required: your deployment name
input: "The quick brown fox jumped over the lazy dog",
encoding_format: "float",
});
console.log(embedding);
Setup
Add the OpenAI SDK to your project. Check the OpenAI Java GitHub repository for the latest version and installation instructions.
For Microsoft Entra ID authentication, also add:
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-identity</artifactId>
<version>1.18.0</version>
</dependency>
Client configuration
With API key authentication:
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
OpenAIClient client = OpenAIOkHttpClient.builder()
.baseUrl("https://<resource>.openai.azure.com/openai/v1/")
.apiKey(System.getenv("AZURE_OPENAI_API_KEY"))
.build();
With Microsoft Entra ID authentication:
import com.azure.identity.AuthenticationUtil;
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.credential.BearerTokenCredential;
DefaultAzureCredential tokenCredential = new DefaultAzureCredentialBuilder().build();
OpenAIClient client = OpenAIOkHttpClient.builder()
.baseUrl("https://<resource>.openai.azure.com/openai/v1/")
.credential(BearerTokenCredential.create(
AuthenticationUtil.getBearerTokenSupplier(
tokenCredential,
"https://cognitiveservices.azure.com/.default"
)
))
.build();
Chat completions
import com.openai.models.chat.completions.*;
ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
.addSystemMessage("You are a helpful assistant.")
.addUserMessage("What is Azure AI?")
.model("DeepSeek-V3.1") // Required: your deployment name
.build();
ChatCompletion completion = client.chat().completions().create(params);
System.out.println(completion.choices().get(0).message().content());
Streaming
import com.openai.core.http.StreamResponse;
import com.openai.models.chat.completions.*;
ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .addSystemMessage("You are a helpful assistant.")
    .addUserMessage("Write a poem about Azure.")
    .model("DeepSeek-V3.1") // Required: your deployment name
    .build();
// createStreaming returns a StreamResponse that must be closed when done
try (StreamResponse<ChatCompletionChunk> streamResponse =
        client.chat().completions().createStreaming(params)) {
    streamResponse.stream()
        .flatMap(chunk -> chunk.choices().stream())
        .flatMap(choice -> choice.delta().content().stream())
        .forEach(System.out::print);
}
Embeddings
package com.openai.example;
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.embeddings.EmbeddingCreateParams;
import com.openai.models.embeddings.EmbeddingModel;
public final class EmbeddingsExample {
private EmbeddingsExample() {}
public static void main(String[] args) {
// Configures using one of:
// - The `OPENAI_API_KEY` environment variable
// - The `OPENAI_BASE_URL` and `AZURE_OPENAI_KEY` environment variables
OpenAIClient client = OpenAIOkHttpClient.fromEnv();
EmbeddingCreateParams createParams = EmbeddingCreateParams.builder()
.input("The quick brown fox jumped over the lazy dog")
.model(EmbeddingModel.TEXT_EMBEDDING_3_SMALL)
.build();
System.out.println(client.embeddings().create(createParams));
}
}
Setup
Install the OpenAI SDK:
go get github.com/openai/openai-go/v3
For Microsoft Entra ID authentication, also install:
go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
Client configuration
With API key authentication:
import (
    "os"

    "github.com/openai/openai-go/v3"
    "github.com/openai/openai-go/v3/option"
)
client := openai.NewClient(
option.WithBaseURL("https://<resource>.openai.azure.com/openai/v1/"),
option.WithAPIKey(os.Getenv("AZURE_OPENAI_API_KEY")),
)
With Microsoft Entra ID authentication:
import (
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/openai/openai-go/v3"
"github.com/openai/openai-go/v3/azure"
"github.com/openai/openai-go/v3/option"
)
tokenCredential, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
panic(err)
}
client := openai.NewClient(
option.WithBaseURL("https://<resource>.openai.azure.com/openai/v1/"),
azure.WithTokenCredential(tokenCredential),
)
Chat completions
import (
"context"
"fmt"
"github.com/openai/openai-go/v3"
)
chatCompletion, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
Messages: []openai.ChatCompletionMessageParamUnion{
openai.SystemMessage("You are a helpful assistant."),
openai.UserMessage("What is Azure AI?"),
},
Model: "DeepSeek-V3.1", // Required: your deployment name
})
if err != nil {
panic(err.Error())
}
fmt.Println(chatCompletion.Choices[0].Message.Content)
Streaming
import (
"context"
"fmt"
"github.com/openai/openai-go/v3"
)
stream := client.Chat.Completions.NewStreaming(context.TODO(), openai.ChatCompletionNewParams{
Messages: []openai.ChatCompletionMessageParamUnion{
openai.SystemMessage("You are a helpful assistant."),
openai.UserMessage("Write a poem about Azure."),
},
Model: "DeepSeek-V3.1", // Required: your deployment name
})
for stream.Next() {
chunk := stream.Current()
if len(chunk.Choices) > 0 && chunk.Choices[0].Delta.Content != "" {
fmt.Print(chunk.Choices[0].Delta.Content)
}
}
if err := stream.Err(); err != nil {
panic(err.Error())
}
Embeddings
package main
import (
"context"
"fmt"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/openai/openai-go/v3"
"github.com/openai/openai-go/v3/azure"
"github.com/openai/openai-go/v3/option"
)
func main() {
tokenCredential, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("Error creating credential:%s", err)
}
// Create a client with Azure OpenAI endpoint and Entra ID credentials
client := openai.NewClient(
option.WithBaseURL("https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/"),
azure.WithTokenCredential(tokenCredential),
)
inputText := "The quick brown fox jumped over the lazy dog"
// Make the embedding request synchronously
resp, err := client.Embeddings.New(context.Background(), openai.EmbeddingNewParams{
Model: openai.EmbeddingModel("text-embedding-3-large"), // Use your deployed model name on Azure
Input: openai.EmbeddingNewParamsInputUnion{
OfArrayOfStrings: []string{inputText},
},
})
if err != nil {
log.Fatalf("Failed to get embedding: %s", err)
}
if len(resp.Data) == 0 {
log.Fatalf("No embedding data returned.")
}
// Print embedding information
embedding := resp.Data[0].Embedding
fmt.Printf("Embedding Length: %d\n", len(embedding))
fmt.Println("Embedding Values:")
for _, value := range embedding {
fmt.Printf("%f, ", value)
}
fmt.Println()
}
Common migration patterns
Model parameter handling
- Azure AI Inference SDK: The `model` parameter is optional for single-model endpoints but required for multi-model endpoints.
- OpenAI SDK: The `model` parameter is always required and should be set to your deployment name.
Endpoint URL format
- Azure AI Inference SDK: Uses `https://<resource>.services.ai.azure.com/models`.
- OpenAI SDK: Uses `https://<resource>.openai.azure.com/openai/v1/` (connects to the OpenAI v1 API).
Response structure
The response structure is similar but has some differences:
- Azure AI Inference SDK: Returns a `ChatCompletions` object with `choices[].message.content`.
- OpenAI SDK: Returns a `ChatCompletion` object with `choices[].message.content`.
Both SDKs provide similar access patterns to response data (see the sketch after this list), including:
- Message content
- Token usage
- Model information
- Finish reason
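For example, with the Python OpenAI SDK those fields are read as follows; this sketch assumes the `completion` object returned by the chat completions example earlier:

```python
print(completion.choices[0].message.content)   # message content
print(completion.usage.prompt_tokens,          # token usage
      completion.usage.completion_tokens,
      completion.usage.total_tokens)
print(completion.model)                        # model information
print(completion.choices[0].finish_reason)     # finish reason
```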
Migration checklist
Use this checklist to ensure a smooth migration:
- Install the OpenAI SDK for your programming language
- Update authentication code (API key or Microsoft Entra ID)
- Change endpoint URLs from `.services.ai.azure.com/models` to `.openai.azure.com/openai/v1/`
- Update client initialization code
- Always specify the `model` parameter with your deployment name
- Update request method calls (`complete` → `chat.completions.create`)
- Update streaming code if applicable
- Update error handling to use OpenAI SDK exceptions (see the sketch after this checklist)
- Test all functionality thoroughly
- Update documentation and code comments
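For the error-handling item, here's a minimal Python sketch using the OpenAI SDK's exception hierarchy; it assumes the `client` and deployment name from the configuration examples above:

```python
import openai

try:
    completion = client.chat.completions.create(
        model="DeepSeek-V3.1",  # your deployment name
        messages=[{"role": "user", "content": "What is Azure AI?"}],
    )
except openai.AuthenticationError as e:
    print(f"Check your API key or Microsoft Entra ID setup: {e}")
except openai.NotFoundError as e:
    print(f"Check the endpoint URL and deployment name: {e}")
except openai.RateLimitError as e:
    print(f"Throttled; retry with backoff: {e}")
except openai.APIError as e:  # base class, so it goes last
    print(f"Other API error: {e}")
```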
Troubleshooting
Authentication failures
If you experience authentication failures:
- Verify your API key is correct and isn't expired
- For Microsoft Entra ID, ensure your application has the correct permissions
- Check that the credential scope is set to `https://cognitiveservices.azure.com/.default` (a quick local check follows this list)
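To isolate credential problems from client configuration, you can request a token directly; this is a small sketch that requires the `azure-identity` package:

```python
from azure.identity import DefaultAzureCredential

# If this call raises, the problem is the local credential chain,
# not the OpenAI client configuration.
token = DefaultAzureCredential().get_token(
    "https://cognitiveservices.azure.com/.default"
)
print("Token acquired; expires at (Unix time):", token.expires_on)
```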
Endpoint errors
If you receive endpoint errors:
- Verify the endpoint URL format includes `/openai/v1/` at the end.
- Ensure your resource name is correct.
- Check that the model deployment exists and is active.
Model not found errors
If you receive "model not found" errors:
- Verify you're using your deployment name, not the model name.
- Check that the deployment is active in your Microsoft Foundry resource.
- Ensure the deployment name matches exactly (case-sensitive).