The Azure OpenAI client library for .NET is a companion to the official OpenAI client library for .NET. The Azure OpenAI library configures a client for use with Azure OpenAI and provides strongly typed extension support for request and response models specific to Azure OpenAI scenarios.
Stable release:
Source code | Package (NuGet) | Package reference documentation | API reference documentation | Samples
Preview release:
The preview release has access to the latest features.
Source code | Package (NuGet) | API reference documentation | Package reference documentation | Samples
Azure OpenAI API version support
Unlike the Azure OpenAI client libraries for Python and JavaScript, the Azure OpenAI .NET package is limited to targeting a specific subset of Azure OpenAI API versions. Generally, each new Azure OpenAI .NET package unlocks access to newer Azure OpenAI API release features. Having access to the latest API versions affects feature availability.
Version selection is controlled by the AzureOpenAIClientOptions.ServiceVersion enum.
The current stable release targets:
2024-06-01
The current preview release can target:
2024-08-01-preview
2024-09-01-preview
2024-10-01-preview
2024-12-01-preview
2025-01-01-preview
2025-03-01-preview
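As a sketch of how the enum is used, a client can be pinned to one of the versions above by passing the enum value to the options constructor (the endpoint here is a placeholder):

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;

// Pin the client to a specific Azure OpenAI API version via the options enum.
AzureOpenAIClientOptions options = new(AzureOpenAIClientOptions.ServiceVersion.V2024_06_01);

AzureOpenAIClient openAIClient = new(
    new Uri("https://your-azure-openai-resource.com"),
    new DefaultAzureCredential(),
    options);
```

If no version is specified, the package's default service version is used.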
Installation
dotnet add package Azure.AI.OpenAI --prerelease
The Azure.AI.OpenAI package is built on top of the official OpenAI package, which is included as a dependency.
Authentication
To interact with Azure OpenAI or OpenAI, create an instance of AzureOpenAIClient with one of the following approaches:
A secure, keyless authentication approach is to use Microsoft Entra ID (formerly Azure Active Directory) via the Azure Identity library. To use the library:
dotnet add package Azure.Identity
Use the desired credential type from the library. For example, DefaultAzureCredential:
AzureOpenAIClient openAIClient = new(
new Uri("https://your-azure-openai-resource.com"),
new DefaultAzureCredential());
ChatClient chatClient = openAIClient.GetChatClient("my-gpt-4o-mini-deployment");
For more information about Azure OpenAI keyless authentication, see the quickstart article "Get started with the Azure OpenAI security building block".
Audio
AzureOpenAIClient.GetAudioClient
Transcription
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Audio;
AzureOpenAIClient openAIClient = new(
new Uri("https://your-azure-openai-resource.com"),
new DefaultAzureCredential());
AudioClient client = openAIClient.GetAudioClient("whisper");
string audioFilePath = Path.Combine("Assets", "speech.mp3");
AudioTranscriptionOptions options = new()
{
ResponseFormat = AudioTranscriptionFormat.Verbose,
TimestampGranularities = AudioTimestampGranularities.Word | AudioTimestampGranularities.Segment,
};
AudioTranscription transcription = client.TranscribeAudio(audioFilePath, options);
Console.WriteLine("Transcription:");
Console.WriteLine($"{transcription.Text}");
Console.WriteLine();
Console.WriteLine($"Words:");
foreach (TranscribedWord word in transcription.Words)
{
Console.WriteLine($" {word.Word,15} : {word.StartTime.TotalMilliseconds,5:0} - {word.EndTime.TotalMilliseconds,5:0}");
}
Console.WriteLine();
Console.WriteLine($"Segments:");
foreach (TranscribedSegment segment in transcription.Segments)
{
Console.WriteLine($" {segment.Text,90} : {segment.StartTime.TotalMilliseconds,5:0} - {segment.EndTime.TotalMilliseconds,5:0}");
}
Text to speech (TTS)
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Audio;
AzureOpenAIClient openAIClient = new(
new Uri("https://your-azure-openai-resource.com"),
new DefaultAzureCredential());
AudioClient client = openAIClient.GetAudioClient("tts-hd"); //Replace with your Azure OpenAI model deployment
string input = "Testing, testing, 1, 2, 3";
BinaryData speech = client.GenerateSpeech(input, GeneratedSpeechVoice.Alloy);
using FileStream stream = File.OpenWrite($"{Guid.NewGuid()}.mp3");
speech.ToStream().CopyTo(stream);
Chat
AzureOpenAIClient.GetChatClient
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;
AzureOpenAIClient openAIClient = new(
new Uri("https://your-azure-openai-resource.com"),
new DefaultAzureCredential());
ChatClient chatClient = openAIClient.GetChatClient("my-gpt-4o-deployment");
ChatCompletion completion = chatClient.CompleteChat(
[
// System messages represent instructions or other guidance about how the assistant should behave
new SystemChatMessage("You are a helpful assistant that talks like a pirate."),
// User messages represent user input, whether historical or the most recent input
new UserChatMessage("Hi, can you help me?"),
// Assistant messages in a request represent conversation history for responses
new AssistantChatMessage("Arrr! Of course, me hearty! What can I do for ye?"),
new UserChatMessage("What's the best way to train a parrot?"),
]);
Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
Stream chat messages
The CompleteChatStreaming and CompleteChatStreamingAsync methods are used for streaming chat completions. They return a CollectionResult<StreamingChatCompletionUpdate> or AsyncCollectionResult<StreamingChatCompletionUpdate> instead of a ClientResult<ChatCompletion>.
These result collections can be iterated over using foreach or await foreach, with each update arriving as new data becomes available from the streamed response.
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;
AzureOpenAIClient openAIClient = new(
new Uri("https://your-azure-openai-resource.com"),
new DefaultAzureCredential());
ChatClient chatClient = openAIClient.GetChatClient("my-gpt-4o-deployment");
CollectionResult<StreamingChatCompletionUpdate> completionUpdates = chatClient.CompleteChatStreaming(
[
new SystemChatMessage("You are a helpful assistant that talks like a pirate."),
new UserChatMessage("Hi, can you help me?"),
new AssistantChatMessage("Arrr! Of course, me hearty! What can I do for ye?"),
new UserChatMessage("What's the best way to train a parrot?"),
]);
foreach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)
{
foreach (ChatMessageContentPart contentPart in completionUpdate.ContentUpdate)
{
Console.Write(contentPart.Text);
}
}
Embeddings
AzureOpenAIClient.GetEmbeddingClient
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Embeddings;
AzureOpenAIClient openAIClient = new(
new Uri("https://your-azure-openai-resource.com"),
new DefaultAzureCredential());
EmbeddingClient client = openAIClient.GetEmbeddingClient("text-embedding-3-large"); //Replace with your model deployment name
string description = "This is a test embedding";
OpenAIEmbedding embedding = client.GenerateEmbedding(description);
ReadOnlyMemory<float> vector = embedding.ToFloats();
Console.WriteLine(string.Join(", ", vector.ToArray()));
Fine-tuning
Not currently supported with the Azure OpenAI .NET package.
Batch
Not currently supported with the Azure OpenAI .NET package.
Images
AzureOpenAIClient.GetImageClient
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Images;
AzureOpenAIClient openAIClient = new(
new Uri("https://your-azure-openai-resource.com"),
new DefaultAzureCredential());
ImageClient client = openAIClient.GetImageClient("dall-e-3"); // replace with your model deployment name.
string prompt = "A rabbit eating pancakes.";
ImageGenerationOptions options = new()
{
Quality = GeneratedImageQuality.High,
Size = GeneratedImageSize.W1792xH1024,
Style = GeneratedImageStyle.Vivid,
ResponseFormat = GeneratedImageFormat.Bytes
};
GeneratedImage image = client.GenerateImage(prompt, options);
BinaryData bytes = image.ImageBytes;
using FileStream stream = File.OpenWrite($"{Guid.NewGuid()}.png");
bytes.ToStream().CopyTo(stream);
Reasoning models
using Azure.AI.OpenAI;
using Azure.AI.OpenAI.Chat;
using Azure.Identity;
using OpenAI.Chat;
AzureOpenAIClient openAIClient = new(
new Uri("https://YOUR-RESOURCE-NAME.openai.azure.com/"),
new DefaultAzureCredential());
ChatClient chatClient = openAIClient.GetChatClient("o3-mini");
// Create ChatCompletionOptions and set the reasoning effort level
ChatCompletionOptions options = new ChatCompletionOptions
{
ReasoningEffortLevel = ChatReasoningEffortLevel.Low,
MaxOutputTokenCount = 100000
};
#pragma warning disable AOAI001 //currently required to use MaxOutputTokenCount
options.SetNewMaxCompletionTokensPropertyEnabled(true);
ChatCompletion completion = chatClient.CompleteChat(
[
new UserChatMessage("Testing 1,2,3")
],
options); // Pass the options to the CompleteChat method
Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
Completions (legacy)
Not supported with the Azure OpenAI .NET package.
Error handling
Error codes
| Status Code | Error Type |
|---|---|
| 400 | Bad Request Error |
| 401 | Authentication Error |
| 403 | Permission Denied Error |
| 404 | Not Found Error |
| 422 | Unprocessable Entity Error |
| 429 | Rate Limit Error |
| 500 | Internal Server Error |
| 503 | Service Unavailable |
| 504 | Gateway Timeout |
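In code, these failures typically surface as a ClientResultException from System.ClientModel, whose Status property corresponds to the HTTP status codes above. A minimal sketch, assuming a chatClient created as shown earlier:

```csharp
using System.ClientModel;

try
{
    ChatCompletion completion = chatClient.CompleteChat(
        [new UserChatMessage("Hi, can you help me?")]);
    Console.WriteLine(completion.Content[0].Text);
}
catch (ClientResultException ex)
{
    // ex.Status maps to the HTTP status codes in the table above.
    Console.WriteLine($"Request failed with status {ex.Status}: {ex.Message}");
}
```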
Retries
The client classes automatically retry the following errors up to three more times using exponential backoff:
- 408 Request Timeout
- 429 Too Many Requests
- 500 Internal Server Error
- 502 Bad Gateway
- 503 Service Unavailable
- 504 Gateway Timeout
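If the defaults don't fit, the retry behavior can be adjusted through the client options. A minimal sketch, assuming the standard System.ClientModel pipeline options apply to AzureOpenAIClientOptions:

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using System.ClientModel.Primitives;

AzureOpenAIClientOptions options = new()
{
    // Raise the retry count from the default of three
    // (assumption: System.ClientModel's ClientRetryPolicy is in use).
    RetryPolicy = new ClientRetryPolicy(maxRetries: 5),
};

AzureOpenAIClient openAIClient = new(
    new Uri("https://your-azure-openai-resource.com"),
    new DefaultAzureCredential(),
    options);
```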
Source code | Package (pkg.go.dev) | API reference documentation | Package reference documentation | Samples
Azure OpenAI API version support
Unlike the Azure OpenAI client libraries for Python and JavaScript, the Azure OpenAI Go library targets a specific Azure OpenAI API version. Having access to the latest API versions affects feature availability.
Current Azure OpenAI API version target: 2025-01-01-preview
This is defined in the custom_client.go file.
Installation
Install the azopenai and azidentity modules with go get:
go get github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai
# optional
go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
Authentication
The azidentity module is used for Azure Active Directory authentication with Azure OpenAI.
package main
import (
"log"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)
func main() {
dac, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
// NOTE: this constructor creates a client that connects to an Azure OpenAI endpoint.
// To connect to the public OpenAI endpoint, use azopenai.NewClientForOpenAI
client, err := azopenai.NewClient("https://<your-azure-openai-host>.openai.azure.com", dac, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
_ = client
}
For more information about Azure OpenAI keyless authentication, see Use Azure OpenAI without keys.
Audio
Client.GenerateSpeechFromText
package main
import (
"context"
"fmt"
"io"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)
func main() {
openAIKey := os.Getenv("OPENAI_API_KEY")
// Ex: "https://api.openai.com/v1"
openAIEndpoint := os.Getenv("OPENAI_ENDPOINT")
modelDeploymentID := "tts-1"
if openAIKey == "" || openAIEndpoint == "" || modelDeploymentID == "" {
fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
return
}
keyCredential := azcore.NewKeyCredential(openAIKey)
client, err := azopenai.NewClientForOpenAI(openAIEndpoint, keyCredential, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
audioResp, err := client.GenerateSpeechFromText(context.Background(), azopenai.SpeechGenerationOptions{
Input: to.Ptr("i am a computer"),
Voice: to.Ptr(azopenai.SpeechVoiceAlloy),
ResponseFormat: to.Ptr(azopenai.SpeechGenerationResponseFormatFlac),
DeploymentName: to.Ptr("tts-1"),
}, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
defer audioResp.Body.Close()
audioBytes, err := io.ReadAll(audioResp.Body)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
fmt.Fprintf(os.Stderr, "Got %d bytes of FLAC audio\n", len(audioBytes))
}
Client.GetAudioTranscription
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)
func main() {
azureOpenAIKey := os.Getenv("AOAI_AUDIO_API_KEY")
// Ex: "https://<your-azure-openai-host>.openai.azure.com"
azureOpenAIEndpoint := os.Getenv("AOAI_AUDIO_ENDPOINT")
modelDeploymentID := os.Getenv("AOAI_AUDIO_MODEL")
if azureOpenAIKey == "" || azureOpenAIEndpoint == "" || modelDeploymentID == "" {
fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
return
}
keyCredential := azcore.NewKeyCredential(azureOpenAIKey)
client, err := azopenai.NewClientWithKeyCredential(azureOpenAIEndpoint, keyCredential, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
mp3Bytes, err := os.ReadFile("testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.mp3")
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
resp, err := client.GetAudioTranscription(context.TODO(), azopenai.AudioTranscriptionOptions{
File: mp3Bytes,
// this will return _just_ the transcribed text. Other formats are available, which return
// different or additional metadata. See [azopenai.AudioTranscriptionFormat] for more examples.
ResponseFormat: to.Ptr(azopenai.AudioTranscriptionFormatText),
DeploymentName: &modelDeploymentID,
}, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
fmt.Fprintf(os.Stderr, "Transcribed text: %s\n", *resp.Text)
}
Chat
Client.GetChatCompletions
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
)
func main() {
azureOpenAIKey := os.Getenv("AOAI_CHAT_COMPLETIONS_API_KEY")
modelDeploymentID := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
// Ex: "https://<your-azure-openai-host>.openai.azure.com"
azureOpenAIEndpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")
if azureOpenAIKey == "" || modelDeploymentID == "" || azureOpenAIEndpoint == "" {
fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
return
}
keyCredential := azcore.NewKeyCredential(azureOpenAIKey)
// In Azure OpenAI you must deploy a model before you can use it in your client. For more information
// see here: https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource
client, err := azopenai.NewClientWithKeyCredential(azureOpenAIEndpoint, keyCredential, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
// This is a conversation in progress.
// NOTE: all messages, regardless of role, count against token usage for this API.
messages := []azopenai.ChatRequestMessageClassification{
// You set the tone and rules of the conversation with a prompt as the system role.
&azopenai.ChatRequestSystemMessage{Content: azopenai.NewChatRequestSystemMessageContent("You are a helpful assistant. You will talk like a pirate.")},
// The user asks a question
&azopenai.ChatRequestUserMessage{Content: azopenai.NewChatRequestUserMessageContent("Can you help me?")},
// The reply would come back from the ChatGPT. You'd add it to the conversation so we can maintain context.
&azopenai.ChatRequestAssistantMessage{Content: azopenai.NewChatRequestAssistantMessageContent("Arrrr! Of course, me hearty! What can I do for ye?")},
// The user answers the question based on the latest reply.
&azopenai.ChatRequestUserMessage{Content: azopenai.NewChatRequestUserMessageContent("What's the best way to train a parrot?")},
// from here you'd keep iterating, sending responses back from ChatGPT
}
gotReply := false
resp, err := client.GetChatCompletions(context.TODO(), azopenai.ChatCompletionsOptions{
// This is a conversation in progress.
// NOTE: all messages count against token usage for this API.
Messages: messages,
DeploymentName: &modelDeploymentID,
}, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
for _, choice := range resp.Choices {
gotReply = true
if choice.ContentFilterResults != nil {
fmt.Fprintf(os.Stderr, "Content filter results\n")
if choice.ContentFilterResults.Error != nil {
fmt.Fprintf(os.Stderr, " Error:%v\n", choice.ContentFilterResults.Error)
}
fmt.Fprintf(os.Stderr, " Hate: sev: %v, filtered: %v\n", *choice.ContentFilterResults.Hate.Severity, *choice.ContentFilterResults.Hate.Filtered)
fmt.Fprintf(os.Stderr, " SelfHarm: sev: %v, filtered: %v\n", *choice.ContentFilterResults.SelfHarm.Severity, *choice.ContentFilterResults.SelfHarm.Filtered)
fmt.Fprintf(os.Stderr, " Sexual: sev: %v, filtered: %v\n", *choice.ContentFilterResults.Sexual.Severity, *choice.ContentFilterResults.Sexual.Filtered)
fmt.Fprintf(os.Stderr, " Violence: sev: %v, filtered: %v\n", *choice.ContentFilterResults.Violence.Severity, *choice.ContentFilterResults.Violence.Filtered)
}
if choice.Message != nil && choice.Message.Content != nil {
fmt.Fprintf(os.Stderr, "Content[%d]: %s\n", *choice.Index, *choice.Message.Content)
}
if choice.FinishReason != nil {
// this choice's conversation is complete.
fmt.Fprintf(os.Stderr, "Finish reason[%d]: %s\n", *choice.Index, *choice.FinishReason)
}
}
if gotReply {
fmt.Fprintf(os.Stderr, "Got chat completions reply\n")
}
}
Client.GetChatCompletionsStream
package main
import (
"context"
"errors"
"fmt"
"io"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)
func main() {
azureOpenAIKey := os.Getenv("AOAI_CHAT_COMPLETIONS_API_KEY")
modelDeploymentID := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
// Ex: "https://<your-azure-openai-host>.openai.azure.com"
azureOpenAIEndpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")
if azureOpenAIKey == "" || modelDeploymentID == "" || azureOpenAIEndpoint == "" {
fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
return
}
keyCredential := azcore.NewKeyCredential(azureOpenAIKey)
// In Azure OpenAI you must deploy a model before you can use it in your client. For more information
// see here: https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource
client, err := azopenai.NewClientWithKeyCredential(azureOpenAIEndpoint, keyCredential, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
// This is a conversation in progress.
// NOTE: all messages, regardless of role, count against token usage for this API.
messages := []azopenai.ChatRequestMessageClassification{
// You set the tone and rules of the conversation with a prompt as the system role.
&azopenai.ChatRequestSystemMessage{Content: azopenai.NewChatRequestSystemMessageContent("You are a helpful assistant. You will talk like a pirate and limit your responses to 20 words or less.")},
// The user asks a question
&azopenai.ChatRequestUserMessage{Content: azopenai.NewChatRequestUserMessageContent("Can you help me?")},
// The reply would come back from the ChatGPT. You'd add it to the conversation so we can maintain context.
&azopenai.ChatRequestAssistantMessage{Content: azopenai.NewChatRequestAssistantMessageContent("Arrrr! Of course, me hearty! What can I do for ye?")},
// The user answers the question based on the latest reply.
&azopenai.ChatRequestUserMessage{Content: azopenai.NewChatRequestUserMessageContent("What's the best way to train a parrot?")},
// from here you'd keep iterating, sending responses back from ChatGPT
}
resp, err := client.GetChatCompletionsStream(context.TODO(), azopenai.ChatCompletionsStreamOptions{
// This is a conversation in progress.
// NOTE: all messages count against token usage for this API.
Messages: messages,
N: to.Ptr[int32](1),
DeploymentName: &modelDeploymentID,
}, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
defer resp.ChatCompletionsStream.Close()
gotReply := false
for {
chatCompletions, err := resp.ChatCompletionsStream.Read()
if errors.Is(err, io.EOF) {
break
}
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
for _, choice := range chatCompletions.Choices {
gotReply = true
text := ""
if choice.Delta.Content != nil {
text = *choice.Delta.Content
}
role := ""
if choice.Delta.Role != nil {
role = string(*choice.Delta.Role)
}
fmt.Fprintf(os.Stderr, "Content[%d], role %q: %q\n", *choice.Index, role, text)
}
}
if gotReply {
fmt.Fprintf(os.Stderr, "Got chat completions streaming reply\n")
}
}
Embeddings
Client.GetEmbeddings
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
)
func main() {
azureOpenAIKey := os.Getenv("AOAI_EMBEDDINGS_API_KEY")
modelDeploymentID := os.Getenv("AOAI_EMBEDDINGS_MODEL")
// Ex: "https://<your-azure-openai-host>.openai.azure.com"
azureOpenAIEndpoint := os.Getenv("AOAI_EMBEDDINGS_ENDPOINT")
if azureOpenAIKey == "" || modelDeploymentID == "" || azureOpenAIEndpoint == "" {
fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
return
}
keyCredential := azcore.NewKeyCredential(azureOpenAIKey)
// In Azure OpenAI you must deploy a model before you can use it in your client. For more information
// see here: https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource
client, err := azopenai.NewClientWithKeyCredential(azureOpenAIEndpoint, keyCredential, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
resp, err := client.GetEmbeddings(context.TODO(), azopenai.EmbeddingsOptions{
Input: []string{"Testing, testing, 1,2,3."},
DeploymentName: &modelDeploymentID,
}, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
for _, embed := range resp.Data {
// embed.Embedding contains the embeddings for this input index.
fmt.Fprintf(os.Stderr, "Got embeddings for input %d\n", *embed.Index)
}
}
Image generation
Client.GetImageGenerations
package main
import (
"context"
"fmt"
"log"
"net/http"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)
func main() {
azureOpenAIKey := os.Getenv("AOAI_DALLE_API_KEY")
// Ex: "https://<your-azure-openai-host>.openai.azure.com"
azureOpenAIEndpoint := os.Getenv("AOAI_DALLE_ENDPOINT")
azureDeployment := os.Getenv("AOAI_DALLE_MODEL")
if azureOpenAIKey == "" || azureOpenAIEndpoint == "" || azureDeployment == "" {
fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
return
}
keyCredential := azcore.NewKeyCredential(azureOpenAIKey)
client, err := azopenai.NewClientWithKeyCredential(azureOpenAIEndpoint, keyCredential, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
resp, err := client.GetImageGenerations(context.TODO(), azopenai.ImageGenerationOptions{
Prompt: to.Ptr("a cat"),
ResponseFormat: to.Ptr(azopenai.ImageGenerationResponseFormatURL),
DeploymentName: &azureDeployment,
}, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
for _, generatedImage := range resp.Data {
// the underlying type for the generatedImage is dictated by the value of
// ImageGenerationOptions.ResponseFormat. In this example we used `azopenai.ImageGenerationResponseFormatURL`,
// so the underlying type will be ImageLocation.
resp, err := http.Head(*generatedImage.URL)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
_ = resp.Body.Close()
fmt.Fprintf(os.Stderr, "Image generated, HEAD request on URL returned %d\n", resp.StatusCode)
}
}
Completions (legacy)
Client.GetCompletions
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)
func main() {
azureOpenAIKey := os.Getenv("AOAI_COMPLETIONS_API_KEY")
modelDeployment := os.Getenv("AOAI_COMPLETIONS_MODEL")
// Ex: "https://<your-azure-openai-host>.openai.azure.com"
azureOpenAIEndpoint := os.Getenv("AOAI_COMPLETIONS_ENDPOINT")
if azureOpenAIKey == "" || modelDeployment == "" || azureOpenAIEndpoint == "" {
fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
return
}
keyCredential := azcore.NewKeyCredential(azureOpenAIKey)
// In Azure OpenAI you must deploy a model before you can use it in your client. For more information
// see here: https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource
client, err := azopenai.NewClientWithKeyCredential(azureOpenAIEndpoint, keyCredential, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
resp, err := client.GetCompletions(context.TODO(), azopenai.CompletionsOptions{
Prompt: []string{"What is Azure OpenAI, in 20 words or less"},
MaxTokens: to.Ptr(int32(2048)),
Temperature: to.Ptr(float32(0.0)),
DeploymentName: &modelDeployment,
}, nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
log.Printf("ERROR: %s", err)
return
}
for _, choice := range resp.Choices {
fmt.Fprintf(os.Stderr, "Result: %s\n", *choice.Text)
}
}
Error handling
All methods that send HTTP requests return *azcore.ResponseError when the request fails. ResponseError includes the error details and the raw response from the service.
Logging
This module uses the logging implementation in azcore. To enable logging for all Azure SDK modules, set the environment variable AZURE_SDK_GO_LOGGING to all. By default, the logger writes to stderr. Use the azcore/log package to control log output. For example, to log only HTTP request and response events and print them to stdout:
import azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log"
// Print log events to stdout
azlog.SetListener(func(cls azlog.Event, msg string) {
fmt.Println(msg)
})
// Include only HTTP request and response events in the logs
azlog.SetEvents(azlog.EventRequest, azlog.EventResponse)
Source code | Artifacts (Maven) | API reference documentation | Package reference documentation | Samples
Azure OpenAI API version support
Unlike the Azure OpenAI client libraries for Python and JavaScript, to ensure compatibility the Azure OpenAI Java package is limited to targeting a specific subset of Azure OpenAI API versions. Generally, each new Azure OpenAI Java package unlocks access to newer Azure OpenAI API release features. Having access to the latest API versions affects feature availability.
Version selection is controlled by the OpenAIServiceVersion enum.
The latest supported Azure OpenAI preview API is:
- 2025-01-01-preview
The latest supported stable (GA) release is:
- 2024-06-01
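As a sketch of pinning a version, the enum value can be passed to the builder's serviceVersion setter when constructing the client (the endpoint is a placeholder, and the credential setup matches the authentication section below):

```java
OpenAIClient client = new OpenAIClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .endpoint("{endpoint}")
    // Pin the client to a specific Azure OpenAI API version.
    .serviceVersion(OpenAIServiceVersion.V2024_06_01)
    .buildClient();
```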
Instalasi
Detail paket
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-ai-openai</artifactId>
<version>1.0.0-beta.16</version>
</dependency>
Authentication
To interact with Azure OpenAI in Azure AI Foundry Models, create an instance of a client class, OpenAIAsyncClient or OpenAIClient, by using OpenAIClientBuilder. To configure a client for use with Azure OpenAI, provide a valid endpoint URI to an Azure OpenAI resource along with a key credential, token credential, or Azure Identity credential that's authorized to use the Azure OpenAI resource.
Authentication with Microsoft Entra ID requires some initial setup:
Add the Azure Identity package:
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-identity</artifactId>
<version>1.13.3</version>
</dependency>
After setup, you can choose which type of credential to use. As an example, DefaultAzureCredential can be used to authenticate the client: set the client ID, tenant ID, and client secret of the Microsoft Entra ID application as environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET.
Authorization is easiest using DefaultAzureCredential. It finds the best credential to use in its running environment.
TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build();
OpenAIClient client = new OpenAIClientBuilder()
.credential(defaultCredential)
.endpoint("{endpoint}")
.buildClient();
For more information about Azure OpenAI keyless authentication, see Use Azure OpenAI without keys.
Audio
client.getAudioTranscription
String fileName = "{your-file-name}";
Path filePath = Paths.get("{your-file-path}" + fileName);
byte[] file = BinaryData.fromFile(filePath).toBytes();
AudioTranscriptionOptions transcriptionOptions = new AudioTranscriptionOptions(file)
.setResponseFormat(AudioTranscriptionFormat.JSON);
AudioTranscription transcription = client.getAudioTranscription("{deploymentOrModelName}", fileName, transcriptionOptions);
System.out.println("Transcription: " + transcription.getText());
Text to speech (TTS)
client.generateSpeechFromText
String deploymentOrModelId = "{azure-open-ai-deployment-model-id}";
SpeechGenerationOptions options = new SpeechGenerationOptions(
"Today is a wonderful day to build something people love!",
SpeechVoice.ALLOY);
BinaryData speech = client.generateSpeechFromText(deploymentOrModelId, options);
// Checkout your generated speech in the file system.
Path path = Paths.get("{your-local-file-path}/speech.wav");
Files.write(path, speech.toBytes());
Chat
client.getChatCompletions
List<ChatRequestMessage> chatMessages = new ArrayList<>();
chatMessages.add(new ChatRequestSystemMessage("You are a helpful assistant. You will talk like a pirate."));
chatMessages.add(new ChatRequestUserMessage("Can you help me?"));
chatMessages.add(new ChatRequestAssistantMessage("Of course, me hearty! What can I do for ye?"));
chatMessages.add(new ChatRequestUserMessage("What's the best way to train a parrot?"));
ChatCompletions chatCompletions = client.getChatCompletions("{deploymentOrModelName}",
new ChatCompletionsOptions(chatMessages));
System.out.printf("Model ID=%s is created at %s.%n", chatCompletions.getId(), chatCompletions.getCreatedAt());
for (ChatChoice choice : chatCompletions.getChoices()) {
ChatResponseMessage message = choice.getMessage();
System.out.printf("Index: %d, Chat Role: %s.%n", choice.getIndex(), message.getRole());
System.out.println("Message:");
System.out.println(message.getContent());
}
Siaran Langsung
List<ChatRequestMessage> chatMessages = new ArrayList<>();
chatMessages.add(new ChatRequestSystemMessage("You are a helpful assistant. You will talk like a pirate."));
chatMessages.add(new ChatRequestUserMessage("Can you help me?"));
chatMessages.add(new ChatRequestAssistantMessage("Of course, me hearty! What can I do for ye?"));
chatMessages.add(new ChatRequestUserMessage("What's the best way to train a parrot?"));
IterableStream<ChatCompletions> chatCompletionsStream = client.getChatCompletionsStream(
    "{deploymentOrModelName}", new ChatCompletionsOptions(chatMessages));
chatCompletionsStream.forEach(chatCompletions -> {
    // Streamed responses arrive as partial chunks; choices and content deltas
    // may be empty for the first and last chunks.
    if (chatCompletions.getChoices() == null || chatCompletions.getChoices().isEmpty()) {
        return;
    }
    ChatResponseMessage delta = chatCompletions.getChoices().get(0).getDelta();
    if (delta.getContent() != null) {
        System.out.print(delta.getContent());
    }
});
Chat completions with images
List<ChatRequestMessage> chatMessages = new ArrayList<>();
chatMessages.add(new ChatRequestSystemMessage("You are a helpful assistant that describes images"));
chatMessages.add(new ChatRequestUserMessage(Arrays.asList(
new ChatMessageTextContentItem("Please describe this image"),
new ChatMessageImageContentItem(
new ChatMessageImageUrl("https://raw.githubusercontent.com/MicrosoftDocs/azure-ai-docs/main/articles/ai-services/openai/media/how-to/generated-seattle.png"))
)));
ChatCompletionsOptions chatCompletionsOptions = new ChatCompletionsOptions(chatMessages);
ChatCompletions chatCompletions = client.getChatCompletions("{deploymentOrModelName}", chatCompletionsOptions);
System.out.println("Chat completion: " + chatCompletions.getChoices().get(0).getMessage().getContent());
Embeddings
client.getEmbeddings
EmbeddingsOptions embeddingsOptions = new EmbeddingsOptions(
Arrays.asList("Your text string goes here"));
Embeddings embeddings = client.getEmbeddings("{deploymentOrModelName}", embeddingsOptions);
for (EmbeddingItem item : embeddings.getData()) {
System.out.printf("Index: %d.%n", item.getPromptIndex());
for (Float embedding : item.getEmbedding()) {
System.out.printf("%f;", embedding);
}
}
Image generation
ImageGenerationOptions imageGenerationOptions = new ImageGenerationOptions(
"A drawing of the Seattle skyline in the style of Van Gogh");
ImageGenerations images = client.getImageGenerations("{deploymentOrModelName}", imageGenerationOptions);
for (ImageGenerationData imageGenerationData : images.getData()) {
System.out.printf(
"Image location URL that provides temporary access to download the generated image is %s.%n",
imageGenerationData.getUrl());
}
Error handling
Enable client logging
To troubleshoot issues with the Azure OpenAI library, it's important to first enable logging to monitor the behavior of the application. Errors and warnings in the logs generally provide useful insight into what went wrong, and sometimes include corrective actions to fix the problem. The Azure client libraries for Java have two logging options:
- A built-in logging framework.
- Support for logging using the SLF4J interface.
Refer to the instructions in this reference document on how to [configure logging in the Azure SDK for Java][logging_overview].
Enable HTTP request/response logging
Reviewing the HTTP request sent to, or response received over the wire from, Azure OpenAI can be useful in troubleshooting issues. To enable logging of the HTTP request and response payloads, the [OpenAIClient][openai_client] can be configured as shown below. If there's no SLF4J Logger
on the classpath, set the [AZURE_LOG_LEVEL][azure_log_level] environment variable on your machine to enable logging.
OpenAIClient openAIClient = new OpenAIClientBuilder()
.endpoint("{endpoint}")
.credential(new AzureKeyCredential("{key}"))
.httpLogOptions(new HttpLogOptions().setLogLevel(HttpLogDetailLevel.BODY_AND_HEADERS))
.buildClient();
// or
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
OpenAIClient configurationClientAad = new OpenAIClientBuilder()
.credential(credential)
.endpoint("{endpoint}")
.httpLogOptions(new HttpLogOptions().setLogLevel(HttpLogDetailLevel.BODY_AND_HEADERS))
.buildClient();
You can configure logging for HTTP requests and responses across your entire application by setting the following environment variable. Note that this change enables logging for every Azure client that supports logging HTTP requests/responses.
Environment variable name: AZURE_HTTP_LOG_DETAIL_LEVEL
Value | Logging level |
---|---|
none | HTTP request/response logging is disabled. |
basic | Logs only URLs, HTTP methods, and the time to finish the request. |
headers | Logs everything in BASIC, plus all the request and response headers. |
body | Logs everything in BASIC, plus all the request and response body. |
body_and_headers | Logs everything in HEADERS and BODY. |
Note
When logging the body of requests and responses, ensure that they don't contain confidential information. When logging headers, the client library has a default set of headers that are considered safe to log, but this set can be updated by changing the log options in the builder as shown below.
clientBuilder.httpLogOptions(new HttpLogOptions().addAllowedHeaderName("safe-to-log-header-name"))
Troubleshooting exceptions
Azure OpenAI service methods throw an HttpResponseException
or its subclass on failure.
The HttpResponseException
thrown by the OpenAI client library includes a detailed response error object that provides specific useful insights into what went wrong and includes corrective actions to fix common issues.
This error information can be found inside the message property of the HttpResponseException
object.
Here's an example of how to catch it with the synchronous client:
List<ChatRequestMessage> chatMessages = new ArrayList<>();
chatMessages.add(new ChatRequestSystemMessage("You are a helpful assistant. You will talk like a pirate."));
chatMessages.add(new ChatRequestUserMessage("Can you help me?"));
chatMessages.add(new ChatRequestAssistantMessage("Of course, me hearty! What can I do for ye?"));
chatMessages.add(new ChatRequestUserMessage("What's the best way to train a parrot?"));
try {
ChatCompletions chatCompletions = client.getChatCompletions("{deploymentOrModelName}",
new ChatCompletionsOptions(chatMessages));
} catch (HttpResponseException e) {
System.out.println(e.getMessage());
// Do something with the exception
}
With the async client, you can catch and handle exceptions in the error callbacks:
asyncClient.getChatCompletions("{deploymentOrModelName}", new ChatCompletionsOptions(chatMessages))
.doOnSuccess(ignored -> System.out.println("Success!"))
.doOnError(
error -> error instanceof ResourceNotFoundException,
error -> System.out.println("Exception: 'getChatCompletions' could not be performed."));
Authentication errors
Azure OpenAI supports Microsoft Entra ID authentication.
OpenAIClientBuilder
has a method to set the credential
. To provide a valid credential, you can use the azure-identity
dependency.
Source code | Package (npm) | Reference |
Azure OpenAI API version support
Feature availability in Azure OpenAI is dependent on what version of the REST API you target. For the newest features, target the latest preview API.
Latest GA API | Latest Preview API |
---|---|
2024-10-21 | 2025-03-01-preview |
Installation
npm install openai
Authentication
There are several ways to authenticate with Azure OpenAI using Microsoft Entra ID tokens. The default way is to use the DefaultAzureCredential
class from the @azure/identity
package.
import { DefaultAzureCredential } from "@azure/identity";
const credential = new DefaultAzureCredential();
This object is then passed as part of the AzureClientOptions
object to the AzureOpenAI
and AssistantsClient
client constructors.
However, to authenticate the AzureOpenAI
client, we need to use the getBearerTokenProvider
function from the @azure/identity
package. This function creates a token provider that AzureOpenAI
uses internally to obtain tokens for each request. The token provider is created as follows:
import { AzureOpenAI } from 'openai';
import { DefaultAzureCredential, getBearerTokenProvider } from "@azure/identity";
const credential = new DefaultAzureCredential();
const endpoint = "https://your-azure-openai-resource.com";
const apiVersion = "2024-10-21";
const scope = "https://cognitiveservices.azure.com/.default";
const azureADTokenProvider = getBearerTokenProvider(credential, scope);
const deployment = "gpt-35-turbo";
const client = new AzureOpenAI({
endpoint,
apiVersion,
deployment,
azureADTokenProvider
});
For more information about Azure OpenAI keyless authentication, see the "Get started with the Azure OpenAI security building block" QuickStart article.
Configuration
The AzureClientOptions
object extends the OpenAI ClientOptions
object. This Azure-specific client object is used to configure the connection and behavior of the Azure OpenAI client. It includes properties for specifying the properties specific to Azure.
Property | Details |
---|---|
apiVersion: string |
Specifies the API version to use. |
azureADTokenProvider: (() => Promise<string>) |
A function that returns an access token for Microsoft Entra (formerly known as Azure Active Directory), invoked on every request. |
deployment: string |
A model deployment. If provided, sets the base client URL to include /deployments/{deployment} . Non-deployment endpoints can't be used (not supported with the Assistants API). |
endpoint: string |
Your Azure OpenAI endpoint with the following format: https://RESOURCE-NAME.openai.azure.com/ . |
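As a rough illustration (not the SDK's actual internals), the effect of the endpoint, deployment, and apiVersion properties on the request URL can be sketched as follows; the helper function and its names are hypothetical:

```typescript
// Hypothetical helper showing how the Azure-specific options shape the
// request URL; the real SDK builds this internally.
function buildAzureUrl(
  endpoint: string,
  deployment: string,
  apiVersion: string,
  operation: string,
): string {
  const base = endpoint.replace(/\/+$/, ""); // trim trailing slashes
  return `${base}/openai/deployments/${deployment}/${operation}?api-version=${apiVersion}`;
}

// Example: a chat completions request against a deployment.
const url = buildAzureUrl(
  "https://your-azure-openai-resource.openai.azure.com/",
  "gpt-35-turbo",
  "2024-10-21",
  "chat/completions",
);
console.log(url);
// → https://your-azure-openai-resource.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2024-10-21
```

Because the deployment is baked into the base URL, the per-call `model` parameter can be left empty, as in the samples below.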
Audio
Transcription
import { createReadStream } from "fs";
const result = await client.audio.transcriptions.create({
model: '',
file: createReadStream(audioFilePath),
});
Chat
chat.completions.create
const result = await client.chat.completions.create({ messages, model: '', max_tokens: 100 });
Streaming
const stream = await client.chat.completions.create({ model: '', messages, max_tokens: 100, stream: true });
Embeddings
const embeddings = await client.embeddings.create({ input, model: '' });
Image generation
const results = await client.images.generate({ prompt, model: '', n, size });
Error handling
Error codes
Status Code | Error Type |
---|---|
400 | Bad Request Error |
401 | Authentication Error |
403 | Permission Denied Error |
404 | Not Found Error |
422 | Unprocessable Entity Error |
429 | Rate Limit Error |
500 | Internal Server Error |
503 | Service Unavailable |
504 | Gateway Timeout |
Retries
The following errors are automatically retried twice by default, with a short exponential backoff:
- Connection Errors
- 408 Request Timeout
- 429 Rate Limit
- >= 500 Internal Errors
Use maxRetries
to set/disable the retry behavior.
// Configure the default for all requests:
const client = new AzureOpenAI({
maxRetries: 0, // default is 2
});
// Or, configure per-request:
await client.chat.completions.create({ messages: [{ role: 'user', content: 'How can I get the name of the current day in Node.js?' }], model: '' }, {
maxRetries: 5,
});
Library source code | Package (PyPi) | Reference |
Note
This library is maintained by OpenAI. Refer to the release history to track the latest updates to the library.
Azure OpenAI API version support
Feature availability in Azure OpenAI is dependent on what version of the REST API you target. For the newest features, target the latest preview API.
Latest GA API | Latest Preview API |
---|---|
2024-10-21 | 2025-03-01-preview |
Installation
pip install openai
For the latest version:
pip install openai --upgrade
Authentication
import os
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
azure_ad_token_provider=token_provider,
api_version="2024-10-21"
)
For more information about Azure OpenAI keyless authentication, see the "Get started with the Azure OpenAI security building block" QuickStart article.
Audio
audio.speech.create()
This function currently requires a preview API version.
Set api_version="2024-10-01-preview"
to use this function.
# from openai import AzureOpenAI
# client = AzureOpenAI()
from pathlib import Path
import os
speech_file_path = Path("speech.mp3")
response = client.audio.speech.create(
model="tts-hd", #Replace with model deployment name
voice="alloy",
input="Testing, testing, 1,2,3."
)
response.write_to_file(speech_file_path)
audio.transcriptions.create()
# from openai import AzureOpenAI
# client = AzureOpenAI()
audio_file = open("speech1.mp3", "rb")
transcript = client.audio.transcriptions.create(
model="whisper", # Replace with model deployment name
file=audio_file
)
print(transcript)
Chat
chat.completions.create()
# from openai import AzureOpenAI
# client = AzureOpenAI()
completion = client.chat.completions.create(
model="gpt-4o", # Replace with your model deployment name.
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "When was Microsoft founded?"}
]
)
#print(completion.choices[0].message)
print(completion.model_dump_json(indent=2))
chat.completions.create() - streaming
# from openai import AzureOpenAI
# client = AzureOpenAI()
completion = client.chat.completions.create(
model="gpt-4o", # Replace with your model deployment name.
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "When was Microsoft founded?"}
],
stream=True
)
for chunk in completion:
if chunk.choices and chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end='',)
chat.completions.create() - image input
completion = client.chat.completions.create(
model="gpt-4o",
messages=[
{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image?"},
{
"type": "image_url",
"image_url": {
"url": "https://raw.githubusercontent.com/MicrosoftDocs/azure-ai-docs/main/articles/ai-services/openai/media/how-to/generated-seattle.png",
}
},
],
}
],
max_tokens=300,
)
print(completion.model_dump_json(indent=2))
Embeddings
embeddings.create()
# from openai import AzureOpenAI
# client = AzureOpenAI()
embedding = client.embeddings.create(
model="text-embedding-3-large", # Replace with your model deployment name
input="Attention is all you need",
encoding_format="float"
)
print(embedding)
Fine-tuning
Fine-tuning with Python how-to article
Batch
Batch with Python how-to article
Images
images.generate()
# from openai import AzureOpenAI
# client = AzureOpenAI()
generate_image = client.images.generate(
model="dall-e-3", #replace with your model deployment name
prompt="A rabbit eating pancakes",
n=1,
size="1024x1024",
quality = "hd",
response_format = "url",
style = "vivid"
)
print(generate_image.model_dump_json(indent=2))
Responses API
See the Responses API documentation.
Completions (legacy)
completions.create()
# from openai import AzureOpenAI
# client = AzureOpenAI()
legacy_completion = client.completions.create(
model="gpt-35-turbo-instruct", # Replace with model deployment name
prompt="Hello World!",
max_tokens=100,
temperature=0
)
print(legacy_completion.model_dump_json(indent=2))
Error handling
# from openai import AzureOpenAI
# client = AzureOpenAI()
import openai
try:
client.fine_tuning.jobs.create(
model="gpt-4o",
training_file="file-test",
)
except openai.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
Error codes
Status Code | Error Type |
---|---|
400 | BadRequestError |
401 | AuthenticationError |
403 | PermissionDeniedError |
404 | NotFoundError |
422 | UnprocessableEntityError |
429 | RateLimitError |
>=500 | InternalServerError |
N/A | APIConnectionError |
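As a rough illustration (not the library's internal dispatch logic), the mapping in the table above can be expressed as a small helper; the function name and the fallback for other non-2xx statuses are assumptions for the sketch:

```python
# Hypothetical helper mirroring the table above: map an HTTP status code
# to the name of the openai exception class raised for it.
def error_class_for_status(status_code: int) -> str:
    specific = {
        400: "BadRequestError",
        401: "AuthenticationError",
        403: "PermissionDeniedError",
        404: "NotFoundError",
        422: "UnprocessableEntityError",
        429: "RateLimitError",
    }
    if status_code in specific:
        return specific[status_code]
    if status_code >= 500:
        return "InternalServerError"
    # Assumed fallback: other non-2xx statuses surface as the base class.
    return "APIStatusError"

print(error_class_for_status(429))  # RateLimitError
print(error_class_for_status(503))  # InternalServerError
```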
Request IDs
To retrieve the ID of your request, you can use the _request_id
attribute, which corresponds to the x-request-id
response header.
print(completion._request_id)
print(legacy_completion._request_id)
Retries
The following errors are automatically retried twice by default, with a short exponential backoff:
- Connection Errors
- 408 Request Timeout
- 429 Rate Limit
- >= 500 Internal Errors
Use max_retries
to set/disable the retry behavior.
# For all requests
from openai import AzureOpenAI
client = AzureOpenAI(
max_retries=0
)
# max retries for specific requests
client.with_options(max_retries=5).chat.completions.create(
messages=[
{
"role": "user",
"content": "When was Microsoft founded?",
}
],
model="gpt-4o",
)
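The "short exponential backoff" between attempts can be illustrated with a minimal sketch. The exact delays and jitter the openai library uses are internal details; the base, cap, and full-jitter choices below are assumptions made only to show the general pattern:

```python
import random

def backoff_delays(max_retries: int = 2, base: float = 0.5, cap: float = 8.0):
    """Yield one sleep duration per retry attempt: base * 2**attempt,
    capped, with full jitter. The constants here are illustrative
    assumptions, not the openai library's actual values."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

# With the default of 2 retries, a failing request sleeps twice before
# the final error is raised.
delays = list(backoff_delays())
print(len(delays))  # 2
```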
Next steps
- To see what models are currently supported, check out the Azure OpenAI models page