Note
This feature is currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Azure OpenAI GPT-4o Realtime API for speech and audio is part of the GPT-4o model family and supports low-latency, "speech in, speech out" conversational interactions. It's designed for real-time, low-latency conversations, making it a great fit for use cases involving live interactions between a user and a model, such as customer support agents, voice assistants, and real-time translators.
Most users of the Realtime API need to deliver and receive audio from an end-user in real time, including applications that use WebRTC or a telephony system. The Realtime API isn't designed to connect directly to end user devices and relies on client integrations to terminate end user audio streams.
The GPT-4o real-time models are available for global deployments:

- gpt-4o-realtime-preview (version 2024-12-17)
- gpt-4o-mini-realtime-preview (version 2024-12-17)
- gpt-4o-realtime-preview (version 2024-10-01)

See the models and versions documentation for more information.
Support for the Realtime API was first added in API version 2024-10-01-preview. Use the latest 2024-12-17 model version.
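For reference, the Realtime API is reached over a WebSocket connection to your resource. Assuming the endpoint layout described in the Realtime API reference, the connection URI looks roughly like the following, where the resource name, API version, and deployment name are placeholders; the client libraries used later in this quickstart build this URI for you:

wss://<your-resource-name>.openai.azure.com/openai/realtime?api-version=2024-10-01-preview&deployment=gpt-4o-mini-realtime-preview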
To deploy the gpt-4o-mini-realtime-preview model in the Azure AI Foundry portal:

1. Search for the gpt-4o-mini-realtime-preview model and then select Deploy to selected resource.
2. Select the 2024-12-17 model version.

Now that you have a deployment of the gpt-4o-mini-realtime-preview model, you can interact with it in real time in the Azure AI Foundry portal Real-time audio playground or via the Realtime API.
To chat with your deployed gpt-4o-mini-realtime-preview
model in the Azure AI Foundry Real-time audio playground, follow these steps:
1. Go to the Azure OpenAI Service page in Azure AI Foundry portal. Make sure you're signed in with the Azure subscription that has your Azure OpenAI Service resource and the deployed gpt-4o-mini-realtime-preview model.
2. Select the Real-time audio playground from under Playgrounds in the left pane.
3. Select your deployed gpt-4o-mini-realtime-preview model from the Deployment dropdown.
4. Select Enable microphone to allow the browser to access your microphone. If you already granted permission, you can skip this step.
5. Optionally, edit the contents of the Give the model instructions and context text box. Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses.
6. Optionally, change settings such as threshold, prefix padding, and silence duration.
7. Select Start listening to start the session. You can speak into the microphone to start a chat.

You can interrupt the chat at any time by speaking. You can end the chat by selecting the Stop listening button.
You need a deployment of the gpt-4o-mini-realtime-preview model with your Azure OpenAI resource. For more information, see Create a resource and deploy a model with Azure OpenAI.

For the recommended keyless authentication with Microsoft Entra ID, assign the Cognitive Services User role to your user account. You can assign roles in the Azure portal under Access control (IAM) > Add role assignment.

To deploy the gpt-4o-mini-realtime-preview model in the Azure AI Foundry portal:

1. Search for the gpt-4o-mini-realtime-preview model and then select Deploy to selected resource.
2. Select the 2024-12-17 model version.

Now that you have a deployment of the gpt-4o-mini-realtime-preview model, you can interact with it in real time in the Azure AI Foundry portal Real-time audio playground or via the Realtime API.
Create a new folder realtime-audio-quickstart to contain the application and switch to that folder with the following command:
mkdir realtime-audio-quickstart && cd realtime-audio-quickstart
Create the package.json
with the following command:
npm init -y
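The index.js file you create later in this quickstart uses ES module import statements and top-level await, so the package needs to be an ECMAScript module. If npm init -y created a CommonJS package, switch it with the following command (the TypeScript version of this quickstart uses the same step):

npm pkg set type=module

The sample code also imports the dotenv package to load environment variables from a .env file. If you keep that import, install the package as well:

npm install dotenv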
Install the real-time audio client library for JavaScript with:
npm install https://github.com/Azure-Samples/aoai-realtime-audio-sdk/releases/download/js/v0.5.2/rt-client-0.5.2.tgz
For the recommended keyless authentication with Microsoft Entra ID, install the @azure/identity
package with:
npm install @azure/identity
You need to retrieve the following information to authenticate your application with your Azure OpenAI resource:
| Variable name | Value |
|---|---|
| AZURE_OPENAI_ENDPOINT | This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. |
| AZURE_OPENAI_DEPLOYMENT_NAME | This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Model Deployments in the Azure portal. |
| OPENAI_API_VERSION | Learn more about API Versions. You can change the version in code or use an environment variable. |
Learn more about keyless authentication and setting environment variables.
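For example, in a bash shell you can set these environment variables before running the sample. The values shown are placeholders; substitute your own endpoint and deployment name, and check the API versions documentation for the version you want to use:

export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o-mini-realtime-preview"
export OPENAI_API_VERSION="2024-10-01-preview"

On Windows, you can use setx or set in a command prompt to do the same.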
Caution
To use the recommended keyless authentication with the SDK, make sure that the AZURE_OPENAI_API_KEY
environment variable isn't set.
Create the index.js
file with the following code:
import { DefaultAzureCredential } from "@azure/identity";
import { LowLevelRTClient } from "rt-client";
import dotenv from "dotenv";
dotenv.config();

async function text_in_audio_out() {
    // Set environment variables or edit the corresponding values here.
    const endpoint = process.env.AZURE_OPENAI_ENDPOINT || "YourEndpoint";
    const deployment = process.env.AZURE_OPENAI_DEPLOYMENT_NAME || "gpt-4o-mini-realtime-preview";
    if (!endpoint || !deployment) {
        throw new Error("You didn't set the environment variables.");
    }
    const client = new LowLevelRTClient(new URL(endpoint), new DefaultAzureCredential(), { deployment: deployment });
    try {
        await client.send({
            type: "response.create",
            response: {
                modalities: ["audio", "text"],
                instructions: "Please assist the user."
            }
        });
        for await (const message of client.messages()) {
            switch (message.type) {
                case "response.done": {
                    break;
                }
                case "error": {
                    console.error(message.error);
                    break;
                }
                case "response.audio_transcript.delta": {
                    console.log(`Received text delta: ${message.delta}`);
                    break;
                }
                case "response.audio.delta": {
                    const buffer = Buffer.from(message.delta, "base64");
                    console.log(`Received ${buffer.length} bytes of audio data.`);
                    break;
                }
            }
            if (message.type === "response.done" || message.type === "error") {
                break;
            }
        }
    } finally {
        client.close();
    }
}

await text_in_audio_out();
Sign in to Azure with the following command:
az login
Run the JavaScript file.
node index.js
Wait a few moments to get the response.
The script gets a response from the model and prints the transcript and audio data received.
The output will look similar to the following:
Received text delta: Hello
Received text delta: !
Received text delta: How
Received text delta: can
Received text delta: I
Received 4800 bytes of audio data.
Received 7200 bytes of audio data.
Received text delta: help
Received 12000 bytes of audio data.
Received text delta: you
Received text delta: today
Received text delta: ?
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received 24000 bytes of audio data.
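The response.audio.delta messages carry base64-encoded chunks of audio. As a minimal sketch, assuming the default pcm16 output format (24 kHz, 16-bit, mono), you could collect the decoded buffers instead of only logging their size and write them to a raw PCM file once the response is done:

import { writeFileSync } from "node:fs";

// Collect decoded audio deltas here instead of only logging their length.
const audioChunks = [];

// In the "response.audio.delta" case of the message loop:
//     audioChunks.push(Buffer.from(message.delta, "base64"));

// After the loop completes, write the raw PCM data to disk.
// A tool such as Audacity or ffmpeg can import raw PCM16, 24 kHz, mono audio.
writeFileSync("response.raw", Buffer.concat(audioChunks));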
Our JavaScript web sample on GitHub demonstrates how to use the GPT-4o Realtime API to interact with the model in real time. The sample code includes a simple web interface that captures audio from the user's microphone and sends it to the model for processing. The model responds with text and audio, which the sample code renders in the web interface.
You can run the sample code locally on your machine by following these steps. Refer to the repository on GitHub for the most up-to-date instructions.
If you don't have Node.js installed, download and install the LTS version of Node.js.
Clone the repository to your local machine:
git clone https://github.com/Azure-Samples/aoai-realtime-audio-sdk.git
Go to the javascript/samples folder in your preferred code editor.
cd ./javascript/samples
Run download-pkg.ps1
or download-pkg.sh
to download the required packages.
Go to the web
folder from the ./javascript/samples
folder.
cd ./web
Run npm install
to install package dependencies.
Run npm run dev
to start the web server, navigating any firewall permissions prompts as needed.
Go to any of the provided URIs from the console output (such as http://localhost:5173/
) in a browser.
Enter the following information in the web interface:

- The endpoint of your Azure OpenAI resource. You don't need to append the /realtime path. An example structure might be https://my-azure-openai-resource-from-portal.openai.azure.com.
- The name of the gpt-4o-mini-realtime-preview model that you deployed in the previous section.

Select the Record button to start the session. Accept permissions to use your microphone if prompted.
You should see a << Session Started >>
message in the main output. Then you can speak into the microphone to start a chat.
You can interrupt the chat at any time by speaking. You can end the chat by selecting the Stop button.
You need a deployment of the gpt-4o-mini-realtime-preview model with your Azure OpenAI resource. For more information, see Create a resource and deploy a model with Azure OpenAI.

For the recommended keyless authentication with Microsoft Entra ID, assign the Cognitive Services User role to your user account. You can assign roles in the Azure portal under Access control (IAM) > Add role assignment.

To deploy the gpt-4o-mini-realtime-preview model in the Azure AI Foundry portal:

1. Search for the gpt-4o-mini-realtime-preview model and then select Deploy to selected resource.
2. Select the 2024-12-17 model version.

Now that you have a deployment of the gpt-4o-mini-realtime-preview model, you can interact with it in real time in the Azure AI Foundry portal Real-time audio playground or via the Realtime API.
Create a new folder realtime-audio-quickstart to contain the application and switch to that folder with the following command:
mkdir realtime-audio-quickstart && cd realtime-audio-quickstart
Create a virtual environment. If you already have Python 3.10 or higher installed, you can create a virtual environment using the following commands:
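On Windows, for example:

py -3 -m venv .venv
.venv\scripts\activate

On Linux or macOS:

python3 -m venv .venv
source .venv/bin/activate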
Activating the Python environment means that when you run python or pip from the command line, you use the Python interpreter contained in the .venv folder of your application. You can use the deactivate command to exit the Python virtual environment and reactivate it later when needed.
Tip
We recommend that you create and activate a new Python environment for installing the packages you need for this tutorial. Don't install packages into your global Python installation. Always use a virtual or conda environment when installing Python packages; otherwise, you can break your global installation of Python.
Install the real-time audio client library for Python with:
pip install "https://github.com/Azure-Samples/aoai-realtime-audio-sdk/releases/download/py%2Fv0.5.3/rtclient-0.5.3.tar.gz"
For the recommended keyless authentication with Microsoft Entra ID, install the azure-identity
package with:
pip install azure-identity
You need to retrieve the following information to authenticate your application with your Azure OpenAI resource:
| Variable name | Value |
|---|---|
| AZURE_OPENAI_ENDPOINT | This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. |
| AZURE_OPENAI_DEPLOYMENT_NAME | This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Model Deployments in the Azure portal. |
| OPENAI_API_VERSION | Learn more about API Versions. You can change the version in code or use an environment variable. |
Learn more about keyless authentication and setting environment variables.
Create the text-in-audio-out.py
file with the following code:
import os
import base64
import asyncio
from azure.identity.aio import DefaultAzureCredential
from rtclient import (
    ResponseCreateMessage,
    RTLowLevelClient,
    ResponseCreateParams
)

# Set environment variables or edit the corresponding values here.
endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT") or "https://<your-resource-name>.openai.azure.com/"
deployment = os.environ.get("AZURE_OPENAI_DEPLOYMENT_NAME") or "gpt-4o-mini-realtime-preview"

async def text_in_audio_out():
    async with RTLowLevelClient(
        url=endpoint,
        azure_deployment=deployment,
        token_credential=DefaultAzureCredential(),
    ) as client:
        await client.send(
            ResponseCreateMessage(
                response=ResponseCreateParams(
                    modalities={"audio", "text"},
                    instructions="Please assist the user."
                )
            )
        )
        done = False
        while not done:
            message = await client.recv()
            match message.type:
                case "response.done":
                    done = True
                case "error":
                    done = True
                    print(message.error)
                case "response.audio_transcript.delta":
                    print(f"Received text delta: {message.delta}")
                case "response.audio.delta":
                    buffer = base64.b64decode(message.delta)
                    print(f"Received {len(buffer)} bytes of audio data.")
                case _:
                    pass

async def main():
    await text_in_audio_out()

asyncio.run(main())
Run the Python file.
python text-in-audio-out.py
Wait a few moments to get the response.
The script gets a response from the model and prints the transcript and audio data received.
The output will look similar to the following:
Received text delta: Hello
Received text delta: !
Received text delta: How
Received 4800 bytes of audio data.
Received 7200 bytes of audio data.
Received text delta: can
Received 12000 bytes of audio data.
Received text delta: I
Received text delta: assist
Received text delta: you
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received text delta: today
Received text delta: ?
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received 28800 bytes of audio data.
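The audio deltas are base64-decoded chunks of raw audio. As a minimal sketch, assuming the default pcm16 output format (24 kHz, 16-bit, mono), you could accumulate the decoded bytes in the response.audio.delta case and wrap them in a WAV container with the standard library wave module so the result is directly playable:

import wave

def save_pcm16_as_wav(pcm_bytes: bytes, path: str = "response.wav") -> None:
    # Wrap raw PCM16 audio in a WAV container so it can be played directly.
    with wave.open(path, "wb") as wav_file:
        wav_file.setnchannels(1)      # mono
        wav_file.setsampwidth(2)      # 16-bit samples
        wav_file.setframerate(24000)  # 24 kHz sample rate
        wav_file.writeframes(pcm_bytes)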
You need a deployment of the gpt-4o-mini-realtime-preview model with your Azure OpenAI resource. For more information, see Create a resource and deploy a model with Azure OpenAI.

For the recommended keyless authentication with Microsoft Entra ID, assign the Cognitive Services User role to your user account. You can assign roles in the Azure portal under Access control (IAM) > Add role assignment.

To deploy the gpt-4o-mini-realtime-preview model in the Azure AI Foundry portal:

1. Search for the gpt-4o-mini-realtime-preview model and then select Deploy to selected resource.
2. Select the 2024-12-17 model version.

Now that you have a deployment of the gpt-4o-mini-realtime-preview model, you can interact with it in real time in the Azure AI Foundry portal Real-time audio playground or via the Realtime API.
Create a new folder realtime-audio-quickstart to contain the application and switch to that folder with the following command:
mkdir realtime-audio-quickstart && cd realtime-audio-quickstart
Create the package.json
with the following command:
npm init -y
Update the package.json to use ECMAScript modules with the following command:
npm pkg set type=module
Install the real-time audio client library for JavaScript with:
npm install https://github.com/Azure-Samples/aoai-realtime-audio-sdk/releases/download/js/v0.5.2/rt-client-0.5.2.tgz
For the recommended keyless authentication with Microsoft Entra ID, install the @azure/identity
package with:
npm install @azure/identity
You need to retrieve the following information to authenticate your application with your Azure OpenAI resource:
| Variable name | Value |
|---|---|
| AZURE_OPENAI_ENDPOINT | This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. |
| AZURE_OPENAI_DEPLOYMENT_NAME | This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Model Deployments in the Azure portal. |
| OPENAI_API_VERSION | Learn more about API Versions. You can change the version in code or use an environment variable. |
Learn more about keyless authentication and setting environment variables.
Caution
To use the recommended keyless authentication with the SDK, make sure that the AZURE_OPENAI_API_KEY
environment variable isn't set.
Create the index.ts
file with the following code:
import { DefaultAzureCredential } from "@azure/identity";
import { LowLevelRTClient } from "rt-client";
import dotenv from "dotenv";
dotenv.config();

async function text_in_audio_out() {
    // Set environment variables or edit the corresponding values here.
    const endpoint: string = process.env.AZURE_OPENAI_ENDPOINT || "Your endpoint";
    const deployment = process.env.AZURE_OPENAI_DEPLOYMENT_NAME || "gpt-4o-mini-realtime-preview";
    if (!endpoint || !deployment) {
        throw new Error("You didn't set the environment variables.");
    }
    const client = new LowLevelRTClient(
        new URL(endpoint),
        new DefaultAzureCredential(),
        { deployment: deployment }
    );
    try {
        await client.send({
            type: "response.create",
            response: {
                modalities: ["audio", "text"],
                instructions: "Please assist the user."
            }
        });
        for await (const message of client.messages()) {
            switch (message.type) {
                case "response.done": {
                    break;
                }
                case "error": {
                    console.error(message.error);
                    break;
                }
                case "response.audio_transcript.delta": {
                    console.log(`Received text delta: ${message.delta}`);
                    break;
                }
                case "response.audio.delta": {
                    const buffer = Buffer.from(message.delta, "base64");
                    console.log(`Received ${buffer.length} bytes of audio data.`);
                    break;
                }
            }
            if (message.type === "response.done" || message.type === "error") {
                break;
            }
        }
    } finally {
        client.close();
    }
}

await text_in_audio_out();
Create the tsconfig.json
file to transpile the TypeScript code and copy the following code for ECMAScript.
{
    "compilerOptions": {
        "module": "NodeNext",
        "target": "ES2022", // Supports top-level await
        "moduleResolution": "NodeNext",
        "skipLibCheck": true, // Avoid type errors from node_modules
        "strict": true // Enable strict type-checking options
    },
    "include": ["*.ts"]
}
Transpile from TypeScript to JavaScript.
tsc
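The tsc command assumes the TypeScript compiler is available. If it isn't installed yet, you can add it and the Node.js type definitions as development dependencies and run the compiler through npx:

npm install -D typescript @types/node
npx tsc

As in the JavaScript quickstart, the sample imports dotenv; if you keep that import, install it with npm install dotenv.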
Sign in to Azure with the following command:
az login
Run the code with the following command:
node index.js
Wait a few moments to get the response.
The script gets a response from the model and prints the transcript and audio data received.
The output will look similar to the following:
Received text delta: Hello
Received text delta: !
Received text delta: How
Received text delta: can
Received text delta: I
Received 4800 bytes of audio data.
Received 7200 bytes of audio data.
Received text delta: help
Received 12000 bytes of audio data.
Received text delta: you
Received text delta: today
Received text delta: ?
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received 12000 bytes of audio data.
Received 24000 bytes of audio data.
Documentation

- Azure OpenAI Service Realtime API Reference - Azure OpenAI: Learn how to use the Realtime API to interact with the Azure OpenAI service in real time.
- How to use the GPT-4o Realtime API for speech and audio with Azure OpenAI Service - Azure OpenAI: Learn how to use the GPT-4o Realtime API for speech and audio with Azure OpenAI Service.
- Quickstart - Getting started with Azure OpenAI audio generation - Azure OpenAI: Walkthrough on how to get started with audio generation using Azure OpenAI.