Quickstart: Get started using Azure OpenAI Assistants (Preview)
Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and augmented by advanced tools like code interpreter and custom functions.
Important
Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
- An Azure subscription - Create one for free.
- An Azure AI hub resource with a model deployed. For more information about model deployment, see the resource deployment guide.
- An Azure AI project in Azure AI Studio.
Azure AI Studio lets you use Assistants v2, which provides several upgrades, including a faster file search tool that supports more files.
Sign in to Azure AI Studio.
Go to your project or create a new project in Azure AI Studio.
From your project overview, select Assistants, located under playgrounds.
The Assistants playground allows you to explore, prototype, and test AI Assistants without needing to run any code. From this page, you can quickly iterate and experiment with new ideas.
The playground provides several options to configure your Assistant. In the following steps, you will use the setup pane to create a new AI assistant.
Name | Description |
---|---|
Assistant name | Your deployment name that is associated with a specific model. |
Instructions | Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering questions. |
Deployment | This is where you set which model deployment to use with your assistant. |
Functions | Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications. Not used in this quickstart. |
Code interpreter | Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code. |
Files | You can upload up to 10,000 files, with a max file size of 512 MB, to use with tools. Not used in this quickstart. |
Select your deployment from the Deployments dropdown.
From the Assistant setup drop-down, select New assistant.
Give your Assistant a name.
Enter the following instructions: "You are an AI assistant that can write code to help answer math questions."
Select a model deployment. We recommend testing with one of the latest gpt-4 models.
Select the toggle enabling code interpreter.
Select Save.
Enter a question for the assistant to answer: "I need to solve the equation `3x + 11 = 14`. Can you help me?"
Select the Add and run button.
The solution to the equation (3x + 11 = 14) is (x = 1).
While we can see that the answer is correct, we'll ask another question to confirm that the model used code interpreter to get to this answer, and that the code it wrote is valid rather than just a repetition of an answer from the model's training data.
Enter the follow-up question: "Show me the code you ran to get this solution."
Sure. The code is very straightforward:
# calculation
x = (14 - 11) / 3
x
First, we subtract 11 from 14, then divide the result by 3. This gives us the value of x, which is 1.0.
You can also consult the logs in the right-hand panel to confirm that code interpreter was used and to validate the code that was run to generate the response. It's important to remember that while code interpreter gives the model the capability to respond to more complex math questions by converting the questions into code and running that code in a sandboxed Python environment, you still need to validate the response to confirm that the model correctly translated your question into a valid representation in code.
While using the Assistants playground, keep the following concepts in mind.
An individual assistant can access up to 128 tools including code interpreter, as well as any custom tools you create via functions.
A chat session, also known as a thread within the Assistants API, is where the conversation between the user and the assistant occurs. Unlike traditional chat completion calls, there is no limit to the number of messages in a thread. The assistant automatically compresses requests to fit the input token limit of the model.
This also means that you are not controlling how many tokens are passed to the model during each turn of the conversation. Managing tokens is abstracted away and handled entirely by the Assistants API.
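The same thread behavior applies once you move from the playground to the SDK. The following minimal Python sketch, assuming an AzureOpenAI client and an assistant configured as in the Python examples later in this quickstart, appends several user turns to one thread and lets the service manage the context window:
# Sketch only: `client` and `assistant` are assumed to be configured as in the
# Python examples later in this quickstart.
thread = client.beta.threads.create()

for question in [
    "I need to solve the equation `3x + 11 = 14`. Can you help me?",
    "Show me the code you ran to get this solution.",
]:
    client.beta.threads.messages.create(thread_id=thread.id, role="user", content=question)
    run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)
    if run.status == "completed":
        # Messages are returned newest first, so the latest assistant reply is first.
        latest = client.beta.threads.messages.list(thread_id=thread.id, limit=1)
        print(latest.data[0].content[0].text.value)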
Select the Clear chat button to delete the current conversation history.
Underneath the text input box there are two buttons:
- Add a message without run.
- Add and run.
Logs provide a detailed snapshot of the assistant's API activity.
By default there are three panels: assistant setup, chat session, and Logs. Show panels allows you to add, remove, and rearrange the panels. If you ever close a panel and need to get it back, use Show panels to restore the lost panel.
If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Alternatively, you can delete the assistant or thread via the Assistants API.
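For example, here is a minimal sketch with the Python SDK, assuming a configured AzureOpenAI client and placeholder IDs:
# Placeholder IDs for illustration; substitute the assistant and thread you want to remove.
client.beta.assistants.delete("asst_abc123")
client.beta.threads.delete("thread_abc123")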
- Learn more about how to use Assistants with our How-to guide on Assistants.
- Azure OpenAI Assistants API samples
Reference documentation | Library source code | Package (PyPI)
- An Azure subscription - Create one for free
- Python 3.8 or later version
- The following Python libraries: os, openai (Version 1.x is required)
- The Azure CLI, used for passwordless authentication in a local development environment. Create the necessary context by signing in with the Azure CLI.
- An Azure OpenAI resource with a compatible model in a supported region.
- We recommend reviewing the Responsible AI transparency note and other Responsible AI resources to familiarize yourself with the capabilities and limitations of the Azure OpenAI Service.
- An Azure OpenAI resource with the gpt-4 (1106-preview) model deployed. This model was used when testing this example.
For passwordless authentication, you need to:
- Use the azure-identity package.
- Assign the Cognitive Services User role to your user account. This can be done in the Azure portal under Access control (IAM) > Add role assignment.
- Sign in with the Azure CLI, such as az login.
- Install the OpenAI Python client library with:
pip install openai
- For the recommended passwordless authentication:
pip install azure-identity
Note
- File search can ingest up to 10,000 files per assistant - 500 times more than before. It is fast, supports parallel queries through multi-threaded searches, and features enhanced reranking and query rewriting.
- Vector store is a new object in the API. Once a file is added to a vector store, it's automatically parsed, chunked, embedded, and made ready to be searched. Vector stores can be used across assistants and threads, simplifying file management and billing.
- We've added support for the tool_choice parameter, which can be used to force the use of a specific tool (like file search, code interpreter, or a function) in a particular run. Both vector stores and tool_choice are shown in the sketch after this note.
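As a rough illustration of how these pieces fit together with the Python client used later in this quickstart, the following sketch creates a vector store, attaches a file, wires it to an assistant with the file search tool, and forces that tool with tool_choice. The file name, assistant details, and question are placeholders, not part of this quickstart:
# Illustrative sketch only: assumes `client` is an AzureOpenAI client configured
# as shown later in this quickstart, and that products.md exists locally.
vector_store = client.beta.vector_stores.create(name="product_docs")

with open("products.md", "rb") as f:
    client.beta.vector_stores.file_batches.upload_and_poll(
        vector_store_id=vector_store.id, files=[f]
    )

assistant = client.beta.assistants.create(
    name="Docs Assist",
    instructions="Answer questions using the uploaded product documentation.",
    model="gpt-4-1106-preview",  # Replace with your deployment name.
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What products are documented?"
)

# tool_choice forces the file search tool to be used for this particular run.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
    tool_choice={"type": "file_search"},
)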
Note
This library is maintained by OpenAI. Refer to the release history to track the latest updates to the library.
To successfully make a call against the Azure OpenAI service, you'll need the following:
Variable name | Value |
---|---|
ENDPOINT |
This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. You can also find the endpoint via the Deployments page in Azure AI Studio. An example endpoint is: https://docs-test-001.openai.azure.com/ . |
API-KEY |
This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2 . |
DEPLOYMENT-NAME |
This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Model Deployments in the Azure portal or via the Deployments page in Azure AI Studio. |
Go to your resource in the Azure portal. The Keys and Endpoint can be found in the Resource Management section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either KEY1 or KEY2. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
Create and assign persistent environment variables for your key and endpoint.
Important
If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.
For more information about AI services security, see Authenticate requests to Azure AI services.
setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
In our code we are going to specify the following values:
Name | Description |
---|---|
Assistant name | Your deployment name that is associated with a specific model. |
Instructions | Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering questions. |
Model | This is where you set which model deployment name to use with your assistant. The retrieval tool requires the gpt-35-turbo (1106) or gpt-4 (1106-preview) model. Set this value to your deployment name, not the model name, unless they're the same. |
Code interpreter | Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code. |
An individual assistant can access up to 128 tools including code interpreter, as well as any custom tools you create via functions.
Sign in to Azure with az login, then create and run an assistant with the following recommended passwordless Python example:
import os
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
client = AzureOpenAI(
    azure_ad_token_provider=token_provider,
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-05-01-preview",
)

# Create an assistant
assistant = client.beta.assistants.create(
    name="Math Assist",
    instructions="You are an AI assistant that can write code to help answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview"  # You must replace this value with the deployment name for your model.
)

# Create a thread
thread = client.beta.threads.create()

# Add a user question to the thread
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="I need to solve the equation `3x + 11 = 14`. Can you help me?"
)

# Run the thread and poll for the result
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Jane Doe. The user has a premium account.",
)

print("Run completed with status: " + run.status)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.to_json(indent=2))
To use the service API key for authentication, you can create and run an assistant with the following Python example:
import os
from openai import AzureOpenAI
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-05-01-preview",
)

# Create an assistant
assistant = client.beta.assistants.create(
    name="Math Assist",
    instructions="You are an AI assistant that can write code to help answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview"  # You must replace this value with the deployment name for your model.
)

# Create a thread
thread = client.beta.threads.create()

# Add a user question to the thread
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="I need to solve the equation `3x + 11 = 14`. Can you help me?"
)

# Run the thread and poll for the result
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Jane Doe. The user has a premium account.",
)

print("Run completed with status: " + run.status)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.to_json(indent=2))
Run completed with status: completed
{
"data": [
{
"id": "msg_4SuWxTubHsHpt5IlBTO5Hyw9",
"assistant_id": "asst_cYqL1RuwLyFV3HU1gkaE2k0K",
"attachments": [],
"content": [
{
"text": {
"annotations": [],
"value": "The solution to the equation \\(3x + 11 = 14\\) is \\(x = 1\\)."
},
"type": "text"
}
],
"created_at": 1716397091,
"metadata": {},
"object": "thread.message",
"role": "assistant",
"run_id": "run_hFgBPbUtO8ZNTnNPC8PgpH1S",
"thread_id": "thread_isb7spwRycI5ueT9E7357aOm"
},
{
"id": "msg_Z32w2E7kY5wEWhZqQWxIbIUB",
"assistant_id": null,
"attachments": [],
"content": [
{
"text": {
"annotations": [],
"value": "I need to solve the equation `3x + 11 = 14`. Can you help me?"
},
"type": "text"
}
],
"created_at": 1716397025,
"metadata": {},
"object": "thread.message",
"role": "user",
"run_id": null,
"thread_id": "thread_isb7spwRycI5ueT9E7357aOm"
}
],
"object": "list",
"first_id": "msg_4SuWxTubHsHpt5IlBTO5Hyw9",
"last_id": "msg_Z32w2E7kY5wEWhZqQWxIbIUB",
"has_more": false
}
In this example we create an assistant with code interpreter enabled. When we ask the assistant a math question, it translates the question into Python code and executes the code in a sandboxed environment to determine the answer. The code the model creates and tests to arrive at an answer is:
from sympy import symbols, Eq, solve
# Define the variable
x = symbols('x')
# Define the equation
equation = Eq(3*x + 11, 14)
# Solve the equation
solution = solve(equation, x)
solution
It is important to remember that while code interpreter gives the model the capability to respond to more complex queries by converting the questions into code and running that code iteratively in the Python sandbox until it reaches a solution, you still need to validate the response to confirm that the model correctly translated your question into a valid representation in code.
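One way to perform that check programmatically is to list the run steps and print the code interpreter input. This is a minimal sketch that reuses the `client`, `thread`, and `run` objects from the example above:
# List the steps the assistant took during the run and print any
# code interpreter input (the generated Python code).
steps = client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id)

for step in steps:
    if step.step_details.type == "tool_calls":
        for tool_call in step.step_details.tool_calls:
            if tool_call.type == "code_interpreter":
                print(tool_call.code_interpreter.input)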
If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
- Learn more about how to use Assistants with our How-to guide on Assistants.
- Azure OpenAI Assistants API samples
Reference documentation | Source code | Package (NuGet)
- An Azure subscription - Create one for free
- The .NET 8 SDK
- An Azure OpenAI resource with a compatible model in a supported region.
- We recommend reviewing the Responsible AI transparency note and other Responsible AI resources to familiarize yourself with the capabilities and limitations of the Azure OpenAI Service.
- An Azure OpenAI resource with the gpt-4 (1106-preview) model deployed. This model was used when testing this example.
In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console app with the name azure-openai-assistants-quickstart:
dotnet new console -n azure-openai-assistants-quickstart
Change into the directory of the newly created app folder and build the app with the dotnet build command:
dotnet build
The build output should contain no warnings or errors.
... Build succeeded. 0 Warning(s) 0 Error(s) ...
Install the Azure OpenAI .NET client library with the dotnet add package command:
dotnet add package Azure.AI.OpenAI --prerelease
To successfully make a call against Azure OpenAI, you need an endpoint and a key.
Variable name | Value |
---|---|
ENDPOINT |
The service endpoint can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find the endpoint via the Deployments page in Azure AI Studio. An example endpoint is: https://docs-test-001.openai.azure.com/ . |
API-KEY |
This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2 . |
Go to your resource in the Azure portal. The Keys & Endpoint section can be found in the Resource Management section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either KEY1 or KEY2. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
Create and assign persistent environment variables for your key and endpoint.
Important
If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.
For more information about AI services security, see Authenticate requests to Azure AI services.
setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
Passwordless authentication is more secure than key-based alternatives and is the recommended approach for connecting to Azure services. If you choose passwordless authentication, you'll need to complete the following:
Add the Azure.Identity package:
dotnet add package Azure.Identity
Assign the Cognitive Services User role to your user account. This can be done in the Azure portal on your OpenAI resource under Access control (IAM) > Add role assignment.
Sign in to Azure using Visual Studio or the Azure CLI via az login.
Update the Program.cs file with the following code to create an assistant:
using System.ClientModel;
using Azure;
using Azure.AI.OpenAI;
using OpenAI.Assistants;
using OpenAI.Files;
using OpenAI.VectorStores;
// using Azure.Identity; // Needed only for the passwordless DefaultAzureCredential option below.
// Assistants is a beta API and subject to change.
// Acknowledge its experimental status by suppressing the matching warning.
#pragma warning disable OPENAI001
string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string key = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
string deploymentName = "gpt-4-1106-preview"; // Replace with the deployment name for your model.
var openAIClient = new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(key));
// Use for passwordless auth
//var openAIClient = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential());
FileClient fileClient = openAIClient.GetFileClient();
AssistantClient assistantClient = openAIClient.GetAssistantClient();
// First, let's contrive a document we'll use retrieval with and upload it.
using Stream document = BinaryData.FromString("""
{
"description": "This document contains the sale history data for Contoso products.",
"sales": [
{
"month": "January",
"by_product": {
"113043": 15,
"113045": 12,
"113049": 2
}
},
{
"month": "February",
"by_product": {
"113045": 22
}
},
{
"month": "March",
"by_product": {
"113045": 16,
"113055": 5
}
}
]
}
""").ToStream();
OpenAIFileInfo salesFile = await fileClient.UploadFileAsync(
document,
"monthly_sales.json",
FileUploadPurpose.Assistants);
// Now, we'll create a client intended to help with that data
AssistantCreationOptions assistantOptions = new()
{
Name = "Example: Contoso sales RAG",
Instructions =
"You are an assistant that looks up sales data and helps visualize the information based"
+ " on user queries. When asked to generate a graph, chart, or other visualization, use"
+ " the code interpreter tool to do so.",
Tools =
{
new FileSearchToolDefinition(),
new CodeInterpreterToolDefinition(),
},
ToolResources = new()
{
FileSearch = new()
{
NewVectorStores =
{
new VectorStoreCreationHelper([salesFile.Id]),
}
}
},
};
Assistant assistant = await assistantClient.CreateAssistantAsync(deploymentName, assistantOptions);
// Create and run a thread with a user query about the data already associated with the assistant
ThreadCreationOptions threadOptions = new()
{
InitialMessages = { "How well did product 113045 sell in February? Graph its trend over time." }
};
ThreadRun threadRun = await assistantClient.CreateThreadAndRunAsync(assistant.Id, threadOptions);
// Check back to see when the run is done
do
{
Thread.Sleep(TimeSpan.FromSeconds(1));
threadRun = assistantClient.GetRun(threadRun.ThreadId, threadRun.Id);
} while (!threadRun.Status.IsTerminal);
// Finally, we'll print out the full history for the thread that includes the augmented generation
AsyncCollectionResult<ThreadMessage> messages
= assistantClient.GetMessagesAsync(
threadRun.ThreadId,
new MessageCollectionOptions() { Order = MessageCollectionOrder.Ascending });
await foreach (ThreadMessage message in messages)
{
Console.Write($"[{message.Role.ToString().ToUpper()}]: ");
foreach (MessageContent contentItem in message.Content)
{
if (!string.IsNullOrEmpty(contentItem.Text))
{
Console.WriteLine($"{contentItem.Text}");
if (contentItem.TextAnnotations.Count > 0)
{
Console.WriteLine();
}
// Include annotations, if any.
foreach (TextAnnotation annotation in contentItem.TextAnnotations)
{
if (!string.IsNullOrEmpty(annotation.InputFileId))
{
Console.WriteLine($"* File citation, file ID: {annotation.InputFileId}");
}
if (!string.IsNullOrEmpty(annotation.OutputFileId))
{
Console.WriteLine($"* File output, new file ID: {annotation.OutputFileId}");
}
}
}
if (!string.IsNullOrEmpty(contentItem.ImageFileId))
{
OpenAIFileInfo imageInfo = await fileClient.GetFileAsync(contentItem.ImageFileId);
BinaryData imageBytes = await fileClient.DownloadFileAsync(contentItem.ImageFileId);
using FileStream stream = File.OpenWrite($"{imageInfo.Filename}.png");
imageBytes.ToStream().CopyTo(stream);
Console.WriteLine($"<image: {imageInfo.Filename}.png>");
}
}
Console.WriteLine();
}
Run the app using the dotnet run command:
dotnet run
The console output should resemble the following:
[USER]: How well did product 113045 sell in February? Graph its trend over time.
[ASSISTANT]: Product 113045 sold 22 units in February. Let's visualize its sales trend over the given months (January through March).
I'll create a graph to depict this trend.
[ASSISTANT]: <image: 553380b7-fdb6-49cf-9df6-e8e6700d69f4.png>
The graph above visualizes the sales trend for product 113045 from January to March. As seen, the sales peaked in February with 22 units sold, and fluctuated over the period from January (12 units) to March (16 units).
If you need further analysis or more details, feel free to ask!
If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
- Learn more about how to use Assistants with our How-to guide on Assistants.
- Azure OpenAI Assistants API samples
Reference documentation | Library source code | Package (npm) |
- An Azure subscription - Create one for free
- Node.js LTS or ESM support.
- The Azure CLI, used for passwordless authentication in a local development environment. Create the necessary context by signing in with the Azure CLI.
- An Azure OpenAI resource with a compatible model in a supported region.
- We recommend reviewing the Responsible AI transparency note and other Responsible AI resources to familiarize yourself with the capabilities and limitations of the Azure OpenAI Service.
- An Azure OpenAI resource with the gpt-4 (1106-preview) model deployed. This model was used when testing this example.
For keyless authentication, you need to:
- Use the @azure/identity package.
- Assign the Cognitive Services User role to your user account. This can be done in the Azure portal under Access control (IAM) > Add role assignment.
- Sign in with the Azure CLI, such as az login.
Create a new folder assistants-quickstart to contain the application and open Visual Studio Code in that folder with the following command:
mkdir assistants-quickstart && code assistants-quickstart
Create the package.json with the following command:
npm init -y
Update the package.json to ECMAScript with the following command:
npm pkg set type=module
Install the OpenAI Assistants client library for JavaScript with:
npm install openai
For the recommended passwordless authentication:
npm install @azure/identity
Variable name | Value |
---|---|
AZURE_OPENAI_ENDPOINT |
This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. |
AZURE_OPENAI_DEPLOYMENT_NAME |
This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Model Deployments in the Azure portal. |
OPENAI_API_VERSION |
Learn more about API Versions. |
Learn more about keyless authentication and setting environment variables.
Caution
To use the recommended keyless authentication with the SDK, make sure that the AZURE_OPENAI_API_KEY environment variable isn't set.
In our code we're going to specify the following values:
Name | Description |
---|---|
Assistant name | Your deployment name that is associated with a specific model. |
Instructions | Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering questions. |
Model | This is the deployment name. |
Code interpreter | Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code. |
An individual assistant can access up to 128 tools including code interpreter, and any custom tools you create via functions.
Create the index.js file with the following code:
import { AzureOpenAI } from "openai";
import {
  DefaultAzureCredential,
  getBearerTokenProvider,
} from "@azure/identity";

// Get environment variables
const azureOpenAIEndpoint = process.env.AZURE_OPENAI_ENDPOINT;
const azureOpenAIDeployment = process.env.AZURE_OPENAI_DEPLOYMENT_NAME;
const azureOpenAIVersion = process.env.OPENAI_API_VERSION;

// Check env variables
if (!azureOpenAIEndpoint || !azureOpenAIDeployment || !azureOpenAIVersion) {
  throw new Error(
    "Please ensure to set AZURE_OPENAI_DEPLOYMENT_NAME and AZURE_OPENAI_ENDPOINT in your environment variables."
  );
}

// Get Azure SDK client
const getClient = () => {
  const credential = new DefaultAzureCredential();
  const scope = "https://cognitiveservices.azure.com/.default";
  const azureADTokenProvider = getBearerTokenProvider(credential, scope);
  const assistantsClient = new AzureOpenAI({
    endpoint: azureOpenAIEndpoint,
    apiVersion: azureOpenAIVersion,
    azureADTokenProvider,
  });
  return assistantsClient;
};

const assistantsClient = getClient();

const options = {
  model: azureOpenAIDeployment, // Deployment name seen in Azure AI Studio
  name: "Math Tutor",
  instructions:
    "You are a personal math tutor. Write and run JavaScript code to answer math questions.",
  tools: [{ type: "code_interpreter" }],
};
const role = "user";
const message = "I need to solve the equation `3x + 11 = 14`. Can you help me?";

// Create an assistant
const assistantResponse = await assistantsClient.beta.assistants.create(
  options
);
console.log(`Assistant created: ${JSON.stringify(assistantResponse)}`);

// Create a thread
const assistantThread = await assistantsClient.beta.threads.create({});
console.log(`Thread created: ${JSON.stringify(assistantThread)}`);

// Add a user question to the thread
const threadResponse = await assistantsClient.beta.threads.messages.create(
  assistantThread.id,
  {
    role,
    content: message,
  }
);
console.log(`Message created: ${JSON.stringify(threadResponse)}`);

// Run the thread and poll it until it is in a terminal state
const runResponse = await assistantsClient.beta.threads.runs.createAndPoll(
  assistantThread.id,
  {
    assistant_id: assistantResponse.id,
  },
  { pollIntervalMs: 500 }
);
console.log(`Run created: ${JSON.stringify(runResponse)}`);

// Get the messages
const runMessages = await assistantsClient.beta.threads.messages.list(
  assistantThread.id
);
for await (const runMessageDatum of runMessages) {
  for (const item of runMessageDatum.content) {
    // types are: "image_file" or "text"
    if (item.type === "text") {
      console.log(`Message content: ${JSON.stringify(item.text?.value)}`);
    }
  }
}
Sign in to Azure with the following command:
az login
Run the JavaScript file.
node index.js
Assistant created: {"id":"asst_zXaZ5usTjdD0JGcNViJM2M6N","createdAt":"2024-04-08T19:26:38.000Z","name":"Math Tutor","description":null,"model":"daisy","instructions":"You are a personal math tutor. Write and run JavaScript code to answer math questions.","tools":[{"type":"code_interpreter"}],"fileIds":[],"metadata":{}}
Thread created: {"id":"thread_KJuyrB7hynun4rvxWdfKLIqy","createdAt":"2024-04-08T19:26:38.000Z","metadata":{}}
Message created: {"id":"msg_o0VkXnQj3juOXXRCnlZ686ff","createdAt":"2024-04-08T19:26:38.000Z","threadId":"thread_KJuyrB7hynun4rvxWdfKLIqy","role":"user","content":[{"type":"text","text":{"value":"I need to solve the equation `3x + 11 = 14`. Can you help me?","annotations":[]},"imageFile":{}}],"assistantId":null,"runId":null,"fileIds":[],"metadata":{}}
Created run
Run created: {"id":"run_P8CvlouB8V9ZWxYiiVdL0FND","object":"thread.run","status":"queued","model":"daisy","instructions":"You are a personal math tutor. Write and run JavaScript code to answer math questions.","tools":[{"type":"code_interpreter"}],"metadata":{},"usage":null,"assistantId":"asst_zXaZ5usTjdD0JGcNViJM2M6N","threadId":"thread_KJuyrB7hynun4rvxWdfKLIqy","fileIds":[],"createdAt":"2024-04-08T19:26:39.000Z","expiresAt":"2024-04-08T19:36:39.000Z","startedAt":null,"completedAt":null,"cancelledAt":null,"failedAt":null}
Message content: "The solution to the equation \\(3x + 11 = 14\\) is \\(x = 1\\)."
Message content: "Yes, of course! To solve the equation \\( 3x + 11 = 14 \\), we can follow these steps:\n\n1. Subtract 11 from both sides of the equation to isolate the term with x.\n2. Then, divide by 3 to find the value of x.\n\nLet me calculate that for you."
Message content: "I need to solve the equation `3x + 11 = 14`. Can you help me?"
It's important to remember that while the code interpreter gives the model the capability to respond to more complex queries by converting the questions into code and running that code iteratively in JavaScript until it reaches a solution, you still need to validate the response to confirm that the model correctly translated your question into a valid representation in code.
If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
- Learn more about how to use Assistants with our How-to guide on Assistants.
- Azure OpenAI Assistants API samples
Reference documentation | Library source code | Package (npm) |
- An Azure subscription - Create one for free
- Node.js LTS or ESM support.
- TypeScript installed globally
- The Azure CLI, used for passwordless authentication in a local development environment. Create the necessary context by signing in with the Azure CLI.
- An Azure OpenAI resource with a compatible model in a supported region.
- We recommend reviewing the Responsible AI transparency note and other Responsible AI resources to familiarize yourself with the capabilities and limitations of the Azure OpenAI Service.
- An Azure OpenAI resource with the gpt-4 (1106-preview) model deployed. This model was used when testing this example.
For passwordless authentication, you need to:
- Use the @azure/identity package.
- Assign the Cognitive Services User role to your user account. This can be done in the Azure portal under Access control (IAM) > Add role assignment.
- Sign in with the Azure CLI, such as az login.
Create a new folder assistants-quickstart to contain the application and open Visual Studio Code in that folder with the following command:
mkdir assistants-quickstart && code assistants-quickstart
Create the package.json with the following command:
npm init -y
Update the package.json to ECMAScript with the following command:
npm pkg set type=module
Install the OpenAI Assistants client library for JavaScript with:
npm install openai
For the recommended passwordless authentication:
npm install @azure/identity
Variable name | Value |
---|---|
AZURE_OPENAI_ENDPOINT |
This value can be found in the Keys and Endpoint section when examining your resource from the Azure portal. |
AZURE_OPENAI_DEPLOYMENT_NAME |
This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Model Deployments in the Azure portal. |
OPENAI_API_VERSION |
Learn more about API Versions. |
Learn more about keyless authentication and setting environment variables.
Caution
To use the recommended keyless authentication with the SDK, make sure that the AZURE_OPENAI_API_KEY environment variable isn't set.
In our code we're going to specify the following values:
Name | Description |
---|---|
Assistant name | Your deployment name that is associated with a specific model. |
Instructions | Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering questions. |
Model | This is the deployment name. |
Code interpreter | Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code. |
An individual assistant can access up to 128 tools including code interpreter, and any custom tools you create via functions.
Create the index.ts file with the following code:
import { AzureOpenAI } from "openai";
import {
  Assistant,
  AssistantCreateParams,
  AssistantTool,
} from "openai/resources/beta/assistants";
import { Message, MessagesPage } from "openai/resources/beta/threads/messages";
import { Run } from "openai/resources/beta/threads/runs/runs";
import { Thread } from "openai/resources/beta/threads/threads";

// Add `Cognitive Services User` to identity for Azure OpenAI resource
import {
  DefaultAzureCredential,
  getBearerTokenProvider,
} from "@azure/identity";

// Get environment variables
const azureOpenAIEndpoint = process.env.AZURE_OPENAI_ENDPOINT as string;
const azureOpenAIDeployment = process.env
  .AZURE_OPENAI_DEPLOYMENT_NAME as string;
const openAIVersion = process.env.OPENAI_API_VERSION as string;

// Check env variables
if (!azureOpenAIEndpoint || !azureOpenAIDeployment || !openAIVersion) {
  throw new Error(
    "Please ensure to set AZURE_OPENAI_DEPLOYMENT_NAME and AZURE_OPENAI_ENDPOINT in your environment variables."
  );
}

// Get Azure SDK client
const getClient = (): AzureOpenAI => {
  const credential = new DefaultAzureCredential();
  const scope = "https://cognitiveservices.azure.com/.default";
  const azureADTokenProvider = getBearerTokenProvider(credential, scope);
  const assistantsClient = new AzureOpenAI({
    endpoint: azureOpenAIEndpoint,
    apiVersion: openAIVersion,
    azureADTokenProvider,
  });
  return assistantsClient;
};

const assistantsClient = getClient();

const options: AssistantCreateParams = {
  model: azureOpenAIDeployment, // Deployment name seen in Azure AI Studio
  name: "Math Tutor",
  instructions:
    "You are a personal math tutor. Write and run JavaScript code to answer math questions.",
  tools: [{ type: "code_interpreter" } as AssistantTool],
};
const role = "user";
const message = "I need to solve the equation `3x + 11 = 14`. Can you help me?";

// Create an assistant
const assistantResponse: Assistant =
  await assistantsClient.beta.assistants.create(options);
console.log(`Assistant created: ${JSON.stringify(assistantResponse)}`);

// Create a thread
const assistantThread: Thread = await assistantsClient.beta.threads.create({});
console.log(`Thread created: ${JSON.stringify(assistantThread)}`);

// Add a user question to the thread
const threadResponse: Message =
  await assistantsClient.beta.threads.messages.create(assistantThread.id, {
    role,
    content: message,
  });
console.log(`Message created: ${JSON.stringify(threadResponse)}`);

// Run the thread and poll it until it is in a terminal state
const runResponse: Run = await assistantsClient.beta.threads.runs.createAndPoll(
  assistantThread.id,
  {
    assistant_id: assistantResponse.id,
  },
  { pollIntervalMs: 500 }
);
console.log(`Run created: ${JSON.stringify(runResponse)}`);

// Get the messages
const runMessages: MessagesPage =
  await assistantsClient.beta.threads.messages.list(assistantThread.id);
for await (const runMessageDatum of runMessages) {
  for (const item of runMessageDatum.content) {
    // types are: "image_file" or "text"
    if (item.type === "text") {
      console.log(`Message content: ${JSON.stringify(item.text?.value)}`);
    }
  }
}
Create the tsconfig.json file to transpile the TypeScript code and copy the following code for ECMAScript:
{
  "compilerOptions": {
    "module": "NodeNext",
    "target": "ES2022", // Supports top-level await
    "moduleResolution": "NodeNext",
    "skipLibCheck": true, // Avoid type errors from node_modules
    "strict": true // Enable strict type-checking options
  },
  "include": ["*.ts"]
}
Transpile from TypeScript to JavaScript.
tsc
Sign in to Azure with the following command:
az login
Run the code with the following command:
node index.js
Assistant created: {"id":"asst_zXaZ5usTjdD0JGcNViJM2M6N","createdAt":"2024-04-08T19:26:38.000Z","name":"Math Tutor","description":null,"model":"daisy","instructions":"You are a personal math tutor. Write and run JavaScript code to answer math questions.","tools":[{"type":"code_interpreter"}],"fileIds":[],"metadata":{}}
Thread created: {"id":"thread_KJuyrB7hynun4rvxWdfKLIqy","createdAt":"2024-04-08T19:26:38.000Z","metadata":{}}
Message created: {"id":"msg_o0VkXnQj3juOXXRCnlZ686ff","createdAt":"2024-04-08T19:26:38.000Z","threadId":"thread_KJuyrB7hynun4rvxWdfKLIqy","role":"user","content":[{"type":"text","text":{"value":"I need to solve the equation `3x + 11 = 14`. Can you help me?","annotations":[]},"imageFile":{}}],"assistantId":null,"runId":null,"fileIds":[],"metadata":{}}
Created run
Run created: {"id":"run_P8CvlouB8V9ZWxYiiVdL0FND","object":"thread.run","status":"queued","model":"daisy","instructions":"You are a personal math tutor. Write and run JavaScript code to answer math questions.","tools":[{"type":"code_interpreter"}],"metadata":{},"usage":null,"assistantId":"asst_zXaZ5usTjdD0JGcNViJM2M6N","threadId":"thread_KJuyrB7hynun4rvxWdfKLIqy","fileIds":[],"createdAt":"2024-04-08T19:26:39.000Z","expiresAt":"2024-04-08T19:36:39.000Z","startedAt":null,"completedAt":null,"cancelledAt":null,"failedAt":null}
Message content: "The solution to the equation \\(3x + 11 = 14\\) is \\(x = 1\\)."
Message content: "Yes, of course! To solve the equation \\( 3x + 11 = 14 \\), we can follow these steps:\n\n1. Subtract 11 from both sides of the equation to isolate the term with x.\n2. Then, divide by 3 to find the value of x.\n\nLet me calculate that for you."
Message content: "I need to solve the equation `3x + 11 = 14`. Can you help me?"
It's important to remember that while the code interpreter gives the model the capability to respond to more complex queries by converting the questions into code and running that code iteratively in JavaScript until it reaches a solution, you still need to validate the response to confirm that the model correctly translated your question into a valid representation in code.
If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
- Learn more about how to use Assistants with our How-to guide on Assistants.
- Azure OpenAI Assistants API samples
- An Azure subscription - Create one for free
- Python 3.8 or later version
- An Azure OpenAI resource with a compatible model in a supported region.
- We recommend reviewing the Responsible AI transparency note and other Responsible AI resources to familiarize yourself with the capabilities and limitations of the Azure OpenAI Service.
- An Azure OpenAI resource with the gpt-4 (1106-preview) model deployed. This model was used when testing this example.
To successfully make a call against Azure OpenAI, you'll need the following:
Variable name | Value |
---|---|
ENDPOINT |
The service endpoint can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find the endpoint via the Deployments page in Azure AI Studio. An example endpoint is: https://docs-test-001.openai.azure.com/ . |
API-KEY |
This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2 . |
DEPLOYMENT-NAME |
This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Deployments in the Azure portal or via the Deployments page in Azure AI Studio. |
Go to your resource in the Azure portal. The Endpoint and Keys can be found in the Resource Management section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either KEY1 or KEY2. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
Create and assign persistent environment variables for your key and endpoint.
Important
If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.
For more information about AI services security, see Authenticate requests to Azure AI services.
setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
Note
With Azure OpenAI, the model parameter requires your model deployment name. If your model deployment name is different from the underlying model name, adjust your code to "model": "{your-custom-model-deployment-name}".
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-05-01-preview \
-H "api-key: $AZURE_OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"instructions": "You are an AI assistant that can write code to help answer math questions.",
"name": "Math Assist",
"tools": [{"type": "code_interpreter"}],
"model": "gpt-4-1106-preview"
}'
An individual assistant can access up to 128 tools including code interpreter, as well as any custom tools you create via functions.
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024-05-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY" \
-d ''
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/messages?api-version=2024-05-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY" \
-d '{
"role": "user",
"content": "I need to solve the equation `3x + 11 = 14`. Can you help me?"
}'
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/runs?api-version=2024-05-01-preview \
-H "api-key: $AZURE_OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"assistant_id": "asst_abc123",
}'
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/runs/run_abc123?api-version=2024-05-01-preview \
-H "api-key: $AZURE_OPENAI_API_KEY"
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/messages?api-version=2024-05-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY"
In this example we create an assistant with code interpreter enabled. When we ask the assistant a math question, it translates the question into Python code and executes the code in a sandboxed environment to determine the answer. The code the model creates and tests to arrive at an answer is:
from sympy import symbols, Eq, solve
# Define the variable
x = symbols('x')
# Define the equation
equation = Eq(3*x + 11, 14)
# Solve the equation
solution = solve(equation, x)
solution
It is important to remember that while code interpreter gives the model the capability to respond to more complex queries by converting the questions into code and running that code iteratively in the Python sandbox until it reaches a solution, you still need to validate the response to confirm that the model correctly translated your question into a valid representation in code.
If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
- Learn more about how to use Assistants with our How-to guide on Assistants.
- Azure OpenAI Assistants API samples