Azure OpenAI Assistants Code Interpreter (Preview)
Code Interpreter allows the Assistants API to write and run Python code in a sandboxed execution environment. With Code Interpreter enabled, your Assistant can run code iteratively to solve challenging code, math, and data analysis problems. When your Assistant writes code that fails to run, it can iterate on that code, modifying and rerunning it until execution succeeds.
Important
Code Interpreter has additional charges beyond the token-based fees for Azure OpenAI usage. If your Assistant calls Code Interpreter simultaneously in two different threads, two Code Interpreter sessions are created. Each session is active by default for one hour.
Note
- File search can ingest up to 10,000 files per assistant - 500 times more than before. It is fast, supports parallel queries through multi-threaded searches, and features enhanced reranking and query rewriting.
- Vector store is a new object in the API. Once a file is added to a vector store, it's automatically parsed, chunked, and embedded, made ready to be searched. Vector stores can be used across assistants and threads, simplifying file management and billing.
- We've added support for the `tool_choice` parameter, which can be used to force the use of a specific tool (like file search, code interpreter, or a function) in a particular run; see the sketch after this note.
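For example, a minimal sketch of forcing Code Interpreter on a single run (assuming an existing `thread.id` and `assistant.id`, with a client configured as in the sections below):

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    tool_choice={"type": "code_interpreter"}  # force this run to use Code Interpreter
)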
Code interpreter support
Supported models
The models page contains the most up-to-date information on regions/models where Assistants and code interpreter are supported.
We recommend using assistants with the latest models to take advantage of the new features, larger context windows, and more up-to-date training data.
API Versions
- 2024-02-15-preview
- 2024-05-01-preview
Supported file types
| File format | MIME Type |
|---|---|
| .c | text/x-c |
| .cpp | text/x-c++ |
| .csv | application/csv |
| .docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document |
| .html | text/html |
| .java | text/x-java |
| .json | application/json |
| .md | text/markdown |
| .pdf | application/pdf |
| .php | text/x-php |
| .pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation |
| .py | text/x-python |
| .py | text/x-script.python |
| .rb | text/x-ruby |
| .tex | text/x-tex |
| .txt | text/plain |
| .css | text/css |
| .jpeg | image/jpeg |
| .jpg | image/jpeg |
| .js | text/javascript |
| .gif | image/gif |
| .png | image/png |
| .tar | application/x-tar |
| .ts | application/typescript |
| .xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet |
| .xml | application/xml or text/xml |
| .zip | application/zip |
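Uploads in unsupported formats are rejected, so a quick local check against the table above can save a round trip. A minimal sketch (the `is_supported` helper is illustrative, not part of any SDK):

from pathlib import Path

# Extensions taken from the supported file types table above
SUPPORTED_EXTENSIONS = {
    ".c", ".cpp", ".csv", ".docx", ".html", ".java", ".json", ".md",
    ".pdf", ".php", ".pptx", ".py", ".rb", ".tex", ".txt", ".css",
    ".jpeg", ".jpg", ".js", ".gif", ".png", ".tar", ".ts", ".xlsx",
    ".xml", ".zip",
}

def is_supported(path: str) -> bool:
    # Compare the lowercased extension against the table above
    return Path(path).suffix.lower() in SUPPORTED_EXTENSIONS

print(is_supported("sales_data.csv"))  # True
print(is_supported("archive.rar"))     # False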
File upload API reference
Assistants use the same API for file upload as fine-tuning. When uploading a file, you have to specify an appropriate value for the purpose parameter.
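For example, a minimal sketch contrasting the two purpose values (file names are illustrative; the client is configured as in the sections below):

# For Assistants tools such as Code Interpreter
assistant_file = client.files.create(
    file=open("sales_data.csv", "rb"),
    purpose="assistants"
)

# The same endpoint handles fine-tuning uploads, with a different purpose
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune"
)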
Enable Code Interpreter
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

assistant = client.beta.assistants.create(
    instructions="You are an AI assistant that can write code to help answer math questions",
    model="<REPLACE WITH MODEL DEPLOYMENT NAME>",  # your model deployment name, not the base model name
    tools=[{"type": "code_interpreter"}]
)
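Code Interpreter can also be enabled on an existing assistant. A minimal sketch using the update endpoint (note that `tools` replaces the assistant's entire tool list):

assistant = client.beta.assistants.update(
    assistant_id=assistant.id,
    tools=[{"type": "code_interpreter"}]  # overwrites any previously configured tools
)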
Upload file for Code Interpreter
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

# Upload a file with an "assistants" purpose
file = client.files.create(
    file=open("speech.py", "rb"),
    purpose="assistants"
)

# Create an assistant that can use the uploaded file
assistant = client.beta.assistants.create(
    instructions="You are an AI assistant that can write code to help answer math questions.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
    tool_resources={"code_interpreter": {"file_ids": [file.id]}}
)
Pass file to an individual thread
In addition to making files accessible at the Assistants level, you can pass files so they're accessible only to a particular thread.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

thread = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?",
            "attachments": [
                {
                    "file_id": file.id,  # file IDs look like: "assistant-R9uhPxvRKGH3m0x5zBOhMjd2"
                    "tools": [{"type": "code_interpreter"}]
                }
            ]
        }
    ]
)
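From here you can run the thread and read the Assistant's reply. A minimal sketch that polls the run until it reaches a terminal state (assuming the assistant created earlier; error handling is simplified):

import time

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id
)

# Poll until the run leaves the queued/in_progress states
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    # Messages are returned newest first; print the Assistant's latest reply
    print(messages.data[0].content[0].text.value)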
Download files generated by Code Interpreter
Files generated by Code Interpreter are returned in the Assistant's message responses:
{
"id": "msg_oJbUanImBRpRran5HSa4Duy4",
"assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4",
"content": [
{
"image_file": {
"file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD"
},
"type": "image_file"
},
    ...
  ]
}
You can download a generated file by passing its file ID to the Files API:
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)
image_data = client.files.content("assistant-abc123")
image_data_bytes = image_data.read()
with open("./my-image.png", "wb") as file:
file.write(image_data_bytes)
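To collect everything Code Interpreter produced in a thread, you can walk the content blocks of each message. A minimal sketch that downloads generated images (assuming the `thread.id` from the earlier examples):

messages = client.beta.threads.messages.list(thread_id=thread.id)

for message in messages.data:
    for block in message.content:
        # Image output appears as "image_file" content blocks
        if block.type == "image_file":
            file_id = block.image_file.file_id
            data = client.files.content(file_id)
            with open(f"./{file_id}.png", "wb") as f:
                f.write(data.read())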
See also
- File Upload API reference
- Assistants API Reference
- Learn more about how to use Assistants with our How-to guide on Assistants.
- Azure OpenAI Assistants API samples