I'm encountering strange behavior with the Assistants API. Although the assistant successfully responds with information about the created asset, it does not include the expected file and annotation references for download in the content of the response, as documented here and here.
The issue does not occur with GPT-4 128K Turbo, which correctly returns the expected response format with the file and annotation references.
Response from GPT-4 128K Turbo (file reference present):
{
  "id": "XXX",
  "assistant_id": "YYY",
  "content": [
    {
      "image_file": {
        "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD"
      },
      "type": "image_file"
    }
  ]
}
Response from GPT-4o (annotations empty, no file reference):
{
  "id": "XXX",
  "assistant_id": "YYY",
  "attachments": null,
  "completed_at": null,
  "content": [
    {
      "text": {
        "annotations": [],
        "value": "Your document is ready and you can download here '<a href=\"/mnt/data/your_doc.docx\">download</a>"
      },
      "type": "text"
    }
  ]
}
Analyzing the GPT-4o response, I noticed that there are no references for retrieving the assets (images or annotations): the attachments field is null and the file_ids array is empty.
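For reference, this is roughly where client code would look for those references. A minimal sketch (assuming the message is available as a parsed dict in the shape shown above; the helper name is my own, not part of any SDK):

```python
def extract_file_ids(message: dict) -> list[str]:
    """Collect file IDs from image_file content blocks and from
    file_path annotations inside text blocks."""
    file_ids = []
    for block in message.get("content", []):
        if block["type"] == "image_file":
            file_ids.append(block["image_file"]["file_id"])
        elif block["type"] == "text":
            for ann in block["text"].get("annotations", []):
                if ann.get("type") == "file_path":
                    file_ids.append(ann["file_path"]["file_id"])
    return file_ids

# The broken GPT-4o response above yields nothing: annotations is empty,
# so the /mnt/data/ link in "value" cannot be resolved to a file_id.
broken = {"content": [{"type": "text",
                       "text": {"annotations": [], "value": "..."}}]}
print(extract_file_ids(broken))  # []
```

Against the GPT-4 Turbo response, the same helper would return the `assistant-…` file_id from the image_file block.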
Has anyone experienced similar issues, or does anyone have insight into why the annotations and file_ids are missing from the response? Any suggestions on how to resolve this discrepancy?
Thank you!
{
  "text": {
    "annotations": [],
    "value": "Your document is ready and you can download here '<a href=\"/mnt/data/your_doc.docx\">download</a>"
  },
  "type": "text"
}
The root cause is quite intriguing.
When migrating to GPT-4o, we had modified the assistant instructions by appending the following phrase:
"Format the answer as HTML. Here example: omitted example for brevity"
It appears that this instruction causes the model to put the HTML inside the "value" property; however, for some unknown reason, it also drops the "annotations" and image references from the response.
I was able to reproduce this issue on the OpenAI Assistants API (non-Azure) and observed the same behavior.
After removing that final instruction, the functionality returned to normal.
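Until the underlying model behavior is explained, a cheap guard is to detect responses that mention a sandbox path but carry no annotations, and then retry or fall back. A minimal sketch (the function name and the check itself are my own suggestion, not part of the SDK):

```python
def has_dangling_sandbox_link(text_block: dict) -> bool:
    """True if the text mentions /mnt/data/ but carries no annotations,
    meaning the link cannot be resolved to a downloadable file_id."""
    text = text_block.get("value", "")
    annotations = text_block.get("annotations", [])
    return "/mnt/data/" in text and not annotations

# The failing GPT-4o text block trips the check:
bad = {"annotations": [],
       "value": 'Your document is ready: <a href="/mnt/data/your_doc.docx">download</a>'}
print(has_dangling_sandbox_link(bad))  # True
```

A caller could treat a `True` result as a signal to re-run the request without the HTML-formatting instruction.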