I'm not able to pass an image to the GPT-4o model through the Azure OpenAI Assistants API. The create message endpoint of the deployed model only supports a string in the content.
/threads/{thread_id}/messages
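For reference, a plain string in content goes through without any issue. Below is a minimal sketch of that working call; the client setup, environment variable names, and API version are placeholders for my environment, not a required configuration.

import os

from openai import AsyncAzureOpenAI

# Placeholder client setup (endpoint, key and api_version are assumptions).
client = AsyncAzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

async def send_text_only(thread_id: str, text_content: str):
    # Passing content as a plain string is accepted by the same endpoint.
    return await client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=text_content,
    )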
It gives an error when I try to pass content in the array format that is needed for sending an image to the model, as shown below:
content = [
    {
        "type": "text",
        "text": {
            "value": text_content,
            "annotations": []
        }
    },
    {
        "type": "image_url",
        "image_url": {
            "url": f"data:image/{extension};base64,{base64image}",
        }
    }
]

message = await self.client.beta.threads.messages.create(
    thread_id=thread_id,
    role="user",
    content=content,
)
This gives the following error when making a POST request to the Azure endpoint:
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid type for 'content[0].text': expected a string, but got an object instead.", 'type': 'invalid_request_error', 'param': 'content[0].text', 'code': 'invalid_type'}}
Is there any other way I can pass the image files? GPT-4o natively supports image processing, and it does process images when the Chat Completions API is used.
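For comparison, this is roughly how the same base64 image goes through the Chat Completions API and is processed fine; the deployment name "gpt-4o" and the API version are placeholders for my setup.

import os

from openai import AsyncAzureOpenAI

client = AsyncAzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

async def ask_about_image(text_content: str, extension: str, base64image: str) -> str:
    # The same data URL that fails on the Assistants message endpoint
    # is accepted here and the image is interpreted by the model.
    response = await client.chat.completions.create(
        model="gpt-4o",  # Azure deployment name (placeholder)
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text_content},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/{extension};base64,{base64image}"
                        },
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content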