@Jhuze Welcome to the Microsoft Q&A forum, and thank you for posting your query here!

I tried the sample code below and it worked as expected: I received the image description in the response with a 200 (success) status code.
```python
import base64
import requests

# Encode the image to base64
sLongImageFn = 'newimage.jpg'
with open(sLongImageFn, 'rb') as f:
    sImageData = base64.b64encode(f.read()).decode('ascii')

sEndpoint = 'https://navbaoaigptvision1.openai.azure.com/'
sKey = '<your-api-key>'  # replace with your Azure OpenAI resource key

# dData is copied from the tutorial
dData = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this picture:"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{sImageData}"
                    }
                }
            ]
        }
    ],
    "max_tokens": 100,
    "stream": False
}

# Make the API request
response = requests.post(
    f'{sEndpoint}openai/deployments/test/chat/completions?api-version=2023-12-01-preview',
    headers={'api-key': sKey, 'Content-Type': 'application/json'},
    json=dData
)

# Print the response
print(response.json())
```
Please try with the above sample code for now and let me know if that helps.
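Once the call returns 200, the generated description sits inside the standard chat-completions response shape. As a sketch (the `sample` dict below uses made-up illustrative values; in practice you would pass `response.json()` from the snippet above):

```python
# Illustrative response body in the standard chat-completions shape;
# the real values come from response.json() after a successful call.
sample = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "A red bicycle leaning against a brick wall."
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {"prompt_tokens": 1180, "completion_tokens": 11, "total_tokens": 1191}
}

def extract_description(payload: dict) -> str:
    """Return the assistant's text from a chat-completions response body."""
    return payload["choices"][0]["message"]["content"]

print(extract_description(sample))
```

It is also worth checking `response.status_code` before indexing into the body, since error responses carry an `error` object instead of `choices`.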

We currently support PNG (.png), JPEG (.jpeg and .jpg), WEBP (.webp), and non-animated GIF (.gif). Could you try with an image in one of these formats and check?
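A quick pre-check on the client side can catch unsupported files before sending the request. This is a minimal sketch based on the format list above (an extension check only, so it cannot tell an animated GIF from a static one); the helper names are mine, not part of the API:

```python
from pathlib import Path

# Formats from the supported list above (note: .gif must be non-animated,
# which an extension check alone cannot verify).
SUPPORTED = {
    ".png": "image/png",
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".webp": "image/webp",
    ".gif": "image/gif",
}

def is_supported_image(filename: str) -> bool:
    """Check the file extension against the supported image formats."""
    return Path(filename).suffix.lower() in SUPPORTED

def mime_for(filename: str) -> str:
    """Pick the MIME type for the data URL instead of hardcoding image/jpeg."""
    return SUPPORTED[Path(filename).suffix.lower()]

print(is_supported_image("newimage.jpg"))  # True
print(is_supported_image("diagram.bmp"))   # False
print(mime_for("photo.PNG"))               # image/png
```

The `mime_for` helper can replace the hardcoded `image/jpeg` prefix in the data URL when you send non-JPEG files.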

Also, please refer to the limitations section for image support in the GPT-4 Vision preview feature.

Hope this helps. If you have any follow-up questions, please let me know; I would be happy to help.