We are encountering an issue when executing Python code with code_interpreter in Azure AI Foundry.
When running a graph generation task (e.g., using matplotlib), one of the following occurs:
Run failed: {'code': 'server_error', 'message': 'Sorry, something went wrong.'}

Even when the error occurs, the Run data includes an image ID, though no corresponding file exists in the file list.
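For context, here is a minimal sketch of the kind of call that triggers this for us. It assumes the OpenAI-compatible Assistants-style surface; the endpoint/key environment variable names, API version, and model deployment name are placeholders, not our exact configuration:

```python
import os
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version - adjust to your own project.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

# Assistant with the code_interpreter tool enabled.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="You are a data assistant. Use Python to draw charts.",
    tools=[{"type": "code_interpreter"}],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Plot y = x**2 for x in range(10) with matplotlib and return the image.",
)

run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)
print(run.status)      # 'failed' in the problem case
print(run.last_error)  # code 'server_error', message 'Sorry, something went wrong.'

# Even on failure, an image_file ID can show up in the thread messages.
for message in client.beta.threads.messages.list(thread_id=thread.id):
    for block in message.content:
        if block.type == "image_file":
            print("image file id:", block.image_file.file_id)
```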
Important note
Existing projects that were created earlier continue to work normally, but the issue occurs consistently in all newly created projects, regardless of configuration or region.
It seems that the image generation itself succeeds, but something fails afterward — possibly during the backend storage upload or image reference process.
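One way we check this hypothesis is a small sketch like the following (reusing the client from the snippet above; the file ID is a placeholder for the image ID reported in the failed run's message data):

```python
from openai import NotFoundError

file_id = "assistant-..."  # placeholder: image ID taken from the failed run's messages

try:
    info = client.files.retrieve(file_id)
    print("file exists:", info.id, info.bytes, info.purpose)
    # Download the rendered chart if the file really was stored.
    png_bytes = client.files.content(file_id).read()
    with open("chart.png", "wb") as f:
        f.write(png_bytes)
except NotFoundError:
    # The run referenced an image ID, but no file was actually stored.
    print("image ID is referenced by the run, but the file does not exist")
```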
Has anyone else experienced this issue recently? Could this be a known limitation or temporary backend issue in Azure AI Foundry? Any advice or workaround would be appreciated.
Environment:
code_interpreter

I have an update on this.
This functionality was not working for me and my team until yesterday, but today it is working just fine. I guess the team concerned has already pushed a fix for this.
Hello Gyeoun Jung,
Welcome to Microsoft Q&A and thank you for reaching out.
I understand that you're running into some frustrating issues when trying to generate graphs using the code_interpreter feature in your newly created Azure AI Foundry projects. The server_error, missing images, and “resource usage restricted” messages can definitely be disruptive; thank you for sharing such detailed observations.
Based on your description and internal findings, this behavior appears to be a recent backend regression affecting new project environments, rather than an issue with your specific code or configuration. The image generation likely succeeds, but the failure occurs during the backend storage upload or image reference process.
Here are a few workarounds and troubleshooting steps that you can try:
Wait and Retry
Transient errors (server_error or resource usage restricted) can occur when regional clusters are under heavy load. Waiting a few minutes and retrying often helps, especially during peak activity periods; a minimal retry sketch is included below this list.

Check Resource Availability and Status
Validate Deployment and Region Settings
Confirm the model deployment (e.g., gpt-4o, version 2024-11-20) and region support for the code_interpreter feature.

Monitor Request Volume and Limits
Review and Adjust Quotas
Recreate or Clone Projects
Check Logs and Run IDs
Re-run the failing task, capture the run ID (e.g., run_cqSqofeP98UkrHFylbWzIQxF), and verify whether the system still generates an image ID; the inspection sketch below this list shows one way to collect that information.

As a temporary workaround, some users have reported success by:
Please refer to Performing analysis on request volumes for more details.
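For the Wait and Retry and Check Logs and Run IDs steps above, here is a minimal sketch of how this could look. It assumes a `client`, `thread`, and `assistant` created as in the snippet in your question (an OpenAI-compatible Assistants-style surface); the retry counts and delays are illustrative values, not recommended limits:

```python
import time

def run_with_retry(client, thread_id, assistant_id, attempts=4, base_delay=10.0):
    """Retry transient server_error failures with exponential backoff."""
    for attempt in range(attempts):
        run = client.beta.threads.runs.create_and_poll(
            thread_id=thread_id, assistant_id=assistant_id
        )
        if run.status == "completed":
            return run
        # Log the run ID and error so they can be shared with support.
        print(f"attempt {attempt + 1}: run {run.id} ended with {run.status}: {run.last_error}")
        if run.last_error and run.last_error.code != "server_error":
            break  # not a transient backend error; stop retrying
        time.sleep(base_delay * (2 ** attempt))
    return run

run = run_with_retry(client, thread.id, assistant.id)

# Inspect the run steps and collect any image file IDs the backend reported.
for step in client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id):
    print(step.id, step.type, step.status)

for message in client.beta.threads.messages.list(thread_id=thread.id, run_id=run.id):
    for block in message.content:
        if block.type == "image_file":
            print("image file id:", block.image_file.file_id)
```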
I hope this helps. Do let me know if you have any further queries.
Thank you!