Server error occurs in Azure AI Foundry when generating graphs using code_interpreter

Gyeoun Jung 40 Reputation points
2025-10-24T00:28:39.0966667+00:00

We are encountering an issue when executing Python code with code_interpreter in Azure AI Foundry.

When running a graph generation task (e.g., using matplotlib), one of the following occurs:

  • A server error is returned: Run failed: {'code': 'server_error', 'message': 'Sorry, something went wrong.'}
  • Sometimes no explicit error appears, but the graph is not displayed. Occasionally, messages like “Resource usage restricted” or “Request limit exceeded” are returned.

Even when the error occurs, the Run data includes an image ID, though no corresponding file exists in the file list.
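
For reference, here is a minimal sketch of the kind of request that triggers this, shown through the OpenAI Python SDK's Assistants interface; the endpoint, API version, and deployment name are placeholders, and the Azure AI Foundry Agents SDK exposes equivalent calls:

    import os
    from openai import AzureOpenAI

    # Placeholder endpoint/key/API version - adjust to your own project.
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-05-01-preview",
    )

    # Assistant with the code_interpreter tool; "gpt-4o" is the deployment name.
    assistant = client.beta.assistants.create(
        model="gpt-4o",
        tools=[{"type": "code_interpreter"}],
    )
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content="Plot y = x**2 for x in range(10) with matplotlib and return the image.",
    )

    # Poll until the run finishes, then inspect status and last_error.
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant.id
    )
    print(run.status)      # "failed" when the issue occurs
    print(run.last_error)  # code='server_error', message='Sorry, something went wrong.'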

Important note

Existing projects that were created earlier continue to work normally, but the issue occurs consistently in all newly created projects, regardless of configuration or region.

It seems that the image generation itself succeeds, but something fails afterward — possibly during the backend storage upload or image reference process.

Has anyone else experienced this issue recently? Could this be a known limitation or temporary backend issue in Azure AI Foundry? Any advice or workaround would be appreciated.

Environment:

  • Model: GPT-4o
  • Feature: Agent + code_interpreter
  • Reproducibility: 100% (in newly created projects only)
  • Tested regions: Japan East, East US, East US 2

Answer accepted by question author
  Anubhav Chhabra 80 Reputation points
    2025-11-07T15:57:24.7666667+00:00

    I have an update on this.

    This functionality was not working for me and my team until yesterday, but today it is working just fine. It appears the responsible team has already pushed a fix for this.


Answer accepted by question author
  SRILAKSHMI C 11,140 Reputation points Microsoft External Staff Moderator
    2025-10-24T05:45:06.6633333+00:00

    Hello Gyeoun Jung,

    Welcome to Microsoft Q&A and thank you for reaching out.

    I understand that you're running into some frustrating issues when trying to generate graphs using the code_interpreter feature in your newly created Azure AI Foundry projects. The server_error, missing images, and “resource usage restricted” messages can definitely be disruptive. Thank you for sharing such detailed observations.

    Based on your description and internal findings, this behavior appears to be a recent backend regression affecting new project environments, rather than an issue with your specific code or configuration. The image generation likely succeeds, but the failure occurs during the backend storage upload or image reference process.

    Here are a few workarounds and troubleshooting steps you can try:

    Wait and Retry

    • Temporary server errors (like server_error or resource usage restricted) can occur when regional clusters are under heavy load. Waiting a few minutes and retrying often helps, especially during peak activity periods.
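
    A minimal retry sketch, assuming a hypothetical helper start_run() that executes the agent run and raises RuntimeError on transient failures such as server_error:

        import random
        import time

        def run_with_retries(start_run, max_attempts=5):
            # start_run is a hypothetical callable supplied by the caller; it
            # should raise RuntimeError when the run ends with a transient error.
            for attempt in range(1, max_attempts + 1):
                try:
                    return start_run()
                except RuntimeError as err:
                    if attempt == max_attempts:
                        raise
                    # Exponential backoff with jitter: ~2, 4, 8, ... seconds.
                    delay = 2 ** attempt + random.uniform(0, 1)
                    print(f"Attempt {attempt} failed ({err}); retrying in {delay:.1f}s")
                    time.sleep(delay)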

    Check Resource Availability and Status

    • Check the Azure status page and your subscription's Service Health for any ongoing incidents in the regions you’ve tested (Japan East, East US, East US 2).
    • If your older projects continue to function correctly, this likely points to resource or configuration constraints tied to new project setups.

    Validate Deployment and Region Settings

    • Confirm your deployment model (gpt-4o, version 2024-11-20) and region support for the code_interpreter feature.
    • Try duplicating one of your older working projects and running the same task to confirm it works as expected; this helps isolate environment-level issues.

    Monitor Request Volume and Limits

    • Use Azure Monitor to check whether your request volume or concurrent job limits are being exceeded (a query sketch follows this list).
    • If needed, you can scale your service or adjust limits to accommodate higher workloads.
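
    As a rough illustration, call-volume metrics can be pulled with the azure-monitor-query package; the resource ID and metric names below are assumptions that depend on the resource type backing your deployment, so confirm them in the portal's Metrics blade:

        from datetime import timedelta

        from azure.identity import DefaultAzureCredential
        from azure.monitor.query import MetricAggregationType, MetricsQueryClient

        # Placeholder resource ID of the Azure AI services / Azure OpenAI resource.
        resource_id = (
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
            "Microsoft.CognitiveServices/accounts/<account-name>"
        )

        client = MetricsQueryClient(DefaultAzureCredential())
        response = client.query_resource(
            resource_id,
            metric_names=["TotalCalls", "TotalErrors"],  # assumed metric names
            timespan=timedelta(days=1),
            granularity=timedelta(hours=1),
            aggregations=[MetricAggregationType.TOTAL],
        )

        # Print hourly totals so spikes against your limits are easy to spot.
        for metric in response.metrics:
            for series in metric.timeseries:
                for point in series.data:
                    print(metric.name, point.timestamp, point.total)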

    Review and Adjust Quotas

    • Go to Azure AI Foundry → Management Center → Quotas and verify available capacity for Standard (Global) and Code Interpreter features.
    • If you see restrictions, request a quota increase through the Azure portal or temporarily pause other deployments to free resources.

    Recreate or Clone Projects

    • Since this issue only occurs in newly created projects, try duplicating a working older project or re-creating a new one in a different region.
    • Ensure all project configurations (model, deployment type, and region) match those of the older setup.

    Check Logs and Run IDs

    • When the run fails, note the Run ID (e.g., run_cqSqofeP98UkrHFylbWzIQxF) and verify if the system still generates an image ID.
    • If the image ID exists but no file is present, it confirms a storage or file-linking issue in the backend pipeline.
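
    A minimal sketch of that check, using the OpenAI-compatible Assistants client (the client and thread objects are assumed to come from the code that started the run):

        from openai import NotFoundError

        # List the run's messages and collect every referenced image file ID.
        messages = client.beta.threads.messages.list(thread_id=thread.id)
        image_file_ids = [
            part.image_file.file_id
            for message in messages.data
            for part in message.content
            if part.type == "image_file"
        ]

        for file_id in image_file_ids:
            try:
                info = client.files.retrieve(file_id)
                print(f"{file_id}: found ({info.bytes} bytes)")
            except NotFoundError:
                # An ID that cannot be resolved confirms a storage/file-linking failure.
                print(f"{file_id}: referenced by the run but missing from the file list")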

    As a temporary workaround, some users have reported success by:

    • Running the same code in an older, pre-existing project, or
    • Deploying the model in an alternate region such as West Europe or Southeast Asia.

    Please refer to this article: Performing analysis on request volumes.

    I hope this helps. Do let me know if you have any further queries.

    Thank you!
