Azure AI Foundry Agents - Project-scoped /openai/v1/responses endpoint returns HTTP 431 for ALL requests

Hemant Punamiya 0 Reputation points
2026-04-05T08:19:34.7766667+00:00

Environment:

  • Region: Central India (centralindia)
  • Resource: Azure AI Services (Cognitive Services) - brsr-quant-pot-resource
  • Project: brsr_quant_pot
  • SDK: azure-ai-projects==2.0.1, openai==2.30.0
  • Model deployment: gpt-4o-mini (gpt-4.1-mini) - Status: Succeeded

Problem: All agents deployed in our Azure AI Foundry project return "An error occurred while processing your request" in the Playground. When tested via SDK or REST API, the project-scoped responses endpoint returns HTTP 431 (Request Header Fields Too Large) for every single request — regardless of header size, auth method, or agent complexity.

Key findings from testing:

  1. Agent management APIs work fine on the project-scoped endpoint (project.agents.list(), create_version(), and delete() all succeed).
  2. The project-scoped responses endpoint fails for ALL requests:
POST https://<Resource_id>.services.ai.azure.com/api/projects/brsr_quant_pot/openai/v1/responses
→ HTTP 431 Request Header Fields Too Large
  3. Even with a tiny 84-byte api-key header (no Bearer token), it still returns 431.
  4. Even with only ONE minimal agent (no tools, 40-character instructions), it still returns 431.
  5. The resource-level endpoint works fine:
POST https://<Resource_id>.services.ai.azure.com/openai/v1/responses
→ HTTP 200 (but does not support agent_reference)
  6. The Azure Function Apps backing the agents are running and return correct data when called directly with the function key.
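For completeness, the same side-by-side probe can be done from stdlib-only Python (a sketch: RESOURCE_HOST and the key value are placeholders for your own resource, and probe() is what comes back 431 on the project-scoped path in my environment):

```python
import urllib.request
import urllib.error

# Placeholders -- substitute your own resource host, project name, and key.
RESOURCE_HOST = "<Resource_id>.services.ai.azure.com"
PROJECT = "brsr_quant_pot"


def build_urls(host: str, project: str) -> dict:
    """Return the project-scoped and resource-level /openai/v1/responses URLs."""
    return {
        "project": f"https://{host}/api/projects/{project}/openai/v1/responses",
        "resource": f"https://{host}/openai/v1/responses",
    }


def probe(url: str, api_key: str) -> int:
    """POST a minimal Responses payload and return the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=b'{"input":"Hello","model":"gpt-4o-mini"}',
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 431 on the project-scoped path, per the findings above


if __name__ == "__main__":
    for name, url in build_urls(RESOURCE_HOST, PROJECT).items():
        print(name, url)
```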

Steps to reproduce:

# Get API key
API_KEY=$(az cognitiveservices account keys list --name brsr-quant-pot-resource --resource-group rs-esg-emissions --query key1 -o tsv)

# This FAILS with 431 (project-scoped):
curl -X POST "https://<Resource_id>.services.ai.azure.com/api/projects/brsr_quant_pot/openai/v1/responses" \
  -H "api-key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input":"Hello","model":"gpt-4o-mini"}'
# Returns: Request Header Fields Too Large (HTTP 431)

# This WORKS (resource-level):
curl -X POST "https://<Resource_id>.services.ai.azure.com/openai/v1/responses" \
  -H "api-key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input":"Hello","model":"gpt-4o-mini"}'
# Returns: HTTP 200

What this rules out:

  • NOT a header size issue (84-byte key still triggers 431)
  • NOT an agent/spec complexity issue (1 minimal toolless agent still triggers 431)
  • NOT a model deployment issue (model is Succeeded, works on resource-level endpoint)
  • NOT a client SDK issue (raw curl also fails)
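To quantify the "not a header size issue" point: totaling the headers the way a server counts them on the wire puts this request far below the thresholds that normally trigger 431 (nginx's default header buffer is 8 KB; the actual gateway limit here is unknown, so 8192 below is only an illustrative bound):

```python
def total_header_bytes(headers: dict) -> int:
    """Approximate the on-the-wire size of the request headers ('Name: value\\r\\n')."""
    return sum(len(k) + len(v) + 4 for k, v in headers.items())  # +4 for ': ' and '\r\n'


headers = {
    "api-key": "x" * 84,                 # stand-in for the 84-byte key from the report
    "Content-Type": "application/json",
}
size = total_header_bytes(headers)
print(size)  # 127 bytes: nowhere near a ~8 KB limit
assert size < 8192
```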

Conclusion: The project routing layer on the /openai/v1/responses path is broken for this resource/project. Other paths on the same project-scoped endpoint (agent management) work correctly. This blocks all agent Playground testing and SDK-based agent queries.

Subscription ID: <REDACTED at Support side>

Is this a known service issue in the Central India region? Is there a workaround to use agent_reference on the resource-level endpoint? Or does the project need to be recreated?

Azure AI Bot Service

An Azure service that provides an integrated environment for bot development.
