Hello John Kronlokken,
I see you are encountering a 400 Bad Request error when attempting to create a batch job for the gpt-5-mini model.
Here is the fix: the "model" field in your JSONL must match your deployment name, not the base model name.
In Azure OpenAI Batch, the model field inside your JSONL file is not the base model name (e.g., "gpt-5-mini"). It must match your Deployment Name exactly.
Scenario: You deployed the model gpt-5-mini, but did you also name the deployment gpt-5-mini?
- Example: If you named your deployment my-gpt5-test, but your JSONL says "model": "gpt-5-mini", the batch job will fail validation with a 400 error.
- Fix: Ensure the value in your JSONL matches your deployment name in Azure AI Foundry exactly (case-sensitive).
For example, if your Deployment Name is "my-gpt5-deployment", each line of the JSONL should look like (the custom_id and message content below are placeholders):
{"custom_id": "task-1", "method": "POST", "url": "/chat/completions", "body": {"model": "my-gpt5-deployment", "messages": [{"role": "user", "content": "Hello"}]}}
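Before uploading the file, you can catch this mismatch locally. Below is a minimal sketch (the file path, deployment name, and helper name are illustrative, not part of any SDK) that scans each JSONL line and flags any "model" value that does not exactly match your deployment name:

```python
import json

def check_jsonl_models(path, deployment_name):
    """Return (line_number, model) pairs whose body.model field does not
    exactly match the Azure deployment name (comparison is case-sensitive)."""
    mismatches = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            request = json.loads(line)
            model = request.get("body", {}).get("model")
            if model != deployment_name:
                mismatches.append((lineno, model))
    return mismatches
```

If this returns a non-empty list, correct those lines before creating the batch job; this avoids waiting for the service-side 400 validation error.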
Official Microsoft documentation reference (the link you shared):
https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/batch?view=foundry-classic&tabs=global-batch%2Cstandard-input%2Cpython-secure&pivots=ai-foundry-portal
I hope this helps provide a way ahead. If it resolves your issue, kindly "Accept the answer and upvote" to help other community members facing the same problem.