Hello Kavishka!
Thank you for posting on Microsoft Learn.
Unlike standard GPT models, reasoning models such as o3-mini use:
- max_completion_tokens instead of max_tokens (requests that send max_tokens are rejected)
- no temperature or top_p: reasoning models currently do not support these sampling parameters, so they should be omitted from the request
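If you are migrating an existing request body, the parameter swap can be done programmatically. This is a minimal sketch (the helper name and payload shape are my own, not an official SDK API) that renames `max_tokens` and strips the sampling parameters reasoning models reject:

```python
def adapt_for_reasoning_model(payload: dict) -> dict:
    """Convert a standard GPT request payload for a reasoning model such as o3-mini.

    - max_tokens is renamed to max_completion_tokens
    - temperature and top_p are dropped (reasoning models do not accept them)
    """
    adapted = dict(payload)  # shallow copy; leave the caller's dict untouched
    if "max_tokens" in adapted:
        adapted["max_completion_tokens"] = adapted.pop("max_tokens")
    for unsupported in ("temperature", "top_p"):
        adapted.pop(unsupported, None)
    return adapted

legacy = {"model": "o3-mini", "max_tokens": 512, "temperature": 0.7, "top_p": 0.95}
print(adapt_for_reasoning_model(legacy))
# → {'model': 'o3-mini', 'max_completion_tokens': 512}
```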
If you are using Prompt Flow within Azure AI Studio, your YAML payload should look like:

```yaml
parameters:
  model: "o3-mini"
  max_completion_tokens: 512   # replaces max_tokens
  # temperature and top_p are omitted: o3-mini does not accept them
  prompt: "Evaluate the following reasoning dataset..."
```
For Azure AI Foundry Evaluation, if you are running evaluations via the API or SDK, update your payload accordingly:

```json
{
  "model": "o3-mini",
  "max_completion_tokens": 512,
  "input": "Evaluate the reasoning ability of this dataset..."
}
```

Note that temperature and top_p are removed here as well; o3-mini returns an error if they are present.
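If you assemble this payload in Python before calling the REST API, a small guard can catch unsupported parameters early instead of waiting for a 400 from the service. The function and key set below are my own sketch, not part of any Azure SDK:

```python
import json

# Parameters that reasoning models such as o3-mini reject (assumed set for this sketch)
UNSUPPORTED_FOR_REASONING = {"max_tokens", "temperature", "top_p"}

def build_eval_payload(model: str, prompt: str,
                       max_completion_tokens: int = 512, **extra) -> str:
    """Serialize an evaluation payload, refusing parameters o3-mini does not accept."""
    illegal = UNSUPPORTED_FOR_REASONING & extra.keys()
    if illegal:
        raise ValueError(f"Unsupported for reasoning models: {sorted(illegal)}")
    payload = {
        "model": model,
        "max_completion_tokens": max_completion_tokens,
        "input": prompt,
        **extra,
    }
    return json.dumps(payload)

body = build_eval_payload("o3-mini", "Evaluate the reasoning ability of this dataset...")
print(body)
```

Sending `temperature=0.7` through `extra` raises a `ValueError` locally, which is easier to debug than a remote API error.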