Hello @Harinath J ,
Below are the answers to your questions.
- Roles: Both the developer message and the system message are supported on their own, but not both at the same time. This is also mentioned in the documentation:
When you use a system message with o4-mini, o3, o3-mini, and o1, it will be treated as a developer message. You should not use both a developer message and a system message in the same API request.
There are no hidden roles mentioned in the documentation; all we have are the developer, system, and user roles. A minimal example using the developer role is shown below.
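As a quick illustration, here is a minimal sketch of a Chat Completions call with a developer message (assuming the openai Python package, an AzureOpenAI client configured via environment variables, and a deployment named o4-mini as a placeholder):
from openai import AzureOpenAI

client = AzureOpenAI()  # reads the endpoint, key, and API version from environment variables

response = client.chat.completions.create(
    model="o4-mini",  # replace with your model deployment name
    messages=[
        # with reasoning models, use a developer message instead of a system message
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)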
- Parameter Support & Limits: Yes, you are right that max_completion_tokens and reasoning_effort work, and the parameters below are not supported with reasoning models:
- temperature, top_p, presence_penalty, frequency_penalty, logprobs, top_logprobs, logit_bias, max_tokens
Regarding the token usage, you can refer to this table for reasoning models. A sketch of a request that uses only the supported parameters is shown below.
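For reference, here is a minimal sketch of such a request (the deployment name and user prompt are placeholders; it assumes an AzureOpenAI client configured via environment variables):
from openai import AzureOpenAI

client = AzureOpenAI()  # reads the endpoint, key, and API version from environment variables

response = client.chat.completions.create(
    model="o4-mini",  # replace with your model deployment name
    messages=[{"role": "user", "content": "Summarize the benefits of unit testing."}],
    max_completion_tokens=2000,  # use this instead of max_tokens with reasoning models
    reasoning_effort="medium",   # low, medium, or high
    # temperature, top_p, presence_penalty, frequency_penalty, logprobs,
    # top_logprobs, logit_bias, and max_tokens are not accepted by reasoning models
)
print(response.choices[0].message.content)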
- Prompting Guidance: Yes, here is the documentation for designing system messages. You can also control the output; below is a sample system message and the output it produces, followed by a short sketch of sending it.
System message
You're an assistant designed to extract entities from text. Users will paste in a string of text and you'll respond with entities you've extracted from the text as a JSON object.
Output sample
{
"name": "",
"company": "",
"phone_number": ""
}
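Here is a sketch of how that system message could be sent (the user text and deployment name are placeholders; it assumes an AzureOpenAI client configured via environment variables, and on reasoning models the system message is treated as a developer message; the response_format line is optional and assumes your model and API version support JSON output):
from openai import AzureOpenAI

client = AzureOpenAI()  # reads the endpoint, key, and API version from environment variables

entity_prompt = (
    "You're an assistant designed to extract entities from text. "
    "Users will paste in a string of text and you'll respond with entities "
    "you've extracted from the text as a JSON object."
)

response = client.chat.completions.create(
    model="o4-mini",  # replace with your model deployment name
    messages=[
        {"role": "developer", "content": entity_prompt},
        {"role": "user", "content": "Call Jane Doe from Contoso at 555-0100."},
    ],
    response_format={"type": "json_object"},  # optional: ask for a strict JSON object back
)
print(response.choices[0].message.content)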
- Exposing Chain-of-Thought / “Thought Summaries”: There is a reasoning parameter in the Responses API through which you will get summaries of the model's chain-of-thought reasoning.
Below is the sample request.
from openai import AzureOpenAI

client = AzureOpenAI()  # reads the endpoint, key, and API version from environment variables

response = client.responses.create(
    input="Tell me about the curious case of neural text degeneration",
    model="o4-mini",  # replace with your model deployment name
    reasoning={
        "effort": "medium",
        "summary": "detailed"  # auto, concise, or detailed (currently only supported with o4-mini and o3)
    }
)
You can check more about this here; a sketch of reading the summaries out of the response follows below.
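If you want to read those summaries programmatically, here is a minimal sketch assuming the response object from the request above (the reasoning summaries come back as output items of type "reasoning"):
# print each reasoning summary returned with the response
for item in response.output:
    if item.type == "reasoning":
        for summary in item.summary:
            print(summary.text)

# the final answer text is still available as usual
print(response.output_text)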
- Documentation & Examples: You can check the documentation below for reasoning models; the supported features and usage, with demos, are documented there: Azure OpenAI reasoning models - o3-mini, o1, o1-mini - Azure OpenAI | Microsoft Learn
Please check all of this and let us know in the comments if you have any queries.
Thank you