When using function calling in Azure OpenAI, the process follows these steps (a code sketch of the full loop appears after the list):
- User sends a prompt to the Azure OpenAI service.
- The assistant determines if a function call is required based on the model’s reasoning.
- Your backend (or an API integration) executes the function; the model itself never runs code.
- Your code sends the function’s output back to Azure OpenAI as a tool message.
- The assistant then decides how to use the function’s response in its next message.
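Here is a minimal sketch of that loop, assuming the openai Python SDK v1.x with Azure support and a hypothetical improve_prompt function; the endpoint, key, deployment name, and API version are placeholders to replace with your own:

```python
import json

from openai import AzureOpenAI  # assumes openai>=1.0 with Azure support

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

def improve_prompt(prompt: str) -> str:
    """Stand-in for your real backend logic (hypothetical)."""
    return f"Rewrite the following with clear structure and more detail: {prompt}"

tools = [{
    "type": "function",
    "function": {
        "name": "improve_prompt",
        "description": "Adds structure and detail to a user's prompt.",
        "parameters": {
            "type": "object",
            "properties": {"prompt": {"type": "string"}},
            "required": ["prompt"],
        },
    },
}]

messages = [{"role": "user", "content": "Help me write a blog post about solar power."}]

# Steps 1-2: the model decides whether a function call is required.
response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # your Azure deployment name, not a model family
    messages=messages,
    tools=tools,
)
assistant_message = response.choices[0].message

if assistant_message.tool_calls:
    # Step 3: your backend executes the function.
    call = assistant_message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = improve_prompt(**args)

    # Step 4: send the function's output back as a tool message.
    messages.append(assistant_message)  # the turn that requested the call
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})

    # Step 5: the model decides how to use the output in its next message.
    final = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",
        messages=messages,
        tools=tools,
    )
    print(final.choices[0].message.content)
```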
What Happens Between the Assistant and the Function Call?
The assistant does not automatically replace the user’s prompt with the function response. Instead, the function’s response is appended to the conversation as additional context, and the assistant integrates it into its next reply.
- Intermediate Processing: The model interprets the function output in the context of the conversation but does not inherently treat it as a new user prompt.
- Model Continuation: The model may ask clarifying questions or adjust its response based on the function output rather than treating it as a reformulated user prompt (see the example below).
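To make the distinction concrete, here is roughly what the conversation history looks like after a tool call, again assuming the hypothetical improve_prompt function and an illustrative call id. The function output arrives in a tool-role message, not a user-role message, which is why the model treats it as context rather than a new prompt:

```python
# Shape of the conversation history after a tool call (names and ids illustrative).
messages = [
    {"role": "user", "content": "Help me write a blog post about solar power."},
    # The assistant's turn that requested the function call.
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_abc123",  # illustrative id
        "type": "function",
        "function": {
            "name": "improve_prompt",
            "arguments": '{"prompt": "Help me write a blog post about solar power."}',
        },
    }]},
    # The function output: a "tool" message, NOT a new "user" prompt.
    {"role": "tool", "tool_call_id": "call_abc123",
     "content": "Write an 800-word blog post about residential solar power, "
                "covering costs, savings, and installation."},
]
```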
How to Ensure the Assistant Uses the Function Call’s Response as the New Prompt?
Instead of relying on the model to process the function output implicitly, structure the response like this:
```json
{
  "role": "assistant",
  "content": "Here is the improved prompt with structure and additional details: [Function Response]"
}
```
This helps the assistant treat it as a new instruction.
After the function returns a response, manually construct a new query for the assistant that combines the original prompt with the function response. Then modify your system prompt to instruct the model to always use the function response as the basis for its next message.
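A sketch of that pattern, reusing the client from the earlier example; the helper name, system prompt wording, and sample strings are all illustrative:

```python
SYSTEM_PROMPT = (
    "When a function returns an improved prompt, always use that improved "
    "prompt as the basis for your next message."
)

def build_followup_messages(original_prompt: str, function_response: str) -> list[dict]:
    """Pair the original prompt with the function's output so the model
    treats the output as its new instruction (hypothetical helper)."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "assistant",
         "content": "Here is the improved prompt with structure and "
                    f"additional details: {function_response}"},
        {"role": "user",
         "content": f"Original request: {original_prompt}\n"
                    "Respond using the improved prompt above."},
    ]

followup = build_followup_messages(
    "Help me write a blog post about solar power.",
    "Write an 800-word blog post about residential solar power, "
    "covering costs, savings, and installation.",
)
# `client` is the AzureOpenAI client constructed in the earlier sketch.
response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # your Azure deployment name
    messages=followup,
)
print(response.choices[0].message.content)
```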