Thanks for the question. Function calling is only supported by the latest chat models (e.g. the 0613 version of gpt-35-turbo), and only through the Chat Completions API.
Function calling is now available in Azure OpenAI Service and gives the latest 0613 versions of gpt-35-turbo and gpt-4 the ability to produce structured JSON outputs based on functions that you describe in the request. This provides a native way for these models to formulate API calls and structure data outputs, all based on the functions you specify. It's important to note that while the models can generate these calls, it's up to you to execute them, ensuring you remain in control.
The latest versions of gpt-35-turbo and gpt-4 have been fine-tuned to work with functions. If one or more functions are specified in the request, the model determines whether any of them should be called based on the context of the prompt. When it decides a function should be called, it responds with a JSON object containing the arguments for that function.
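To make this concrete, a function is described to the model with a name, a description, and a JSON Schema for its parameters. The sketch below uses a hypothetical `get_current_weather` function (not part of any API; purely illustrative) and shows the rough shape of the message the model returns when it decides to call it. One detail worth noting: the `arguments` field arrives as a JSON-encoded string, so you must parse it yourself before calling your function.

```python
import json

# A hypothetical function description, as it would appear in the
# "functions" field of the chat completions request.
# The "parameters" value is JSON Schema.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather for a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and state, e.g. San Francisco, CA",
            },
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# When the model decides the function should be called, its response
# message looks roughly like this. Note that "arguments" is a
# JSON string, not a dict.
model_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Seattle, WA", "unit": "celsius"}',
    },
}

# Parse the arguments before passing them to your own code.
args = json.loads(model_message["function_call"]["arguments"])
print(args["location"])  # Seattle, WA
```

The model only *produces* this message; executing `get_current_weather` (and validating the parsed arguments) remains your responsibility.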
At a high level you can break down working with functions into three steps:
Step #1 – Call the chat completions API with your functions and the user’s input
Step #2 – Use the model’s response to call your API or function
Step #3 – Call the chat completions API again, including the response from your function, to get a final response
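The three steps above can be sketched end to end. This is a hedged outline, not a ready-to-run integration: `call_model` stands in for the actual chat completions request to your Azure OpenAI deployment (which needs your endpoint, key, and deployment name) and returns a canned function-call message so the flow runs locally, and `get_current_weather` is a hypothetical local implementation. The part meant to be taken literally is the message bookkeeping: append the model's function-call message, then append a `role: "function"` message carrying your function's result before the final call.

```python
import json

# Hypothetical local implementation of the function; in practice this
# would call a real weather service.
def get_current_weather(location, unit="celsius"):
    return json.dumps({"location": location, "temperature": 21, "unit": unit})

AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a given location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

def call_model(messages, functions):
    # Stand-in for the real API call, roughly:
    #   openai.ChatCompletion.create(engine="<your-deployment>",
    #       messages=messages, functions=functions, function_call="auto")
    # Returns a canned function-call message so this sketch runs offline.
    return {
        "role": "assistant",
        "content": None,
        "function_call": {
            "name": "get_current_weather",
            "arguments": '{"location": "Seattle, WA"}',
        },
    }

# Step 1: call the chat completions API with your functions and the
# user's input.
messages = [{"role": "user", "content": "What's the weather in Seattle?"}]
response_message = call_model(messages, functions)

# Step 2: if the model asked for a function call, execute it yourself --
# the model never runs anything on its own.
if response_message.get("function_call"):
    name = response_message["function_call"]["name"]
    args = json.loads(response_message["function_call"]["arguments"])
    result = AVAILABLE_FUNCTIONS[name](**args)

    # Step 3: append both the model's function-call message and your
    # function's result, then call the API again for the final answer.
    messages.append(response_message)
    messages.append({"role": "function", "name": name, "content": result})
    # final_response = call_model(messages, functions)
```

Dispatching through an explicit `AVAILABLE_FUNCTIONS` mapping (rather than, say, `eval` on the returned name) is one simple way to stay in control of exactly which functions the model can trigger.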
Check out the blog post here for examples of different scenarios where function calling could be helpful and how to use functions safely and securely.
You can also check out our samples to try out an end-to-end example of function calling.