Azure OpenAI Phi-4-multimodel-instruct: 'auto' tool choice error when using runTools() method that worked with GPT-4o

Muhammad Junaid Nazir 20 Reputation points
2025-03-22T18:27:37.5133333+00:00

I recently switched from GPT-4o to Phi-4-multimodel-instruct in my Next.js application built on Azure AI services, but I'm now encountering the following error:

BadRequestError: 400 {"object":"error","message":"\"auto\" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set","type":"BadRequestError","param":null,"code":400}

The error occurs when calling the runTools() method, which was working perfectly with GPT-4o. Here's my implementation:

OpenAI Instance Configuration:

import { AzureOpenAI } from "openai";

export const OpenAIInstance = () => {
  try {
    if (
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME
    ) {
      throw new Error(
        "Missing required environment variables for OpenAI instance."
      );
    }
    
    const azureOpenAI = new AzureOpenAI({
      apiKey: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY,
      apiVersion: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION,
      baseURL: `https://${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME}.openai.azure.com/models/chat/completions?api-version=${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION}`
    });

    return azureOpenAI;
  } catch (error) {
    console.error(
      "Error initializing OpenAI instance:",
      (error as Error).message
    );
    throw error;
  }
};

Chat API Extension Implementation:

import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";
import type { RunnableToolFunction } from "openai/lib/RunnableFunction";
import type { ChatCompletionStreamingRunner } from "openai/lib/ChatCompletionStreamingRunner";
// ChatThreadModel, OpenAIInstance, and extensionsSystemMessage are defined elsewhere in the app.

export const ChatApiExtensions = async (props: {
  chatThread: ChatThreadModel;
  userMessage: string;
  history: ChatCompletionMessageParam[];
  extensions: RunnableToolFunction<any>[];
  signal: AbortSignal;
}): Promise<ChatCompletionStreamingRunner> => {
  const { userMessage, history, signal, chatThread, extensions } = props;
  const openAI = OpenAIInstance();
  
  const model = process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME;
  if (!model) {
    throw new Error("Model deployment name is not configured");
  }

  const systemMessage = await extensionsSystemMessage(chatThread);
  try {
    return await openAI.beta.chat.completions.runTools(
      {
        model: model,
        stream: true,
        messages: [
          {
            role: "system",
            content: chatThread.personaMessage + "\n" + systemMessage,
          },
          ...history,
          {
            role: "user",
            content: userMessage,
          },
        ],
        tools: extensions,
        temperature: 0.7,
        max_tokens: 4000,
      },
      { 
        signal: signal,
      }
    );
  } catch (error) {
    console.error("Error in ChatApiExtensions:", error);
    throw error;
  }
};

Based on the error message, it seems the Phi-4-multimodel-instruct deployment requires additional configuration for tool usage that wasn't needed with GPT-4o. The flags it mentions (--enable-auto-tool-choice and --tool-call-parser) look like inference-server launch options rather than request parameters, so they can't be set from client code, and I haven't found any mention of them in the Azure documentation.

Has anyone successfully used tools with Phi-4-multimodel-instruct on Azure? How can I modify my code to make this work?
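In case it helps frame the question: one defensive pattern I've been considering is gating runTools() behind a per-model capability check, so an unsupported model fails fast (or falls back) instead of producing the 400 above. This is a minimal sketch; the model list here is an assumption for illustration, not taken from the Azure model catalog:

```typescript
// Sketch: gate tool usage on a per-model capability flag so an
// unsupported model can fall back gracefully instead of returning a 400.
// NOTE: this model list is an assumption for illustration — check the
// Azure model catalog for which deployments actually support tool calling.
const TOOL_CALLING_MODELS = new Set(["gpt-4o", "gpt-4o-mini", "gpt-4-turbo"]);

export function supportsToolCalling(modelName: string): boolean {
  return TOOL_CALLING_MODELS.has(modelName.toLowerCase());
}
```
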

Environment:

  • Next.js (server components)
  • Azure OpenAI service
  • OpenAI Node.js SDK

Accepted answer
  1. Saideep Anchuri 9,425 Reputation points Microsoft External Staff Moderator
    2025-03-24T07:30:26.64+00:00

    Hi Muhammad Junaid Nazir

    I'm glad that you were able to resolve your issue, and thank you for posting your solution so that others experiencing the same thing can easily reference it! Since the Microsoft Q&A community has a policy that "The question author cannot accept their own answer. They can only accept answers by others", I'll repost your solution in case you'd like to accept the answer.

    Ask: Azure OpenAI Phi-4-multimodel-instruct: 'auto' tool choice error when using runTools() method that worked with GPT-4o

    Solution: The issue is resolved. You found the answer to your own question: at present, Phi-4-multimodel-instruct does not support tool calling.

    See details: https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/models-featured#microsoft

    If I missed anything please let me know and I'd be happy to add it to my answer, or feel free to comment below with any additional information.

    If you have any other questions, please let me know. Thank you again for your time and patience throughout this issue.

     

    Please don’t forget to Accept Answer and Yes for "was this answer helpful" wherever the information provided helps you, this can be beneficial to other community members.

    Thank You.


1 additional answer

Sort by: Most helpful
  1. Muhammad Junaid Nazir 20 Reputation points
    2025-03-22T18:51:10.65+00:00

    I think I have found the answer to my own question. Right now, Phi-4-multimodel-instruct does not support tool calling.

    See details: https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/models-featured#microsoft
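    Until the model gains tool support, one stopgap is to build the request without the tools field for deployments that reject tool calls, and only call runTools() when tools are actually attached. A minimal sketch — the field names mirror the OpenAI chat-completions request shape from my code above, and the toolCapable flag is an assumption supplied by whatever capability check you use:

```typescript
// Request-shape subset used by the sketch; mirrors the params passed
// to runTools() in the question.
type ChatParams = {
  model: string;
  stream: true;
  messages: unknown[];
  temperature: number;
  max_tokens: number;
  tools?: unknown[];
};

export function buildChatParams(
  model: string,
  messages: unknown[],
  tools: unknown[],
  toolCapable: boolean
): ChatParams {
  const params: ChatParams = {
    model,
    stream: true,
    messages,
    temperature: 0.7,
    max_tokens: 4000,
  };
  // Only attach tools when the deployment supports tool calling;
  // otherwise the server rejects the request with the 400 above.
  if (toolCapable && tools.length > 0) {
    params.tools = tools;
  }
  return params;
}
```

    The caller can then use runTools() when params.tools is present, and a plain streaming chat.completions.create() otherwise.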

