Discrepancies between Azure Playground / OpenAI native API and Azure API

Louis G 0 Reputation points
2025-10-07T21:20:14.76+00:00

I'm facing a bit of a conundrum. I've been using the gpt-4o-mini-tts model through the playground and am now trying to transition to API calls. To direct the TTS speech, I've been using the "instructions" parameter that is exposed through the Azure playground and the native OpenAI API, but the Azure API returns HTTP 400 when the request includes "instructions" (a sketch of the call I'm making is below my questions). It looks like the Azure and OpenAI API surfaces are out of sync. A couple of questions:

  1. Why expose this in the playground if there is no way to use it in the API?
  2. When will this be added to the Azure API?
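
For reference, here is roughly the call I'm making (resource, deployment, and api-version are placeholders; the body mirrors the native OpenAI /audio/speech request):

# {RESOURCE}, {TTS_DEPLOYMENT}, and {API_VERSION} are placeholders for my actual values
curl -X POST "https://{RESOURCE}.openai.azure.com/openai/deployments/{TTS_DEPLOYMENT}/audio/speech?api-version={API_VERSION}" \
  -H "api-key: $AZURE_OPENAI_API_KEY" -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini-tts",
    "input": "Your order is ready for pickup.",
    "voice": "alloy",
    "instructions": "Speak like a calm British announcer."
  }' -o speech.mp3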

Lou

Azure OpenAI Service

1 answer

  1. Divyesh Govaerdhanan 10,310 Reputation points
    2025-10-07T22:54:49.3966667+00:00

    Hello,

    Welcome to Microsoft Q&A,

    instructions is not a valid field on Azure’s /chat/completions request body, which is why you get HTTP 400. On that surface, put the “how the voice should speak” guidance in a system message instead.

    If you want to use a top-level instructions field, call the Responses API instead (that surface supports instructions).

    Option A — Chat Completions (no instructions field)

    curl -X POST "https://{RESOURCE}.openai.azure.com/openai/deployments/{DEPLOYMENT}/chat/completions?api-version=2025-01-01-preview" \
      -H "api-key: $AZURE_OPENAI_API_KEY" -H "Content-Type: application/json" \
      -d '{
        "modalities": ["text","audio"],
        "audio": { "voice": "alloy", "format": "wav" },
        "messages": [
          { "role": "system", "content": "Speak like a calm British announcer. Keep sentences short." },
          { "role": "user", "content": "Your order is ready for pickup." }
        ]
      }'
    

    This is the preview audio generation flow for gpt-4o-audio-preview / gpt-4o-mini-audio-preview; the style guidance lives in the system message.

    https://learn.microsoft.com/en-us/azure/ai-foundry/openai/audio-completions-quickstart
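
    If it helps, once you save that JSON response (for example with curl ... -o response.json), you can pull the audio out like this. A sketch, assuming jq is available and the audio-preview response shape where the base64-encoded audio is returned in choices[0].message.audio.data:

    # extract the base64 audio from the chat completions response and decode it to a WAV file
    jq -r '.choices[0].message.audio.data' response.json | base64 --decode > reply.wav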

    Option B — Responses API (use instructions)

    # {DEPLOYMENT} is your gpt-4o / gpt-4o-mini audio-capable deployment name
    curl -X POST "https://{RESOURCE}.openai.azure.com/openai/v1/responses" \
      -H "api-key: $AZURE_OPENAI_API_KEY" -H "Content-Type: application/json" \
      -d '{
        "model": "{DEPLOYMENT}",
        "instructions": "Speak like a calm British announcer. Keep sentences short.",
        "input": "Your order is ready for pickup.",
        "modalities": ["text","audio"],
        "audio": { "voice": "alloy", "format": "wav" }
      }'
    

    Azure’s Responses API supports instructions (top-level) and is available in multiple regions.
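
    Side note: if you prefer keyless auth, the same Responses call works with a Microsoft Entra ID bearer token instead of api-key. A sketch, assuming the Azure CLI is signed in and your identity has the Cognitive Services OpenAI User role on the resource; the request body is the same as in Option B:

    # get a token for the Cognitive Services scope, then send it as a bearer header
    TOKEN=$(az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken -o tsv)
    curl -X POST "https://{RESOURCE}.openai.azure.com/openai/v1/responses" \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d '{
        "model": "{DEPLOYMENT}",
        "instructions": "Speak like a calm British announcer. Keep sentences short.",
        "input": "Your order is ready for pickup."
      }'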

    Please Upvote and accept the answer if it helps!!

