How to deploy a Responses API on Azure AI Foundry?

It is VMS 100 Reputation points
2025-06-19T19:12:34.1566667+00:00

Hi

I would like to use the Responses API via REST. I found this link: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/responses?tabs=rest-api

I created a new project, etc., in AI Foundry in the Azure portal, but I see no option to deploy it. I deployed a gpt-4 model and then tried the endpoint ending with /responses as mentioned in the link above, but that didn't work.

I then thought that I should actually deploy the Responses API, but when I go to Deployments in the project, I see no option to deploy it. I see all the other options like agents, chat, etc., but no Responses API!

What am I missing here? I'm sure it is something silly!

Thanks in advance!

Azure AI services

1 answer

  1. Manas Mohanty 5,620 Reputation points Microsoft External Staff Moderator
    2025-06-19T20:32:59.2633333+00:00

    Hello It is VMS

    Good day.

    The Responses API does not appear as a deployment option in Foundry; it is an API surface available through the SDK or REST only (you call client.responses.create instead of client.chat.completions.create).

    gpt-4 is not supported by the Responses API. Currently the Responses API supports the following models:

    • gpt-4o (Versions: 2024-11-20, 2024-08-06, 2024-05-13)
    • gpt-4o-mini (Version: 2024-07-18)
    • computer-use-preview
    • gpt-4.1 (Version: 2025-04-14)
    • gpt-4.1-nano (Version: 2025-04-14)
    • gpt-4.1-mini (Version: 2025-04-14)
    • gpt-image-1 (Version: 2025-04-15)
    • o3 (Version: 2025-04-16)
    • o4-mini (Version: 2025-04-16)

    These are available in a limited set of regions, as listed in the region availability section of the documentation.
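    Since you specifically want to call the endpoint over REST, here is a minimal hedged sketch using Python's standard `requests` pattern against the v1 preview endpoint. `YOUR-RESOURCE-NAME` and the `gpt-4.1-nano` deployment name are placeholders you must replace with your own values, and the request is only sent when `AZURE_OPENAI_API_KEY` is set:

```python
import os

# Hypothetical placeholders: substitute your own resource name and the
# name you gave the model deployment in AI Foundry.
resource = "YOUR-RESOURCE-NAME"
deployment = "gpt-4.1-nano"

url = f"https://{resource}.openai.azure.com/openai/v1/responses"
params = {"api-version": "preview"}
payload = {
    "model": deployment,  # the *deployment* name, not just the model family
    "input": "This is a test.",
}

api_key = os.getenv("AZURE_OPENAI_API_KEY")
if api_key:
    import requests  # imported lazily so the sketch reads without the package installed

    resp = requests.post(
        url,
        params=params,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())
else:
    print("AZURE_OPENAI_API_KEY not set; request not sent.")
```

    Note that the URL path is the resource-level /openai/v1/responses, not a deployment-scoped path; the deployment is selected by the "model" field in the body.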

    Below is a sample usage with a gpt-4.1-nano deployment:

    import os
    from openai import OpenAI

    # The v1 preview endpoint lets you use the standard OpenAI client against Azure.
    client = OpenAI(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
        default_query={"api-version": "preview"},
    )

    response = client.responses.create(
        model="gpt-4.1-nano",  # replace with your model deployment name
        input="This is a test.",
    )

    print(response.model_dump_json(indent=2))
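    As a usage note, one thing that distinguishes the Responses API from chat completions is server-side conversation state: each response has an `id`, which you can pass as `previous_response_id` on the next call instead of resending the whole message history. A hedged sketch assuming the same placeholder resource and deployment as above (the calls only run when a key is configured):

```python
import os

api_key = os.getenv("AZURE_OPENAI_API_KEY")
follow_up = "Summarize your previous answer in one sentence."

if api_key:
    from openai import OpenAI  # lazy import: sketch is readable without the SDK

    client = OpenAI(
        api_key=api_key,
        base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
        default_query={"api-version": "preview"},
    )
    first = client.responses.create(model="gpt-4.1-nano", input="This is a test.")
    # Chain the next turn to the stored response instead of resending the history.
    second = client.responses.create(
        model="gpt-4.1-nano",
        previous_response_id=first.id,
        input=follow_up,
    )
    print(second.output_text)
else:
    print("AZURE_OPENAI_API_KEY not set; calls skipped.")
```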
    
    

    Hope this gives you the clarity you needed. Please let us know if you are still facing challenges.

    Thank you.

