Hello, this is VMS. Good day.
The Responses API does not appear as a deployment option in Azure AI Foundry; it is available only through the SDK or the REST API (client.responses.create instead of client.chat.completions.create).
gpt-4 is not supported by the Responses API. The Responses API currently supports the following models:

- gpt-4o (Versions: 2024-11-20, 2024-08-06, 2024-05-13)
- gpt-4o-mini (Version: 2024-07-18)
- computer-use-preview
- gpt-4.1 (Version: 2025-04-14)
- gpt-4.1-nano (Version: 2025-04-14)
- gpt-4.1-mini (Version: 2025-04-14)
- gpt-image-1 (Version: 2025-04-15)
- o3 (Version: 2025-04-16)
- o4-mini (Version: 2025-04-16)

These models are available only in limited regions, as mentioned in the region availability section of the documentation.
Below is a sample usage with a gpt-4.1-nano deployment:
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    default_query={"api-version": "preview"},
)

response = client.responses.create(
    model="gpt-4.1-nano",  # Replace with your model deployment name
    input="This is a test.",
)

print(response.model_dump_json(indent=2))
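Since the Responses API can also be called over plain REST (as mentioned above), here is a minimal sketch of the same request using the requests library. Treat it as an illustration rather than an official sample: it assumes the /openai/v1/responses path and the api-key header follow the same v1 preview pattern as the base_url in the SDK example, so please verify them against the current REST reference.

import os
import requests

# Assumed endpoint, following the v1 preview base URL used in the SDK sample above
url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses"

headers = {
    "api-key": os.getenv("AZURE_OPENAI_API_KEY"),  # key-based authentication
    "Content-Type": "application/json",
}

payload = {
    "model": "gpt-4.1-nano",  # Replace with your model deployment name
    "input": "This is a test.",
}

resp = requests.post(url, params={"api-version": "preview"}, headers=headers, json=payload)
resp.raise_for_status()
print(resp.json())

The returned JSON should have the same shape as the response object printed in the SDK sample above.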
Hope this gives you the clarity you needed. Please let us know if you are still facing challenges.
Thank you.