Hi Bharat Ruparel,
Thanks for sharing the detailed context. The error you're seeing:
azure.core.exceptions.HttpResponseError: (None) Invalid URL (POST /v1/chat/completions)
means the client is attempting a POST to a relative URL (/v1/chat/completions) without a proper base URL set. This typically happens when the endpoint is misconfigured or the metadata for the deployed model in Azure AI Foundry is missing.
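To see why a bare relative path fails, a quick local check with Python's standard urllib (no Azure connection needed) shows that /v1/chat/completions carries no scheme or host, so no HTTP client can send a request to it until a base URL is supplied:

```python
from urllib.parse import urlsplit, urljoin

# A relative path has no scheme and no network location,
# so on its own it cannot be resolved into a request target.
parts = urlsplit("/v1/chat/completions")
print(parts.scheme, parts.netloc)   # both empty strings

# Once a proper endpoint is joined in, the URL becomes absolute:
full = urljoin("https://eastus2.api.azureml.ms", "/v1/chat/completions")
print(full)  # https://eastus2.api.azureml.ms/v1/chat/completions
```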
How to fix:
1. Ensure you are using a valid model deployment
The get_chat_completions_client() method only works if your project has a deployed chat model (e.g., gpt-4 or gpt-35-turbo) in the Azure AI Project.
You must deploy the model via AI Foundry first, or ensure that it is pre-deployed in the project created via the quickstart.
2. Check the model name
You are using "gpt-4o-mini" in this line:
response = chat.complete(model="gpt-4o-mini", messages=[...])
However, gpt-4o-mini may not be deployed or available in your Foundry instance.
Try replacing it with "gpt-35-turbo", which is available by default in many AI Foundry quickstart projects.
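As a quick sanity check before calling the service, you can compare the model name you pass to complete() against the deployment names you know exist in your project. The helper below is purely illustrative (pick_model and its fallback default are assumptions, not part of the SDK):

```python
def pick_model(requested: str, deployed: list[str],
               fallback: str = "gpt-35-turbo") -> str:
    """Return `requested` if it is among the deployed model names,
    otherwise fall back to a model that is commonly pre-deployed."""
    if requested in deployed:
        return requested
    if fallback in deployed:
        return fallback
    raise ValueError(
        f"Neither {requested!r} nor {fallback!r} is deployed; "
        f"available deployments: {deployed}"
    )

# "gpt-4o-mini" is not in the deployed list, so the fallback is chosen:
print(pick_model("gpt-4o-mini", ["gpt-35-turbo", "gpt-4"]))  # gpt-35-turbo
```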
3. Ensure proper instantiation: pick only one method
You have two instantiations of AIProjectClient:
project = AIProjectClient(...)
project = AIProjectClient.from_connection_string(...)
Keep only one. If you're using from_connection_string, make sure project_connection_string is valid (retrieved from the Azure Portal or CLI); otherwise the client may fall back to an invalid endpoint, causing the Invalid URL error.
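The project connection string is expected to have four semicolon-separated parts: the regional host, subscription id, resource group, and project name. Here is a small local sketch to validate that shape before handing the string to from_connection_string (the split_connection_string helper is an illustration, not an SDK function):

```python
def split_connection_string(conn_str: str) -> dict:
    """Split an AI Foundry project connection string of the form
    '<host>;<subscription_id>;<resource_group>;<project_name>'
    and raise if any of the four parts is missing or empty."""
    parts = conn_str.split(";")
    if len(parts) != 4 or not all(parts):
        raise ValueError(f"Malformed connection string: {conn_str!r}")
    keys = ("host", "subscription_id", "resource_group", "project_name")
    return dict(zip(keys, parts))

conn = "eastus2.api.azureml.ms;your-sub-id;your-rg;your-project-name"
print(split_connection_string(conn)["host"])  # eastus2.api.azureml.ms
```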
Suggested code:
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Use EITHER this method:
project = AIProjectClient(
    endpoint="https://eastus2.api.azureml.ms",
    subscription_id="your-sub-id",
    resource_group_name="your-rg",
    project_name="your-project-name",
    credential=DefaultAzureCredential(),
)

# OR this (make sure project_connection_string is correct):
# project = AIProjectClient.from_connection_string(
#     conn_str=project_connection_string, credential=DefaultAzureCredential()
# )

chat = project.inference.get_chat_completions_client()

response = chat.complete(
    model="gpt-35-turbo",  # Use this if "gpt-4o-mini" is not deployed
    messages=[
        {"role": "system", "content": "Your content"},
        {"role": "user", "content": "Your content"},
    ],
)
print(response.choices[0].message.content)
Hope this helps; do let me know if you have any further queries.
Thank you!