Running into problems trying out a Microsoft Learn Sample on AI Foundry

Bharat Ruparel 20 Reputation points Microsoft Employee
2025-05-03T17:29:22.8633333+00:00

Trying to run the basic chat application example given here:

https://learn.microsoft.com/en-us/azure/ai-foundry/quickstarts/get-started-code?tabs=windows

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
# Use proper endpoint format with https://
project = AIProjectClient(
    endpoint="https://eastus2.api.azureml.ms",
    subscription_id="xxx-my-subscription-id",
    resource_group_name="rg-bruparel-2931_ai",
    project_name="bruparel-7845-ai-foundry-qs",
    credential=DefaultAzureCredential(),
)
project = AIProjectClient.from_connection_string(
    conn_str=project_connection_string, credential=DefaultAzureCredential()
)
chat = project.inference.get_chat_completions_client()
response = chat.complete(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant that speaks like a techno punk rocker from 2350. Be cool but not too cool. Ya dig?",
        },
        {"role": "user", "content": "Hey, can you help me with my taxes? I'm a freelancer."},
    ],
)
print(response.choices[0].message.content)

Running into the following error:

Traceback (most recent call last):
  File "C:\Users\bruparel\Learn\AIFoundry\basic\chat.py", line 15, in <module>
    response = chat.complete(
               ^^^^^^^^^^^^^^
  File "C:\Users\bruparel\Learn\AIFoundry\basic\.venv\Lib\site-packages\azure\ai\inference\_patch.py", line 738, in complete
    raise HttpResponseError(response=response)
azure.core.exceptions.HttpResponseError: (None) Invalid URL (POST /v1/chat/completions)

Code: None

Message: Invalid URL (POST /v1/chat/completions)

Please advise on how to fix this.
Thanks.

Accepted answer
  1. Jeffrey Chen 75 Reputation points Microsoft Employee
    2025-05-06T19:00:16.11+00:00

    Ran into this myself too. Not sure why, but for certain OpenAI models, you now have to use the Azure OpenAI client instead of the chat completions client:

    from azure.ai.projects import AIProjectClient
    from azure.identity import DefaultAzureCredential

    # project_connection_string comes from your project's overview page in the Azure AI Foundry portal
    project = AIProjectClient.from_connection_string(
        conn_str=project_connection_string, credential=DefaultAzureCredential()
    )
    # Use the Azure OpenAI client instead of the generic chat completions client
    openai_client = project.inference.get_azure_openai_client(api_version="2024-06-01")
    response = openai_client.chat.completions.create(
        model='gpt-4o-mini',  # your model deployment name
        messages=messages,    # same messages list as in your original code
    )
    print(response.choices[0].message.content)
    

    If you use a non-OpenAI model like Phi-4, then the code you currently have should work.
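
    For completeness, here is a minimal sketch of that path, assuming a model deployment named "Phi-4" in the same project (the deployment name is an assumption; use whatever name appears under your project's deployed models):

    from azure.ai.projects import AIProjectClient
    from azure.identity import DefaultAzureCredential

    project = AIProjectClient.from_connection_string(
        conn_str=project_connection_string, credential=DefaultAzureCredential()
    )
    # Non-OpenAI models go through the generic chat completions client
    chat = project.inference.get_chat_completions_client()
    response = chat.complete(
        model="Phi-4",  # assumed deployment name; check your project's deployments
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)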


2 additional answers

  1. Prashanth Veeragoni 4,930 Reputation points Microsoft External Staff Moderator
    2025-05-05T04:50:21.2366667+00:00

    Hi Bharat Ruparel,

    Thanks for sharing the detailed context. The error you're seeing:

    azure.core.exceptions.HttpResponseError: (None) Invalid URL (POST /v1/chat/completions)

    means the client is posting to a relative URL (/v1/chat/completions) without a proper base URL set, which typically happens when the endpoint is misconfigured or the project is missing metadata about the deployed model in Azure AI Foundry.

    How to fix:

    1. Ensure you are using a valid model deployment

    The get_chat_completions_client() method only works if your project has a deployed chat model (e.g., gpt-4 or gpt-35-turbo) in the Azure AI Project.

    You must deploy the model via AI Foundry first, or ensure that it is pre-deployed in the project created via the quickstart (a quick check is sketched after this list).

    2. Check the model name

    You are using "gpt-4o-mini" in this line:

    response = chat.complete(model="gpt-4o-mini", messages=[...])
    

    However, gpt-4o-mini may not be deployed or available in your Foundry instance.

    Try replacing it with "gpt-35-turbo", which is available by default in many AI Foundry quickstart projects.

    3. Ensure proper instantiation: pick only one method

    You have two instantiations of AIProjectClient:

    project = AIProjectClient(...)
    project = AIProjectClient.from_connection_string(...)
    

    Keep only one. If you're using from_connection_string, make sure project_connection_string is valid (retrieved from the Azure portal or CLI); otherwise the client may fall back to an invalid endpoint, causing the Invalid URL error. A sanity-check sketch follows below.
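
    A minimal sanity-check sketch covering points 1 and 3, assuming the connection string is kept in a PROJECT_CONNECTION_STRING environment variable (the variable name is my assumption; copy the value from your project's overview page in the portal):

    import os
    from azure.ai.projects import AIProjectClient
    from azure.core.exceptions import HttpResponseError
    from azure.identity import DefaultAzureCredential

    # Fail fast if the connection string is missing instead of letting the
    # client fall back to an invalid base URL.
    conn_str = os.environ.get("PROJECT_CONNECTION_STRING")
    if not conn_str:
        raise RuntimeError("Set PROJECT_CONNECTION_STRING before running this script.")

    project = AIProjectClient.from_connection_string(
        conn_str=conn_str, credential=DefaultAzureCredential()
    )
    # Cheap way to confirm the deployment name resolves: send a minimal request
    # and surface the HTTP error details if it fails.
    chat = project.inference.get_chat_completions_client()
    try:
        chat.complete(model="gpt-35-turbo", messages=[{"role": "user", "content": "ping"}])
        print("Deployment reachable.")
    except HttpResponseError as e:
        print(f"Request failed: {e.message}")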

    Suggested code:

    from azure.ai.projects import AIProjectClient
    from azure.identity import DefaultAzureCredential
    # Use EITHER this method:
    project = AIProjectClient(
        endpoint="https://eastus2.api.azureml.ms",
        subscription_id="your-sub-id",
        resource_group_name="your-rg",
        project_name="your-project-name",
        credential=DefaultAzureCredential(),
    )
    # OR this (make sure project_connection_string is correct)
    # project = AIProjectClient.from_connection_string(
    #     conn_str=project_connection_string, credential=DefaultAzureCredential()
    # )
    chat = project.inference.get_chat_completions_client()
    response = chat.complete(
        model="gpt-35-turbo",  # Use this if "gpt-4o-mini" is not deployed
        messages=[
            {
                "role": "system",
                "content": "Your content",
            },
            {"role": "user", "content": "Your content"},
        ],
    )
    print(response.choices[0].message.content)
    
    

    Hope this helps, do let me know if you have any further queries.

    Thank you!


  2. Kml Ko 0 Reputation points
    2025-06-01T10:19:49.65+00:00

    In my case, the environment variables had the wrong values: they pointed to a deprecated project. After updating the connection string, project name, and model deployment name, the error stopped appearing.
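
    A quick sketch of that check, with hypothetical variable names (use whatever names your own script reads), so you can compare the printed values against what the Azure AI Foundry portal shows for your current project:

    import os

    # Hypothetical variable names -- rename to match your configuration.
    for name in ("PROJECT_CONNECTION_STRING", "PROJECT_NAME", "MODEL_DEPLOYMENT_NAME"):
        print(f"{name} = {os.environ.get(name, '<not set>')}")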

