Errors when trying to use Azure OpenAI o1 with max_completion_tokens

Tyler Suard 155 Reputation points
2025-04-04T17:05:39.6166667+00:00

I am trying to call the o1 and o3 models using AsyncAzureOpenAI. If I use max_tokens, I get an error saying the model cannot use max_tokens and only accepts max_completion_tokens. When I use max_completion_tokens instead, the client raises an error saying AsyncAzureOpenAI has no such keyword argument.
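
For reference, the max_tokens variant that produces the first error differs from the reproduce code below only in the token-limit parameter. This is an illustrative sketch (the function name is made up here, and it reuses the same o1_client and o1_deployment_name defined in the reproduce code):

async def send_prompt_with_max_tokens(prompt):
    # Same call as send_prompt_to_o1 below, but using the parameter that
    # the o1/o3 models reject in favor of max_completion_tokens.
    response = await o1_client.chat.completions.create(
        model=o1_deployment_name,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=5000,
    )
    return response.choices[0].message.content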

Error:

Traceback:

File "/opt/opentools/Jupyter/workspace/async_streamlit_3.py", line 530, in <module>
    generate_test_cases(requirement_pdf_path, reference_pdf_paths)
File "/opt/opentools/Jupyter/workspace/async_streamlit_3.py", line 505, in generate_test_cases
    asyncio.run(
File "/usr/lib64/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
File "/opt/opentools/Jupyter/workspace/async_streamlit_3.py", line 469, in generate_testcases_from_list_of_requirements
    await asyncio.gather(*tasks)
File "/opt/opentools/Jupyter/workspace/async_streamlit_3.py", line 134, in generate_testcases_from_one_requirement
    filtered_requirements_info = await send_prompt_to_o3_mini(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/opentools/Jupyter/workspace/async_streamlit_3.py", line 51, in send_prompt_to_o3_mini
    response = await o1_client.chat.completions.create(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/azureadmin/.local/lib/python3.11/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)

Code to reproduce:

import asyncio

from openai import AsyncAzureOpenAI

# o1_api_key, o1_api_version, o1_api_base, and o1_deployment_name are
# defined elsewhere in the script (Azure OpenAI resource settings).
o1_client = AsyncAzureOpenAI(
    api_key=o1_api_key,
    api_version=o1_api_version,
    base_url=f"{o1_api_base}/openai/deployments/{o1_deployment_name}",
)

async def send_prompt_to_o1(prompt):
    response = await o1_client.chat.completions.create(
        model=o1_deployment_name,
        messages=[
            {"role": "user", "content": f"{prompt}"},
        ],
        max_completion_tokens=5000,
    )
    print(response.choices[0].message.content)
    return response.choices[0].message.content

asyncio.run(send_prompt_to_o1("Hello"))


Accepted answer
  1. Saideep Anchuri 9,425 Reputation points Microsoft External Staff Moderator
    2025-04-04T17:31:51.3466667+00:00

    Hi Tyler Suard

    I'm glad that you were able to resolve your issue, and thank you for posting your solution so that others experiencing the same thing can easily reference it! Since the Microsoft Q&A community has a policy that "The question author cannot accept their own answer. They can only accept answers by others," I'll repost your solution in case you'd like to accept this answer.

    Ask: Errors when trying to use Azure OpenAI o1 with max_completion_tokens

    Solution: The issue is resolved. Running pip install --upgrade openai solved it.
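
    A quick way to confirm the upgrade took effect (a minimal sketch): restart any long-running kernel or Streamlit session so the new package is actually imported, then check the version from that same interpreter.

        import openai

        # Newer releases of the openai SDK accept max_completion_tokens on
        # chat.completions.create; print the version this interpreter imports.
        print(openai.__version__)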

    If I missed anything please let me know and I'd be happy to add it to my answer, or feel free to comment below with any additional information.

    If you have any other questions, please let me know. Thank you again for your time and patience throughout this issue.

     

    Please don’t forget to Accept Answer and select Yes for "was this answer helpful" wherever the information provided helps you, as this can benefit other community members.

    Thank You.

