Does Azure OpenAI support predicted outputs?

杜亚磊 0 Reputation points
2024-11-26T09:08:00.19+00:00
Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.

2 answers

  1. Sina Salam 13,371 Reputation points
    2024-11-26T13:49:06.81+00:00

    Hello 杜亚磊,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that you would like to confirm whether Azure OpenAI supports predicted outputs.

    Yes, Azure OpenAI Service does support features related to predicted outputs, especially structured outputs and reproducible outputs.

    Structured outputs allow you to define specific schemas for the generated content. This ensures that the output adheres to a predefined structure, making it useful for tasks like function calling, extracting structured data, and building complex workflows.

    NOTE: Use structured outputs when you need the generated content to follow a specific format or schema. This can help in automating tasks and integrating AI-generated content into larger systems. See https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/structured-outputs

    Sample code:

    from openai import AzureOpenAI

    # Reads AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and OPENAI_API_VERSION
    # from environment variables.
    client = AzureOpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # your deployment name
        messages=[{"role": "user", "content": "Generate a JSON object with name, age, and email fields"}],
        response_format={"type": "json_object"},  # guarantees syntactically valid JSON
        max_tokens=50,
        temperature=0.5,
    )
    print(response.choices[0].message.content)
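
    If the output must conform to an exact schema rather than just be valid JSON, a minimal sketch using the SDK's Pydantic parse helper could look like the following; the Person model and the gpt-4o deployment name are illustrative assumptions, not values from your setup:

    from pydantic import BaseModel
    from openai import AzureOpenAI

    class Person(BaseModel):
        name: str
        age: int
        email: str

    client = AzureOpenAI()  # endpoint, key, and api version from environment variables

    # The parse helper converts the Pydantic model into a JSON schema that the
    # service is constrained to follow, and returns a parsed Person instance.
    completion = client.beta.chat.completions.parse(
        model="gpt-4o",  # your deployment name (assumption)
        messages=[{"role": "user", "content": "Generate a person with name, age, and email fields"}],
        response_format=Person,
    )
    print(completion.choices[0].message.parsed)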
    

    Reproducible outputs help in generating more deterministic results. By using parameters like seed and system_fingerprint, you can ensure that the outputs are consistent across multiple runs.

    NOTE: Use reproducible outputs when you need consistent results for tasks such as testing, debugging, or when the same output is required across different instances. See https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/reproducible-output

    Sample code:

    from openai import AzureOpenAI

    client = AzureOpenAI()  # endpoint, key, and api version come from environment variables

    response = client.chat.completions.create(
        model="gpt-4o",  # your deployment name
        messages=[{"role": "user", "content": "Translate the following English text to French: 'Hello, how are you?'"}],
        max_tokens=60,
        temperature=0.5,
        seed=42,  # same seed + same parameters -> (mostly) repeatable output
    )
    print(response.choices[0].message.content)
    print(response.system_fingerprint)  # changes when the backend configuration changes
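
    As a quick sanity check, you could send the same request twice with the same seed and compare both the text and the system_fingerprint. This is only a sketch, and the gpt-4o deployment name is an assumption:

    from openai import AzureOpenAI

    client = AzureOpenAI()  # endpoint, key, and api version from environment variables

    def ask(seed: int) -> tuple[str, str]:
        # Returns the completion text and the backend system fingerprint.
        r = client.chat.completions.create(
            model="gpt-4o",  # your deployment name (assumption)
            messages=[{"role": "user", "content": "Name three French cities."}],
            temperature=0,
            seed=seed,
        )
        return r.choices[0].message.content, r.system_fingerprint

    first = ask(seed=42)
    second = ask(seed=42)
    # Outputs should generally match when the seed, parameters, and fingerprint all match.
    print(first == second)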
    

    I hope this is helpful! Do not hesitate to let me know if you have any other questions.


    Please don't forget to close the thread by upvoting and accepting this as an answer if it is helpful.


  2. 杜亚磊 0 Reputation points
    2024-11-27T02:54:56.5+00:00

    Hi SriLakshmi & Sina Salam,

    I'm talking about "predicted outputs" here as a specific technology. Here is an example:

    from openai import OpenAI
    code = """
    class User {
      firstName: string = "";
      lastName: string = "";
      username: string = "";
    }
    export default User;
    """
    refactor_prompt = """
    Replace the "username" property with an "email" property. Respond only 
    with code, and with no markdown formatting.
    """
    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": refactor_prompt
            },
            {
                "role": "user",
                "content": code
            }
        ],
        prediction={
            "type": "content",
            "content": code
        },
        stream=True
    )
    for chunk in stream:
        if chunk.choices[0].delta.content is not None:
            print(chunk.choices[0].delta.content, end="")
    

    You can find this example in the doc: https://platform.openai.com/docs/guides/predicted-outputs. Note the "prediction" parameter. When I use Azure OpenAI, I get this error:

    openai.BadRequestError: Error code: 400 - {'error': {'message': 'Unrecognized request argument supplied: prediction', 'type': 'invalid_request_error', 'param': None, 'code': None}}
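
    For reference, a minimal sketch of the equivalent call through the AzureOpenAI client is below; the endpoint, key, api_version, and deployment name are placeholders rather than my actual values, and refactor_prompt and code are the strings defined in the snippet above:

    from openai import AzureOpenAI

    # Placeholder credentials and version, not the actual values used here.
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-api-key>",
        api_version="2024-10-21",
    )

    # refactor_prompt and code are the strings defined in the snippet above.
    completion = client.chat.completions.create(
        model="gpt-4o",  # deployment name
        messages=[
            {"role": "user", "content": refactor_prompt},
            {"role": "user", "content": code},
        ],
        prediction={"type": "content", "content": code},
    )
    # On API versions that do not recognize the parameter, this call raises
    # openai.BadRequestError with 'Unrecognized request argument supplied: prediction'.
    print(completion.choices[0].message.content)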

