does azure openai support predicted outputs?

杜亚磊 5 Reputation points
2024-11-26T09:08:00.19+00:00
Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.
4,089 questions

2 answers

  1. Sina Salam 22,031 Reputation points Volunteer Moderator
    2024-11-26T13:49:06.81+00:00

    Hello 杜亚磊,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that you would like to confirm whether Azure OpenAI supports predicted outputs.

    Yes, Azure OpenAI Service does support features related to predicted outputs, especially structured outputs and reproducible outputs.

    Structured outputs allow you to define specific schemas for the generated content. This ensures that the output adheres to a predefined structure, making it useful for tasks like function calling, extracting structured data, and building complex workflows.

    NOTE: Use structured outputs when you need the generated content to follow a specific format or schema. This can help in automating tasks and integrating AI-generated content into larger systems. - https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/structured-outputs Sample code:

    from openai import AzureOpenAI

    # Reads AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT from the environment
    client = AzureOpenAI(api_version="2024-08-01-preview")
    response = client.chat.completions.create(
        model="gpt-4o",  # use your Azure deployment name here
        messages=[{"role": "user", "content": "Generate a JSON object with name, age, and email fields"}],
        response_format={"type": "json_object"},  # ask for valid JSON output
        max_tokens=50,
        temperature=0.5,
    )
    print(response.choices[0].message.content)
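Once the model returns JSON, you will typically parse and validate it on the client. Below is a minimal local sketch of that step; the `validate_user_json` helper and the sample response string are hypothetical illustrations, not part of any SDK. Structured outputs can enforce a schema server-side, but a defensive client-side check is still good practice:

```python
import json

def validate_user_json(raw: str) -> dict:
    """Parse model output and check it contains the expected fields
    with the expected types, raising ValueError otherwise."""
    data = json.loads(raw)
    required = {"name": str, "age": int, "email": str}
    for field, ftype in required.items():
        if field not in data or not isinstance(data[field], ftype):
            raise ValueError(f"field {field!r} missing or wrong type")
    return data

# A hypothetical model response:
sample = '{"name": "Ada", "age": 36, "email": "ada@example.com"}'
print(validate_user_json(sample)["email"])  # → ada@example.com
```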
    

    Reproducible outputs help in generating more deterministic results. By using parameters like seed and system_fingerprint, you can ensure that the outputs are consistent across multiple runs.

    NOTE: Use reproducible outputs when you need consistent results for tasks such as testing, debugging, or when the same output is required across different instances. - https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/reproducible-output Sample code:

    from openai import AzureOpenAI

    # Reads AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT from the environment
    client = AzureOpenAI(api_version="2024-08-01-preview")
    response = client.chat.completions.create(
        model="gpt-4o",  # use your Azure deployment name here
        messages=[{"role": "user", "content": "Translate the following English text to French: 'Hello, how are you?'"}],
        max_tokens=60,
        temperature=0.5,
        seed=42,  # same seed + same parameters -> more consistent output across runs
    )
    print(response.choices[0].message.content)
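To confirm reproducibility in practice, you can collect the text and `system_fingerprint` from several calls made with the same seed and parameters and compare them. The `runs_match` helper below is a hypothetical sketch (not part of the SDK): it treats runs as reproducible only when every call hit the same backend configuration (identical `system_fingerprint`) and produced identical text:

```python
def runs_match(runs):
    """runs: list of (system_fingerprint, text) pairs from repeated
    calls made with the same seed, model, and parameters.
    Returns True only if all fingerprints and all texts are identical."""
    fingerprints = {fp for fp, _ in runs}
    texts = {text for _, text in runs}
    return len(fingerprints) == 1 and len(texts) == 1

# Hypothetical results from three calls with seed=42:
print(runs_match([("fp_abc", "Bonjour"), ("fp_abc", "Bonjour"), ("fp_abc", "Bonjour")]))  # → True
```

Note that determinism is best-effort: if the fingerprints differ, the backend changed between calls and differing outputs are expected even with the same seed.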
    

    I hope this is helpful! Do not hesitate to let me know if you have any other questions.


    Please don't forget to close the thread by upvoting and accepting this as the answer if it is helpful.


  2. Sina Salam 22,031 Reputation points Volunteer Moderator
    2024-12-13T13:37:33.11+00:00

    Hello 杜亚磊,

    Thank you for your responses and patience.

    Regarding your challenges, I have been able to verify that the prediction parameter you mentioned, as outlined in the OpenAI API documentation, is not currently supported by Azure OpenAI Service. This limitation is confirmed by the error message you encountered: Unrecognized request argument supplied: prediction. While Azure OpenAI provides robust functionality, it does not always include every feature available in OpenAI’s native API, and this discrepancy can cause challenges when migrating or adapting code.

    Achieve similar outcomes

    However, here is a workaround for the prediction parameter: since Azure OpenAI does not directly support the prediction parameter, you can achieve similar outcomes by carefully designing your prompts. This approach involves structuring the prompt and parsing the AI-generated response to meet your specific requirements. Below is an example of how you can refactor code without using the prediction parameter:

    from openai import OpenAI
    # Original code snippet
    code = """
    class User {
      firstName: string = "";
      lastName: string = "";
      username: string = "";
    }
    export default User;
    """
    # Instruction for the model
    refactor_prompt = """
    Replace the "username" property with an "email" property. Respond only 
    with code, and with no markdown formatting.
    """
    # Initialize the OpenAI client and call the chat completions endpoint
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",  # adjust model as needed
        messages=[{"role": "user", "content": f"{refactor_prompt}\n{code}"}],
        max_tokens=200,  # specify an appropriate limit for tokens
        temperature=0,   # use deterministic output for consistency
    )
    # Print the AI's response
    print(response.choices[0].message.content)
    

    This code sends a clear, structured prompt to the model, instructing it to modify the username property to an email property. By parsing the response, you can achieve functionality equivalent to using the prediction parameter.
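When parsing the response, keep in mind that models sometimes wrap code in markdown fences even when told not to, so a small cleanup step is useful. The `strip_markdown_fences` helper below is a hypothetical sketch, not part of the openai package:

```python
FENCE = "`" * 3  # a markdown code fence, built up to avoid writing one literally here

def strip_markdown_fences(text: str) -> str:
    """Remove a leading fence line (possibly with a language tag) and a
    trailing fence line, if the model added them despite instructions."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith(FENCE):
        lines = lines[1:]
    if lines and lines[-1].startswith(FENCE):
        lines = lines[:-1]
    return "\n".join(lines).strip()

# A hypothetical fenced model response:
raw = FENCE + "typescript\nclass User {}\n" + FENCE
print(strip_markdown_fences(raw))  # → class User {}
```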

    Azure OpenAI Updates

    Azure OpenAI often lags behind OpenAI's API in terms of feature updates and parity. To stay informed about the latest features and their availability, you can regularly review the Azure OpenAI Service Documentation - https://learn.microsoft.com/en-us/azure/ai-services/openai . This resource is the official source for updates, feature releases, and best practices for using Azure OpenAI effectively. If the prediction parameter is an essential part of your workflow, you might want to consider using OpenAI’s API directly. The OpenAI platform currently offers the most comprehensive access to cutting-edge features like the prediction parameter.

    My recommendation

    If your application requires functionality that Azure OpenAI does not yet support, using OpenAI's API for those specific use cases might be more practical. However, if Azure OpenAI meets your other requirements, adapting your approach through manual prompt engineering, as shown in the example above, can serve as a temporary solution.
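One way to make this split concrete is a small router that tries Azure OpenAI first and falls back to OpenAI's API only when the request is rejected for using the prediction parameter. The function below is a hypothetical sketch (the two callables and the stubs stand in for real client calls); it keys off the "Unrecognized request argument" message you encountered:

```python
def call_with_fallback(call_azure, call_openai, **kwargs):
    """Try the Azure OpenAI call first; if it rejects the 'prediction'
    argument, retry the same request against OpenAI's API."""
    try:
        return call_azure(**kwargs)
    except Exception as exc:
        if "prediction" in kwargs and "Unrecognized request argument" in str(exc):
            return call_openai(**kwargs)
        raise

# Stubs simulating the two backends:
def azure_stub(**kwargs):
    if "prediction" in kwargs:
        raise RuntimeError("Unrecognized request argument supplied: prediction")
    return "azure result"

def openai_stub(**kwargs):
    return "openai result"

print(call_with_fallback(azure_stub, openai_stub, prediction={"content": "..."}))  # → openai result
```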

    Check out the following documentation for more details and updates:

    1. https://learn.microsoft.com/en-us/azure/ai-services/openai
    2. https://platform.openai.com/docs

    Please don't forget to close the thread by upvoting and accepting this as the answer if it is helpful.

