Quickstart: Get started using ChatGPT (preview) and GPT-4 (preview) with Azure OpenAI Service

Use this article to get started using Azure OpenAI.

Prerequisites

  • An Azure subscription - Create one for free.

  • Access granted to Azure OpenAI in the desired Azure subscription.

    Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. If you run into a problem with the application process, open an issue on this repo to contact us.

  • An Azure OpenAI Service resource with either the gpt-35-turbo (preview) or the gpt-4 (preview)1 models deployed. These models are currently available in the East US and South Central US regions. For more information about model deployment, see the resource deployment guide.

1 GPT-4 models are currently in preview. To access these models, existing Azure OpenAI customers can apply for access by filling out this form.

Go to Azure OpenAI Studio

Navigate to Azure OpenAI Studio at https://oai.azure.com/ and sign in with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.

From the Azure OpenAI Studio landing page, select ChatGPT playground (Preview).

Screenshot of the Azure OpenAI Studio landing page with ChatGPT playground highlighted.

Playground

Start exploring Azure OpenAI capabilities with a no-code approach through the Azure OpenAI Studio ChatGPT playground. From this page, you can quickly iterate and experiment with the model's capabilities.

Screenshot of the ChatGPT playground page.

Assistant setup

You can use the Assistant setup dropdown to select a few pre-loaded System message examples to get started.

System messages give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses.

The Add few-shot examples option lets you provide conversation examples that the model uses for in-context learning.
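
If you later move from the playground to your own code, one straightforward way to supply few-shot examples is to send them as extra user/assistant message pairs ahead of the live conversation, which mirrors the message structure described later in this article. A minimal, illustrative sketch (the example content below is made up):

# Few-shot examples can be sent as ordinary user/assistant message pairs,
# placed after the system message and before the real user query.
messages = [
    {"role": "system", "content": "You are an assistant that answers in one short sentence."},
    # Few-shot example pair (illustrative):
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    # The live user query follows the examples:
    {"role": "user", "content": "What is the capital of Japan?"},
]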

At any time while using the ChatGPT playground, you can select View code to see Python, curl, and JSON code samples pre-populated based on your current chat session and settings selections. You can then take this code and write an application to complete the same task you're currently performing with the playground.

Chat session

Selecting the Send button sends the entered text to the completions API, and the results are returned to the text box.

Select the Clear chat button to delete the current conversation history.

Settings

Name | Description
Deployments | Your deployment name that is associated with a specific model. For ChatGPT, you need to use the gpt-35-turbo model.
Temperature | Controls randomness. Lowering the temperature means that the model produces more repetitive and deterministic responses. Increasing the temperature results in more unexpected or creative responses. Try adjusting temperature or Top P, but not both.
Max length (tokens) | Set a limit on the number of tokens per model response. The API supports a maximum of 4096 tokens shared between the prompt (including system message, examples, message history, and user query) and the model's response. One token is roughly four characters for typical English text (see the token-counting sketch after this table).
Top probabilities | Similar to temperature, this controls randomness but uses a different method. Lowering Top P narrows the model's token selection to likelier tokens. Increasing Top P lets the model choose from tokens with both high and low likelihood. Try adjusting temperature or Top P, but not both.
Multi-turn conversations | Select the number of past messages to include in each new API request. This helps give the model context for new user queries. Setting this number to 10 results in five user queries and five system responses.
Stop sequences | Stop sequences make the model end its response at a desired point. The model response ends before the specified sequence, so it won't contain the stop sequence text. For ChatGPT, using <|im_end|> ensures that the model response doesn't generate a follow-up user query. You can include as many as four stop sequences.
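
As a rough check of the Max length guidance above (about four characters per token in typical English text), you can count tokens locally with the tiktoken package. This is a minimal sketch and an assumption on my part; tiktoken isn't used elsewhere in this quickstart, and cl100k_base is assumed to be the encoding used by the gpt-35-turbo family of models:

# Count tokens for a sample prompt with the assumed cl100k_base encoding.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
text = "I'm interested in buying a new Xbox"
tokens = encoding.encode(text)
print(f"{len(text)} characters -> {len(tokens)} tokens")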

Show panels

By default, there are three panels: assistant setup, chat session, and settings. Show panels allows you to add, remove, and rearrange the panels. If you close a panel and need to get it back, use Show panels to restore it.

Start a chat session

  1. From the Assistant setup drop-down, select Xbox customer support agent.

  2. You'll be prompted to confirm that you want to update the system message. Select Continue.

  3. In the chat session pane, enter the following question: "I'm interested in buying a new Xbox", and select Send.

  4. You'll receive a response similar to:

    Screenshot of a first question and answer in playground.

  5. Enter a follow-up question such as: "Which models support 4K?"

    Screenshot of follow-up question and answer in playground.

  6. Now that you have a basic conversation, select View code under Assistant setup to see the code behind the entire conversation so far:

#Note: The openai-python library support for Azure OpenAI is in preview.
import os
import openai
openai.api_type = "azure"
openai.api_base = "https://docs-test-001.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
  engine="gpt-35-turbo", # replace this value with the deployment name you chose when you deployed the associated model.
  messages = [{"role":"system","content":"You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."},{"role":"user","content":"How much is a PS5?"},{"role":"assistant","content":"I apologize, but I do not have information about the prices of other gaming devices such as the PS5. My primary focus is to assist with issues regarding Xbox devices. Is there a specific issue you are having with your Xbox device that I may be able to help with?"},{"role":"user","content":"I am interested in buying an Xbox?"},{"role":"assistant","content":"Great! There are several Xbox models available in the market, including the Xbox Series X, Xbox Series S, and Xbox One. The Xbox Series X is the latest and most powerful console, while the Xbox Series S is a more affordable option that still offers great performance. The Xbox One is the previous generation console, but it still has a large library of games available to play.\n\nBefore purchasing an Xbox, you may want to consider factors such as your budget, the types of games you want to play, and whether you want to play games in 4K resolution. Once you have decided on a console, you can purchase it from a retailer or directly from the Xbox website. Let me know if you have any other questions or concerns!"},{"role":"user","content":"Which models are 4K?"},{"role":"assistant","content":"Both the Xbox Series X and Xbox One X support 4K resolution gaming. The Xbox Series S, on the other hand, supports up to 1440p resolution gaming, but it can upscale games to 4K. It's worth noting that in order to take advantage of 4K gaming, you'll need a 4K TV or monitor that supports HDR (High Dynamic Range) and HDMI 2.0 or higher."}],
  temperature=0,
  max_tokens=350,
  top_p=0.95,
  frequency_penalty=0,
  presence_penalty=0,
  stop=None)
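
If you run this snippet outside of the playground, you might also want to print the assistant's reply from the response object. A minimal addition, using the same response-access pattern shown later in this article:

# Print only the assistant's reply text from the response above.
print(response['choices'][0]['message']['content'])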

Understanding the prompt structure

If you examine the sample from View code, you'll notice some unique tokens that weren't part of a typical GPT completion call. ChatGPT was trained to use special tokens to delineate different parts of the prompt. Content is provided to the model between <|im_start|> and <|im_end|> tokens. The prompt begins with a system message that can be used to prime the model with context or instructions. After that, the prompt contains a series of messages between the user and the assistant.

The assistant's response to the prompt will then be returned below the <|im_start|>assistant token and will end with <|im_end|> denoting that the assistant has finished its response. You can also use the Show raw syntax toggle button to display these tokens within the chat session panel.
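
Putting those pieces together, the raw prompt the model sees (and that Show raw syntax displays) follows the general shape below. The wording here is illustrative; the exact text generated by the playground may differ:

<|im_start|>system
You are an Xbox customer support agent whose primary goal is to help users with their Xbox devices.
<|im_end|>
<|im_start|>user
I'm interested in buying a new Xbox
<|im_end|>
<|im_start|>assistant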

The ChatGPT how-to guide provides an in-depth introduction to the new prompt structure and how to use the gpt-35-turbo model effectively.

Clean up resources

Once you're done testing the ChatGPT playground, if you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

Library source code | Package (PyPI)

Prerequisites

  • An Azure subscription - Create one for free
  • Access granted to Azure OpenAI Service in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI Service by completing the form at https://aka.ms/oai/access.
  • Python 3.7.1 or later.
  • The following Python library: os.
  • An Azure OpenAI Service resource with either the gpt-35-turbo (preview) or the gpt-4 (preview)1 models deployed. These models are currently available in the East US and South Central US regions. For more information about model deployment, see the resource deployment guide.

1 GPT-4 models are currently in preview. Existing Azure OpenAI customers can apply for access by filling out this form.

Set up

Install the OpenAI Python client library with:

pip install openai

Note

This library is maintained by OpenAI and is currently in preview. Refer to the release history or the version.py commit history to track the latest updates to the library.

Retrieve key and endpoint

To successfully make a call against Azure OpenAI, you'll need an endpoint and a key.

Variable name | Value
ENDPOINT | This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find the value in the Azure OpenAI Studio > Playground > Code View. An example endpoint is: https://docs-test-001.openai.azure.com/.
API-KEY | This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2.

Go to your resource in the Azure portal. The Endpoint and Keys can be found in the Resource Management section. Copy your endpoint and access key, as you'll need both for authenticating your API calls. You can use either KEY1 or KEY2. Having two keys allows you to securely rotate and regenerate keys without causing a service disruption.

Screenshot of the overview UI for an OpenAI Resource in the Azure portal with the endpoint & access keys location circled in red.

Create and assign persistent environment variables for your key and endpoint.

Environment variables

setx OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE" 
setx OPENAI_API_BASE "REPLACE_WITH_YOUR_ENDPOINT_HERE" 
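
The setx commands above are Windows commands, and they only take effect in new terminal sessions, so open a fresh terminal before continuing. As a quick sanity check that Python can see the values, you can run a short script like the following (a minimal sketch, not part of the official quickstart):

# Confirm that the key and endpoint environment variables are visible to Python.
import os

for name in ("OPENAI_API_KEY", "OPENAI_API_BASE"):
    value = os.getenv(name)
    print(f"{name} is {'set' if value else 'NOT set'}")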

Create a new Python application

  1. Create a new Python file called quickstart.py. Then open it in your preferred editor or IDE.

  2. Replace the contents of quickstart.py with the following code. You need to set the engine variable to the deployment name you chose when you deployed the ChatGPT or GPT-4 models. Entering the model name will result in an error unless you chose a deployment name that is identical to the underlying model name.

    #Note: The openai-python library support for Azure OpenAI is in preview.
    import os
    import openai
    openai.api_type = "azure"
    openai.api_base = os.getenv("OPENAI_API_BASE") 
    openai.api_version = "2023-03-15-preview"
    openai.api_key = os.getenv("OPENAI_API_KEY")
    
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo", # engine = "deployment_name".
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
            {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
            {"role": "user", "content": "Do other Azure Cognitive Services support this too?"}
        ]
    )
    
    print(response)
    print(response['choices'][0]['message']['content'])
    
  3. Run the application with the python command on your quickstart file:

    python quickstart.py
    

Output

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Yes, most of the Azure Cognitive Services support customer managed keys. However, not all services support it. You can check the documentation of each service to confirm if customer managed keys are supported.",
        "role": "assistant"
      }
    }
  ],
  "created": 1679001781,
  "id": "chatcmpl-6upLpNYYOx2AhoOYxl9UgJvF4aPpR",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 39,
    "prompt_tokens": 58,
    "total_tokens": 97
  }
}
Yes, most of the Azure Cognitive Services support customer managed keys. However, not all services support it. You can check the documentation of each service to confirm if customer managed keys are supported.

Understanding the message structure

The ChatGPT and GPT-4 models are optimized to work with inputs formatted as a conversation. The messages variable passes an array of dictionaries with different roles in the conversation delineated by system, user, and assistant. The system message can be used to prime the model by including context or instructions on how the model should respond.
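
Because the service doesn't store conversation state between calls, each request must include the history you want the model to see. A minimal sketch of continuing the conversation for another turn, assuming the same openai configuration as quickstart.py (the follow-up question is made up for illustration):

# Conversation history is sent with every request, so keep it in a list
# and extend it for each new turn.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
]
response = openai.ChatCompletion.create(engine="gpt-35-turbo", messages=messages)

# Append the assistant's reply plus a new user turn, then call the API again.
messages.append({"role": "assistant", "content": response['choices'][0]['message']['content']})
messages.append({"role": "user", "content": "Which Azure services support customer managed keys?"})
response = openai.ChatCompletion.create(engine="gpt-35-turbo", messages=messages)
print(response['choices'][0]['message']['content'])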

The ChatGPT & GPT-4 how-to guide provides an in-depth introduction to the options for communicating with these new models.

Clean up resources

If you want to clean up and remove an Azure OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.

Next steps

REST API Spec

Prerequisites

  • An Azure subscription - Create one for free.
  • Access granted to Azure OpenAI Service in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI Service by completing the form at https://aka.ms/oai/access.
  • An Azure OpenAI Service resource with either the gpt-35-turbo (preview) or the gpt-4 (preview)1 models deployed. These models are currently available in the East US and South Central US regions. For more information about model deployment, see the resource deployment guide.

1 GPT-4 models are currently in preview. To access these models, existing Azure OpenAI customers can apply for access by filling out this form.

Set up

Retrieve key and endpoint

To successfully make a call against Azure OpenAI, you'll need an endpoint and a key.

Variable name | Value
ENDPOINT | This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find the value in the Azure OpenAI Studio > Playground > Code View. An example endpoint is: https://docs-test-001.openai.azure.com/.
API-KEY | This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2.

Go to your resource in the Azure portal. The Endpoint and Keys can be found in the Resource Management section. Copy your endpoint and access key, as you'll need both for authenticating your API calls. You can use either KEY1 or KEY2. Having two keys allows you to securely rotate and regenerate keys without causing a service disruption.

Screenshot of the overview UI for an OpenAI Resource in the Azure portal with the endpoint & access keys location circled in red.

Create and assign persistent environment variables for your key and endpoint.

Environment variables

setx OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE" 
setx OPENAI_API_BASE "REPLACE_WITH_YOUR_ENDPOINT_HERE" 

REST API

In a bash shell, run the following command. You will need to replace gpt-35-turbo with the deployment name you chose when you deployed the ChatGPT or GPT-4 models. Entering the model name will result in an error unless you chose a deployment name that is identical to the underlying model name.

curl $OPENAI_API_BASE/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-03-15-preview \
  -H "Content-Type: application/json" \
  -H "api-key: $OPENAI_API_KEY" \
  -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure Cognitive Services support this too?"}]}'

The first line of the command with an example endpoint would appear as follows: curl https://docs-test-001.openai.azure.com/openai/deployments/{YOUR-DEPLOYMENT_NAME_HERE}/chat/completions?api-version=2023-03-15-preview \
If you encounter an error, double-check that you don't have a doubled / where your endpoint joins /openai/deployments.

If you want to run this command in a normal Windows command prompt, you need to alter the text to remove the \ characters and line breaks.

Output

{"id":"chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
"object":"chat.completion","created":1679072642,
"model":"gpt-35-turbo",
"usage":{"prompt_tokens":58,
"completion_tokens":68,
"total_tokens":126},
"choices":[{"message":{"role":"assistant",
"content":"Yes, other Azure Cognitive Services also support customer managed keys. Azure Cognitive Services offer multiple options for customers to manage keys, such as using Azure Key Vault, customer-managed keys in Azure Key Vault or customer-managed keys through Azure Storage service. This helps customers ensure that their data is secure and access to their services is controlled."},"finish_reason":"stop","index":0}]}

Output formatting is adjusted for ease of reading; the actual output is a single block of text without line breaks.
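
If you'd rather make the same REST call from Python instead of curl, the request can be sent with the requests package (an assumption here; it isn't used elsewhere in this quickstart). A minimal sketch using the same environment variables and deployment name as the curl command above:

# Send the same chat completions request that the curl command sends.
import json
import os

import requests  # assumption: installed separately with `pip install requests`

endpoint = os.environ["OPENAI_API_BASE"].rstrip("/")
url = f"{endpoint}/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-03-15-preview"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
        {"role": "user", "content": "Do other Azure Cognitive Services support this too?"},
    ]
}

response = requests.post(
    url,
    headers={"Content-Type": "application/json", "api-key": os.environ["OPENAI_API_KEY"]},
    data=json.dumps(payload),
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])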

Understanding the message structure

The ChatGPT and GPT-4 models are optimized to work with inputs formatted as a conversation. The messages variable passes an array of dictionaries with different roles in the conversation delineated by system, user, and assistant. The system message can be used to prime the model by including context or instructions on how the model should respond.

The ChatGPT & GPT-4 how-to guide provides an in-depth introduction to the options for communicating with these new models.

Clean up resources

If you want to clean up and remove an Azure OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.

Next steps