Get started with evaluating answers in a chat app in JavaScript

This article shows you how to evaluate a chat app's answers against a set of correct or ideal answers (known as ground truth). Whenever you change your chat application in a way that affects the answers, run an evaluation to compare the changes. This demo application offers tools that make it easier to run evaluations.

By following the instructions in this article, you will:

  • Use provided sample prompts tailored to the subject domain. These prompts are already in the repository.
  • Generate sample user questions and ground truth answers from your own documents.
  • Run evaluations using a sample prompt with the generated user questions.
  • Review analysis of answers.

Note

This article uses one or more AI app templates as the basis for the examples and guidance in the article. AI app templates provide you with well-maintained, easy to deploy reference implementations that help to ensure a high-quality starting point for your AI apps.

Architectural overview

Key components of the architecture include:

  • Azure-hosted chat app: The chat app runs in Azure App Service.
  • Microsoft AI Chat Protocol: Provides standardized API contracts across AI solutions and languages. The chat app conforms to the Microsoft AI Chat Protocol, which allows the evaluations app to run against any chat app that conforms to the protocol.
  • Azure AI Search: The chat app uses Azure AI Search to store the data from your own documents.
  • Sample questions generator: Can generate many questions for each document along with the ground truth answer. The more questions, the longer the evaluation.
  • Evaluator: Runs sample questions and prompts against the chat app and returns the results.
  • Review tool: Allows you to review the results of the evaluations.
  • Diff tool: Allows you to compare the answers between evaluations.

When you deploy this evaluations app to Azure, an Azure OpenAI endpoint is created for the GPT-4 model with its own capacity. When you evaluate chat applications, it's important that the evaluator has its own Azure OpenAI resource that uses GPT-4 with its own capacity.

Prerequisites

  • Azure subscription. Create one for free

  • Access granted to Azure OpenAI in the desired Azure subscription.

    Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access.

  • Deploy a chat app

  • The chat app deployment loads the data into the Azure AI Search resource. This resource is required for the evaluations app to work. Don't complete the Clean up resources section of the previous procedure.

    You'll need the following Azure resource information from that deployment, which is referred to as the chat app in this article:

    • Chat API URI: The service backend endpoint shown at the end of the azd up process.
    • Azure AI Search. The following values are required:
      • Resource name: The name of the Azure AI Search resource, reported as Search service during the azd up process.
      • Index name: The name of the Azure AI Search index where your documents are stored. This can be found in the Azure portal for the Search service.

    The Chat API URL allows the evaluations to make requests through your backend application. The Azure AI Search information allows the evaluation scripts to use the same deployment as your backend, loaded with the documents.

    Once you have collected this information, you shouldn't need to use the chat app development environment again. This article refers to the chat app several times later to indicate how the Evaluations app uses it. Don't delete the chat app resources until you complete the entire procedure in this article.

  • A development container environment is available with all dependencies required to complete this article. You can run the development container in GitHub Codespaces (in a browser) or locally using Visual Studio Code.

    • GitHub account

Open development environment

Begin now with a development environment that has all the dependencies installed to complete this article. You should arrange your monitor workspace so you can see both this documentation and the development environment at the same time.

This article was tested with the switzerlandnorth region for the evaluation deployment.

GitHub Codespaces runs a development container managed by GitHub with Visual Studio Code for the Web as the user interface. For the most straightforward development environment, use GitHub Codespaces so that you have the correct developer tools and dependencies preinstalled to complete this article.

Important

All GitHub accounts can use Codespaces for up to 60 hours free each month with 2 core instances. For more information, see GitHub Codespaces monthly included storage and core hours.

  1. Start the process to create a new GitHub Codespace on the main branch of the Azure-Samples/ai-rag-chat-evaluator GitHub repository.

  2. To display the development environment and the documentation available at the same time, right-click on the following button, and select Open link in new window.

    Open in GitHub Codespaces

  3. On the Create codespace page, review the codespace configuration settings, and then select Create new codespace.

    Screenshot of the confirmation screen before creating a new codespace.

  4. Wait for the codespace to start. This startup process can take a few minutes.

  5. In the terminal at the bottom of the screen, sign in to Azure with the Azure Developer CLI.

    azd auth login --use-device-code
    
  6. Copy the code from the terminal and then paste it into a browser. Follow the instructions to authenticate with your Azure account.

  7. Provision the required Azure resource, Azure OpenAI, for the evaluations app.

    azd up
    

    This azd command doesn't deploy the evaluations app, but it does create the Azure OpenAI resource with the required GPT-4 deployment for running the evaluations in the local development environment.

  8. The remaining tasks in this article take place in the context of this development container.

  9. The name of the GitHub repository is shown in the search bar. This visual indicator helps you distinguish the evaluations app from the chat app. This ai-rag-chat-evaluator repo is referred to as the Evaluations app in this article.

Prepare environment values and configuration information

Update the environment values and configuration information with the information you gathered during Prerequisites for the evaluations app.

  1. Create a .env file based on .env.sample:

    cp .env.sample .env
    
  2. Run these commands to get the required values for AZURE_OPENAI_EVAL_DEPLOYMENT and AZURE_OPENAI_SERVICE from your deployed resource group, and paste those values into the .env file:

    azd env get-value AZURE_OPENAI_EVAL_DEPLOYMENT
    azd env get-value AZURE_OPENAI_SERVICE
    
  3. Add the following values from the chat app for its Azure AI Search instance to the .env, which you gathered in the prerequisites section:

    AZURE_SEARCH_SERVICE="<service-name>"
    AZURE_SEARCH_INDEX="<index-name>"
    
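After you complete these steps, the .env file contains the four values discussed above. The following sketch shows what those entries might look like; the values are placeholders, so use the names that azd returned and the Azure AI Search values you gathered from your chat app deployment.

    AZURE_OPENAI_EVAL_DEPLOYMENT="<eval-deployment-name>"
    AZURE_OPENAI_SERVICE="<azure-openai-service-name>"
    AZURE_SEARCH_SERVICE="<service-name>"
    AZURE_SEARCH_INDEX="<index-name>"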

Use the Microsoft AI Chat Protocol for configuration information

The chat app and the evaluations app both implement the Microsoft AI Chat Protocol specification, an open-source, cloud-agnostic, and language-agnostic API contract for AI endpoints that's used for consumption and evaluation. When your client and middle-tier endpoints adhere to this API spec, you can consistently consume and run evaluations against your AI backends.
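To get a feel for what a protocol-style request looks like, the following TypeScript sketch sends a single question to the chat app's /chat endpoint with the same kind of overrides used in the configuration file in the next step. Treat the request and response shapes as illustrative assumptions: the exact fields are defined by the Microsoft AI Chat Protocol and by your chat app, and CHAT_APP_URL is a placeholder for your own backend URL.

    // Minimal sketch: send one question to a chat app that follows the chat protocol.
    // The message and context shapes shown here are assumptions for illustration only.
    const CHAT_APP_URL = process.env.CHAT_APP_URL ?? "http://localhost:50505";

    async function askQuestion(question: string): Promise<void> {
        const response = await fetch(`${CHAT_APP_URL}/chat`, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({
                messages: [{ role: "user", content: question }],
                context: {
                    overrides: { top: 3, temperature: 0.3, retrieval_mode: "hybrid" },
                },
            }),
        });
        if (!response.ok) {
            throw new Error(`Chat request failed: ${response.status}`);
        }
        // Print the raw JSON; where the answer text lives depends on your chat app.
        console.log(JSON.stringify(await response.json(), null, 2));
    }

    askQuestion("What does the employee handbook say about remote work?").catch(console.error);

The evaluator makes similar requests to the target_url for each generated question, which is why any chat app that conforms to the protocol can be evaluated.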

  1. Create a new file named my_config.json and copy the following content into it:

    {
        "testdata_path": "my_input/qa.jsonl",
        "results_dir": "my_results/experiment<TIMESTAMP>",
        "target_url": "http://localhost:50505/chat",
        "target_parameters": {
            "overrides": {
                "top": 3,
                "temperature": 0.3,
                "retrieval_mode": "hybrid",
                "semantic_ranker": false,
                "prompt_template": "<READFILE>my_input/prompt_refined.txt",
                "seed": 1
            }
        }
    }
    

    The evaluation script creates the my_results folder.

    The overrides object contains any configuration settings needed for the application. Each application defines its own set of settings properties.

  2. Use the following table to understand the meaning of the settings properties that are sent to the chat app:

    Settings Property | Description
    semantic_ranker | Whether to use semantic ranker, a model that reranks search results based on semantic similarity to the user's query. We disable it for this tutorial to reduce costs.
    retrieval_mode | The retrieval mode to use. The default is hybrid.
    temperature | The temperature setting for the model. The default is 0.3.
    top | The number of search results to return. The default is 3.
    prompt_template | An override of the prompt used to generate the answer based on the question and search results.
    seed | The seed value for any calls to GPT models. Setting a seed results in more consistent results across evaluations.
  3. Change the target_url to the URI value of your chat app, which you gathered in the prerequisites section. The chat app must conform to the chat protocol. The URI has the following format: https://CHAT-APP-URL/chat. Make sure the protocol and the chat route are part of the URI.

Generate sample data

To evaluate new answers, you must compare them to a "ground truth" answer, which is the ideal answer for a particular question. Generate questions and answers from the documents stored in Azure AI Search for the chat app.

  1. Copy the example_input folder into a new folder named my_input.

  2. In a terminal, run the following command to generate the sample data:

    python -m evaltools generate --output=my_input/qa.jsonl --persource=2 --numquestions=14
    

The question/answer pairs are generated and stored in my_input/qa.jsonl (in JSONL format) as input to the evaluator used in the next step. For a production evaluation, you would generate more QA pairs, such as more than 200 for this dataset.
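Each line of qa.jsonl is a single JSON object that pairs a generated question with its ground truth answer. The following line is an illustrative sketch with placeholder text; the exact property names and any extra metadata depend on the version of evaltools you're running, so check your generated file for the actual schema.

    {"question": "<generated question about your documents>", "truth": "<ground truth answer generated from the same document>"}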

Note

The small number of questions and answers per source is meant to let you complete this procedure quickly. It isn't meant to be a production evaluation, which should have more questions and answers per source.

Run first evaluation with a refined prompt

  1. Edit the my_config.json config file properties:

    Property | New value
    results_dir | my_results/experiment_refined
    prompt_template | <READFILE>my_input/prompt_refined.txt

    The refined prompt is specific about the subject domain.

    If there isn't enough information below, say you don't know. Do not generate answers that don't use the sources below. If asking a clarifying question to the user would help, ask the question.
    
    Use clear and concise language and write in a confident yet friendly tone. In your answers ensure the employee understands how your response connects to the information in the sources and include all citations necessary to help the employee validate the answer provided.
    
    For tabular information return it as an html table. Do not return markdown format. If the question is not in English, answer in the language used in the question.
    
    Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response. Use square brackets to reference the source, e.g. [info1.txt]. Don't combine sources, list each source separately, e.g. [info1.txt][info2.pdf].
    
  2. In a terminal, run the following command to run the evaluation:

    python -m evaltools evaluate --config=my_config.json --numquestions=14
    

    This script creates a new experiment folder in my_results/ with the evaluation. The folder contains the results of the evaluation, including:

    File Name | Description
    config.json | A copy of the configuration file used for the evaluation.
    evaluate_parameters.json | The parameters used for the evaluation. Very similar to config.json but includes additional metadata like timestamp.
    eval_results.jsonl | Each question and answer, along with the GPT metrics for each QA pair.
    summary.json | The overall results, like the average GPT metrics.
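    If you prefer to inspect the per-question results programmatically instead of with the review tool, a short script can read eval_results.jsonl line by line. The following TypeScript sketch assumes a Node.js environment; the result property names (question, answer) are assumptions for illustration, so check a real results file for the schema your version of evaltools produces.

    import { readFileSync } from "node:fs";

    // Illustrative sketch: print each QA pair from an evaluation results file.
    // The property names below are assumptions; inspect your own file for the real schema.
    const path = "my_results/experiment_refined/eval_results.jsonl";
    const lines = readFileSync(path, "utf-8")
        .split("\n")
        .filter((line) => line.trim() !== "");

    for (const line of lines) {
        const result = JSON.parse(line);
        console.log(`Q: ${result.question}`);
        console.log(`A: ${result.answer}`);
        console.log("---");
    }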

Run second evaluation with a weak prompt

  1. Edit the my_config.json config file properties:

    Property | New value
    results_dir | my_results/experiment_weak
    prompt_template | <READFILE>my_input/prompt_weak.txt

    The weak prompt has no context about the subject domain:

    You are a helpful assistant.
    
  2. In a terminal, run the following command to run the evaluation:

    python -m evaltools evaluate --config=my_config.json --numquestions=14
    

Run third evaluation with a specific temperature

Use a prompt that allows for more creativity.

  1. Edit the my_config.json config file properties:

    Existing or new | Property | New value
    Existing | results_dir | my_results/experiment_ignoresources_temp09
    Existing | prompt_template | <READFILE>my_input/prompt_ignoresources.txt
    New | temperature | 0.9

    The default temperature is 0.7. The higher the temperature, the more creative the answers.

    The ignore prompt is short:

    Your job is to answer questions to the best of your ability. You will be given sources but you should IGNORE them. Be creative!
    
  2. The config object should look like the following, except with your own results_dir path:

    {
        "testdata_path": "my_input/qa.jsonl",
        "results_dir": "my_results/prompt_ignoresources_temp09",
        "target_url": "https://YOUR-CHAT-APP/chat",
        "target_parameters": {
            "overrides": {
                "temperature": 0.9,
                "semantic_ranker": false,
                "prompt_template": "<READFILE>my_input/prompt_ignoresources.txt"
            }
        }
    }
    
  3. In a terminal, run the following command to run the evaluation:

    python -m evaltools evaluate --config=my_config.json --numquestions=14
    

Review the evaluation results

You performed three evaluations based on different prompts and app settings. The results are stored in the my_results folder. Review how the results differ based on the settings.

  1. Use the review tool to see the results of the evaluations:

    python -m evaltools summary my_results
    
  2. The results look something like:

    Screenshot of evaluations review tool showing the three evaluations.

    Each value is returned as a number and a percentage.

  3. Use the following table to understand the meaning of the values.

    Value | Description
    Groundedness | This refers to how well the model's responses are based on factual, verifiable information. A response is considered grounded if it's factually accurate and reflects reality.
    Relevance | This measures how closely the model's responses align with the context or the prompt. A relevant response directly addresses the user's query or statement.
    Coherence | This refers to how logically consistent the model's responses are. A coherent response maintains a logical flow and doesn't contradict itself.
    Citation | This indicates if the answer was returned in the format requested in the prompt.
    Length | This measures the length of the response.
  4. The results should indicate that all three evaluations had high relevance, and that experiment_ignoresources_temp09 had the lowest relevance.

  5. Select the folder to see the configuration for the evaluation.

  6. Enter Ctrl + C to exit the app and return to the terminal.

Compare the answers

Compare the returned answers from the evaluations.

  1. Select two of the evaluations to compare, then use the same review tool to compare the answers:

    python -m evaltools diff my_results/experiment_refined my_results/experiment_ignoresources_temp09
    
  2. Review the results. Your results might vary.

    Screenshot of comparison of evaluation answers between evaluations.

  3. Enter Ctrl + C to exit the app and return to the terminal.

Suggestions for further evaluations

  • Edit the prompts in my_input to tailor the answers, such as for subject domain, length, and other factors.
  • Edit the my_config.json file to change parameters such as temperature and semantic_ranker, and then rerun the experiments.
  • Compare different answers to understand how the prompt and question affect the answer quality.
  • Generate a separate set of questions and ground truth answers for each document in the Azure AI Search index. Then rerun the evaluations to see how the answers differ.
  • Alter the prompts to indicate shorter or longer answers by adding the requirement to the end of the prompt. For example, Please answer in about 3 sentences.

Clean up resources and dependencies

Clean up Azure resources

The Azure resources created in this article are billed to your Azure subscription. If you don't expect to need these resources in the future, delete them to avoid incurring more charges.

To delete the Azure resources and remove the source code, run the following Azure Developer CLI command:

azd down --purge

Clean up GitHub Codespaces

Deleting the GitHub Codespaces environment ensures that you can maximize the free per-core-hours entitlement for your account.

Important

For more information about your GitHub account's entitlements, see GitHub Codespaces monthly included storage and core hours.

  1. Sign in to the GitHub Codespaces dashboard (https://github.com/codespaces).

  2. Locate your currently running Codespaces sourced from the Azure-Samples/ai-rag-chat-evaluator GitHub repository.

    Screenshot of all the running Codespaces including their status and templates.

  3. Open the context menu for the codespace and then select Delete.

    Screenshot of the context menu for a single codespace with the delete option highlighted.

Return to the chat app article to clean up those resources.

Next steps