Discrepancy between a sample chatbot using gpt-35-turbo in a Free-tier Azure subscription and ChatGPT 3.5

DAA 20 Reputation points
2024-04-02T22:22:56.8933333+00:00

Hello,

I have installed and deployed the sample app described in this link:

https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md#deploying-from-scratch

I have followed the guidelines for minimal cost deployment:

https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md#deploying-from-scratch

I ran the following conversation through both this sample app and ChatGPT (https://chat.openai.com/?sso=):

I gave the CSV data below to both chatbots:

Line Quantity Description Data Category Asset Type Fee Type Request Type Variable Start Date End Date List Price Amount Pre-Optimization Total Amount Optimization Factor % Optimization Discount Amount Net Amount
1 2335 Sched FixedIncome SecMaster Unique Security Master Fixed Income SCHEDULED Unique 01/01/2024 31/01/2024 "$3,123.75" "$3,123.75" "$3,123.75" 4.797 "$2,973.90" $149.85
2 810 Sched CMO/ABS SecMaster Unique Security Master CMO/ABS SCHEDULED Unique 01/01/2024 31/01/2024 "$1,249.58" "$1,249.58" "$1,249.58" 4.321 "$1,195.59" $53.99
3 6140 Schd FI ColTagging Unique Collateral Tagging Fixed Income SCHEDULED Unique 01/01/2024 31/01/2024 "$8,130.00" "$8,130.00" "$8,130.00" 100 $0.00 "$8,130.00"

ChatGPT immediately understood what the data was and provided a breakdown of it, showing that it understood the rows and columns. The sample app echoed the data back as pipe-separated values, which seemed to suggest it had also understood the structure of the data (see screenshots below).

[Screenshot: ChatGPT's breakdown of the data]

[Screenshot: the sample app's pipe-separated output]

I then asked this question to both chatbots:

What is the total Net Amount across the Security Master data category?
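
For reference, the expected value can be computed directly from the three rows above. The following is a minimal Python sketch, under the assumption that the last field of each row is Net Amount and that only lines 1 and 2 fall under the Security Master data category (line 3 is Collateral Tagging):

    # Net Amount (last field) per line, together with each line's Data Category,
    # transcribed from the CSV rows above.
    rows = [
        {"line": 1, "data_category": "Security Master", "net_amount": 149.85},
        {"line": 2, "data_category": "Security Master", "net_amount": 53.99},
        {"line": 3, "data_category": "Collateral Tagging", "net_amount": 8130.00},
    ]

    # Sum Net Amount only for rows whose Data Category is "Security Master".
    total = sum(r["net_amount"] for r in rows if r["data_category"] == "Security Master")
    print(f"Total Net Amount across Security Master: ${total:,.2f}")  # -> $203.84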

ChatGPT provided the correct answer, and it even explained how it reached this answer (see screenshot below).

[Screenshot: ChatGPT's answer and explanation]

The sample app, on the other hand, gave a wrong answer based on a partial understanding of the data and/or the question (see screenshot below).

[Screenshot: the sample app's answer]

The app uses gpt-35-turbo, and ChatGPT at that URL (https://chat.openai.com/) uses GPT-3.5, so I am wondering why there seems to be a discrepancy in the two chatbots' ability to understand the prompts. Is it related to the fact that the sample app is using a free-tier Azure subscription, and would using a paid subscription enhance the chatbot's ability?

Please let me know if you need more information.

Thank you.

Azure OpenAI Service

1 answer

  1. navba-MSFT 24,465 Reputation points Microsoft Employee
    2024-04-03T09:18:49.8266667+00:00

    @DAA Welcome to the Microsoft Q&A Forum, and thank you for posting your query here!

    I was able to get output similar to ChatGPT's in my Azure OpenAI deployment (gpt-35-turbo model).

    See below:

    [Screenshot: Azure OpenAI response to the same CSV data and question]

    Please note that the ability and accuracy of the Azure OpenAI response depend on the system prompt instructions you provide. The clearer and more concise your prompts are, the better its accuracy will be. See the system prompt I used below:

    You are an AI assistant that helps people to calculate the total net amount specifically for the 'Security Master' column only based on the provided CSV data. You also need to provide the clarity on how it calculated and found the answer and share the CSV data it used to obtain the answer.
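
    For illustration, here is a minimal sketch of how such a system prompt can be passed to an Azure OpenAI gpt-35-turbo deployment using the openai Python package (v1.x). The endpoint, API key environment variables, API version, and deployment name are placeholders, not values from this thread:

        import os
        from openai import AzureOpenAI  # pip install openai>=1.0

        # Placeholder endpoint/key values -- substitute your own resource details.
        client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",
        )

        system_prompt = (
            "You are an AI assistant that helps people to calculate the total net amount "
            "specifically for the 'Security Master' column only based on the provided CSV data. "
            "You also need to provide the clarity on how it calculated and found the answer "
            "and share the CSV data it used to obtain the answer."
        )

        csv_data = "...paste the CSV header and rows here..."

        response = client.chat.completions.create(
            model="gpt-35-turbo",  # the name of your Azure OpenAI deployment
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": csv_data + "\n\nWhat is the total Net Amount across the Security Master data category?"},
            ],
            temperature=0,  # deterministic output helps with arithmetic over tabular data
        )
        print(response.choices[0].message.content)

    Setting a low temperature and spelling out the expected calculation in the system prompt both tend to make the model's handling of tabular data more consistent.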

    Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.

