Guidance for QnA bot using Azure OpenAI

Sariga Rahul 146 Reputation points
2023-06-07T15:21:45.7133333+00:00

Hi, I'm developing a QnA bot using Azure OpenAI gpt-3.5-turbo. Can someone guide me on how to give the correct training data to GPT? With my current training, GPT returns different answers each time, or sometimes wrong answers. Correct guidance on how to prepare the training data would be helpful.

Please also advise on what the system message should contain.

Also, I am having difficulty adding context. Pointers to the correct documentation or guidance on these topics would be really helpful.


2 answers

  1. YutongTie-MSFT 53,966 Reputation points Moderator
    2023-06-08T03:37:47.4233333+00:00

    Hello @Sariga Rahul

    Thanks for reaching out to us. There is a new feature that helps customers use the model with their own data - https://learn.microsoft.com/en-us/azure/cognitive-services/openai/use-your-data-quickstart?tabs=command-line&pivots=programming-language-studio

    You can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication.

    It's very easy to add your own data and adjust how heavily the model relies on it. I hope this helps!
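
    As a rough sketch of what the "use your own data" preview call can look like over REST, assuming you have already created an Azure Cognitive Search index over your documents as described in the quickstart above. The resource name, deployment name, index name and api-version below are placeholders, so please verify them against the linked documentation:

        import os
        import requests

        # Placeholder names - replace with your own Azure resources.
        resource = "my-openai-resource"      # Azure OpenAI resource name
        deployment = "gpt-35-turbo"          # chat model deployment name
        api_version = "2023-06-01-preview"   # check the quickstart for the current preview version

        url = (
            f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/extensions/chat/completions?api-version={api_version}"
        )

        body = {
            # Ground the model on your own documents indexed in Azure Cognitive Search.
            "dataSources": [
                {
                    "type": "AzureCognitiveSearch",
                    "parameters": {
                        "endpoint": "https://my-search-service.search.windows.net",
                        "key": os.environ["SEARCH_KEY"],
                        "indexName": "my-qna-index",
                    },
                }
            ],
            "messages": [
                {"role": "system", "content": "You are a QnA assistant. Answer only from the retrieved documents."},
                {"role": "user", "content": "How do I reset my password?"},
            ],
        }

        headers = {"api-key": os.environ["AZURE_OPENAI_KEY"], "Content-Type": "application/json"}
        response = requests.post(url, headers=headers, json=body)
        # The exact response shape of the preview endpoint may change; inspect the full JSON.
        print(response.json())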

    Regards,

    Yutong

    Please kindly accept the answer if you find it helpful, to support the community. Thanks a lot.


  2. Joel Borellis (gametolearn) 195 Reputation points
    2023-06-09T14:41:08.13+00:00

    @Sariga Rahul check this out for details on fine-tuning a model: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-studio

    Right now gpt-3.5-turbo cannot be fine-tuned; only the models listed at the site above can.

    The above is about "fine-tuning", which is not the same as training a model from scratch. What that means is that you can get higher-quality results from the model with fine-tuning, because the model's weights are adjusted on your specific prompts and structure.
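
    For reference, the fine-tunable base models take their training data as a JSONL file of prompt/completion pairs (the legacy format described in the fine-tuning guide above, not the chat message format). The QnA pairs below are purely hypothetical and only illustrate the file layout:

        import json

        # Hypothetical QnA pairs in the legacy prompt/completion format used by
        # the fine-tunable base models (not gpt-3.5-turbo).
        examples = [
            {"prompt": "Q: What are your support hours?\nA:",
             "completion": " We are available 9am-5pm IST, Monday to Friday.\n"},
            {"prompt": "Q: How do I reset my password?\nA:",
             "completion": " Use the 'Forgot password' link on the sign-in page.\n"},
        ]

        # Fine-tuning expects one JSON object per line (JSONL).
        with open("training_data.jsonl", "w", encoding="utf-8") as f:
            for example in examples:
                f.write(json.dumps(example) + "\n")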

    On the system message: this would not necessarily be part of the fine-tuning but part of the prompt. Keep in mind that the models that support fine-tuning do not have the same message structure as gpt-3.5-turbo. Turbo conforms to the system, assistant, user message structure, while the fine-tunable models do not, BUT you can still include a message as part of the prompt that instructs the model what it is, what its purpose is, and what personality traits it should have - that is the system message.
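
    Here is a minimal sketch of that message structure for gpt-3.5-turbo, where the system message carries the bot's instructions plus the retrieved context, and the temperature is lowered to reduce the answer variation you are seeing. The endpoint, deployment name and context text are placeholder values, and this uses the openai Python package configured for Azure:

        import os
        import openai

        # Azure OpenAI configuration - placeholder values for your own resource.
        openai.api_type = "azure"
        openai.api_base = "https://my-openai-resource.openai.azure.com/"
        openai.api_version = "2023-05-15"
        openai.api_key = os.environ["AZURE_OPENAI_KEY"]

        # Context retrieved from your own documents (hypothetical example text).
        context = "Support hours: 9am-5pm IST, Monday to Friday. Password resets are self-service."

        response = openai.ChatCompletion.create(
            engine="gpt-35-turbo",  # your deployment name
            temperature=0,          # lower temperature -> more consistent answers
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You are a QnA bot for our product documentation. "
                        "Answer only from the context below; if the answer is not there, say you don't know.\n\n"
                        "Context:\n" + context
                    ),
                },
                {"role": "user", "content": "When can I contact support?"},
            ],
        )
        print(response["choices"][0]["message"]["content"])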

