Karishma Nanda, greetings!
To better understand the issue, could you please provide more details about the type of data analytics questions you're asking?
Additionally, can you provide an example of a question that returned an incorrect answer?
I would recommend cleaning up the role information (system message) around how the model handles uncertainty. If the exact answer cannot be extracted, change the instruction from "If you are unsure of an answer, you can say 'I don't know' or 'I'm not sure'" to "If the answer cannot be extracted from the retrieved documents, please respond with 'I am not sure. Please visit [any of your site links] for more details.'" You should follow up this response with a clarifying or follow-up question.
Please see the Azure OpenAI On Your Data documentation for more details.
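As an illustration, here is a minimal sketch of how that fallback instruction could be placed in the system message (role information) when calling a chat deployment with On Your Data. It assumes the Python `openai` SDK and an Azure AI Search data source; the endpoint, deployment, index, and key values are placeholders, not values from your setup:

```python
from openai import AzureOpenAI

# Placeholder resource names and API version for illustration only.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-15-preview",
)

# Role information with an explicit fallback instead of a bare "I don't know".
system_message = (
    "You answer data analytics questions using only the retrieved documents. "
    "If the answer cannot be extracted from the retrieved documents, respond with "
    "'I am not sure. Please visit <site link> for more details.' "
    "and then ask a clarifying follow-up question."
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "What was the month-over-month revenue growth in Q3?"},
    ],
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search-resource>.search.windows.net",
                    "index_name": "<your-index>",
                    "authentication": {"type": "api_key", "key": "<search-api-key>"},
                },
            }
        ]
    },
)
print(response.choices[0].message.content)
```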
Also, try adjusting the `temperature`, `top_p`, and `response_format` parameters, which help you further tune responses.
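For example, a minimal sketch (reusing the hypothetical client and deployment above) of how those parameters might be passed; note that `response_format={"type": "json_object"}` only applies when the model and API version support JSON mode and the prompt itself asks for JSON:

```python
# Lower temperature and top_p make answers more deterministic and less prone to drift;
# response_format can force structured JSON output where that suits the question.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "List the top three products by revenue as JSON."},
    ],
    temperature=0.1,  # less random sampling
    top_p=0.9,        # restrict nucleus sampling
    response_format={"type": "json_object"},  # requires a JSON-capable model/API version
)
print(response.choices[0].message.content)
```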
To reduce the likelihood of hallucination, you can try the following steps:
- Increase the amount of training data: One of the most effective ways to reduce hallucination is to train the model on a larger dataset. This can help the model learn more about the relationships between the input data and the expected output.
- Use a more diverse dataset: If the training data is too similar, the model may not be able to generalize well to new inputs. Using a more diverse dataset can help the model learn to handle a wider range of inputs.
In addition to these steps, it's also important to carefully evaluate the quality of the training data and the performance of the model. If the model is still producing incorrect answers, you may need to re-evaluate that data.
Do let me know if that helps or if you have any further queries.