@romungi-MSFT I changed the model to GPT-4-32k and used the JSONL format instead of CSV, and that solved it. Thanks a lot for your support.
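In case it helps anyone else hitting the same problem, this is roughly how the test dataset can be converted from CSV to JSONL — a minimal sketch; the file names and the "question"/"answer" column headers are placeholders, not necessarily the exact ones from my run:

```python
import csv
import json

# Convert a CSV test dataset to JSONL (one JSON object per line),
# which is the format the evaluation run accepted.
# File names and column headers below are assumptions; keep whatever
# headers your CSV actually uses.
with open("test_dataset.csv", newline="", encoding="utf-8") as src, \
        open("test_dataset.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        # Each line becomes e.g. {"question": "...", "answer": "..."}
        dst.write(json.dumps(row, ensure_ascii=False) + "\n")
```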
Wrong output in evaluation with default questions - LLM evaluation with Azure AI Studio
Dear Microsoft community,
I am following this official tutorial step by step: https://learn.microsoft.com/en-us/azure/ai-studio/tutorials/deploy-copilot-ai-studio#customize-prompt-flow-with-multiple-data-sources
When I run the evaluation, the result contains the questions and answers I prepared in the test dataset. This part is correct.
However, the result with the evaluation metrics is wrong: the questions from the test dataset are not considered. Instead, I always get the same default question-and-answer pair.
How can I solve this problem? I used the custom evaluation instead of the built-in evaluation; when I use the built-in evaluation, I get a different error, which I posted here: https://learn.microsoft.com/en-us/answers/questions/1598940/error-flow-runtime-not-found-llms-built-in-evaluat.
Thanks in advance for your kind support.