Wrong evaluation output with default questions - LLM evaluation with Azure AI Studio

ferrand 55 Reputation points
2024-02-27T21:47:34.63+00:00

Dear Microsoft community, I am following this official tutorial step by step: https://learn.microsoft.com/en-us/azure/ai-studio/tutorials/deploy-copilot-ai-studio#customize-prompt-flow-with-multiple-data-sources. When I run the evaluation, I get a result with the questions and answers I prepared in the test dataset, which is correct:
(screenshot)

However, the result with evaluation metrics is wrong. The questions from the test dataset are not considered; instead, I always get the same default question-and-answer pair:
(screenshot)

How can I solve this problem? Note that I used custom evaluation instead of built-in evaluation; with built-in evaluation I get a different issue, which I posted here: https://learn.microsoft.com/en-us/answers/questions/1598940/error-flow-runtime-not-found-llms-built-in-evaluat.

Thanks in advance for your kind support.

Azure Machine Learning
Azure AI Search
Azure OpenAI Service
Azure AI services

1 answer

  1. ferrand 55 Reputation points
    2024-02-28T21:54:41.37+00:00

    @romungi-MSFT I changed the model to GPT-4 32k and used JSONL format instead of CSV for the test dataset. That solved it. Thanks a lot for your support.
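
    For anyone hitting the same issue, a minimal sketch of the CSV-to-JSONL conversion (the column names come from whatever your CSV header contains; each row becomes one JSON object per line, which is the JSONL shape the evaluation expects):

    ```python
    import csv
    import json

    def csv_to_jsonl(csv_path: str, jsonl_path: str) -> None:
        """Convert a CSV test dataset to JSONL: one JSON object per row,
        keyed by the CSV header columns."""
        with open(csv_path, newline="", encoding="utf-8") as src, \
             open(jsonl_path, "w", encoding="utf-8") as dst:
            for row in csv.DictReader(src):
                dst.write(json.dumps(row, ensure_ascii=False) + "\n")
    ```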

    1 person found this answer helpful.
