Hello Piyush Athawale,
Welcome to the Microsoft Q&A and thank you for posting your questions here.
I understand that you would like to ensure that all documents retrieved from RAG (Azure Search) are evaluated by the LLM.
With large language models (LLMs), it sometimes happens that not all retrieved documents are evaluated or cited. To reduce the chance of documents being skipped during evaluation, you can use several techniques:
- **Batch processing** - evaluate the documents in smaller groups so each batch fits comfortably in the context window.
- **Context window optimization** - include only the most relevant parts of each document in the prompt.
- **Preprocessing** - summarize or highlight the key information in each document before passing it to the LLM.
- **Iterative refinement** - refine the evaluation over multiple passes rather than a single call.
- **Custom evaluation metrics** - define metrics that check whether each retrieved document was actually considered.

Additionally, implementing a preview step lets you review the retrieved documents before they are passed to the LLM. Use these links for more detailed information: https://composio.dev/blog/llm-evaluation-guide and https://labelstud.io/blog/llm-evaluations-techniques-challenges-and-best-practices
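As a minimal sketch of the batch-processing idea above: split the retrieved documents into groups small enough that no document risks being truncated out of the context window, then run each group through your evaluation call and collect the results. The `evaluate_batch` callable and the character budget below are hypothetical placeholders - in practice you would wrap your actual Azure OpenAI (or other LLM) call and size the budget to your model's context window:

```python
def batch_documents(docs, max_chars_per_batch=4000):
    """Split retrieved documents into batches that each stay under a
    rough size budget, so no single call overflows the context window.
    A document larger than the budget still gets its own batch."""
    batches, current, size = [], [], 0
    for doc in docs:
        if current and size + len(doc) > max_chars_per_batch:
            batches.append(current)
            current, size = [], 0
        current.append(doc)
        size += len(doc)
    if current:
        batches.append(current)
    return batches


def evaluate_all(docs, evaluate_batch, max_chars_per_batch=4000):
    """Run the caller-supplied evaluate_batch callable (e.g. a wrapper
    around your LLM call) over every batch, guaranteeing each retrieved
    document is passed to the model exactly once."""
    results = []
    for batch in batch_documents(docs, max_chars_per_batch):
        results.extend(evaluate_batch(batch))
    return results


# Usage with a stub evaluator; in a real pipeline evaluate_batch would
# build a prompt from the batch and call the LLM once per batch.
docs = [f"document {i} contents ..." for i in range(10)]
results = evaluate_all(docs, lambda batch: [f"evaluated: {d[:10]}" for d in batch])
print(len(results))  # one result per retrieved document
```

Because every document lands in exactly one batch, you can assert after the run that the number of results equals the number of retrieved documents - a simple check that nothing was skipped.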
I hope this is helpful! Do not hesitate to let me know if you have any other questions.
Please don't forget to close the thread by upvoting and accepting this as the answer if it was helpful.