How can I improve the capabilities of my Copilot bot to match those of Azure OpenAI Studio's Chat playground?
We deployed a couple of Power Virtual Agents (now Copilots) by following the Azure OpenAI Studio -> Copilot Studio flow, and as part of this we tested the bots against our data (Azure AI Search) in both Azure OpenAI Studio and Copilot Studio. There are clear differences between the results in the two products, which, after some reading, I was able to identify as current limitations of Copilots. For example, Copilots are not turn-based bots, yet testing the same bot in Azure OpenAI Studio's Chat playground shows that turn-based conversations are possible there. I also noticed that if we ask the bot to retrieve data and present it in a table format, the Chat playground has no problem doing so, but the Copilot responds with "I'm sorry, I'm not sure how to help with that. Can you try rephrasing?".

Is any work being done so that Copilot answers eventually reach the same level as Azure OpenAI's? Is there something I can do to improve the situation in the meantime? I considered using Azure Bot Service instead, which might yield the results I'm looking for, but then I would be giving up the ability to carry out automated tasks, something Copilots can do.
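For reference, here is roughly the request I believe the Chat playground issues against our index, and both the multi-turn history and the table request succeed when we call the deployment this way ourselves. This is a minimal sketch assuming the openai Python SDK (v1.x) and the Azure OpenAI "on your data" request shape; every endpoint, key, deployment, and index name below is a placeholder:

```python
# Minimal sketch of a Chat playground-style call grounded on our
# Azure AI Search index ("on your data"). All credentials and names
# are placeholders; the api_version is the GA version from the docs.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
)

# Multi-turn works here because the full chat history is passed on
# every request, which is what the Chat playground does for us.
messages = [
    {"role": "system", "content": "Answer from the retrieved documents. Use Markdown tables when asked."},
    {"role": "user", "content": "Which products shipped last quarter?"},
    {"role": "assistant", "content": "Products A, B, and C shipped last quarter."},
    {"role": "user", "content": "Present them in a table with ship dates."},
]

response = client.chat.completions.create(
    model="<chat-deployment-name>",
    messages=messages,
    # Grounds the answer in the Azure AI Search index.
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search>.search.windows.net",
                    "index_name": "<index-name>",
                    "authentication": {"type": "api_key", "key": "<search-key>"},
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
```

Since this direct call handles both scenarios, the gap seems to be in how Copilot Studio invokes the deployment rather than in the model or the index itself.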