@Muhammad Zubair Applying LLMs to tabular data is a bit trickier than asking questions about regular textual facts. In many cases it is impossible to pass the complete table into the prompt due to token limits, but it should work for data that fits in the prompt.
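To make the "fits in the prompt" case concrete, here is a minimal sketch: serialize a small table to text, check it against a rough token budget, and ask the model about it inline. The file name, deployment name, endpoint variables, and the 6000-token budget are placeholder assumptions, not part of any specific setup.

```python
# Minimal sketch: inline a small table into the prompt (assumptions noted above).
import os
import pandas as pd
import tiktoken
from openai import AzureOpenAI

df = pd.read_csv("sales.csv")          # small table, e.g. a few hundred rows
table_text = df.to_csv(index=False)    # CSV keeps the serialization compact

enc = tiktoken.get_encoding("cl100k_base")
if len(enc.encode(table_text)) > 6000:  # leave headroom under the model's context limit
    raise ValueError("Table too large to inline; consider retrieval or aggregation instead")

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="gpt-4o",  # your Azure OpenAI deployment name
    messages=[
        {"role": "system", "content": "Answer questions using only the table below.\n" + table_text},
        {"role": "user", "content": "Which region had the highest total sales?"},
    ],
)
print(response.choices[0].message.content)
```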
In those cases you would need to bring in all of your data, whereas Azure OpenAI on your data uses Azure Cognitive Search to fetch only the semantically relevant pieces. Unfortunately, there is no way to change this behavior; it is by design.
Another approach would be to ingest aggregate information (like a pivot table in Excel), which could then be fetched from Cognitive Search during retrieval with the right prompt; see the sketch below.
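Here is a rough sketch of that idea: pre-aggregate the table with pandas and push each aggregate row into a Cognitive Search index as a short natural-language document, so retrieval has text to match against. The index name, field names, file name, and environment variables are assumptions, and the index is assumed to already exist with matching fields.

```python
# Minimal sketch: ingest pre-aggregated rows into Azure Cognitive Search.
import os
import pandas as pd
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

df = pd.read_csv("sales.csv")
pivot = df.pivot_table(values="amount", index="region", aggfunc="sum").reset_index()

# Turn each aggregate row into a small natural-language document so the
# retrieval step has something meaningful to match against.
docs = [
    {
        "id": f"sales-total-{i}",
        "content": f"Total sales for region {row.region}: {row.amount:.2f}",
    }
    for i, row in enumerate(pivot.itertuples())
]

search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="sales-aggregates",  # assumed pre-created index
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)
search_client.upload_documents(documents=docs)
```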
For more complex use cases like this, where you might need to perform auxiliary autonomous tasks (querying a database, for example), it might be worthwhile to explore frameworks like Semantic Kernel, which can help orchestrate and build more complex AI-powered workflows.
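As a rough illustration of the pattern, below is the kind of native "plugin" a Semantic Kernel planner or chat loop could call to query a database. It assumes the semantic_kernel Python package; the exact decorator and registration APIs vary between SK versions, and the SalesPlugin class, database file, and table schema are purely illustrative.

```python
# Sketch of a native plugin that Semantic Kernel could orchestrate (assumptions above).
import sqlite3
from semantic_kernel.functions import kernel_function

class SalesPlugin:
    """Exposes a database query as a function the planner/LLM can invoke."""

    def __init__(self, db_path: str = "sales.db"):
        self._db_path = db_path

    @kernel_function(
        name="total_sales_by_region",
        description="Returns total sales for a given region from the sales database.",
    )
    def total_sales_by_region(self, region: str) -> str:
        with sqlite3.connect(self._db_path) as conn:
            row = conn.execute(
                "SELECT SUM(amount) FROM sales WHERE region = ?", (region,)
            ).fetchone()
        return str(row[0] if row and row[0] is not None else 0)

# The plugin would then be registered on a kernel, for example
# kernel.add_plugin(SalesPlugin(), plugin_name="sales"),
# so the model can call it during a chat or planner run.
```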