Hello @AsifulHaqueLatifNobel-4615,
Considering that only about 10,000 rows are your application data, I assume the rest was created by Durable Functions - in your case, history for all the orchestration instances the functions have run.
For every instance that runs, rows are added to the History table under a unique PartitionKey (the instance ID), so the number of partitions in Azure Table Storage grows with the number of instances.
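To make this concrete, here is a minimal sketch that inspects the History table directly with the @azure/data-tables package. It assumes the default task hub name "TestHubName" (the table is named <TaskHubName>History) and a storage connection string in the AzureWebJobsStorage environment variable; adjust both to your setup:

```typescript
import { TableClient } from "@azure/data-tables";

// Assumption: default task hub "TestHubName", so the history table is
// "TestHubNameHistory"; connection string comes from AzureWebJobsStorage.
const client = TableClient.fromConnectionString(
    process.env.AzureWebJobsStorage ?? "",
    "TestHubNameHistory"
);

// Count the history rows for one orchestration instance. Every row in this
// table has PartitionKey = instance ID, so one instance = one partition.
async function countHistoryRows(instanceId: string): Promise<number> {
    let count = 0;
    const rows = client.listEntities({
        queryOptions: { filter: `PartitionKey eq '${instanceId}'` },
    });
    for await (const _row of rows) {
        count++;
    }
    return count;
}
```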
As the documentation notes, while partitioning greatly benefits scalability, the number and size of the partitions ultimately determine how queries perform.
In the case of Durable Functions, whenever an orchestration instance needs to run, the relevant rows of the History table are queried and loaded into memory. With a very large number of partitions, those queries slow down (operation latency increases), which in turn degrades Durable Functions performance.
Reference: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-perf-and-scale
To overcome this, you could purge the history of old instances that have completed, failed, or been terminated, based on your retention needs. Please refer to the article above for more details; a sketch of how the purge could be automated follows.
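As an illustration, here is a timer-triggered function using the durable-functions JavaScript/TypeScript library's purgeInstanceHistoryBy API. The 30-day retention window is an assumption for illustration, and the function is assumed to have a durableClient input binding in its function.json:

```typescript
import * as df from "durable-functions";
import { AzureFunction, Context } from "@azure/functions";

// Timer-triggered cleanup: purges history for instances created more than
// 30 days ago. Assumes a "durableClient" input binding in function.json.
const purgeHistory: AzureFunction = async function (
    context: Context,
    myTimer: unknown
): Promise<void> {
    const client = df.getClient(context);

    const createdTimeFrom = new Date(0); // beginning of time
    const createdTimeTo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30-day retention (assumption)

    // Only purge terminal states; pending/running instances are left intact.
    const result = await client.purgeInstanceHistoryBy(createdTimeFrom, createdTimeTo, [
        df.OrchestrationRuntimeStatus.Completed,
        df.OrchestrationRuntimeStatus.Failed,
        df.OrchestrationRuntimeStatus.Terminated,
    ]);

    context.log(`Purged history for ${result.instancesDeleted} instances.`);
};

export default purgeHistory;
```

Scheduling this (for example, nightly) keeps the number of History table partitions bounded by your retention window instead of growing indefinitely.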
I also came across a couple of threads in which people encountered performance issues with growing data; sharing them for your reference:
• Azure durable functions and retention of data
• Azure Durable Function getting slower and slower over time