Azure Functions Runtime Table Storage Entity Count and Performance

Asiful Nobel 196 Reputation points
2020-07-08T10:32:58.24+00:00

According to the documentation, Azure Functions uses two tables, History and Instances, for tracking the different types of functions. Should these tables be periodically cleaned to maximize Table Storage performance for Durable Functions and activity functions?

At the moment, I am using a single storage account for both my application data and the function app's own runtime data. I can see from the Storage Account metrics that the Table Entity Count currently stands at about 10 million; it has risen from 6 million in one month. I know that my own application tables do not have more than 10,000 rows. On the other hand, I can see that the History and Instances tables have records with timestamps from March of this year.

Can the number of entities affect table operation latency? And as a result, won't that also affect Durable Functions performance, since Durable Functions depend on a couple of those tables?


Accepted answer
  1. svijay-MSFT 5,236 Reputation points Microsoft Employee
    2020-07-10T11:36:01.623+00:00

    Hello @AsifulHaqueLatifNobel-4615,

    Considering the fact that you have only 10,000 rows of application data, I assume the rest of the data is created by the Durable Functions runtime - in your case, for all of the instances that have been run by your functions.

    For every instance that runs, rows are added to the History table under a unique partition key (the instance ID). The number of partitions in Azure Table Storage therefore grows with the number of instances. The sketch below shows one way to see this in your own account.
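    If you want to confirm this against your own storage account, a rough sketch like the following (TypeScript, using the @azure/data-tables package) counts the history rows and the distinct partition keys, which roughly equal the number of instances. It assumes the default task hub, under which the History table is named DurableFunctionsHubHistory - adjust the table name to match your task hub.

    ```typescript
    // countHistoryPartitions.ts - inspection sketch, not production code.
    // Assumes the default task hub, whose History table is
    // "DurableFunctionsHubHistory"; change the table name for your hub.
    import { TableClient } from "@azure/data-tables";

    async function main(): Promise<void> {
      const client = TableClient.fromConnectionString(
        process.env.AzureWebJobsStorage!, // the function app's storage connection string
        "DurableFunctionsHubHistory"
      );

      // Each orchestration instance writes its history rows under its own
      // PartitionKey (the instance ID), so counting distinct partition keys
      // gives a rough count of instances.
      const partitions = new Set<string>();
      let rows = 0;
      for await (const entity of client.listEntities()) {
        partitions.add(entity.partitionKey as string);
        rows++;
      }
      console.log(`${rows} history rows across ${partitions.size} instances`);
    }

    main().catch(console.error);
    ```

    Note that a full scan over millions of entities will itself be slow, which is a small illustration of the latency point discussed below.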

    As documented in the article referenced below, although partitioning has a significant positive impact on scalability, the number and size of the partitions can ultimately determine how queries perform.

    In the case of Durable Functions, whenever an orchestration instance needs to run, the relevant rows of the History table are loaded into memory, which means a query is made against the History table. A very large number of partitions can decrease the performance of that query and increase operation latency, which in turn degrades Durable Functions performance.

    Reference : https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-perf-and-scale

    To overcome this, you could purge the history of old instances that have completed, failed, or been terminated, based on your retention needs. Please refer to the following article for more details:

    Purge instance history
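
    As a concrete illustration, here is a minimal sketch of such a purge in TypeScript, using the durable-functions library's purgeInstanceHistoryBy method. It assumes an HTTP-triggered function with a durableClient input binding configured in function.json (you could equally run the same logic on a timer trigger as a scheduled cleanup job), and the 30-day retention window is an assumption you should adjust to your needs.

    ```typescript
    // purgeHistory/index.ts - minimal sketch of a history purge.
    // Assumes a "durableClient" input binding is configured in function.json.
    import * as df from "durable-functions";
    import { AzureFunction, Context, HttpRequest } from "@azure/functions";

    const httpTrigger: AzureFunction = async function (
      context: Context,
      req: HttpRequest
    ): Promise<void> {
      const client = df.getClient(context);

      // Purge history for terminal instances created more than 30 days ago.
      const createdTimeFrom = new Date(0); // no lower bound
      const createdTimeTo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);

      const result = await client.purgeInstanceHistoryBy(
        createdTimeFrom,
        createdTimeTo,
        [
          df.OrchestrationRuntimeStatus.Completed,
          df.OrchestrationRuntimeStatus.Failed,
          df.OrchestrationRuntimeStatus.Terminated,
        ]
      );

      context.res = { body: `Purged ${result.instancesDeleted} instance(s).` };
    };

    export default httpTrigger;
    ```

    For a one-off cleanup, the Azure Functions Core Tools also provide an equivalent command (func durable purge-history), as described in the article above.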

    I also came across a couple of threads in which people encountered performance issues with growing data, and I am sharing them for your reference:
    Azure durable functions and retention of data
    Azure Durable Function getting slower and slower over time

