Durable Function CPU resources

Mark Barge 6 Reputation points
2021-07-14T12:29:49.897+00:00

Hi

I'm new to durable functions and have set up a consumption (serverless) based function plan in which I have installed a durable function.

The function is used to calculate a large set of matrices and usually takes between 1 and 5 minutes to run. The output is written to a blob in a storage account which is polled by the client app to retrieve updates and results.

It all works fine but I'm worried about scaling.

I notice that each time I run it, it gets access to two processors. However, if I start two runs from the same client, each run only seems to get one processor and runs at half the speed.

Note I've tried fanning out but that's ridiculously slow and totally useless to me as the data transfer sizes are large.

My questions are (I'm happy to pay more if needed):

What will happen as it scales up to 100 simultaneous runs?

Is there a more useful plan that gives me 2 processors minimum per execution?

Can I pay more and get 4, 8, or even 16 processors for EACH single execution?

If it's really successful it may go to 500 or even 1000 simultaneous runs... can Azure cope with this?

I've thought about setting up 100 identical functions (obviously with different names) and calling them in turn, one per client... would that work? If so, it seems odd that it's needed.

Many thanks

Mark Barge


1 answer

  1. Pramod Valavala 20,516 Reputation points Microsoft Employee
    2021-07-15T10:59:41.283+00:00

    @Mark Barge The Performance and Scale doc covers many of the questions you have and is an informative read to understand how Durable Functions scale.

    Firstly, to get more compute while remaining elastic, you could simply upgrade to the Premium Plan, which offers instances with more cores. Do note that there is a limit on how far you can scale out per region; you can open a support request to increase these limits.

    Next, considering that your workload is compute-intensive, it would be best to control concurrency so that only a few activities run at the same time on each instance. There are Concurrency Throttles that you can set up at the host level (per instance).
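
    As a sketch, the throttles above live under `extensions.durableTask` in host.json. The values below are assumptions to tune against your instance's core count, not recommendations:

    ```json
    {
      "version": "2.0",
      "extensions": {
        "durableTask": {
          "maxConcurrentActivityFunctions": 2,
          "maxConcurrentOrchestratorFunctions": 5
        }
      }
    }
    ```

    These limits apply per instance, so with at most 2 concurrent activities per instance, excess work stays queued and the scale controller adds instances to drain it.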

    With the above, you should be able to achieve optimum throughput and still ensure all queued requests are processed.

    Another enhancement would be to use an Event Grid trigger instead of having the client poll blob storage.
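
    For illustration, a minimal Event Grid trigger might look like this. This is a hypothetical sketch: it assumes the in-process model with the Microsoft.Azure.WebJobs.Extensions.EventGrid package installed, and an Event Grid subscription on the storage account for `Microsoft.Storage.BlobCreated` events; the function name is made up:

    ```csharp
    using Microsoft.Azure.EventGrid.Models;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.EventGrid;
    using Microsoft.Extensions.Logging;

    public static class ResultBlobCreated
    {
        // Fires as soon as the storage account raises a BlobCreated event,
        // so results are picked up immediately instead of on a polling interval.
        [FunctionName("ResultBlobCreated")]
        public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
        {
            log.LogInformation("New result blob: {subject}", eventGridEvent.Subject);
        }
    }
    ```

    The push model also cuts cost on the Consumption plan, since you stop paying for polling executions that find nothing new.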