host.json queue configuration ignored in run.csx

Matthieu Livoye 1 Reputation point
2021-04-12T13:22:58.203+00:00

Hi,

I set up my host.json like this:

"extensions": {
"queues": {
"maxDequeueCount": 1,
"batchSize": 1,
"newBatchThreshold": 0,
"visibilityTimeout": "00:00:30"
}
}

The goal is for my functions to always get one message at a time. We have a constraint where our queues are created dynamically on demand, and we need to dequeue only one message at a time from each queue, but we can process multiple queues in parallel.
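For completeness, the fragment above sits inside the standard host.json layout for Functions v2+ (the "version" field is required for the "extensions" section to be honored); roughly:

```json
{
    "version": "2.0",
    "extensions": {
        "queues": {
            "maxDequeueCount": 1,
            "batchSize": 1,
            "newBatchThreshold": 0,
            "visibilityTimeout": "00:00:30"
        }
    }
}
```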

Also, my startup is set up so that I add static access to IConfiguration for the run.csx, like this:

[assembly: FunctionsStartup(typeof(AzureFunctionsIntegrations.Startup))]
namespace AzureFunctionsIntegrations
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            ReusableRunner.Configuration = builder.Services.BuildServiceProvider().GetService<IConfiguration>();
        }
    }
}
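For reference, the queue binding for a csx function lives in function.json rather than host.json (host.json only controls batching behavior). A typical binding looks roughly like this; the queue name and connection setting are placeholders, not my actual values:

```json
{
    "bindings": [
        {
            "name": "myQueueItem",
            "type": "queueTrigger",
            "direction": "in",
            "queueName": "myqueue",
            "connection": "AzureWebJobsStorage"
        }
    ]
}
```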

Now I can see in the logs that host.json is loaded correctly with the queue settings, but my csx function is still processing multiple messages in parallel.

If any of you have an idea, I'll be grateful!

Thanks.


1 answer

  1. Mike Urnun 9,666 Reputation points Microsoft Employee
    2021-08-19T03:40:32.85+00:00

    Hi @Matthieu Livoye (and others who may land here for similar scenarios) - I was unsure how Matthieu was able to configure the creation of the queues dynamically and establish connectivity to them at runtime. Nonetheless, if the end-goal was to achieve sequential ordering and consume only 1 message at a time, Azure Storage Queue may not have been the best fit for this scenario, as the following is stated in the doc:

    If you want to avoid parallel execution for messages received on one queue, you can set batchSize to 1. However, this setting eliminates concurrency as long as your function app runs only on a single virtual machine (VM). If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.

    So, to guarantee the intended behavior, you're effectively bound to running the function app on a single instance (VM), which may not be ideal for other aspects of a project. Instead, we recommend using Azure Service Bus and leveraging its Message Sessions feature for ordered, FIFO consumption.
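    As a rough sketch of what that looks like (the queue name, connection setting name, and function name below are placeholders, and this assumes the Service Bus trigger extension for Azure Functions), a session-enabled trigger is declared like this:

    ```csharp
    // Hedged sketch: a session-enabled Service Bus trigger (placeholder names).
    // With IsSessionsEnabled = true, messages that share a SessionId are
    // delivered to a single consumer in order, one at a time per session.
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class OrderedProcessor
    {
        [FunctionName("OrderedProcessor")]
        public static void Run(
            [ServiceBusTrigger("my-queue", IsSessionsEnabled = true,
                Connection = "ServiceBusConnection")] string message,
            ILogger log)
        {
            log.LogInformation($"Processing in order: {message}");
        }
    }
    ```

    The sender sets the SessionId on each message (for example, one session per logical queue in your scenario), which gives you ordered, one-at-a-time processing per session while still allowing different sessions to be processed in parallel.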

    My apologies for posting this answer so late but I hope this is helpful to anyone visiting this thread looking to configure a similar use case.
