@KSSharath-7336 Thank you for sharing the details. After investigating, we found that your application is using the Snowflake Python connector, which expects every connection that is opened to be closed explicitly.
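For illustration, here is a minimal sketch of closing the connection explicitly with the Snowflake Python connector; the credential values are placeholders, and in a real function you would read them from application settings:

```python
import snowflake.connector  # assumes the snowflake-connector-python package is installed


def run_query(query: str):
    # Placeholder credentials; read these from application settings in practice.
    conn = snowflake.connector.connect(
        account="<account>",
        user="<user>",
        password="<password>",
        warehouse="<warehouse>",
    )
    try:
        cur = conn.cursor()
        try:
            cur.execute(query)
            return cur.fetchall()
        finally:
            cur.close()
    finally:
        # Close the connection explicitly so the worker does not leak sessions.
        conn.close()
```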
However, we do sometimes see performance issues with Python functions, including when invocations arrive simultaneously, so we suggest increasing FUNCTIONS_WORKER_PROCESS_COUNT.
This behavior is expected due to the single-threaded architecture of the Python worker.
In a scenario like yours, where the function makes blocking (synchronous) HTTP calls or other I/O-bound calls, those calls block the entire event loop.
How to handle such scenarios is documented in our Python Functions developer reference: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#scaling-and-concurrency , especially the section on async.
Here are two ways to handle this:
- Use async calls, so that awaited I/O no longer blocks the event loop (see the sketch after this list).
- Add more language worker processes per host. This is done with the application setting FUNCTIONS_WORKER_PROCESS_COUNT, up to a maximum value of 10. For the CPU-bound workload you are simulating with loops, we recommend setting FUNCTIONS_WORKER_PROCESS_COUNT to a higher number so that the work given to a single instance is parallelized (docs here).
[Please note that additional language worker processes are spawned every 10 seconds until the configured count is reached.]
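As a rough illustration of the async option, here is a minimal sketch of an HTTP-triggered function that awaits its outbound call with aiohttp instead of making a blocking request. It assumes the function.json-based programming model and that aiohttp is listed in requirements.txt; the URL is a placeholder:

```python
import aiohttp  # assumed third-party dependency declared in requirements.txt
import azure.functions as func


async def main(req: func.HttpRequest) -> func.HttpResponse:
    # Awaiting the outbound call yields control back to the event loop,
    # so other invocations handled by this worker are not blocked while we wait.
    async with aiohttp.ClientSession() as session:
        async with session.get("https://example.com/api") as resp:  # placeholder URL
            body = await resp.text()
            status = resp.status
    return func.HttpResponse(body, status_code=status)
```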
Here is a GitHub issue that discusses this in detail: https://github.com/Azure/azure-functions-python-worker/issues/236
Please let me know if this helps. If it does, please 'Accept as answer' and 'Up-vote' so that it can help others in the community looking for help on similar topics.