Hello Steve Ngapo, thanks for patiently sharing all the details and for waiting while we dug deeper. Here’s what we found and recommend:
Key Findings:
- Parsing Issue in function.json: Internal tooling flagged “Could not parse function.json”, meaning the runtime couldn’t read your trigger configuration. function.json must be strict JSON, and yours contains comments, which strict JSON parsers reject. When parsing fails, the function falls back to defaults and can stop polling the queue, which matches the behavior you observed.
- Queue Settings Using V1 Format: Your host.json uses the older V1-style queue configuration. On Premium plans running Functions v2+, these settings may be ignored or overridden by backend defaults, leading to inconsistent behavior.
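To see why comments break the file: if the host’s parser is strict JSON (as the error message suggests), the first comment makes the entire document unparseable. A minimal illustration in Python, using a hypothetical queue binding:

```python
import json

# function.json must be strict JSON; a // comment anywhere in the
# file makes the whole document fail to parse.
with_comment = """
{
  // poll the orders queue
  "bindings": [
    { "name": "msg", "type": "queueTrigger", "queueName": "orders" }
  ]
}
"""

try:
    json.loads(with_comment)
except json.JSONDecodeError as err:
    print(f"parse failed: {err.msg}")
```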
Recommended Fixes:
- Clean up function.json: remove comments and any unrelated properties so the file parses as strict JSON.
- Upgrade to the V2-style host.json schema: update your host.json so the queue settings live under extensions.queues:
```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "extensions": {
    "queues": {
      "maxDequeueCount": 5,
      "batchSize": 32,
      "newBatchThreshold": 16,
      "visibilityTimeout": "00:03:00"
    }
  }
}
```
This ensures the runtime applies your desired batch size and retry behavior instead of defaulting to backend values.
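For reference, a cleaned-up function.json for a queue trigger would look something like the sketch below (the binding name, queue name, and connection setting are placeholders; keep your own values):

```json
{
  "bindings": [
    {
      "name": "queueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "<your-queue-name>",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```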
Next Steps:
- Redeploy the cleaned function.json and upgraded host.json.
- Restart the function app after deployment so the runtime reloads configuration.
- Over the next 24–48 hours, monitor the function’s execution count and queue metrics in the Azure portal (Function App → Monitoring → Invocation count / Failures and Storage Account → Queue metrics) to confirm consistent processing.
- If you see any further pauses, capture the exact timeframe so we can re-check logs and backend diagnostics.
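As a quick sanity check before redeploying, you can verify that both files parse as strict JSON. A small sketch in Python (the file paths are assumptions; adjust them to your project layout):

```python
import json
import sys

def validate(path: str) -> bool:
    """Return True if the file at path is strict, comment-free JSON."""
    try:
        with open(path, encoding="utf-8-sig") as f:  # tolerate a UTF-8 BOM
            json.load(f)
        return True
    except (OSError, json.JSONDecodeError) as err:
        print(f"{path}: {err}", file=sys.stderr)
        return False

# Check both configuration files before publishing the app.
for path in ("host.json", "MyQueueFunction/function.json"):
    print(path, "OK" if validate(path) else "INVALID")
```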
I hope this information helps resolve your issue. Please let us know if you have any further questions. Thank you!