Container App Job Auto Trigger Suddenly Stopped Working

Mandal, Sudarsan 20 Reputation points
2025-04-09T07:45:46.1933333+00:00

Hi everyone,

I’ve configured an Event-Driven Scaling rule for my Container App Job, and everything worked perfectly for the first 2–3 days. However, after I suspended the job and later resumed it, the job no longer triggers automatically.

The managed identity is correctly configured, and I’ve confirmed that the associated Storage Queue has a backlog of messages. Despite this, there’s no activity, and I haven’t found any helpful information in the logs.

Has anyone experienced this before or can offer any guidance on what might be going wrong?

Thanks in advance!

Additional Info and Screenshots Below:

Managed identity assigned the "Storage Queue Data Contributor" role

(Screenshots attached.)
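For reference, in case the screenshots don't render, the role assignment they show can also be confirmed from the CLI with something like this (a sketch; the principal ID and storage account scope are placeholders):

    az role assignment list `
      --assignee <principal-id-of-managed-identity> `
      --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storageAccount>" `
      --output table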


Accepted answer
  1. Arko 4,150 Reputation points Microsoft External Staff Moderator
    2025-04-11T06:42:09.51+00:00

    Hello Mandal, Sudarsan,

    When I tried to check this from my end, I ran into this exact issue. After suspending and resuming an event-driven Container App Job that uses an Azure Queue scale rule, the job stopped auto-triggering even though:

    - the managed identity was correctly assigned the Storage Queue Data Contributor role,
    - the queue had new messages (verified in Storage Explorer and with az storage message put/peek),
    - the scale rule had the correct accountName, queueName, queueLength, and activationQueueLength metadata, and
    - a manual trigger via az containerapp job start still worked.
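    For anyone retracing these checks, the commands were roughly the following (a sketch; the resource names are placeholders, and --auth-mode login assumes the signed-in account has data-plane access to the queue):

    # enqueue and inspect a test message
    az storage message put `
      --account-name <storageAccount> `
      --queue-name <queueName> `
      --content "test-message" `
      --auth-mode login

    az storage message peek `
      --account-name <storageAccount> `
      --queue-name <queueName> `
      --auth-mode login

    # manual trigger, which still worked
    az containerapp job start `
      --name <job-name> `
      --resource-group <resource-group>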

    Despite all of this, az containerapp job execution list showed no new executions, and the event-driven scaler logs (Log Stream) continuously showed:

    "No events since last 60 seconds" — even with messages in the queue.

    Upon further digging, I found that event-driven scaling in Azure Container App Jobs is powered by KEDA (including the azure-queue scaler). The official KEDA documentation does not currently document that resuming a suspended Container App Job can cause the scaler to stop polling or "lose binding," but this matches a real-world behavior that multiple users (including myself) have run into: an event-triggered Container App Job stops responding to queue messages after being suspended and resumed. As far as I can tell, this is a known blocker where, after the job is resumed, the event scaler loses its binding or internal watcher state and silently stops polling the source. Even re-creating the job doesn't always help unless the scale rule metadata is refreshed.
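    If you want to see what the job currently has configured before refreshing it, you can dump the event trigger scale rules with something like this (a sketch; the JMESPath query assumes an event-triggered job, where the rules sit under eventTriggerConfig):

    az containerapp job show `
      --name <job-name> `
      --resource-group <resource-group> `
      --query "properties.configuration.eventTriggerConfig.scale.rules"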

    What worked for me was deleting and recreating the job with a system-assigned identity (--mi-system-assigned), reassigning the "Storage Queue Data Contributor" role to the new principal, and, most importantly, reapplying the scale rule metadata.
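    The delete, recreate, and role reassignment steps looked roughly like this (a sketch rather than the exact commands; the environment name, image, subscription ID, and storage account scope are placeholders, and your create flags may differ):

    az containerapp job delete `
      --name <job-name> `
      --resource-group <resource-group>

    # recreate as an event-triggered job with a system-assigned identity
    # (the azure-queue scale rule is applied in the update step below)
    az containerapp job create `
      --name <job-name> `
      --resource-group <resource-group> `
      --environment <environment-name> `
      --trigger-type Event `
      --replica-timeout 300 `
      --image <image> `
      --mi-system-assigned

    # grant the new principal access to the queue
    $principalId = az containerapp job show `
      --name <job-name> `
      --resource-group <resource-group> `
      --query identity.principalId -o tsv

    az role assignment create `
      --assignee $principalId `
      --role "Storage Queue Data Contributor" `
      --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storageAccount>"

    Then, even if the existing values look correct, reapply the scale rule metadata: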

    
    az containerapp job update `
      --name <job-name> `
      --resource-group <resource-group> `
      --scale-rule-name "azure-queue" `
      --scale-rule-type "azure-queue" `
      --scale-rule-metadata `
        "accountName=<storageAccount>" `
        "queueName=<queueName>" `
        "queueLength=1" `
        "activationQueueLength=0"
    
    

    After running this, I re-enqueued a test message, and within 30 seconds the job auto-triggered. The new execution shows up in:

    
    az containerapp job execution list --name <job-name> --resource-group <resource-group> --output table
    
    



0 additional answers
