
My Azure function was automatically redeployed. Worker terminate request received. Gracefully shutting down the worker.

YSL 李 0 Reputation points
2026-03-16T15:02:40.37+00:00

I have a scheduled function in Azure Functions that executes at 12:00 every day. I noticed that there is no execution record for March 13th, although it executed successfully on March 12th and 14th.

"No execution record" means that when checking the function's invocation history from the portal interface, there is no record for March 13th. However, there are function internal log outputs, so the function actually did execute, but the invocation logs were not recorded.

Upon checking the logs, I found that at 11:57, the function was automatically redeployed. The first log entry at that time was: "My Azure function was automatically redeployed. Worker terminate request received. Gracefully shutting down the worker."

The current status of my function is as follows:

  1. It is based on an App Service Plan
  2. Always On is enabled
  3. Deployed through Azure Pipelines, triggered when there are updates to the code branch

The following potential causes for automatic deployment have been ruled out:

  1. Code upload (there was none)
  2. Portal interface restart (no relevant logs in ActivityLog)
  3. Microsoft maintenance (checked Microsoft status page, no records at that time)
  4. App Service Plan overload (CPU did reach its 7-day peak at 11:45, but the maximum was 19% and the average 14%, so I don't think this is the issue)
  5. Application Insights log collection (sampling) is set to 100%
  6. No scaling records for the application

I suspect that the automatic deployment at 11:57 caused the scheduled function at 12:00 to not show an execution record. I urgently want to know what other likely reasons could have caused the function to be automatically deployed.

Azure Functions

An Azure service that provides an event-driven serverless compute platform.


2 answers

Sort by: Most helpful
  1. Rakesh Mishra 7,295 Reputation points Microsoft External Staff Moderator
    2026-03-16T16:19:24.1933333+00:00

    Hi YSL 李,

    Welcome to the Microsoft Q&A Platform! Thank you for asking your question here.

    Your understanding of the timeline of events is close. Here is the precise sequence confirmed by platform telemetry:

    1. ~11:27 JST — The ImageExtraction function began executing (invocation ID REDACTED)
    2. 11:57:58 JST — That invocation exceeded the configured functionTimeout (30 minutes). The Functions Host sent a WorkerTerminate signal to the Java worker process (PID 283). This is not a redeployment — it is a worker process restart due to the timeout. The "Worker terminate request received. Gracefully shutting down the worker" message is the Java worker's response to this signal.
    3. 11:57:58 JST — A new Java worker process (PID 5251) started immediately, initialized successfully, and re-registered all functions within the same second. The Application Insights Java Agent (v3.7.6) also reported starting successfully.
    4. 12:00:28 JST — The timer trigger JNDM070201_GetDskPayment_DepositConfirmation fired on schedule and completed successfully (Duration: 2928ms, Status: Succeeded). This is confirmed by platform-side FunctionsLogs telemetry.

    The timer trigger execution at 12:00 was not missed — it executed and succeeded. What is missing is only the Application Insights invocation record (the requests entry that appears in the portal's Monitor/Invocation History tab). The internal log outputs (traces) were recorded, confirming the function ran. The gap is specifically in the automatic request tracking that the platform is responsible for after a worker restart.

    Regarding the timeout: Please investigate why the ImageExtraction function ran for 30+ minutes. If long-running executions are expected, you can increase the functionTimeout value in host.json (see https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#timeout)
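
    For reference, here is a minimal host.json sketch that raises the timeout; the one-hour value is illustrative only. On an App Service plan the timeout can also be set to -1 (unbounded), though keeping a finite limit is generally safer:

    ```json
    {
      "version": "2.0",
      "functionTimeout": "01:00:00"
    }
    ```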

    As for the missing Application Insights log, this appears to have been a one-off anomaly, possibly caused by the worker restart. We conferred with the backend team on logging, and if you see more instances of lost logs like this, consider moving to OpenTelemetry, which is the replacement for the classic Application Insights SDK - https://learn.microsoft.com/en-us/azure/azure-functions/opentelemetry-howto?tabs=app-insights%2Cihostapplicationbuilder%2Cmaven&pivots=programming-language-csharp.
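
    Per the linked how-to (syntax current at the time of writing, and still in preview, so verify against the documentation), OpenTelemetry output is enabled in host.json, with the exporter destination supplied via app settings such as APPLICATIONINSIGHTS_CONNECTION_STRING:

    ```json
    {
      "version": "2.0",
      "telemetryMode": "openTelemetry"
    }
    ```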


    If you have any additional questions, please let me know in comments. Please accept the answer if you found the answer to be helpful.


  2. Q&A Assist
    2026-03-16T15:03:16.0133333+00:00
    AI-generated content. This answer was created with AI from Q&A Assist.

    For an Azure Functions app, a message like “Worker terminate request received. Gracefully shutting down the worker.” around the time of a missed or partially logged execution almost always indicates a platform- or runtime-level recycle rather than a true “deployment” of new code.

    From the available information, the most likely categories of causes are:

    1. Platform maintenance or unexpected failure on the underlying App Service worker
      Even when no public incident is shown on the status page, the App Service platform can move an app to a new worker due to:
      • Routine maintenance on the worker node.
      • Unexpected hardware or file server failures.
      In both cases, the worker process is restarted and the app is brought up on a new worker. This appears as the worker being terminated and restarted, and it can coincide with a timer trigger window, so that:
      • The function actually runs (as seen in internal logs), but
      • The invocation record is incomplete or not persisted correctly.
      This behavior is explicitly documented for App Service: when a worker or file server role fails or is replaced, the worker process restarts on all workers hosting the app.
    2. Runtime startup / configuration issues during or after the recycle
      After the worker restart, if the function host encounters any runtime issues, the host may:
      • Start, execute code, and log via custom logging, but
      • Fail to correctly emit or persist invocation telemetry (for example, due to storage or configuration issues).
      Common runtime issues after a recycle include:
      • Loss of access to the storage account used by the Functions runtime.
      • Misconfigured app settings (runtime version, trigger connection strings, Key Vault references).
      • External dependency timeouts (for example, storage, database, messaging).
      • Sync-trigger problems causing functions to load incorrectly.
      Any of these can affect how invocations are tracked, even if the function code itself runs and writes its own logs.
    3. Application Insights / telemetry pipeline behavior
      If Application Insights or the logging pipeline is under transient stress during a worker recycle, it is possible for:
      • Function execution to occur.
      • Internal logs (for example, console or custom logs) to be written.
      • But the invocation record in the portal to be missing or incomplete for that specific run.
      The diagnostics guidance for “Function App down or reporting errors” shows that the platform checks for:
      • Startup issues.
      • Recent deployments.
      • Runtime and language version validation.
      • Configuration and SyncTrigger issues.
      • Execution health, including nontriggering functions and long-running executions.
      Running the built-in diagnostics can help confirm whether a transient runtime or configuration issue occurred around that time.
    4. Timer trigger and host lifecycle interaction
      When the host is recycling close to the scheduled time (for example, 11:57 recycle and 12:00 schedule), the timer trigger can be affected by:
      • Host not being fully initialized when the schedule fires.
      • A missed or delayed schedule that is then executed in a recovery window, but not logged as a normal invocation.
      Because the app is on an App Service Plan with Always On, cold start is less likely, but host restarts due to platform events still occur and can interfere with a single scheduled run.

    Given the constraints described (no code deployment, no manual restart, no scaling, Always On enabled, and no public maintenance notice), the most probable explanation is:

    • A platform-level worker or file server event (routine maintenance or unexpected failure) caused the worker process to restart around 11:57.
    • During or immediately after this recycle, the Functions host and/or telemetry pipeline did not fully record the 12:00 timer invocation, even though the function code executed and wrote internal logs.

    To further narrow down root cause and prevent recurrence:

    1. Use Azure Function App Diagnostics from the portal for the affected time window to check:
      • Startup issues.
      • Recent deployments and configuration changes.
      • SyncTrigger and configuration validation.
      • Execution health and nontriggering functions.
    2. Review Application Insights queries around that time for:
      • exceptions and traces with LogLevel == "Error" to see if runtime or telemetry errors occurred.
      • Any anomalies in requests or traces per worker.
    3. Verify storage account connectivity and configuration, since loss of storage access can cause runtime issues that affect invocation tracking.
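
    For step 2, a sketch of a Kusto query you could run in Application Insights Logs to find invocations that appear in traces but have no matching requests entry. Table and column names follow the standard Application Insights schema; the function name is taken from this thread, and the time window converts 11:55–12:10 JST on March 13th to UTC (JST is UTC+9) — adjust both to your case:

    ```kusto
    // Invocations whose traces exist but whose request record is missing
    traces
    | where timestamp between (datetime(2026-03-13 02:55) .. datetime(2026-03-13 03:10))
    | where operation_Name == "JNDM070201_GetDskPayment_DepositConfirmation"
    | summarize traceCount = count() by operation_Id
    | join kind=leftanti (
        requests
        | where timestamp between (datetime(2026-03-13 02:55) .. datetime(2026-03-13 03:10))
      ) on operation_Id
    ```

    Any operation_Id returned by the leftanti join ran (it produced traces) but was never recorded as a request, which is exactly the gap described in this thread.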

    These checks align with the documented guidance for runtime issues after deployment and for diagnosing “Function App down or reporting errors” scenarios, and are the supported way to identify whether the event was due to platform maintenance, unexpected failure, or a configuration/runtime problem that coincided with the scheduled run.


