Hi YSL 李,
Welcome to the Microsoft Q&A Platform! Thank you for asking your question here.
Your understanding of the timeline of events is close. Here is the precise sequence confirmed by platform telemetry:
- ~11:27 JST — The ImageExtraction function began executing (invocation ID REDACTED)
- 11:57:58 JST — That invocation exceeded the configured functionTimeout (30 minutes), so the Functions Host sent a WorkerTerminate signal to the Java worker process (PID 283). This was not a redeployment; it was a worker process restart caused by the timeout. The "Worker terminate request received. Gracefully shutting down the worker" message you saw is the Java worker's response to that signal.
- 11:57:58 JST — A new Java worker process (PID 5251) started immediately, initialized successfully, and re-registered all functions within the same second. The Application Insights Java Agent (v3.7.6) also reported starting successfully.
- 12:00:28 JST — The timer trigger JNDM070201_GetDskPayment_DepositConfirmation fired on schedule and completed successfully (Duration: 2928ms, Status: Succeeded). This is confirmed by platform-side FunctionsLogs telemetry.
The timer trigger execution at 12:00 was therefore not missed; it ran and succeeded. What is missing is only the Application Insights invocation record (the requests entry that appears in the portal's Monitor/Invocation History tab). The function's internal log outputs (traces) were recorded, confirming that it ran. The gap is specifically in the platform's automatic request tracking immediately after the worker restart.
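If you want to verify this pattern yourself, a query along the following lines should show traces rows for the invocation with no matching requests row. This is a sketch against the classic Application Insights schema (the tables are AppRequests/AppTraces instead if you query a workspace-based resource), and note that Log Analytics timestamps are UTC, so 12:00 JST corresponds to 03:00 UTC:

```kusto
// Sketch: compare request (invocation) records against trace records
// for the timer function around the incident window.
let fn = "JNDM070201_GetDskPayment_DepositConfirmation";
union requests, traces
| where timestamp > ago(7d)          // narrow this to the incident window (UTC)
| where operation_Name == fn
| summarize records = count() by itemType, bin(timestamp, 5m)
| order by timestamp asc
```

A 5-minute bin that contains trace records but zero request records is exactly the gap described above.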
Regarding the timeout: please investigate why the ImageExtraction function ran for more than 30 minutes. If long-running executions are expected, you can increase the functionTimeout value in host.json (see https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#timeout).
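For reference, a minimal host.json that raises the limit to one hour might look like the sketch below. The value is illustrative; on the Consumption plan the maximum is 10 minutes, while Premium and Dedicated plans allow longer or even unbounded values, so check the limits for your plan in the linked article:

```json
{
  "version": "2.0",
  "functionTimeout": "01:00:00"
}
```

The value uses the hh:mm:ss timespan format, and the setting applies to all functions in the app.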
As for the missing Application Insights record, this appears to have been a one-off anomaly, most likely related to the worker restart. We conferred with the backend team on logging, and if you see more instances of lost invocation records like this, we suggest considering a move to OpenTelemetry, which is the replacement for the classic Application Insights SDK: https://learn.microsoft.com/en-us/azure/azure-functions/opentelemetry-howto?tabs=app-insights%2Cihostapplicationbuilder%2Cmaven&pivots=programming-language-csharp.
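If you do evaluate OpenTelemetry, the linked how-to describes enabling it at the host level. As a sketch, and subject to the feature's preview status at the time of writing (please verify the exact setting name against the current article), the host.json change looks like this:

```json
{
  "version": "2.0",
  "telemetryMode": "openTelemetry"
}
```

The worker side (Java, in your case) additionally needs the OpenTelemetry packages and exporter configuration described in the language-specific steps of that article.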
If you have any additional questions, please let me know in the comments. Please accept the answer if you found it helpful.