I have a BlobTrigger listening on one particular container. A third-party process writes files into this container, and I want my trigger to do some processing based on these incoming files.
The logic inside the trigger works fine, but I can see log entries being printed long after all the files have been processed.
While monitoring the function from "Application Insights → Live Metrics", I still see logs like this even after a couple of hours:
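For reference, the function is essentially the stock Python blob-trigger shape; the names and the JSON-parsing step below are illustrative, not my exact code:

```python
import json
import logging

# function.json (sketch) binds this to the container the 3rd party writes to:
#   { "bindings": [ { "name": "myblob", "type": "blobTrigger", "direction": "in",
#                     "path": "trigger/{name}", "connection": "AzureWebJobsStorage" } ] }

def main(myblob) -> None:
    # At runtime `myblob` is an azure.functions.InputStream; anything exposing
    # `.name` and `.read()` behaves the same for this sketch.
    logging.info("Processing blob: %s", myblob.name)
    payload = json.loads(myblob.read())
    # ... actual per-file processing of `payload` happens here ...
```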
Blob 'trigger/******************.json' will be skipped for function 'BlobTrigger' because this blob with ETag '"0x8D93848EA14CA40"' has already been processed. PollId: 'f5726ca3-3608-4527-9b9d-8d23ebb10061'. Source: 'ContainerScan'.
I am sure no new files are being written to the incoming container; the last file was written a couple of hours ago.
I can also see one server instance (for the blob listener) still running.
I am afraid that, because of these false positives, I may be paying for unnecessary compute. Why is this happening, and how can I fix it?
I am using Python 3.9 and the latest dependencies.
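One mitigation I have read about but not yet tried: the blob trigger binding supports switching its detection source from container scanning to Event Grid. Would something like this (an illustrative function.json, assuming an Event Grid subscription is configured on the storage account) stop the ContainerScan polling?

```json
{
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "trigger/{name}",
      "source": "EventGrid",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```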