Dataflow sink of ADLS Gen2 dataset is not firing storage events

Update: Based on my work with the support team, it appears the dataflow sink is firing a Microsoft.Storage.BlobRenamed event, not Microsoft.Storage.BlobCreated. I suspect that the sink takes the "success" file created during the partitioning process and renames it to the output filename selected in the sink activity settings. We can see the "success" file triggering the Microsoft.Storage.BlobCreated event, but it is deleted before it can be processed.
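To illustrate the suspected behaviour, here is a minimal sketch (not the actual pipeline) of the create-then-rename pattern on a hierarchical-namespace account using the azure-storage-file-datalake SDK; the account, container, and file names are placeholders. A file that reaches its final name via a rename surfaces in Event Grid as Microsoft.Storage.BlobRenamed rather than BlobCreated.

```python
# Sketch only: reproduce the create-then-rename pattern on a hierarchical-
# namespace (ADLS Gen2) account. Account, container, and file names are
# placeholders, not the actual pipeline's values.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storageaccount>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("mycontainer")

# Write a temporary file (a stand-in for the partitioned "success"/part file).
data = b"col1,col2\n1,2\n"
tmp = fs.get_file_client("sink/_tmp_part-00000.csv")
tmp.create_file()
tmp.append_data(data, offset=0, length=len(data))
tmp.flush_data(len(data))

# Rename it to the final output name chosen in the sink settings. On a
# hierarchical-namespace account the rename raises a
# Microsoft.Storage.BlobRenamed event for sink/output.csv -- there is no
# BlobCreated event for the final name.
tmp.rename_file(f"{fs.file_system_name}/sink/output.csv")
```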

I have a pipeline that executes a dataflow against blobs.
The storage account is ADLS Gen2 with hierarchical namespace enabled.
The dataflow uses the same container, but different directories, for source and sink.
I am also using storage event-based triggers that monitor the various directories and execute the appropriate pipelines.
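For reference, these are ordinary ADF storage event triggers listening for Microsoft.Storage.BlobCreated on each directory. Below is a rough sketch of one such trigger defined through the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline, and path values are illustrative only.

```python
# Sketch only: a storage event trigger like the ones described above, defined
# via the ADF management SDK. All names, IDs, and paths are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobEventsTrigger,
    PipelineReference,
    TriggerPipelineReference,
    TriggerResource,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

trigger = BlobEventsTrigger(
    scope=(
        "/subscriptions/<subscription-id>/resourceGroups/<rg>"
        "/providers/Microsoft.Storage/storageAccounts/<storageaccount>"
    ),
    events=["Microsoft.Storage.BlobCreated"],          # only BlobCreated is monitored
    blob_path_begins_with="/mycontainer/blobs/sink/",  # the sink directory
    ignore_empty_blobs=True,
    pipelines=[
        TriggerPipelineReference(
            pipeline_reference=PipelineReference(reference_name="plProcessSinkOutput")
        )
    ],
)

adf.triggers.create_or_update(
    resource_group_name="<rg>",
    factory_name="<data-factory>",
    trigger_name="trgSinkBlobCreated",
    trigger=TriggerResource(properties=trigger),
)
```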
When the datasets are connected to an Azure Blob Storage linked service, they produce the following error:
Job failed due to reason: at Sink 'snkOutput': org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: This operation is not permitted on a non-empty directory.
This is a NEW error, and it only occurs in one of my two ADF environments.
I've seen posts where the resolution to this error was to migrate to an Azure Data Lake Storage Gen2 linked service. I agree that this does resolve the error and the dataflow executes successfully.
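For completeness, the change amounts to repointing the datasets from the Azure Blob Storage linked service to an ADLS Gen2 (AzureBlobFS) one. As a hedged sketch, the equivalent linked service could be created with the management SDK roughly as follows; names and the account URL are placeholders, and in practice the swap was done in the ADF UI.

```python
# Sketch only: an ADLS Gen2 (AzureBlobFS) linked service created via the ADF
# management SDK. Names and the account URL are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import AzureBlobFSLinkedService, LinkedServiceResource

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Note the dfs endpoint (not blob). With no explicit credential properties,
# the data factory's managed identity is used to authenticate.
adls_ls = AzureBlobFSLinkedService(url="https://<storageaccount>.dfs.core.windows.net")

adf.linked_services.create_or_update(
    resource_group_name="<rg>",
    factory_name="<data-factory>",
    linked_service_name="lsAdlsGen2",
    linked_service=LinkedServiceResource(properties=adls_ls),
)
```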
However, after making this change, the storage event trigger monitoring the sink directory no longer fires. This was working correctly prior to the change mentioned above. If I manually update the same file, the trigger executes. What about the file created by the dataflow sink operation would prevent the storage event trigger from executing?
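One way to see exactly which events the sink write emits is to dump the raw Event Grid events and inspect their eventType. This is a diagnostic sketch, assuming you add an Event Grid subscription on the storage account that routes blob events to a storage queue (here called "adf-events"); the queue name and connection string are placeholders.

```python
# Sketch only: dump the Event Grid events emitted by the storage account to see
# which event types the sink write actually produces. Assumes an Event Grid
# subscription delivering blob events to a storage queue named "adf-events".
import base64
import json

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<connection-string>", "adf-events")

for msg in queue.receive_messages(messages_per_page=32):
    # Event Grid typically base64-encodes the event payload for queue delivery.
    payload = json.loads(base64.b64decode(msg.content))
    events = payload if isinstance(payload, list) else [payload]
    for event in events:
        print(event["eventType"], event["subject"])
        # Expected: Microsoft.Storage.BlobCreated for the transient "success"
        # file, then Microsoft.Storage.BlobRenamed when the sink renames it to
        # the final output file -- which a BlobCreated-only trigger never sees.
```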