It sounds like the change in the size of the files exported from Azure Application Insights has disrupted your Azure Data Factory pipeline. Here are some strategies you could consider to resolve this issue:
File Splitting: One approach is to break these large files into smaller chunks before ingestion. Data Factory generally copes much better with a folder of smaller files than with one very large file, since the Copy activity and Mapping Data Flows can read multiple files in parallel.
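As a rough illustration, a small pre-processing step (run from an Azure Function, a Custom activity, or wherever fits your setup) could split an exported file before the pipeline picks it up. This is only a sketch: the file names, the chunk size, and the assumption that the export is JSON Lines are all placeholders.

```python
# Hypothetical pre-processing step: split a large JSON Lines export file
# into smaller chunk files before Data Factory ingests the folder.
from pathlib import Path


def split_jsonl(source: Path, out_dir: Path, lines_per_chunk: int = 50_000) -> list[Path]:
    """Write `source` out as numbered chunk files of at most `lines_per_chunk` lines each."""
    out_dir.mkdir(parents=True, exist_ok=True)
    chunk_paths, buffer = [], []
    with source.open("r", encoding="utf-8") as src:
        for line in src:
            buffer.append(line)
            if len(buffer) == lines_per_chunk:
                chunk_paths.append(_write_chunk(out_dir, source.stem, len(chunk_paths), buffer))
                buffer = []
    if buffer:  # remainder that didn't fill a whole chunk
        chunk_paths.append(_write_chunk(out_dir, source.stem, len(chunk_paths), buffer))
    return chunk_paths


def _write_chunk(out_dir: Path, stem: str, index: int, lines: list[str]) -> Path:
    chunk_path = out_dir / f"{stem}.part{index:04d}.jsonl"
    chunk_path.write_text("".join(lines), encoding="utf-8")
    return chunk_path


if __name__ == "__main__":
    # "appinsights_export.jsonl" and "chunks/" are placeholder names.
    for path in split_jsonl(Path("appinsights_export.jsonl"), Path("chunks")):
        print("wrote", path)
```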
Incremental Loading: You might also consider implementing incremental data loading. Instead of reading the entire dataset on every run, process only the data that is new or has changed since the last successful load.
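In Data Factory this is typically done with a high-watermark pattern: a Lookup activity reads the last stored watermark, the copy step filters on it, and a final step advances the watermark. The gist of the pattern, sketched outside ADF with placeholder names, a file-based state store, and an assumed ISO-8601 UTC `timestamp` field, looks roughly like this:

```python
# Minimal high-watermark sketch: remember the newest timestamp that was
# loaded successfully and only pass along records that are newer.
import json
from datetime import datetime, timezone
from pathlib import Path

# Placeholder state store; in practice this could be a SQL table, a blob, etc.
WATERMARK_FILE = Path("watermark.json")


def read_watermark() -> datetime:
    if WATERMARK_FILE.exists():
        return datetime.fromisoformat(json.loads(WATERMARK_FILE.read_text())["last_loaded"])
    return datetime(1970, 1, 1, tzinfo=timezone.utc)  # first run: load everything


def write_watermark(timestamp: datetime) -> None:
    WATERMARK_FILE.write_text(json.dumps({"last_loaded": timestamp.isoformat()}))


def select_new_records(records):
    """`records`: iterable of dicts with an ISO-8601 'timestamp' field (assumed UTC, e.g. '...+00:00')."""
    watermark = read_watermark()
    new_rows = [r for r in records if datetime.fromisoformat(r["timestamp"]) > watermark]
    if new_rows:
        # hand `new_rows` to the downstream copy/ingestion step here, then advance the watermark
        write_watermark(max(datetime.fromisoformat(r["timestamp"]) for r in new_rows))
    return new_rows
```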
Optimizing Data Factory Performance: There are also ways to tune Azure Data Factory itself. This includes running activities in parallel (for example, a ForEach activity with a suitable batch count) and increasing the Data Integration Units (DIUs, formerly called Data Movement Units) and parallel copies on the Copy activity.
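For reference, those Copy activity knobs live in its typeProperties. Sketched here as a Python dict mirroring the pipeline JSON shape; the activity name, the values, and the source/sink types are placeholders to adjust for your datasets:

```python
# Sketch of the Copy activity settings (mirrors the pipeline JSON shape).
# Name, values, and source/sink types below are placeholders.
copy_activity = {
    "name": "CopyAppInsightsExport",
    "type": "Copy",
    "typeProperties": {
        "source": {"type": "JsonSource"},
        "sink": {"type": "JsonSink"},
        "dataIntegrationUnits": 16,  # more DIUs -> more copy throughput (and cost)
        "parallelCopies": 8,         # number of parallel read/write threads
    },
}
```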
Refactor your DataFlow: If feasible, refactor your data flow to handle large files better. This could include pushing filters and column pruning as early in the stream as possible, and setting partitioning on the source and on memory-heavy transformations such as Window or Surrogate Key, so that less data is held in memory at once.
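Data flows themselves are authored in the ADF UI rather than in code, but the underlying principle (filter early, project early, and aggregate per partition instead of materializing everything) can be sketched with pandas as a stand-in. The file name, column names, and filter predicate here are assumptions:

```python
# Analogue of "filter early, project early, process in partitions":
# read the export in chunks, keep only the needed rows/columns per chunk,
# and combine small per-chunk aggregates instead of the raw data.
import pandas as pd

NEEDED_COLUMNS = ["timestamp", "operation_Name", "duration"]  # placeholder column names


def mean_duration_by_operation(path: str, chunk_rows: int = 100_000) -> pd.DataFrame:
    partial_results = []
    for chunk in pd.read_json(path, lines=True, chunksize=chunk_rows):
        trimmed = chunk[NEEDED_COLUMNS]             # column pruning
        trimmed = trimmed[trimmed["duration"] > 0]  # early filtering (placeholder predicate)
        partial_results.append(
            trimmed.groupby("operation_Name")["duration"].agg(["sum", "count"])
        )
    # combine per-chunk sums/counts, then derive the overall mean
    totals = pd.concat(partial_results).groupby(level=0).sum()
    totals["mean_duration"] = totals["sum"] / totals["count"]
    return totals
```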
Use Mapping Data Flow's Memory Optimized compute type: The Azure integration runtime that runs a data flow can be set to the Memory Optimized compute type and given a larger core count. This gives the underlying cluster more memory per core and is aimed at large or memory-intensive data operations.
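That choice lives on the Azure integration runtime used by the data flow activity. As a sketch of the relevant JSON shape, again as a Python dict with a placeholder name, core count, and time-to-live:

```python
# Sketch of an Azure integration runtime definition for data flows
# (mirrors the resource JSON shape; name, coreCount, and timeToLive are placeholders).
data_flow_runtime = {
    "name": "DataFlowMemoryOptimizedIR",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "MemoryOptimized",  # vs. "General"
                    "coreCount": 16,
                    "timeToLive": 10,  # minutes the cluster stays warm between runs
                },
            }
        },
    },
}
```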
Remember to test any changes thoroughly to ensure they solve the problem without introducing new issues. There is no single switch to flip here, so you will likely end up combining a couple of these approaches.