The pipeline and data flow are succeeding, so the issue is almost certainly where the sink datasets are pointing in Azure Storage versus where the files are being looked for.
From the screenshots:
- The sink datasets `DS_processed_hospital_admissions_daily` and `DS_processed_hospital_admissions_weekly` are DelimitedText datasets.
- Each dataset has:
  - Linked service: an ADLS Gen2 account
  - File path split into folder and file parts, e.g.
    - Folder: `hospitaladmissions`
    - File: `daily` or `weekly`
In mapping data flows, when a DelimitedText dataset is used as a sink with only a folder and a base file name, ADF writes CSV files into that folder with a generated name pattern (for example `daily_*.csv`, `weekly_*.csv`), not necessarily a single file exactly named `daily.csv` or `weekly.csv`.
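A quick sketch of why an exact-name lookup can miss the output: the folder listing below is hypothetical, but the shape of the generated names (base name plus a GUID-like suffix) is typical of data flow sinks, so matching on a prefix pattern finds the files while an exact-name check does not.

```python
from fnmatch import fnmatch

# Hypothetical listing of the hospitaladmissions folder after a data flow run.
blob_names = [
    "daily_81f3c2ab-5d7e-4a1b-9c0d-2e4f6a8b0c1d.csv",
    "weekly_3a5b7c9d-1e2f-4a6b-8c0d-9e1f3a5b7c9d.csv",
    "_SUCCESS",  # marker file some runs leave behind; not part of the data
]

# Match on a prefix pattern instead of an exact file name.
daily_files = [n for n in blob_names if fnmatch(n, "daily*.csv")]
weekly_files = [n for n in blob_names if fnmatch(n, "weekly*.csv")]

# An exact-name check comes up empty, which is the symptom described above.
exact_match = [n for n in blob_names if n == "daily.csv"]
```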
To locate the output:
- Open the same storage account that the linked service uses.
- Browse to the container configured in the linked service.
- Inside that container, open the `hospitaladmissions` folder.
- Look for files whose names start with `daily` or `weekly`; they may include additional suffixes (timestamps, partitions, or GUIDs) before the `.csv` extension.
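The same check can be scripted instead of browsing the portal; a sketch using the Azure CLI's ADLS Gen2 commands, where `<storage-account>` and `<container>` are placeholders for the values configured in the linked service:

```shell
# List every path under hospitaladmissions in the account/container
# that the linked service points at (requires az login first).
az storage fs file list \
  --account-name <storage-account> \
  --file-system <container> \
  --path hospitaladmissions \
  --auth-mode login \
  --query "[].name" -o tsv
```

Any generated files (`daily_*.csv`, `weekly_*.csv`) will appear in this listing even if their names are not exactly `daily.csv` or `weekly.csv`.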
If nothing appears there, verify these points in the data flow sink settings:
- In the sink Settings tab, confirm the output path is not overridden by a dynamic expression; it should still reference the dataset path (`hospitaladmissions/daily` and `hospitaladmissions/weekly`).
- Ensure the sink is not configured to truncate or delete files after the write (for example via a subsequent activity).
- Confirm the linked service points to the same storage account and container that is being checked in the portal.
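For reference while checking these settings, a DelimitedText dataset pointing at ADLS Gen2 looks roughly like the sketch below (the linked service reference and container name are placeholders; only the `location` block matters for this issue):

```json
{
  "name": "DS_processed_hospital_admissions_daily",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": {
      "referenceName": "<ADLS-Gen2-linked-service>",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobFSLocation",
        "fileSystem": "<container>",
        "folderPath": "hospitaladmissions",
        "fileName": "daily"
      }
    }
  }
}
```

Comparing `fileSystem` and `folderPath` here against the container and folder being browsed in the portal quickly confirms whether both point at the same place.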
Once the correct container and folder are opened, the CSV files written by the sinks should be visible.