Ref:
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-bulk-copy-portal
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-overview
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-derived-column
Create a new pipeline and add a "Get Metadata" activity. Point its "Dataset" property at the folder containing the Excel files, and add "Child items" to the "Field list" property so the activity returns the list of files in that folder.
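As a rough sketch, the Get Metadata activity's JSON definition might look like the fragment below. The activity and dataset names are hypothetical, and the dataset's linked service is omitted:

```json
{
  "name": "GetFileList",
  "type": "GetMetadata",
  "typeProperties": {
    "dataset": {
      "referenceName": "ExcelFolderDataset",
      "type": "DatasetReference"
    },
    "fieldList": [ "childItems" ]
  }
}
```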
Add a "ForEach" activity and connect the output of the "Get Metadata" activity to the input of the "ForEach" activity. Set the "Items" property of the "ForEach" activity to @activity('Get Metadata').output.childItems.
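A trimmed ForEach definition along the same lines (hypothetical names again; the inner activities are elided):

```json
{
  "name": "ForEachFile",
  "type": "ForEach",
  "dependsOn": [
    { "activity": "GetFileList", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "items": {
      "value": "@activity('GetFileList').output.childItems",
      "type": "Expression"
    },
    "activities": []
  }
}
```

Each entry in childItems is an object with "name" and "type" fields, which is why @item().name resolves to the file name inside the loop.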
Inside the "ForEach" activity, add a "Data Flow" activity that runs a Mapping Data Flow. A separate "Copy Data" activity is not needed here: the data flow reads each Excel file itself, and ADF activities do not pass row data from one to another. Give the source Excel dataset a file-name parameter, pass @item().name to that parameter from the pipeline, and set the "Sheet name" property to the sheet containing the data.
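One way to parameterize the source dataset on the file name is sketched below. The dataset name, container, and folder are placeholders, and the linkedServiceName block is omitted:

```json
{
  "name": "ExcelFileDataset",
  "properties": {
    "type": "Excel",
    "parameters": {
      "fileName": { "type": "string" }
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "data",
        "folderPath": "excel-input",
        "fileName": {
          "value": "@dataset().fileName",
          "type": "Expression"
        }
      },
      "sheetName": "Sheet1",
      "firstRowAsHeader": true
    }
  }
}
```

The pipeline then supplies @item().name as the value of the dataset's fileName parameter on each iteration.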
In the data flow, add a "Derived Column" transformation. Pipeline expressions such as @item().name cannot be evaluated inside a data flow, so define a data flow parameter (for example, filename), pass @item().name into it from the Data Flow activity's settings, and set the new "filename" column to $filename. This adds a column to your data containing each Excel file's name. (Alternatively, the source's "Column to store file name" option can capture the file path without any parameter.)
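In data flow script terms, the parameter declaration and derived column might look roughly like this (the source stream name excelSource and the transformation name AddFileName are hypothetical):

```
parameters {
    filename as string
}
excelSource derive(filename = $filename) ~> AddFileName
```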
Add a "Sink" transformation. Point the sink dataset at the output location (CSV or Parquet). With auto mapping enabled on the sink's "Mapping" tab, the source columns and the derived "filename" column are all written to the output; switch to manual mapping only if you need to rename or drop columns.
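A minimal sink step in the same data flow script, writing the tagged rows out (stream names again hypothetical):

```
AddFileName sink(allowSchemaDrift: true,
    validateSchema: false) ~> WriteOutput
```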
Finally, debug the pipeline and verify that each output file contains the source columns plus a "filename" column identifying the Excel file the rows came from.