Hey,
Since, as you mentioned, the number of sheets within each Excel file remains constant, please proceed with the flow below:
- Create a variable holding the maximum number of sheets (in our case 5), or an array variable with the values [0,1,2,3,4] matching the sheet indexes.
- Use the Get Metadata activity to get the number of Excel files in the folder you need to process.
- Use a ForEach activity to iterate over those files.
- Within the ForEach activity, call an Execute Pipeline activity.
- The child pipeline would have a ForEach activity whose iteration count equals the number of sheets in an Excel file, so the parameters passed from the parent pipeline to this pipeline are the file path and the number of sheets.
- Within that ForEach activity, call an Execute Pipeline activity.
- You can parameterize the sheetIndex in the Excel dataset.
- Once all the Excel sheets have been converted to CSV files in their respective folders, use a ForEach activity again in the main pipeline, after the first one, and iterate over the number-of-sheets variable; the task now is to merge the CSV files into a single file.
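The parameterized sheetIndex mentioned above might look something like this in the Excel dataset JSON. This is only a sketch: the dataset, linked service, parameter, and container names are illustrative, and I'm assuming blob storage as the source.

```json
{
  "name": "ExcelSheetDataset",
  "properties": {
    "type": "Excel",
    "linkedServiceName": {
      "referenceName": "SourceStorageLS",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "filePath": { "type": "string" },
      "sheetIndex": { "type": "int" }
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "source",
        "fileName": {
          "value": "@dataset().filePath",
          "type": "Expression"
        }
      },
      "sheetIndex": {
        "value": "@dataset().sheetIndex",
        "type": "Expression"
      },
      "firstRowAsHeader": true
    }
  }
}
```

The child pipeline's ForEach can then pass its current item (e.g. the values from the [0,1,2,3,4] array) into this sheetIndex parameter on each iteration.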
To merge, follow a similar approach to:
https://www.c-sharpcorner.com/article/merge-multiple-json-files-via-synapse-data-factory-pipelines/
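Outside the pipeline, the merge step is conceptually just concatenating CSV files that share a header, keeping the header only once. A minimal local sketch of that logic in plain Python (the function name is mine, not part of any pipeline):

```python
import csv


def merge_csv_files(csv_paths, out_path):
    """Concatenate CSV files that share a header into one file.

    The header row is taken from the first non-empty file;
    headers of subsequent files are skipped.
    """
    header_written = False
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in csv_paths:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader, None)
                if header is None:
                    continue  # skip completely empty files
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                writer.writerows(reader)  # data rows only
```

In the pipeline itself, the article above achieves the same effect with a Copy activity whose source uses a wildcard path over the per-sheet output folder and a "merge files" copy behavior.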