You can use the Lookup activity to read the header row of the CSV file and dynamically identify the columns in the file. Set the First Row Only option to true, and make sure the Lookup's dataset does not treat the first row as a header, so the header line is returned as data.
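As a sketch, the dataset behind that Lookup might look like this (YourCSVFileDataset, the linked service, container, and file names are placeholders). Setting columnDelimiter to a character that never occurs in the file keeps the entire header line in one column (Prop_0), which makes it easy to split into an array in the next step; the Data Flow should read the file through a separate, normally comma-delimited dataset:

{
  "name": "YourCSVFileDataset",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": {
      "referenceName": "YourBlobStorageLinkedService",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "your-container",
        "fileName": "input.csv"
      },
      "columnDelimiter": "|",
      "firstRowAsHeader": false
    }
  }
}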
Then, add a ForEach activity to iterate over the columns identified by the Lookup activity; inside it, add a Data Flow activity that creates an individual file for each column.
Because First Row Only is enabled, the Lookup returns the header row as output.firstRow rather than output.value, so set the Items property to split that line into an array of column names:
@split(activity('LookupHeaders').output.firstRow.Prop_0, ',')
(Prop_0 is the default name ADF gives the single column when no header row is defined. You may also want to drop TimeStart and TimeEnd from this array, since those two columns are already included in every output file.)
In the Data Flow, you read the CSV file and select the necessary columns: TimeStart, TimeEnd, and the current column, which the ForEach activity passes in through a string parameter (called $columnName below).
The Select Transformation, using rule-based mapping (data flows use their own expression language with $parameters, not the @pipeline syntax):
name=='TimeStart' || name=='TimeEnd' || name==$columnName
The Sink Transformation, set to output a single file named from the same parameter:
concat($columnName, '.csv')
Then, you write the data to a new CSV file in Azure Blob Storage, naming the file based on the current column name.
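For orientation, the script behind such a data flow could look roughly like this (a hand-written sketch, not designer output; the stream names and the $columnName parameter are assumptions, and the exact properties the UI generates may differ):

parameters{
  columnName as string
}
source(allowSchemaDrift: true,
  validateSchema: false) ~> ReadCSV
ReadCSV select(mapColumn(
    each(match(name=='TimeStart' || name=='TimeEnd' || name==$columnName))
  )) ~> SelectColumns
SelectColumns sink(allowSchemaDrift: true,
  validateSchema: false,
  partitionFileNames:[(concat($columnName, '.csv'))],
  partitionBy('hash', 1)) ~> WriteCSV

The partitionFileNames / partitionBy('hash', 1) pair corresponds to the "Output to single file" option in the Sink, which is what lets the file name be computed per run.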
Here is an example of how your ADF pipeline JSON might look:
{
  "name": "SplitCSVPipeline",
  "properties": {
    "activities": [
      {
        "name": "LookupHeaders",
        "type": "Lookup",
        "typeProperties": {
          "source": {
            "type": "DelimitedTextSource"
          },
          "dataset": {
            "referenceName": "YourCSVFileDataset",
            "type": "DatasetReference"
          },
          "firstRowOnly": true
        }
      },
      {
        "name": "ForEachColumn",
        "type": "ForEach",
        "dependsOn": [
          {
            "activity": "LookupHeaders",
            "dependencyConditions": [ "Succeeded" ]
          }
        ],
        "typeProperties": {
          "items": {
            "value": "@split(activity('LookupHeaders').output.firstRow.Prop_0, ',')",
            "type": "Expression"
          },
          "activities": [
            {
              "name": "DataFlowActivity",
              "type": "ExecuteDataFlow",
              "typeProperties": {
                "dataflow": {
                  "referenceName": "SplitCSVDataFlow",
                  "type": "DataFlowReference",
                  "parameters": {
                    "columnName": {
                      "value": "'@{item()}'",
                      "type": "Expression"
                    }
                  }
                }
              },
              "dependsOn": []
            }
          ]
        }
      }
    ],
    "variables": []
  }
}
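Note the single quotes around @{item()} in the parameter value: data flow string parameters expect a data flow expression, so the pipeline value has to be interpolated into a quoted string literal. If any of this doesn't match your environment, the quickest way to get the exact JSON is to build the pipeline once in the ADF designer and copy it from the JSON view.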