Our client plans to use a Data Flow in ADF to split source CSV files into multiple files by a key column.
Because there are multiple source CSV files with different columns and schemas, the client needs to configure the Data Flow without importing a schema or projection.
However, key partitioning does not allow computed columns, which forces users to hardcode the key column in the Data Flow design. As a result, users have to create a separate dataset for each CSV file, which increases non-standard administrative effort and the risk of errors in an agile environment.
Our client requests that ADF improve the "partition by key column" function in Data Flow so that it works without a hardcoded key column or schema structure, which would save users a great deal of manual, hardcoded work.
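For illustration, the schema-agnostic behavior being requested can be sketched outside ADF in plain Python. This is a hypothetical `split_csv_by_key` helper (not an ADF API): the column layout is read from the header row at runtime, so the same code handles CSV files with different schemas, and only the key column name is supplied.

```python
import csv
import os


def split_csv_by_key(src_path, key_column, out_dir):
    """Split one CSV into one output file per distinct key value.

    The schema is discovered from the header row at runtime, so no
    column layout is hardcoded -- only the key column's name.
    """
    os.makedirs(out_dir, exist_ok=True)
    writers = {}   # key value -> csv.DictWriter
    handles = []   # open file handles, closed at the end
    try:
        with open(src_path, newline="") as src:
            reader = csv.DictReader(src)  # header defines the schema
            if key_column not in (reader.fieldnames or []):
                raise ValueError(f"key column {key_column!r} not in header")
            for row in reader:
                key = row[key_column]
                if key not in writers:
                    # NOTE: key values are used as file names here;
                    # real code should sanitize them first.
                    f = open(os.path.join(out_dir, f"{key}.csv"),
                             "w", newline="")
                    handles.append(f)
                    w = csv.DictWriter(f, fieldnames=reader.fieldnames)
                    w.writeheader()
                    writers[key] = w
                writers[key].writerow(row)
    finally:
        for f in handles:
            f.close()
```

Because nothing about the columns is fixed at design time, the same helper can be pointed at any of the source CSV files; this is the "no import schema and projection" behavior the client would like the Data Flow key-partitioning option to support.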