Hi @Brianna C ,
Thank you for reaching out to us with your query.
Based on the structure of your pipeline, inside the ForEach activity you are passing every file name to the dataflow using dataset parameters.
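For illustration only, an iteration like that might look roughly like the sketch below in the pipeline JSON. The names here (a Get Metadata activity called 'Get Metadata1', a data flow called 'TransformFile' with a source named 'source1', and a dataset parameter 'fileName') are my assumptions, and the exact property names such as 'datasetParameters' can vary slightly depending on how your factory was authored, so treat this as a sketch rather than something to copy as-is:

```json
{
  "name": "ForEachFile",
  "type": "ForEach",
  "typeProperties": {
    "items": {
      "value": "@activity('Get Metadata1').output.childItems",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "RunTransformFile",
        "type": "ExecuteDataFlow",
        "typeProperties": {
          "dataflow": {
            "referenceName": "TransformFile",
            "type": "DataFlowReference",
            "datasetParameters": {
              "source1": {
                "fileName": "@item().name"
              }
            }
          }
        }
      }
    ]
  }
}
```

The key point is simply that '@item().name' flows from each ForEach iteration into the dataset parameter used by the dataflow source.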
As you are transforming all the files dynamically, one per iteration, your source schema was set to None when the dataset was created.
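A dataset created that way might look something like the sketch below. The names 'DS_SourceFile', 'LS_AzureBlobStorage' and the 'input' container are placeholders I am assuming; the empty 'schema' array is what "schema set to None" corresponds to, and the file name is driven entirely by the 'fileName' parameter:

```json
{
  "name": "DS_SourceFile",
  "properties": {
    "linkedServiceName": {
      "referenceName": "LS_AzureBlobStorage",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "fileName": { "type": "string" }
    },
    "type": "DelimitedText",
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "fileName": {
          "value": "@dataset().fileName",
          "type": "Expression"
        },
        "container": "input"
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    },
    "schema": []
  }
}
```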
Normally, when you create a dataset, the schema of a specific file is imported from the connection/store, and that imported schema is then used by the dataflow source.
Because no schema is imported here, you faced difficulty detecting the file columns, and the mapping in the sink activity showed only the derived columns.
You will only observe this output in dataflow debug. When you run the dataflow through the pipeline, it dynamically picks up the schema of the source file in each iteration and produces the desired result.
In the demonstration below, I took one file with the import schema set to None in the dataset, and the source in the dataflow debug now shows 0 rows.
Next, you can see I added one derived column, 'Location', and the sink mapping shows only that 1 column.
But when I executed the dataflow through the pipeline, it picked up the schema dynamically from the file, and you can see the result includes all the existing columns from the source file.
So, the behavior above won't affect your output file. If you want to cross-check the original data while building the transformations, point the dataset at one of your source files once and import its schema. After importing the schema, put your dataset parameter back in the file name so it still works for each iteration.
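As a sketch of that suggestion, the same dataset (abbreviated here, other properties unchanged) would keep the 'fileName' parameter in the location while also carrying the schema imported from one sample file; the column names below are only hypothetical:

```json
{
  "name": "DS_SourceFile",
  "properties": {
    "parameters": {
      "fileName": { "type": "string" }
    },
    "type": "DelimitedText",
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "fileName": { "value": "@dataset().fileName", "type": "Expression" },
        "container": "input"
      }
    },
    "schema": [
      { "name": "Id", "type": "String" },
      { "name": "Name", "type": "String" },
      { "name": "City", "type": "String" }
    ]
  }
}
```

Because the file name is still '@dataset().fileName', each ForEach iteration keeps working exactly as before, but the debug data preview now has columns to show.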
The dataflow will then pick up the imported schema, and you can see the data preview while doing the transformations.
Hope this helps. If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.