Actually, the returned result only appears escaped when you look at it as a string variable. Without the string conversion it is an array, and the json function can convert it back. @Gav Cheal
To fix the transposition, you will need a Mapping Data Flow; the Copy activity does not perform transformations.
mapLoop and flatten transforms
Given the dummy data:
{"theColumns":{
"Prop_name":["First prop","second prop","third prop"],
"Numbers":[10,20,30],
"Flavors":["Savory","Umame","Spicy"]
}}
I made this implementation (Data Flow Script):
source(output(
theColumns as (Flavors as string[], Numbers as short[], Prop_name as string[])
),
useSchema: false,
allowSchemaDrift: true,
validateSchema: false,
ignoreNoFilesFound: false,
format: 'json',
fileSystem: 'data',
folderPath: 'input',
fileName: 'transpose.json',
documentForm: 'singleDocument') ~> source1
source1 derive(rows = (mapLoop(size(theColumns.Prop_name),
@(Prop=theColumns.Prop_name[#index],
Number=theColumns.Numbers[#index],
Flavor=theColumns.Flavors[#index])
))) ~> derivedColumn1
derivedColumn1 select(mapColumn(
rows = rows
),
skipDuplicateMapInputs: true,
skipDuplicateMapOutputs: true) ~> select1
select1 foldDown(unroll(rows, rows),
mapColumn(
each(rows,match(true()))
),
skipDuplicateMapInputs: false,
skipDuplicateMapOutputs: false) ~> flatten1
flatten1 sink(allowSchemaDrift: true,
validateSchema: false,
format: 'delimited',
fileSystem: 'data',
folderPath: 'out',
columnDelimiter: ',',
escapeChar: '\\',
quoteChar: '\"',
columnNamesAsHeader: true,
partitionFileNames:['transpose_results.csv'],
umask: 0022,
preCommands: [],
postCommands: [],
skipDuplicateMapInputs: true,
skipDuplicateMapOutputs: true,
partitionBy('hash', 1)) ~> sink1
This uses three transforms. First, a Derived Column creates a single column containing an array of objects; mapLoop pulls one element from each source array by index:
(mapLoop(size(theColumns.Prop_name),@(Prop=theColumns.Prop_name[#index], Number=theColumns.Numbers[#index], Flavor=theColumns.Flavors[#index])))
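Conceptually, that mapLoop expression builds one record per array index. Here is a minimal plain-Python sketch of the same per-index transposition (using the dummy data above; this is an illustration, not the Data Flow runtime):

```python
# Dummy data in the same shape as the source JSON
data = {"theColumns": {
    "Prop_name": ["First prop", "second prop", "third prop"],
    "Numbers": [10, 20, 30],
    "Flavors": ["Savory", "Umame", "Spicy"],
}}

cols = data["theColumns"]

# Mirrors mapLoop(size(...), @(Prop=..., Number=..., Flavor=...)):
# one record per index, taking the i-th element from each column array.
rows = [
    {"Prop": cols["Prop_name"][i],
     "Number": cols["Numbers"][i],
     "Flavor": cols["Flavors"][i]}
    for i in range(len(cols["Prop_name"]))
]
print(rows)
```

Note this assumes all three arrays have the same length, just as the Data Flow expression does by sizing the loop on `theColumns.Prop_name`.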
Next, a Select transform discards the original JSON structure, keeping only the new array column. Lastly, a Flatten transform unrolls the array into rows, and a rule-based mapping extracts each property into its own column.
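The unroll-and-write step can be sketched in plain Python as well: each record becomes a CSV row with the properties as columns (again an approximation of what the delimited sink produces, not the Data Flow engine itself):

```python
import csv
import io

# Records as produced by the per-index transposition above
rows = [
    {"Prop": "First prop", "Number": 10, "Flavor": "Savory"},
    {"Prop": "second prop", "Number": 20, "Flavor": "Umame"},
    {"Prop": "third prop", "Number": 30, "Flavor": "Spicy"},
]

# Write with a header row, matching columnNamesAsHeader: true in the sink
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Prop", "Number", "Flavor"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```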