Hi @Satish Hadapad
As I understand, you want to flatten the nested JSON array in ADF. You can achieve this using the Flatten transformation in the Data Flow activity. Given that the first node doesn't have a key and starts with non-constant values, you'll need to use a combination of transformations to get the desired output.
You can follow the below steps to flatten the JSON data:
**Step 1: Parse the JSON**
Use the `Parse` transformation to parse the JSON data into a hierarchical structure.
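To illustrate what Step 1 produces, here's a minimal Python sketch. The field names come from your JSON; the sample values and table names (`table1`, `table2`) are hypothetical placeholders, since the point is just that the top-level keys are the (non-constant) table names:

```python
import json

# Hypothetical sample matching the structure described in the question:
# the top-level keys are table names, each mapping to its stats plus a
# nested schemaHistory array.
raw = """
{
  "table1": {
    "lastSuccessfulWriteTimestamp": "2024-01-01T00:00:00Z",
    "totalProcessedRecordsCount": 100,
    "dataFilesPath": "/data/table1",
    "schemaHistory": ["v1", "v2"]
  },
  "table2": {
    "lastSuccessfulWriteTimestamp": "2024-01-02T00:00:00Z",
    "totalProcessedRecordsCount": 200,
    "dataFilesPath": "/data/table2",
    "schemaHistory": ["v1"]
  }
}
"""

parsed = json.loads(raw)
print(list(parsed.keys()))  # the table names, which we want as a column
```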
**Step 2: Use the Flatten transformation with Map**
Add a `Flatten` transformation and set the **Unpivot** option to **Map**. This lets you flatten the JSON data while preserving the table names as keys. In the Flatten settings, set the following:
- **Input**: the parsed JSON data from Step 1
- **Unpivot**: Map
- **Map keys**: `table*` (this will capture all table names as keys)
- **Map values**: `lastSuccessfulWriteTimestamp`, `totalProcessedRecordsCount`, `dataFilesPath`, `schemaHistory`
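The map-unpivot in Step 2 is equivalent to turning each key/value pair into a row, with the key kept as a column. A Python sketch (sample data hypothetical, field names from your JSON):

```python
# Hypothetical parsed input, same shape as described in the question.
parsed = {
    "table1": {"lastSuccessfulWriteTimestamp": "2024-01-01T00:00:00Z",
               "totalProcessedRecordsCount": 100,
               "dataFilesPath": "/data/table1",
               "schemaHistory": ["v1", "v2"]},
    "table2": {"lastSuccessfulWriteTimestamp": "2024-01-02T00:00:00Z",
               "totalProcessedRecordsCount": 200,
               "dataFilesPath": "/data/table2",
               "schemaHistory": ["v1"]},
}

# Turn each map entry into a flat row, preserving the key (the table
# name) as its own column.
rows = [{"table_name": name, **stats} for name, stats in parsed.items()]

for row in rows:
    print(row["table_name"], row["totalProcessedRecordsCount"])
```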
**Step 3: Flatten the `schemaHistory` array**
Add another `Flatten` transformation to flatten the `schemaHistory` array. Set the following:
- **Input**: the output from Step 2
- **Unpivot**: Array
- **Array**: `schemaHistory`
- **Columns**: `schemaHistory_value1`, `schemaHistory_value2` (you can add more columns if needed)
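Step 3 spreads the array elements into a fixed set of columns. A Python sketch of that logic, padding with `None` when a table has fewer history entries than columns (sample rows hypothetical):

```python
# Hypothetical rows as they might look after Step 2.
rows = [
    {"table_name": "table1", "schemaHistory": ["v1", "v2"]},
    {"table_name": "table2", "schemaHistory": ["v1"]},
]

MAX_VALUES = 2  # number of schemaHistory_value* columns; add more if needed

flattened = []
for row in rows:
    out = {k: v for k, v in row.items() if k != "schemaHistory"}
    history = row["schemaHistory"]
    for i in range(MAX_VALUES):
        # schemaHistory_value1, schemaHistory_value2, ...; None if absent
        out[f"schemaHistory_value{i + 1}"] = history[i] if i < len(history) else None
    flattened.append(out)
```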
**Step 4: Select and rename columns**
Use the `Select` transformation to select the desired columns and rename them as needed. For example:
- `table_name`: the table name (e.g., `table1`, `table2`)
- `lastSuccessfulWriteTimestamp`: the timestamp value
- `totalProcessedRecordsCount`: the count value
- `dataFilesPath`: the path value
- `schemaHistory_value1`: the first schema history value
- `schemaHistory_value2`: the second schema history value
**Step 5: Sink the data to your SQL database**
Finally, use the `Sink` transformation to write the flattened data to your SQL database.
By following these steps, you should be able to flatten the nested JSON array and extract the desired columns for each table.
I hope this helps. Please let me know if you have any questions.