How to load data to on-premises SQL using a single Copy activity

Karthik Elavan 1 Reputation point

Dear All,
My project structure: the source system (CRM) loads data into ADLS Gen2 via Dataverse / Azure Synapse Link. After that, we plan to load the data into on-premises SQL, so I am using a Copy activity to load the data for each table. Currently, each table requires its own separate Copy activity. Instead, I would like to optimize this so that a single Copy activity loads all the tables reliably. Keep in mind that my source data has a different folder structure per table: each table has its own folder in the source.

Example: source folder (testA, containing multiple partitioned files in CSV format) → Copy activity whose source dataset points to testA with wildcard file paths → sink connected to on-premises SQL, using upsert to load the data into the SQL database.

I have 15 tables, so right now I have created 15 Copy activities to load the data. Instead, I would like to make this dynamic and load all 15 tables from their different source path locations. How do we handle this, and what are the possible solutions? Kindly suggest.

Tomorrow, if I have to load another 100 tables, we do not want to create more Copy activities and related objects. Please help me with this.

Azure Data Factory

2 answers

  1. Nandan Hegde 29,996 Reputation points MVP

    The below thread can act as a reference for handling dynamic scenarios:

    You would have to maintain a lookup table of the folder and table name mappings, and use a ForEach activity to copy the data iteratively.
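
    As a rough sketch, the pipeline shape would be a Lookup activity feeding a ForEach activity. All names below are hypothetical and the JSON is abbreviated (the Lookup and Copy definitions are elided):

    ```json
    {
      "name": "CopyAllTablesPipeline",
      "properties": {
        "activities": [
          { "name": "LookupTableMappings", "type": "Lookup" },
          {
            "name": "ForEachTable",
            "type": "ForEach",
            "dependsOn": [
              { "activity": "LookupTableMappings", "dependencyConditions": [ "Succeeded" ] }
            ],
            "typeProperties": {
              "items": {
                "value": "@activity('LookupTableMappings').output.value",
                "type": "Expression"
              },
              "activities": [ { "name": "CopyFolderToTable", "type": "Copy" } ]
            }
          }
        ]
      }
    }
    ```

    With this shape, adding a new table only requires inserting a row into the mapping table, not editing the pipeline.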

  2. ShaikMaheer-MSFT 38,301 Reputation points Microsoft Employee

    Hi @Karthik Elavan ,

    Thank you for posting query in Microsoft Q&A Platform.

    You can achieve this using parameterized datasets in the Copy activity, together with a SQL table that holds the folder path to table name mapping information.

    Kindly check the detailed example below for better understanding.
    Step 1: Create a table with folder path and table name mapping information.
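
    For example, the mapping table could be as simple as the following (table and column names are illustrative; only 'testA' comes from your description):

    ```sql
    -- One row per source folder / target table pair
    CREATE TABLE dbo.TableMapping (
        FolderPath NVARCHAR(500) NOT NULL,  -- folder in ADLS Gen2, e.g. 'testA'
        TableName  NVARCHAR(128) NOT NULL   -- target table in on-premises SQL
    );

    INSERT INTO dbo.TableMapping (FolderPath, TableName)
    VALUES ('testA', 'dbo.TestA'),
           ('testB', 'dbo.TestB');
    ```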

    Step 2: Use a Lookup activity or Script activity to read the above table's data into the ADF pipeline. The example here uses a Lookup activity.
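
    A Lookup activity for this step could look like the snippet below (dataset, activity, and table names are assumptions). Note that firstRowOnly is set to false so that every mapping row is returned, not just the first:

    ```json
    {
      "name": "LookupTableMappings",
      "type": "Lookup",
      "typeProperties": {
        "source": {
          "type": "AzureSqlSource",
          "sqlReaderQuery": "SELECT FolderPath, TableName FROM dbo.TableMapping"
        },
        "dataset": { "referenceName": "MappingDataset", "type": "DatasetReference" },
        "firstRowOnly": false
      }
    }
    ```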

    Step 3: Use a ForEach activity to iterate over each row of the above table's data, so that each iteration receives one folder path and table name mapping.
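
    The ForEach activity can consume the Lookup output directly via its items expression (activity names here are assumptions):

    ```json
    {
      "name": "ForEachTable",
      "type": "ForEach",
      "typeProperties": {
        "items": {
          "value": "@activity('LookupTableMappings').output.value",
          "type": "Expression"
        },
        "activities": []
      }
    }
    ```

    Inside the loop, @item().FolderPath and @item().TableName refer to the columns of the current mapping row.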

    Step 4: Inside the ForEach activity, use a Copy activity. Under the source tab, use a wildcard path; under the sink tab, use a parameterized dataset that points to the target SQL table dynamically.
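
    A sketch of that Copy activity follows. The dataset names, parameter names, and the upsert key column "Id" are all assumptions; the actual upsert keys may differ per table (they could even be stored in the mapping table as an extra column):

    ```json
    {
      "name": "CopyFolderToTable",
      "type": "Copy",
      "typeProperties": {
        "source": {
          "type": "DelimitedTextSource",
          "storeSettings": {
            "type": "AzureBlobFSReadSettings",
            "recursive": true,
            "wildcardFolderPath": { "value": "@item().FolderPath", "type": "Expression" },
            "wildcardFileName": "*.csv"
          }
        },
        "sink": {
          "type": "SqlServerSink",
          "writeBehavior": "upsert",
          "upsertSettings": { "useTempDB": true, "keys": [ "Id" ] }
        }
      },
      "inputs": [
        {
          "referenceName": "SourceCsvDataset",
          "type": "DatasetReference",
          "parameters": { "FolderPath": { "value": "@item().FolderPath", "type": "Expression" } }
        }
      ],
      "outputs": [
        {
          "referenceName": "OnPremSqlDataset",
          "type": "DatasetReference",
          "parameters": { "TableName": { "value": "@item().TableName", "type": "Expression" } }
        }
      ]
    }
    ```

    The sink dataset would declare a TableName parameter and reference it in its table property (e.g. "@dataset().TableName"), which is what makes a single Copy activity reusable for all tables.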

    Kindly check below video to know about parameterization of datasets in Azure data factory.
    Parameterize Datasets in Azure Data Factory

    Hope this helps. Please let me know if you have any queries.
