First, create an Azure SQL Database instance and design its tables to match the structure of your Dataverse entities. Then, in the Azure portal, create an Azure Data Factory (ADF) instance and define two linked services:
- A Dataverse linked service using the Common Data Service (current environment) connector; you will need the connection details and credentials.
- An Azure SQL Database linked service, again with its connection string and the necessary credentials.
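If you would rather script these steps than click through the portal, the Azure SDK for Python (azure-identity plus azure-mgmt-datafactory) can create both linked services. A minimal sketch, assuming hypothetical names (rg-dataverse-sync, adf-dataverse-sync, ls_dataverse, ls_azuresql) and placeholder credentials; exact model and property names can vary slightly between SDK versions, so treat it as a starting point rather than a drop-in script:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    LinkedServiceResource,
    CommonDataServiceForAppsLinkedService,
    AzureSqlDatabaseLinkedService,
    SecureString,
)

# Placeholder names -- substitute your own subscription, resource group and factory.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-dataverse-sync"
FACTORY_NAME = "adf-dataverse-sync"

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Linked service 1: Dataverse (Common Data Service for Apps), authenticated
# with an Azure AD service principal.
dataverse_ls = LinkedServiceResource(
    properties=CommonDataServiceForAppsLinkedService(
        deployment_type="Online",
        service_uri="https://<your-org>.crm.dynamics.com",
        authentication_type="AADServicePrincipal",
        service_principal_id="<app-id>",
        service_principal_credential_type="ServicePrincipalKey",
        service_principal_credential=SecureString(value="<app-secret>"),
    )
)
adf.linked_services.create_or_update(
    RESOURCE_GROUP, FACTORY_NAME, "ls_dataverse", dataverse_ls
)

# Linked service 2: Azure SQL Database, via its connection string.
sql_ls = LinkedServiceResource(
    properties=AzureSqlDatabaseLinkedService(
        connection_string="Server=tcp:<server>.database.windows.net;"
                          "Database=<db>;User ID=<user>;Password=<password>;Encrypt=True;"
    )
)
adf.linked_services.create_or_update(
    RESOURCE_GROUP, FACTORY_NAME, "ls_azuresql", sql_ls
)
```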
Now, for each Dataverse table and corresponding Azure SQL table:
- Create a Dataverse dataset using the Dataverse linked service.
- Create an Azure SQL dataset using the Azure SQL linked service.
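Continuing the same scripted sketch (reusing the adf client and placeholder names from the block above), the two datasets for one example entity, 'account', might look like this; the entity and table names are illustrative only:

```python
from azure.mgmt.datafactory.models import (
    DatasetResource,
    CommonDataServiceForAppsEntityDataset,
    AzureSqlTableDataset,
    LinkedServiceReference,
)

dataverse_ref = LinkedServiceReference(type="LinkedServiceReference", reference_name="ls_dataverse")
sql_ref = LinkedServiceReference(type="LinkedServiceReference", reference_name="ls_azuresql")

# Source dataset: one Dataverse entity (logical name, e.g. 'account').
source_ds = DatasetResource(
    properties=CommonDataServiceForAppsEntityDataset(
        linked_service_name=dataverse_ref,
        entity_name="account",
    )
)
adf.datasets.create_or_update(RESOURCE_GROUP, FACTORY_NAME, "ds_dataverse_account", source_ds)

# Sink dataset: the matching Azure SQL table.
sink_ds = DatasetResource(
    properties=AzureSqlTableDataset(
        linked_service_name=sql_ref,
        table_name="dbo.Account",
    )
)
adf.datasets.create_or_update(RESOURCE_GROUP, FACTORY_NAME, "ds_sql_account", sink_ds)
```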
Next, use the 'Copy Data' activity in a pipeline:
- Set the source as the Dataverse dataset.
- Set the destination as the Azure SQL dataset.
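As a scripted sketch of this step, with made-up names (pl_dataverse_to_sql plus the dataset names from above):

```python
from azure.mgmt.datafactory.models import (
    PipelineResource,
    CopyActivity,
    CommonDataServiceForAppsSource,
    AzureSqlSink,
    DatasetReference,
)

# One copy activity per entity: Dataverse dataset in, Azure SQL dataset out.
copy_account = CopyActivity(
    name="CopyAccountToSql",
    inputs=[DatasetReference(type="DatasetReference", reference_name="ds_dataverse_account")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="ds_sql_account")],
    source=CommonDataServiceForAppsSource(),  # optionally a FetchXML query; see the incremental-load note below
    sink=AzureSqlSink(),
)

pipeline = PipelineResource(activities=[copy_account])
adf.pipelines.create_or_update(RESOURCE_GROUP, FACTORY_NAME, "pl_dataverse_to_sql", pipeline)
```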
For optimization, consider the following (see the sketch after this list):
- Increase the write batch size on the sink for better throughput.
- Increase the number of parallel copies, and allocate more Data Integration Units (DIUs, formerly Data Movement Units) to the copy activity if available.
- Set an appropriate retry policy for transient failures.
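If you are scripting the copy activity as above, these knobs map to properties on it (reusing the earlier imports, plus ActivityPolicy). The numbers below are illustrative, not recommendations:

```python
from azure.mgmt.datafactory.models import ActivityPolicy

copy_account = CopyActivity(
    name="CopyAccountToSql",
    inputs=[DatasetReference(type="DatasetReference", reference_name="ds_dataverse_account")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="ds_sql_account")],
    source=CommonDataServiceForAppsSource(),
    sink=AzureSqlSink(write_batch_size=10000),  # larger write batches per SQL round trip
    parallel_copies=4,                          # concurrent copy partitions
    data_integration_units=8,                   # DIUs allocated to this copy
    policy=ActivityPolicy(retry=3, retry_interval_in_seconds=60),  # retry transient failures
)
```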
Don't forget to use ADF's Trigger feature to schedule your pipeline. You can set it to run once every 24 hours.
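A daily schedule trigger can also be created from the same script. This sketch assumes the hypothetical pipeline name pl_dataverse_to_sql and a trigger name trg_daily; on older SDK versions the start method may be start(...) rather than begin_start(...):

```python
from datetime import datetime, timedelta
from azure.mgmt.datafactory.models import (
    TriggerResource,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    PipelineReference,
)

# Fire the pipeline once every day, starting shortly after the trigger is enabled.
recurrence = ScheduleTriggerRecurrence(
    frequency="Day",
    interval=1,
    start_time=datetime.utcnow() + timedelta(minutes=15),
    time_zone="UTC",
)
trigger = TriggerResource(
    properties=ScheduleTrigger(
        recurrence=recurrence,
        pipelines=[
            TriggerPipelineReference(
                pipeline_reference=PipelineReference(
                    type="PipelineReference", reference_name="pl_dataverse_to_sql"
                )
            )
        ],
    )
)
adf.triggers.create_or_update(RESOURCE_GROUP, FACTORY_NAME, "trg_daily", trigger)
# Triggers are created in a stopped state and must be started explicitly.
adf.triggers.begin_start(RESOURCE_GROUP, FACTORY_NAME, "trg_daily").result()
```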
Extra considerations to keep in mind:
- If your Dataverse entities support change tracking, consider setting up incremental rather than full loads; transferring only changed rows makes the copies far more efficient (see the sketch after this list).
- Ensure the table structures in Azure SQL match those of the Dataverse entities to avoid mapping conflicts.
- Plan for error handling. ADF lets you configure fault-tolerance settings (for example, skipping incompatible rows) and add failure paths in your pipelines, which is useful if the copy fails partway through.
- Test the pipeline on a small subset of entities before rolling it out to all 50+ to make sure everything works as expected.
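For the incremental-load point, one common pattern is to keep a watermark (the last synced modifiedon timestamp) in the SQL database, read it with a Lookup activity, and filter the Dataverse source with a FetchXML query. A rough sketch of the source side only, assuming a hypothetical pipeline parameter named watermark; the FetchXML and parameter wiring are illustrative:

```python
from azure.mgmt.datafactory.models import CommonDataServiceForAppsSource

# FetchXML that returns only rows modified after the last successful sync.
# '@{pipeline().parameters.watermark}' is string-interpolated by ADF at run time;
# the value itself would normally come from a Lookup activity against SQL.
incremental_fetchxml = """
<fetch>
  <entity name="account">
    <all-attributes />
    <filter>
      <condition attribute="modifiedon" operator="gt"
                 value="@{pipeline().parameters.watermark}" />
    </filter>
  </entity>
</fetch>
"""

# Use this as the 'source' of the copy activity defined earlier, then add a
# Stored Procedure or Script activity after the copy to advance the watermark.
incremental_source = CommonDataServiceForAppsSource(query=incremental_fetchxml)
```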