Copy Dataverse data into Azure SQL
Hi,
I'm using the template "Copy Dataverse data into Azure SQL" with Synapse Link, but I'm getting stuck on the create-trigger step. For some reason it can't find the blobs when I enter the details below exactly as the step-by-step document says (https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-link-pipelines?tabs=synapse-analytics).
The Dataverse container does have the files, so this should work. I'm not sure what I'm missing.
Any ideas or suggestions are greatly appreciated!
[Screenshots of the storage event trigger settings omitted]
Azure Synapse Analytics
4 answers
LiJia Liu 175 Reputation points MVP
2023-03-13T10:01:03.7233333+00:00
I can suggest going with Microsoft Flow:
- Create a field in the Azure SQL DB to store unique values.
- Create a field in Dataverse with the same data type as the field created in step 1.
- Create two flows (the shared upsert logic is sketched after this list):
  a. A flow triggered when a record is created or updated in the Azure SQL DB: use List rows to check the SQL field from step 1 against the Dataverse field from step 2, then update the record if a match is found, otherwise create it.
  b. A flow triggered when a record is updated in Dataverse: use List rows to retrieve the matching record from the SQL DB, then update it.
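For the "if a match is found, update, otherwise create" branch, here is a minimal sketch of the equivalent upsert on the SQL side, written in Python with pyodbc. The table and column names (dbo.contact, syncid, fullname) and the connection string are hypothetical, not part of any template:

```python
import pyodbc

# Placeholder connection string for the target Azure SQL database.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;Uid=myuser;Pwd=mypassword;Encrypt=yes;"
)

def upsert_contact(sync_id: str, full_name: str) -> None:
    """Update the row matching sync_id if it exists; otherwise insert it.

    Mirrors the 'if available, update; else create' branch of the flow.
    Table and column names here are illustrative only.
    """
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute(
            """
            MERGE dbo.contact AS target
            USING (SELECT ? AS syncid, ? AS fullname) AS source
            ON target.syncid = source.syncid
            WHEN MATCHED THEN
                UPDATE SET fullname = source.fullname
            WHEN NOT MATCHED THEN
                INSERT (syncid, fullname) VALUES (source.syncid, source.fullname);
            """,
            sync_id,
            full_name,
        )
        conn.commit()
```

MERGE performs the existence check and the insert/update in a single statement, which is the same decision the flow's condition branch makes one step at a time.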
Binway 736 Reputation points
2023-04-21T03:45:14.31+00:00
Just adding what I found when I was having the same issue, since this seems to have been asked a number of times without an appropriate answer. The instructions for copying Dataverse into Azure SQL at https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-link-pipelines?tabs=synapse-analytics include a set of prerequisites, one of which is to enable the incremental folder update (see https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-incremental-update). When you set up the link and add tables, you must enable incremental updates.
Without them, the directory structure in your storage account is completely different: the model.json file sits at the root of the container instead of inside a folder, so the "/model.json" value you enter when creating the trigger never matches anything. Compare the folder structure in Anders' question with the folder structure you get when incremental updates are enabled in the Synapse Link setup. With incremental updates on, model.json is inside the timestamped date folders, and the /model.json parameter works.
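To confirm which layout you actually have before creating the trigger, you can list where model.json lives in the container. A quick diagnostic sketch using the azure-storage-blob package; the connection string and container name are placeholders, and this tooling choice is mine, not part of the walkthrough:

```python
from azure.storage.blob import ContainerClient

# Placeholder connection details for the storage account behind the Synapse Link.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
CONTAINER_NAME = "dataverse-myorg-environment"  # hypothetical container name

container = ContainerClient.from_connection_string(CONNECTION_STRING, CONTAINER_NAME)

# Print every blob path ending in model.json. With incremental folder updates
# enabled you should see it inside timestamped folders; without them, a single
# model.json at the container root, which a trigger configured with
# 'Blob path ends with = /model.json' will never match.
for blob in container.list_blobs():
    if blob.name.endswith("model.json"):
        print(blob.name)
```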
Karthik Eetur 21 Reputation points
2023-08-15T02:04:31.6966667+00:00
I've used the "Copy Dataverse to SQL" pipeline (dataflow) suggested by Microsoft.
I am running the Synapse pipelines every 15 minutes to copy data from the incremental folders to the SQL database.
I have a scenario where the main dataflow pipeline runs for more than 15 minutes and the orchestrator pipeline then skips all subsequent pipeline runs. I have to manually clear the processinglog table entries with status <> 1 before the orchestrator pipeline will pick up new folders for processing (a sketch of that cleanup follows below).
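For reference, the manual cleanup described above might look something like this in Python with pyodbc. This is a sketch only: it assumes a dbo.processinglog table with a status column, as implied by the post, but the template's actual control-table schema may differ, so inspect the stuck rows before deleting anything:

```python
import pyodbc

# Placeholder connection string for the Azure SQL database used by the template.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;Uid=myuser;Pwd=mypassword;Encrypt=yes;"
)

with pyodbc.connect(CONN_STR) as conn:
    cursor = conn.cursor()
    # Show the stuck entries first: anything not marked complete (status <> 1).
    # Table and column names are assumptions based on the post, not a known schema.
    cursor.execute("SELECT * FROM dbo.processinglog WHERE status <> 1;")
    for row in cursor.fetchall():
        print(row)
    # Clear them so the orchestrator pipeline picks up new folders again.
    cursor.execute("DELETE FROM dbo.processinglog WHERE status <> 1;")
    conn.commit()
```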
How can I reprocess the missed incremental folders using the same Microsoft pipeline?
I don't think the same pipeline works for reprocessing old folders once new folders have been ingested into the database and the max row version is already the latest one; for example, reprocessing 20-30 old folders to catch up on data.
Appreciate any help here.