Copy Dataverse data into Azure SQL

Anders Wennerwik 0 Reputation points
2023-03-12T10:06:51.09+00:00

Hi,

I'm using the template Copy Dataverse data into Azure SQL using Synapse Link, but I'm getting stuck on the create-trigger step. For some reason it can't find the blobs when I enter the details below, exactly as the step-by-step document says (https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-link-pipelines?tabs=synapse-analytics).

The dataverse container does have the files so this should work. I'm not sure what I'm missing.
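For debugging, a minimal sketch that lists which blob paths actually exist in the container, assuming the azure-storage-blob Python package; the connection string and container name are placeholders:

    # List every blob whose path ends with model.json to see where the
    # file actually sits in the Synapse Link container.
    from azure.storage.blob import ContainerClient

    container = ContainerClient.from_connection_string(
        conn_str="<storage-account-connection-string>",  # placeholder
        container_name="dataverse-<org>-<environment>",  # placeholder
    )

    for blob in container.list_blobs():
        if blob.name.endswith("model.json"):
            print(blob.name)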

Any ideas or suggestions greatly appreciated!

[Screenshots of the storage event trigger configuration: Trigger One1, Trigger One2, Trigger One4]


3 answers

  1. LiJia Liu 175 Reputation points MVP
    2023-03-13T10:01:03.7233333+00:00

    I can suggest going with Microsoft Flow (Power Automate):

    1. Create a field in Azure SQL DB to store unique values.
    2. Create a field in Dataverse with the same data type as the one created in step 1.
    3. Create two flows (the upsert logic is sketched after this list):
      a. A flow triggered when a record is created/updated in the Azure SQL DB
        --> Use List rows and check the Dataverse field (step 2) for the SQL key value (step 1)
        --> If available, update; else create

      b. A flow triggered when a record is updated in Dataverse
        --> Use List rows to retrieve the matching record from the SQL DB
        --> Then update it
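    A minimal Python/pyodbc sketch of the "if available, update; else create" step, for illustration only; the connection string, the table name sync_target, and the columns sync_key and payload are hypothetical placeholders:

      # Key-based upsert pattern that the two flows implement.
      # Table and column names are hypothetical placeholders.
      import pyodbc

      def upsert_row(conn, key, payload):
          cur = conn.cursor()
          # "List rows" equivalent: look up the record by its unique key.
          cur.execute("SELECT COUNT(*) FROM sync_target WHERE sync_key = ?", key)
          if cur.fetchone()[0] > 0:
              # If available: update the existing row.
              cur.execute("UPDATE sync_target SET payload = ? WHERE sync_key = ?",
                          payload, key)
          else:
              # Else: create a new row.
              cur.execute("INSERT INTO sync_target (sync_key, payload) VALUES (?, ?)",
                          key, payload)
          conn.commit()

      conn = pyodbc.connect("<odbc-connection-string>")  # placeholder
      upsert_row(conn, "row-001", "example payload")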
      

  2. Binway 736 Reputation points
    2023-04-21T03:45:14.31+00:00

    Just adding what I found when I was having the same issue, as this seems to have been asked a number of times and I didn't see an appropriate answer. The instructions for copying Dataverse into Azure SQL at https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-link-pipelines?tabs=synapse-analytics list the prerequisites, one of which is to enable the incremental folder update; refer to https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-incremental-update. When you set up the link and add tables, you must enable incremental updates. [Screenshot: incremental update setting]

    Without it, the directory structure in your storage account is completely different, with model.json at the container root instead of inside a folder where the "/model.json" parameter will match. Compare the folder structure in Anders's question with the folder structure you get when you enable incremental updates in the Synapse Link setup: model.json sits inside the timestamped date folders, so the "/model.json" parameter will now work. [Screenshot: folder structure with incremental updates enabled]
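    To check which layout a container has, a hedged sketch along the same lines as above (azure-storage-blob package; connection string and container name are placeholders):

      # Distinguish the two storage layouts described above: model.json at
      # the container root (incremental updates off) versus model.json
      # inside timestamped folders (incremental updates on).
      from azure.storage.blob import ContainerClient

      container = ContainerClient.from_connection_string(
          conn_str="<storage-account-connection-string>",  # placeholder
          container_name="dataverse-<org>-<environment>",  # placeholder
      )

      for blob in container.list_blobs():
          if blob.name == "model.json":
              # At the root: the "/model.json" trigger parameter won't work.
              print("model.json at container root; incremental updates are off")
          elif blob.name.endswith("/model.json"):
              # Inside a timestamped folder: the trigger parameter can match.
              print(f"model.json inside a folder: {blob.name}")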


  3. Karthik Eetur 21 Reputation points
    2023-08-15T02:04:31.6966667+00:00

    I've used the Copy Dataverse to SQL pipeline (dataflow) suggested by Microsoft.

    I am running Synapse pipelines every 15 minutes to copy data from the incremental folders to the SQL database.

    I have a scenario where the main dataflow pipeline runs for more than 15 minutes and the orchestrator pipeline skips all subsequent runs. I have to manually clear ProcessingLog table entries with status <> 1 for the orchestrator pipeline to pick up new folders for processing.
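    For reference, the manual cleanup described above might look like the following hedged pyodbc sketch; the ProcessingLog table and status column names are taken from this post and may differ in your deployment, so verify them before running:

      # Clear stale ProcessingLog rows (status <> 1) so the orchestrator
      # pipeline picks the corresponding folders up again.
      import pyodbc

      conn = pyodbc.connect("<odbc-connection-string>")  # placeholder
      cur = conn.cursor()
      cur.execute("DELETE FROM ProcessingLog WHERE status <> 1")
      print(f"Cleared {cur.rowcount} stale entries")
      conn.commit()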

    How can I reprocess missed incremental folders using the same Microsoft pipeline?

    I don't think the same pipeline works for reprocessing old folders once new folders have been ingested into the database and maxrowversion is already the latest one. For example: reprocessing 20-30 old folders to catch up on data.

    Appreciate any help here!

