Welcome to the Microsoft Q&A platform, and thanks for posting your question here.
As I understand your query, you are trying to copy data from Dataverse to SQL Server using the Synapse Link pipeline template, but the data is not arriving at the destination. Kindly let me know if that is not the case.
- I am assuming the pipeline executes without errors, yet no data is copied. One common cause is that the data types in Dataverse and SQL Server are not compatible. Check the data types of the columns on both sides and ensure that each Dataverse attribute maps to a compatible SQL Server column type.
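To compare types on the SQL Server side, you can inspect the destination table's column definitions with a standard catalog query. The table name `Account` below is a hypothetical example; substitute your own destination table.

```sql
-- List column names, types, and lengths for the destination table
-- so they can be compared against the Dataverse attribute types.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'Account';   -- hypothetical destination table
```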
- Regarding the SQL Server limit of 8,060 bytes per row: you can consider splitting the table into multiple tables, or using vertical partitioning to store some of the columns in a separate table.
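As a rough illustration of vertical partitioning, a wide entity can be split into a "core" table and an "extended" table joined on the primary key, with a view presenting them as one. All table, view, and column names here are hypothetical examples, not names from your environment.

```sql
-- Sketch: split a wide table into two narrower ones sharing a key,
-- keeping each row under the 8,060-byte in-row limit.
CREATE TABLE dbo.Account_Core (
    accountid  UNIQUEIDENTIFIER PRIMARY KEY,
    name       NVARCHAR(160),
    modifiedon DATETIME2
);

CREATE TABLE dbo.Account_Extended (
    accountid   UNIQUEIDENTIFIER PRIMARY KEY
        REFERENCES dbo.Account_Core (accountid),
    description NVARCHAR(2000),
    notes       NVARCHAR(2000)
);
GO

-- Optional: a view that reassembles the full row for readers.
CREATE VIEW dbo.Account_All AS
SELECT c.accountid, c.name, c.modifiedon, e.description, e.notes
FROM dbo.Account_Core AS c
LEFT JOIN dbo.Account_Extended AS e
    ON e.accountid = c.accountid;
```

The pipeline would then write the core and extended columns to their respective tables rather than one wide table.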
- Regarding your next query: if you are using temporal tables with system versioning enabled to track changes, you can use the `MERGE` statement against the current table, and SQL Server will maintain the history table automatically (you should not update the history table directly).
- In case you are using multiple entities and want to update records without using upsert, you can use the `UPDATE` statement on the SQL Server table. However, you will need to ensure that the primary key of the table is included in the data you are copying from Dataverse to SQL Server.
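The update-by-key approach above can be sketched with a `MERGE` from a staging table into the destination. The staging table `stg.Account`, target `dbo.Account`, and the column names are hypothetical; adjust them to your schema. The join must use the primary key copied over from Dataverse.

```sql
-- Sketch: upsert staged Dataverse rows into the destination table.
-- With system versioning on dbo.Account, the history table is
-- populated automatically by SQL Server on each update.
MERGE INTO dbo.Account AS tgt
USING stg.Account AS src
    ON tgt.accountid = src.accountid   -- primary key from Dataverse
WHEN MATCHED THEN
    UPDATE SET tgt.name       = src.name,
               tgt.modifiedon = src.modifiedon
WHEN NOT MATCHED BY TARGET THEN
    INSERT (accountid, name, modifiedon)
    VALUES (src.accountid, src.name, src.modifiedon);
```

If you only ever need updates (never inserts), a plain `UPDATE ... FROM` join on the same key achieves the same effect with less locking overhead.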
You can find additional details here: Copy Dataverse data into Azure SQL
Hope it helps. Kindly accept the answer by clicking on the **Accept answer** button.