Although you mentioned not finding any settings limitations, it’s worth reviewing whether any limits apply to the data transfer itself, particularly on the Dataverse side or within Synapse when dealing with large data volumes.
Dataverse enforces service protection (throttling) limits on its API. Check the Dataverse API limits, since they restrict how many requests you can make in a given window and how many records you can retrieve in a single operation; under heavy load the service responds with HTTP 429 and asks the client to back off.
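For context, the sketch below shows how a client is expected to work within those limits when reading from the Dataverse Web API: results come back in pages that you follow via @odata.nextLink, and throttled requests return 429 with a Retry-After header. The Synapse Dataverse connector handles this internally, so this is only to illustrate the behavior; the org URL and token are placeholders.

```python
import time
import requests

# Hypothetical values -- replace with your environment URL and a valid OAuth token.
ORG_URL = "https://yourorg.crm.dynamics.com"
TOKEN = "<bearer-token-from-azure-ad>"

def fetch_all(entity_set: str, page_size: int = 5000):
    """Page through a Dataverse entity set, honoring throttling responses."""
    url = f"{ORG_URL}/api/data/v9.2/{entity_set}"
    headers = {
        "Authorization": f"Bearer {TOKEN}",
        # odata.maxpagesize controls how many records Dataverse returns per page.
        "Prefer": f"odata.maxpagesize={page_size}",
        "Accept": "application/json",
    }
    while url:
        resp = requests.get(url, headers=headers)
        if resp.status_code == 429:
            # Service protection limit hit: back off for the advertised interval.
            time.sleep(int(resp.headers.get("Retry-After", "30")))
            continue
        resp.raise_for_status()
        body = resp.json()
        yield from body["value"]
        # Follow the continuation link until the server stops returning one.
        url = body.get("@odata.nextLink")
```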
Also verify that the batch size for your Copy Activity is appropriate; the default write batch size is often too small for large datasets. You can adjust it in the sink settings of the Copy Activity within the Synapse pipeline.
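As a rough illustration, these knobs live in the Copy Activity's sink definition. The fragment below is a Python dict mirroring what the exported pipeline JSON looks like for an Azure SQL sink; the property names (writeBatchSize, writeBatchTimeout) are standard sink settings, but treat the values as placeholders to tune against your own table.

```python
# Sketch of the Copy Activity sink block as it appears in exported pipeline JSON
# (values here are illustrative starting points, not recommendations).
copy_activity_sink = {
    "type": "AzureSqlSink",
    "writeBatchSize": 100000,         # rows per bulk-insert batch
    "writeBatchTimeout": "00:30:00",  # give large batches time to commit
    "tableOption": "autoCreate",      # optional: create the target table if missing
}
```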
For tables with more than 10 million records, consider using an incremental loading strategy. Instead of copying the entire table at once, you could break the data transfer into smaller chunks, either by partitioning the data or by using a date-based incremental load.
You can achieve this by parameterizing the Copy Activity in the Synapse pipeline so that each run filters on a specific condition, such as a timestamp or an index column, and loads the data in batches.
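A minimal sketch of that watermark pattern is below, assuming a hypothetical control table (dbo.WatermarkControl) in the target database and the standard modifiedon column on the Dataverse side. In the pipeline itself you would express the same logic with a Lookup activity that reads the watermark and feeds a parameterized filter into the copy source; the Python is only to show the flow.

```python
from datetime import datetime, timezone
import pyodbc

# Hypothetical connection details -- adjust for your environment.
SQL_CONN = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:yourserver.database.windows.net;Database=yourdb;"
    "Authentication=ActiveDirectoryMsi;"
)
ORG_URL = "https://yourorg.crm.dynamics.com"

def get_last_watermark(table: str) -> datetime:
    """Read the high-water mark for a table from a control table in the target DB."""
    with pyodbc.connect(SQL_CONN) as conn:
        row = conn.execute(
            "SELECT LastModifiedOn FROM dbo.WatermarkControl WHERE TableName = ?", table
        ).fetchone()
    return row[0] if row else datetime(1900, 1, 1, tzinfo=timezone.utc)

def build_incremental_query(entity_set: str, watermark: datetime) -> str:
    """Only ask Dataverse for rows changed since the last successful load."""
    since = watermark.strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"{ORG_URL}/api/data/v9.2/{entity_set}?$filter=modifiedon gt {since}"
```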
Review and increase the Data Integration Units (DIUs, formerly called Data Movement Units) in Azure Synapse. DIUs control the compute power applied to data movement between services (such as Dataverse to SQL). For larger data volumes, increasing DIUs, together with the copy parallelism setting, can deliver faster and more consistent transfers.
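For reference, both throughput settings sit in the Copy Activity's typeProperties. The dict below mirrors the exported pipeline JSON; the numbers are placeholders to experiment with, not tuned values.

```python
# Sketch of the Copy Activity throughput settings as they appear in pipeline JSON.
copy_activity_type_properties = {
    # source/sink definitions omitted; these two knobs control throughput.
    "dataIntegrationUnits": 16,  # DIUs: raise from the default/auto for large copies
    "parallelCopies": 8,         # degree of parallelism within one Copy Activity
}
```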
Consider leveraging Mapping Data Flows or PolyBase where available. PolyBase, and the newer COPY statement, let you load large datasets efficiently from Azure Data Lake or other external storage when the destination is a Synapse dedicated SQL pool, which is much faster than row-by-row inserts.
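As a sketch of that approach, assuming the Dataverse data has been staged as Parquet files in ADLS Gen2 (the lake path, pool name, and table name below are placeholders) and the destination is a dedicated SQL pool, a COPY statement bulk-loads the staged files in one shot:

```python
import pyodbc

# Hypothetical dedicated SQL pool connection string.
POOL_CONN = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:yourworkspace.sql.azuresynapse.net;Database=yourpool;"
    "Authentication=ActiveDirectoryMsi;"
)

COPY_SQL = """
COPY INTO dbo.Account
FROM 'https://yourlake.dfs.core.windows.net/staging/dataverse/account/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
)
"""

with pyodbc.connect(POOL_CONN, autocommit=True) as conn:
    # The dedicated SQL pool reads the staged files in parallel,
    # avoiding the overhead of row-by-row inserts from the pipeline.
    conn.execute(COPY_SQL)
```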