Hi @Surya Raja,
Thank you for reaching out with your question and for using the MS Q&A platform.
When you mention the remaining 2,000 records in the source database, I assume you are referring to the source data lake storage. You have also noted that you modified the data type in the source file, and that only 8,000 records were copied successfully in the previous run. Here is a step-by-step guide for re-running the pipeline:
- Truncate the Destination Table: Since you will be re-running the copy operation, it will attempt to copy all 10,000 records again. This could lead to duplicate rows, or to a failure due to a primary key violation if the destination table in SQL has a primary key defined on those columns.
If you want to avoid this, or only want to load the data that is not already present in the destination, consider using Upsert (see the sketch after this list); alternatively, truncate the table and do a complete data copy from source to destination. To truncate the table, use the following command: `TRUNCATE TABLE [Tablename]`. Please replace `[Tablename]` with your actual table name.
Related link: https://learn.microsoft.com/en-us/azure/data-factory/connector-sql-server?tabs=data-factory
- Reset Mapping in ADF: In Azure Data Factory (ADF), navigate to the Mapping tab of the copy activity, reset the mapping, and make sure it reflects the changed data type (a quick way to verify the destination column type is sketched after this list). Once done, publish and re-run the pipeline.
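To illustrate the two options from the first step, here is a minimal T-SQL sketch. The table and column names (`dbo.TargetTable`, `dbo.StagingTable`, `Id`, `Name`, `Amount`) are placeholders for illustration only; substitute your actual schema. The MERGE shows the kind of logic an upsert performs, in case you prefer to run it yourself from a staging table:

```sql
-- Option 1: full reload - empty the destination, then re-run the copy activity
-- so all 10,000 records are copied fresh.
TRUNCATE TABLE dbo.TargetTable;

-- Option 2: upsert - land the source rows in a staging table first, then
-- merge on the key so existing rows are updated and missing rows inserted.
MERGE dbo.TargetTable AS tgt
USING dbo.StagingTable AS src
    ON tgt.Id = src.Id
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name,
               tgt.Amount = src.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, Amount)
    VALUES (src.Id, src.Name, src.Amount);
```

Note that the ADF copy activity's SQL sink also offers an Upsert write behavior that handles this for you, so the MERGE above is only needed if you want to manage the logic in the database yourself.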
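Since the data type changed at the source, it can also help to confirm the destination column matches before re-running. A quick check using INFORMATION_SCHEMA, again with placeholder table and column names (`dbo.TargetTable`, `Amount`) and an example target type of `DECIMAL(18, 2)`:

```sql
-- Inspect the current data types of the destination table's columns.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'TargetTable';

-- If a column no longer matches the new source type, alter it to match
-- (example: widening Amount to DECIMAL(18, 2)).
ALTER TABLE dbo.TargetTable
    ALTER COLUMN Amount DECIMAL(18, 2);
```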
Please try these steps and let us know if they work for you, or if you have any follow-up questions.