Hi Ashok Prathap,
Thank you for posting your query on the Microsoft Q&A platform.
Azure Data Factory is primarily an ETL/orchestration service: it relies on underlying compute to perform data movement and transformations. In your scenario, once a data flow run completes, the data has already been inserted into your SQL table; behind the scenes, the data flow uses the SQL engine to write those rows. ADF has no built-in rollback, so any rollback has to rely on that same SQL engine. A common workaround is to add a flag column (for example, a run ID) to the destination tables so you can identify the rows written by the latest run and delete them if the pipeline fails. A minimal sketch of this pattern is below.
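For illustration, here is a minimal T-SQL sketch of that flag-column pattern. The table name dbo.DestTable, the column name RunId, and the parameter @FailedRunId are hypothetical placeholders; in practice you would populate RunId from the pipeline's run ID (the @pipeline().RunId system variable), passed into the data flow as a parameter:

```sql
-- Hypothetical destination table with an extra RunId flag column.
-- Every row written by a data flow run carries that run's ID.
CREATE TABLE dbo.DestTable
(
    Id      INT           NOT NULL,
    Payload NVARCHAR(200) NULL,
    RunId   NVARCHAR(50)  NOT NULL  -- populated from @pipeline().RunId
);

-- Cleanup on failure: remove only the rows inserted by the failed run.
DELETE FROM dbo.DestTable
WHERE RunId = @FailedRunId;  -- @FailedRunId supplied by the pipeline
```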
You can then add logic, either in a separate pipeline or at the end of the same pipeline, to check whether all data flows ran successfully. If any of them failed, delete the rows from the destination tables based on the flag column value, for example via a cleanup stored procedure like the one sketched below.
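One way to wire this up is to connect a Stored Procedure activity to the failure path of your data flow activities and pass @pipeline().RunId as the parameter value. A hedged sketch of such a procedure follows; the procedure and table names are assumptions for illustration, not part of any ADF API:

```sql
-- Hypothetical cleanup procedure, invoked from an ADF Stored Procedure
-- activity that runs only when a data flow activity fails.
CREATE PROCEDURE dbo.usp_RollbackRun
    @RunId NVARCHAR(50)  -- pass @pipeline().RunId from the pipeline
AS
BEGIN
    SET NOCOUNT ON;

    -- Delete the rows written by the failed run from each destination table.
    DELETE FROM dbo.DestTable  WHERE RunId = @RunId;
    DELETE FROM dbo.DestTable2 WHERE RunId = @RunId;
END;
```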
Hope this helps. Please let me know if you have any further queries.
Please consider hitting the Accept Answer button. Accepted answers help the community as well.