ADF internal server error: failure branch not run for failed retry attempts

Nikunj Patel 51 Reputation points

Hello Team,

We have set up the following pipeline (see screenshot). The main goal of this pipeline is to copy data from SQL Server to Data Lake Blob storage using the Parquet file format.


We often face an internal server error in the Copy activity, so we set the retry attempts to 3 to make it succeed. However, for each failed attempt the Copy activity creates a Parquet file in the Data Lake in a leased state. To handle this, we created a pipeline that drops these files when the Copy activity fails.

But we noticed that when the Copy activity failed with the following internal error on the first 2 attempts and succeeded on the last attempt, the failure path was still not run, and the leased files from the 2 failed attempts were not dropped.

```json
"errorCode": "1000",
"message": "ErrorCode=SystemErrorActivityRunExecutedMoreThanOnce,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The activity run failed due to service internal error, please retry this activity run later.,Source=Microsoft.DataTransfer.TransferTask,'",
"failureType": "SystemError",
"target": "Move_Raw_To_Bronze",
"details": []
```

So can you please guide us: is there a specific reason why the failure path did not run for the first 2 failed attempts? (In our case the Copy activity succeeded on the 3rd attempt.)

Thank you in advance.

Tags: Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory

Accepted answer
  1. Pratik Somaiya 4,201 Reputation points

    Hello @Nikunj Patel

The retry setting belongs to the Copy activity itself, so it will retry up to 3 times, and only the result of the final attempt determines whether the activity goes into the success or the failed state.

That is why it didn't delete the files: while the retries were in progress, the pipeline was still executing the Copy activity, so neither the success nor the failure branch had been taken yet.

As for the error itself, internal server errors mostly happen due to network or service issues.

In your case, since SQL Server is the source, there might be a problem with either the Integration Runtime or the on-premises network.

You should check that both are healthy and in a running state during execution. You can monitor the IR when you begin the pipeline run.
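The retry semantics described above can be sketched outside ADF. This is a minimal simulation, not ADF code: intermediate failed attempts are swallowed inside the activity, and only the final attempt's outcome is reported to the pipeline, which is why the failure branch never fires for attempts 1 and 2.

```python
def run_with_retry(activity, retries=3):
    """Mimic ADF activity-level retry: only the FINAL attempt's
    outcome reaches the pipeline, so on-failure branches never
    see intermediate failed attempts."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return ("Succeeded", activity(attempt))
        except RuntimeError as err:
            last_error = err  # swallowed: the pipeline never sees this
    return ("Failed", last_error)

# Hypothetical activity that fails twice, then succeeds on the
# 3rd attempt -- the exact scenario from the question.
def copy_activity(attempt):
    if attempt < 3:
        raise RuntimeError("service internal error, please retry")
    return "parquet file written"

status, result = run_with_retry(copy_activity)
print(status)  # Succeeded -- the two failures stayed internal to the activity
```

In other words, a dependency condition on the Copy activity can only ever observe the post-retry outcome, never the individual attempts.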

3 additional answers

  1. ShaikMaheer-MSFT 38,301 Reputation points Microsoft Employee

    Hi @Nikunj Patel ,

    Thank you for posting query in Microsoft Q&A Platform.

As per my understanding, you want a process that deletes the 0 MB files once your Copy activity execution completes (where "completes" means it finishes either as success or as failure). Please correct me if I am wrong.

I can see your delete-0-MB-files process is encapsulated in a separate pipeline. So you should consider running that Execute Pipeline activity on the Completion dependency condition, rather than only on Failure.

Please check the screenshots below, which explain the logic at a high level.

To learn more about the Get Metadata activity, click here.

    Hope this helps. Please let us know if any further queries.


    Please consider hitting Accept Answer button. Accepted answers help community as well.
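The cleanup that the separate pipeline performs (Get Metadata to list files, a filter on size, then Delete) can be sketched locally. This is an illustration only; the folder and file names are hypothetical, and a real pipeline would operate on blobs, not a local directory.

```python
import tempfile
from pathlib import Path

def drop_zero_byte_files(folder: Path) -> list:
    """Mimic the cleanup pipeline: enumerate child items
    (Get Metadata), keep only the 0-byte files (Filter), and
    remove them (Delete activity). Returns the removed names."""
    removed = []
    for f in folder.iterdir():
        if f.is_file() and f.stat().st_size == 0:
            f.unlink()
            removed.append(f.name)
    return sorted(removed)

# Hypothetical landing folder: one good file, one 0-byte leftover
# from a failed attempt.
folder = Path(tempfile.mkdtemp())
(folder / "good.parquet").write_bytes(b"PAR1data")
(folder / "leftover.parquet").touch()  # 0 bytes

removed = drop_zero_byte_files(folder)
print(removed)  # ['leftover.parquet']
```

Running this on the Completion condition means the sweep happens whether the Copy activity ultimately succeeded or failed.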

  2. Srikanth Thota 6 Reputation points

Hello @Nikunj Patel, as far as I know you can't delete leased files directly. In your case, you can save all the 0 MB files to one container and delete that container. Whether a blob is leased or not, it will be deleted along with the container.

Let me know if I am wrong. Thanks.

  3. Srikanth Thota 6 Reputation points

    Hello @Nikunj Patel

If we lease the files ourselves, we can pass the lease ID through headers in ADF and delete them, but in your case the files are automatically leased when they are created (if my understanding is correct), and I am not sure how to pass those lease IDs. If you find a solution, please feel free to share it here.

Alternatively, you can send the files larger than zero bytes to another container (using Get Metadata, an If Condition, and a Copy activity inside the If activity) and consume those files in the next pipeline to avoid interruptions in the flow. Later, you can manually delete the container that holds both the leased and the normal files.

Please let me know how it goes.

    Srikanth Thota
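The move-then-drop approach above can be sketched with local folders standing in for containers (all names here are hypothetical, and local deletion stands in for dropping the container, which ignores blob leases):

```python
import shutil
import tempfile
from pathlib import Path

def stage_good_files(source: Path, staging: Path) -> list:
    """Copy files larger than 0 bytes to a staging area (the
    'other container'), leaving the source -- leased leftovers
    and all -- to be dropped wholesale afterwards."""
    staging.mkdir(exist_ok=True)
    moved = []
    for f in source.iterdir():
        if f.is_file() and f.stat().st_size > 0:
            shutil.copy2(f, staging / f.name)
            moved.append(f.name)
    return sorted(moved)

root = Path(tempfile.mkdtemp())
source, staging = root / "raw", root / "bronze"
source.mkdir()
(source / "data.parquet").write_bytes(b"PAR1data")
(source / "leased_leftover.parquet").touch()  # 0 bytes, stuck leased

moved = stage_good_files(source, staging)
shutil.rmtree(source)  # drop the whole 'container', leases and all
print(moved)  # ['data.parquet']
```

The next pipeline then reads only from the staging location, so the leased leftovers never interrupt the flow.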