Hi,
We are setting up backups of some of our ADLS Gen2 storage accounts based on this blog article: https://cloudblogs.microsoft.com/industry-blog/en-gb/technetuk/2021/08/17/backup-your-data-lake-using-azure-data-factory-metadata-copy-activity/
The pipeline uses a Copy Data activity to copy the content from a source ADLS Gen2 storage account to a destination ADLS Gen2 storage account.
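For context, a simplified version of the Copy Data activity looks roughly like this (the dataset names are placeholders for our actual Binary datasets, and I have left out the metadata-driven parts from the blog article):

{
    "name": "CopyBackup",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "BinarySource",
            "storeSettings": { "type": "AzureBlobFSReadSettings", "recursive": true }
        },
        "sink": {
            "type": "BinarySink",
            "storeSettings": { "type": "AzureBlobFSWriteSettings" }
        }
    },
    "inputs": [ { "referenceName": "SourceAdlsGen2Binary", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "DestinationAdlsGen2Binary", "type": "DatasetReference" } ]
}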
We set things up and ran copy activities, and initially things looked promising. Then we attempted to run the copy on our largest storage account, and the Copy Data activity always fails after more or less an hour with this error:
Failure happened on 'Sink' side. ErrorCode=AdlsGen2TimeoutError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Request to ADLS Gen2 account '<storage_account_name_redacted>' met timeout error. It is mostly caused by the poor network between the Self-hosted IR machine and the ADLS Gen2 account. Check the network to resolve such error. ,Source=Microsoft.DataTransfer.ClientLibrary,'
I have run it more than once and it always fails.
Here are some things I have tried:
-Configured "Retry" and "Retry Interval": Failed. From what I can see, it looks like the entire copy is starting over and failing again after an hour when I do this (but that is my interpretation, could be wrong).
- Set up a custom integration runtime with a higher core count (16), located in the same region as the source storage account (Canada Central): failed.
- Set up a custom integration runtime with a higher core count (16), located in the same region as the destination storage account (East US): failed.
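For the retry attempt above, the policy block on the Copy Data activity looked roughly like this (the retry values below are illustrative rather than my exact settings, and the timeout is the default mentioned in my question below):

{
    "name": "CopyBackup",
    "type": "Copy",
    "policy": {
        "timeout": "7.00:00:00",
        "retry": 2,
        "retryIntervalInSeconds": 120,
        "secureOutput": false,
        "secureInput": false
    }
}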
Is there really a one-hour timeout interrupting my transfer? If so, is it configurable? I looked but could not find a timeout setting anywhere that looks like it (there is one on the Copy Data activity, but it is still set to its default of 7 days).
Any other ideas? My next attempt will be to break the job down into smaller segments, for example going down to the folder level on the storage account. But from a backup perspective this is quite risky, as any new folder would require a modification to the backup job to include it.
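What I have in mind is roughly the following (untested sketch; "FolderList" would be a pipeline parameter holding the list of top-level folders, and "folderPath" a parameter on each dataset):

{
    "name": "CopyPerFolder",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": true,
        "items": { "value": "@pipeline().parameters.FolderList", "type": "Expression" },
        "activities": [
            {
                "name": "CopyOneFolder",
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "BinarySource",
                        "storeSettings": { "type": "AzureBlobFSReadSettings", "recursive": true }
                    },
                    "sink": {
                        "type": "BinarySink",
                        "storeSettings": { "type": "AzureBlobFSWriteSettings" }
                    }
                },
                "inputs": [ { "referenceName": "SourceAdlsGen2Binary", "type": "DatasetReference", "parameters": { "folderPath": "@item()" } } ],
                "outputs": [ { "referenceName": "DestinationAdlsGen2Binary", "type": "DatasetReference", "parameters": { "folderPath": "@item()" } } ]
            }
        ]
    }
}

But as noted above, the FolderList parameter would have to be maintained by hand whenever a new folder appears, which is exactly what worries me.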
Your help is appreciated.
Regards,
Sebastien