Hi,
Suppose the file structure in Azure Storage looks as below:
Blob Containers > Test > DirectoryA > 'Test1 1.ISO'
Blob Containers > Test > DirectoryA > 'Test1 2.ISO'
Blob Containers > Test > DirectoryA > DirectoryAA (contains more than 100 zip files)
Blob Containers > Test > DirectoryB > 'Test1 1.ISO'
Blob Containers > Test > DirectoryB > 'Test1 2.ISO'
Blob Containers > Test > DirectoryB > DirectoryBB (contains more than 100 zip files)
...
...
Blob Containers > Test > DirectoryN > 'Test1 1.ISO'
Blob Containers > Test > DirectoryN > 'Test1 2.ISO'
Blob Containers > Test > DirectoryN > DirectoryNN (contains more than 100 zip files)
The runbook (PowerShell) is customized to copy a blob to local only if its filename matches 'Test1 1.ISO' or 'Test1 2.ISO'; otherwise it does not copy. In the Event Grid subscription, the event type used is 'Blob Created'.

Query 1: Can the Event Grid subscription itself be customized to trigger only for these specific files, or should the filtering be handled in the runbook (as it is currently done)?
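For context on Query 1: Event Grid subscriptions can filter on the event subject, so in principle the subscription can be narrowed to just these two filenames with an advanced filter (the plain subjectEndsWith filter allows only a single suffix, but the advanced StringEndsWith operator accepts several values). A sketch using the Azure CLI, where the resource IDs, endpoint, and subscription name are assumed placeholders, not the actual resources in use:

```shell
# Sketch: restrict a Blob Created subscription to blobs whose subject ends
# with either target filename, via an advanced filter on the event subject.
# <sub-id>, <rg>, <account> and the endpoint are assumed placeholders.
az eventgrid event-subscription create \
  --name copy-iso-files \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --endpoint "<webhook-or-automation-endpoint>" \
  --included-event-types Microsoft.Storage.BlobCreated \
  --advanced-filter subject StringEndsWith "Test1 1.ISO" "Test1 2.ISO"
```

With a filter like this the runbook would only be invoked for matching blobs, though keeping a filename check in the runbook as a second guard is still reasonable.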
Here, 'Test1 1.ISO' and 'Test1 2.ISO' will each be larger than 5 GB, and such files can appear in different directories at any time, so the download happens automatically. After the download completes, it is observed that the local copies are corrupted.
Whereas if we download the same file manually to local using Microsoft Azure Storage Explorer, the local file is not corrupted.
Query 2: Why is the automatically downloaded file getting corrupted?
Download syntax:
Initially the syntax was:
Get-AzStorageBlobContent -ErrorAction stop -Blob $fileName -Container $Container -Destination $localFilePath -ClientTimeoutPerRequest 600 -Context $ctx
but files were getting corrupted randomly.
So the syntax was modified to:
Get-AzStorageBlobContent -ErrorAction stop -Blob $fileName -Container $Container -Destination $localFilePath -ServerTimeoutPerRequest 86400 -ClientTimeoutPerRequest 86400 -Context $ctx
With this, all files are getting corrupted.
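One way to narrow down Query 2 is to check whether the bytes on disk actually match what the service stored, by comparing the blob's Content-MD5 property against a hash of the local file after each download. This is a diagnostic sketch, not the runbook's actual code: it reuses the variable names $ctx, $Container, $fileName and $localFilePath from above, assumes the Az.Storage module, and assumes the blob was uploaded with a Content-MD5 set (large multi-block uploads do not always have one):

```powershell
# Sketch: verify a downloaded blob against its stored Content-MD5.
# Assumes $ctx, $Container, $fileName, $localFilePath are defined as in the
# runbook above, and that the blob has a Content-MD5 property.
$blob      = Get-AzStorageBlob -Blob $fileName -Container $Container -Context $ctx
$remoteMd5 = $blob.ICloudBlob.Properties.ContentMD5   # base64-encoded MD5, may be $null

# Hash the local copy and convert the hex digest to base64 for comparison.
$localHex   = (Get-FileHash -Path $localFilePath -Algorithm MD5).Hash
$localBytes = [byte[]] -split ($localHex -replace '..', '0x$& ')
$localMd5   = [Convert]::ToBase64String($localBytes)

# Also compare sizes, which catches truncated transfers even without an MD5.
$remoteLen = $blob.Length
$localLen  = (Get-Item $localFilePath).Length

if ($remoteLen -ne $localLen) {
    Write-Warning "Size mismatch for $fileName ($localLen vs $remoteLen bytes)"
} elseif ($remoteMd5 -and ($remoteMd5 -ne $localMd5)) {
    Write-Warning "MD5 mismatch for $fileName - local copy is corrupted"
}
```

If the sizes differ, the transfer is being truncated (for example by a timeout or by the runbook acting on the file before the download finishes); if the sizes match but the hashes differ, the corruption happened in transit or on disk. Either result would help pinpoint where the automated path diverges from the manual Storage Explorer download.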
Thanks In Advance,
Madhura