How to delete automatically migrated blobs (with Data Factory) based on the original last modified date?

Domenico Fasano 0 Reputation points
2024-09-30T13:09:07.0733333+00:00

I have a Storage Account with a lifecycle management policy. I need to migrate the blobs to a new Storage Account and ensure that these blobs get deleted after x days based on their original last modified date. What can I do?

So far, I managed to create a Data Factory pipeline with a copy activity, and I see there is a way to create custom metadata with the original last modified date (the $$LASTMODIFIED value). What is the best way to use it for what I want to achieve? Is there a better/simpler solution?

I think one could use Azure Functions to delete the blobs, but I have millions of blobs and a function times out after 5-10 minutes.

Azure Blob Storage
Azure Data Factory

3 answers

Sort by: Most helpful
  1. Sina Salam 12,011 Reputation points
    2024-09-30T13:29:52.6533333+00:00

    Hello Domenico Fasano,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that you would like to automatically delete migrated blobs (copied with Data Factory) based on their original last modified date.

    Follow these links for best practices on cleaning up files with the built-in Delete activity in Azure Data Factory (https://azure.microsoft.com/en-us/updates/clean-up-files-by-built-in-delete-activity-in-azure-data-factory) and on working with the Delete activity in Azure Data Factory (https://www.sqlservercentral.com/articles/working-with-the-delete-activity-in-azure-data-factory).

    I hope this is helpful! Do not hesitate to let me know if you have any other questions.


    Please don't forget to close the thread by upvoting and accepting this as the answer if it is helpful.


  2. Vinodh247 22,951 Reputation points MVP
    2024-09-30T13:44:41.4266667+00:00

    Hi Domenico Fasano,

    Thanks for reaching out to Microsoft Q&A.

    To achieve automatic deletion of migrated blobs based on their original last modified date while handling large volumes of blobs, here's a suggested approach using ADF, lifecycle management policies, and potentially Azure Functions for scalability.

    Copying Blobs with Metadata

    • You have already mentioned using ADF to migrate blobs and capturing the $$LASTMODIFIED value in custom metadata for the copied blobs in the destination storage account. This works because it preserves the original last modified date; the key is to make sure this metadata actually gets applied to the new blobs in the target storage account.
    • In the ADF copy activity, ensure the $$LASTMODIFIED value is being captured and applied correctly as custom metadata. You can configure this in the copy activity's sink settings (the blob sink supports custom metadata entries, where $$LASTMODIFIED is a reserved value) and store it under a custom name like 'OriginalLastModifiedDate'. A minimal SDK sketch of the same idea follows this list.
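
    For reference, here is a minimal sketch in Python (azure-storage-blob SDK) of the same idea outside ADF: a server-side copy that stamps the source blob's Last-Modified time as custom metadata. The connection strings, container names and blob path are placeholders, and 'OriginalLastModifiedDate' is simply the metadata name suggested above.

    ```python
    # Minimal sketch (assumes azure-storage-blob is installed; both connection
    # strings below are placeholders to replace with real values).
    from azure.storage.blob import BlobServiceClient

    SRC_CONN = "<source-connection-string>"        # placeholder
    DST_CONN = "<destination-connection-string>"   # placeholder

    src_service = BlobServiceClient.from_connection_string(SRC_CONN)
    dst_service = BlobServiceClient.from_connection_string(DST_CONN)

    src_blob = src_service.get_blob_client("source-container", "path/to/blob.dat")
    # Last-Modified of the original blob, returned as a timezone-aware datetime.
    original_last_modified = src_blob.get_blob_properties().last_modified

    dst_blob = dst_service.get_blob_client("target-container", "path/to/blob.dat")
    # The source URL must be readable by the copy operation (public access or SAS).
    dst_blob.start_copy_from_url(
        src_blob.url,
        metadata={"OriginalLastModifiedDate": original_last_modified.isoformat()},
    )
    ```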

    Implement Lifecycle Management in the Destination Account

    • Azure Storage supports lifecycle management policies that can delete blobs based on conditions such as their last modified date or custom metadata values.
      • Option A: If the blob's 'Last Modified' date matched the original modified date, you could directly apply a lifecycle management policy in the destination Storage Account to delete blobs after 'x' days. Note, however, that the service sets Last Modified itself when a blob is written, so a copied blob's Last Modified reflects the time of the copy rather than the original.
      • Option B: If you are storing the original 'Last Modified' date in metadata (for example, 'OriginalLastModifiedDate'), be aware that lifecycle management doesn't directly support metadata-based conditions yet, so you'll need to rely on another method, like Azure Functions, to handle deletions based on that metadata.

    Azure Functions for Metadata-Based Deletion

    If the lifecycle policy cannot handle your custom metadata-based expiration, you can leverage Azure Functions with a timer trigger to delete the blobs based on the 'OriginalLastModifiedDate'. Since your concern is with timeouts and large volumes, consider using durable functions to split the workload into smaller, scalable tasks.

    Here’s how:

    Durable Functions allow orchestration of long-running tasks and scale efficiently to process large datasets (millions of blobs).

    The function can:

    1. List blobs from the destination Storage Account.
    2. Check each blob’s 'OriginalLastModifiedDate' metadata.
    3. Delete blobs that exceed the retention threshold.

    Schedule this durable function to run at regular intervals (ex: daily) to delete blobs in batches, avoiding timeout issues.
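
    To make the deletion step concrete, here is a minimal Python sketch of the core loop such an activity (or any scheduled job) could run over one container or prefix. The connection string, container name, metadata key and retention period are assumptions rather than fixed names.

    ```python
    # Minimal sketch of the metadata-based deletion loop; container name,
    # metadata key and RETENTION_DAYS are assumptions taken from the steps above.
    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import ContainerClient

    CONN = "<destination-connection-string>"   # placeholder
    RETENTION_DAYS = 30                        # the "x days" from the question

    container = ContainerClient.from_connection_string(CONN, "target-container")
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

    # include=["metadata"] returns each blob's metadata with the listing,
    # avoiding an extra round trip per blob.
    for blob in container.list_blobs(include=["metadata"]):
        raw = (blob.metadata or {}).get("OriginalLastModifiedDate")
        # Fall back to the copy's own Last-Modified if the metadata is missing.
        original = datetime.fromisoformat(raw) if raw else blob.last_modified
        if original < cutoff:
            container.delete_blob(blob.name)
    ```

    In a Durable Functions setup, the orchestrator would typically fan out one such activity per container or name prefix, so that no single execution has to walk all of the blobs.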

    Alternative: Logic Apps for Long-Running Operations

    If you prefer a low-code option and wish to avoid writing custom function code, Azure Logic Apps can provide a workflow-based approach for deleting blobs based on metadata:

    • Logic Apps are built for long-running workflows, so the function timeout is not a concern, and they can process blobs in chunks.
    • Use a Logic App with a "List blobs" action, filter blobs by the 'OriginalLastModifiedDate' metadata, and delete blobs that meet the condition.

    Scalable and Efficient Workflow

    Here’s a summary of the steps:

    • Migration with ADF: Ensure the '$$LASTMODIFIED' value is stored as metadata ('OriginalLastModifiedDate') during the copy operation.
    • Azure Storage Lifecycle Policy: If possible, apply a lifecycle policy based on the modified date (if it fits your requirement).
    • Durable Functions/Logic Apps: If custom metadata is required for deletion logic, set up a Durable Function or Logic App to check the 'OriginalLastModifiedDate' and delete expired blobs in scalable batches.

    Additional Considerations

    Monitoring and Logging: Ensure you have proper logging in place for tracking which blobs are deleted and if any failures occur.

    Cost: Consider the potential cost of listing and processing millions of blobs, especially if the lifecycle of these blobs is short and you need to delete them frequently.

    This approach balances scalability and maintainability, especially when dealing with large volumes of blobs and ensuring they are deleted according to their original last modified date.

    Please 'Upvote' (thumbs-up) and 'Accept as answer' if the reply was helpful. This will benefit other community members who face the same issue.


  3. Nehruji R 8,146 Reputation points Microsoft Vendor
    2024-10-03T09:47:12.9866667+00:00

    Hello Domenico Fasano,

    Greetings! Welcome to Microsoft Q&A Forum.

    Azure Storage Accounts have lifecycle management policies which help with the following:

    • Transition blobs from cool to hot immediately when they are accessed, to optimize for performance.
    • Transition blobs, blob versions, and blob snapshots to a cooler storage tier if these objects have not been accessed or modified for a period, to optimize for cost. The objects can be moved from hot to cool, from hot to archive, or from cool to archive.
    • Delete blobs, blob versions, and blob snapshots at the end of their lifecycles.
    • Apply rules to containers or to a subset of blobs, using name prefixes or blob index tags as filters. 

    Examples of rules could be: 

    • Any files where the modified date is older than 90 days will be changed to the Cool tier. 
    • Any files where the modified date is older than 180 days will be changed to the Archive tier. 
    • Delete files older than 365 days. 

    If the condition to move a blob is based on last accessed time, you need to enable last access time tracking: https://learn.microsoft.com/en-us/azure/templates/microsoft.storage/2021-02-01/storageaccounts/blobservices?tabs=json&pivots=deployment-language-terraform

    The rules translate to a JSON format, so once you create a rule through the portal you can get the JSON from the code view and use PowerShell or Terraform (https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_management_policy) to apply it to other storage accounts.
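
    As an alternative to PowerShell or Terraform, here is a hedged sketch of applying an equivalent rule with the Python azure-mgmt-storage SDK; the subscription ID, resource group and account name are placeholders, and the day thresholds mirror the example rules above.

    ```python
    # Hedged sketch using the azure-identity and azure-mgmt-storage packages;
    # subscription ID, resource group and account name are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import (
        DateAfterModification,
        ManagementPolicy,
        ManagementPolicyAction,
        ManagementPolicyBaseBlob,
        ManagementPolicyDefinition,
        ManagementPolicyFilter,
        ManagementPolicyRule,
        ManagementPolicySchema,
    )

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # One lifecycle rule: tier after 90/180 days, delete after 365 days,
    # all based on the blob's last modified date.
    rule = ManagementPolicyRule(
        name="tier-and-delete-old-blobs",
        enabled=True,
        type="Lifecycle",
        definition=ManagementPolicyDefinition(
            filters=ManagementPolicyFilter(blob_types=["blockBlob"]),
            actions=ManagementPolicyAction(
                base_blob=ManagementPolicyBaseBlob(
                    tier_to_cool=DateAfterModification(days_after_modification_greater_than=90),
                    tier_to_archive=DateAfterModification(days_after_modification_greater_than=180),
                    delete=DateAfterModification(days_after_modification_greater_than=365),
                )
            ),
        ),
    )

    # "default" is the only allowed management policy name per storage account.
    client.management_policies.create_or_update(
        "<resource-group>",
        "<storage-account>",
        "default",
        ManagementPolicy(policy=ManagementPolicySchema(rules=[rule])),
    )
    ```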


    For optimising current storage accounts, you can run PowerShell or use Terraform to update the policies.

    For governance purposes, any new storage accounts being created should have these policies enabled (https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_management_policy).

    Useful links:

    https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal

    https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview

    Hope this answer helps! Please let us know if you have any further queries; I'm happy to assist you further.


    Please "Accept the answer” and “up-vote” wherever the information provided helps you, this can be beneficial to other community members.

