Rerun from point of failure (implement framework in Azure Data Factory)

Veeresh Koduri 1 Reputation point
2020-06-10T23:53:34.187+00:00

Hi All,

Can you please help me with a question I have? I am looking for ideas on how to create a framework in Azure Data Factory. Below is what I mean by a framework.

Generally we will have multiple ADF pipelines inside one single master ADF pipeline. When the master pipeline fails in production, I want good control over it: I want to see which child pipeline failed, and when I rerun the master pipeline, it should skip the child pipelines that finished earlier, start from the pipeline that failed, and run the remaining pipelines in that master pipeline. In short, I am looking for "rerun from point of failure".

Below is what I am planning to do. Is it possible? If not, is there another way to achieve this?

I will have a framework database in Azure SQL DB and connect that database to the ADF pipelines, so that each pipeline logs its information into the database: when it started, when it completed, and whether it failed. If a pipeline fails, the rerun-from-point-of-failure logic will be maintained in a stored procedure in that database, and the master pipeline will use that stored procedure to rerun from the point of failure.
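Roughly, I am imagining something like the sketch below. The table and procedure names are only placeholders I made up (nothing ADF provides out of the box): the master pipeline would pass a batch id to each child, write started/succeeded/failed rows to the log, and call the procedure before each child pipeline to decide whether it still needs to run in a rerun.

```sql
-- Placeholder logging table: one row per child pipeline execution in a batch.
CREATE TABLE dbo.PipelineRunLog
(
    BatchId        NVARCHAR(50)  NOT NULL,  -- identifier for one end-to-end run of the master pipeline
    ChildPipeline  NVARCHAR(200) NOT NULL,  -- name of the child pipeline
    StartedAtUtc   DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    CompletedAtUtc DATETIME2     NULL,
    Status         NVARCHAR(20)  NOT NULL   -- 'Started' / 'Succeeded' / 'Failed'
);
GO

-- Placeholder procedure: returns 1 if this child has not yet succeeded for this batch,
-- so a rerun of the master pipeline can skip children that already finished.
CREATE PROCEDURE dbo.ShouldRunChild
    @BatchId       NVARCHAR(50),
    @ChildPipeline NVARCHAR(200)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT CASE WHEN EXISTS
           (
               SELECT 1
               FROM dbo.PipelineRunLog
               WHERE BatchId       = @BatchId
                 AND ChildPipeline = @ChildPipeline
                 AND Status        = 'Succeeded'
           )
           THEN 0 ELSE 1 END AS ShouldRun;
END;
```

Each Execute Pipeline activity in the master would then be wrapped in an If Condition that checks the ShouldRun value returned by a Lookup activity.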

Is this possible?

Azure Data Factory

1 answer

  1. MartinJaffer-MSFT 26,036 Reputation points
    2020-06-12T01:43:29.5+00:00

    This functionality already exists (unless I misunderstand your ask).
    It is possible to re-run a child pipeline without re-running the parent pipeline.
    It is possible to re-run a pipeline from a point of failure.

    Here is how I tested:

    1. Created a new blank factory without git (easier to test that way)
    2. Created a linked service and dataset which pointed to a missing blob
    3. Created child pipeline which does a lookup on that missing blob. I expect it to fail.
    4. Created a parent pipeline which calls the child pipeline through an Execute Pipeline activity. Nothing special (a rough JSON sketch follows this list).
    5. Published it all and then "trigger now" the parent pipeline.
    6. Upload the missing blob to storage.
    7. Go to the Monitoring tab "Pipeline Runs", not "Trigger Runs".
    8. See the parent pipeline and the failed child pipeline run.
    9. Click the rerun button on the failed child pipeline run.
    10. Refresh and see the child pipeline now succeeded. The parent pipeline did not re-run, only the child.
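    For reference, the parent pipeline in this first test is nothing more than a single Execute Pipeline activity pointing at the child. A rough sketch of its JSON (the pipeline names are just the ones from my test, and other properties are omitted):

    ```json
    {
        "name": "ParentPipeline",
        "properties": {
            "activities": [
                {
                    "name": "Run child",
                    "type": "ExecutePipeline",
                    "typeProperties": {
                        "pipeline": {
                            "referenceName": "ChildPipeline",
                            "type": "PipelineReference"
                        }
                    }
                }
            ]
        }
    }
    ```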

    Now modify and test the re-run from point of failure:

    1. Remove the blob so it is missing again.
    2. In the parent pipeline, add a copy activity before the execute pipeline activity. Connect them with an on-success dependency.
    3. In this copy activity, copy some convenient delimited text blob. In the source options, use 'Additional Columns' to add a new column "runID" with the value (edited as a dynamic expression) @pipeline().RunId. This lets us track whether the file was written by the first run or by a re-run. In the sink options, use the "File extension" feature to append before.txt to the file name. (A rough JSON sketch of this copy activity follows this list.)
    4. Duplicate the copy activity, and place the new one after the execute pipeline activity. Connect them by a green on-success dependency. Change the sink option "File extension" to after.txt.
    5. Set the execute pipeline activity to use the "wait on completion" flag.
    6. Publish and "Trigger Now".
    7. Go to the Monitoring "Pipeline Runs" and find the failed parent pipeline. Click it and see the details. Note there is a "Rerun from failed activity" button.
    8. Re-Upload that missing blob so the child pipeline can find it again.
    9. Click the "Rerun from failed activity" button. Refresh and wait to complete.
    10. Open your storage and download / look inside the 2 new files. Match the pipeline run IDs against those in the Monitoring Pipeline Runs. You may need to switch the filter from "Runs: Latest runs" to "Runs: Including reruns". This will prove the 'after' file was written in the re-run while the 'before' file was written in the original run.
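    For clarity, this is roughly what the "before" copy activity's source and sink end up looking like in JSON once the additional column and file extension are set. Dataset references, store settings and other required properties are omitted, and the exact type names depend on your source and sink (here I assume delimited text):

    ```json
    {
        "name": "Copy before",
        "type": "Copy",
        "typeProperties": {
            "source": {
                "type": "DelimitedTextSource",
                "additionalColumns": [
                    {
                        "name": "runID",
                        "value": {
                            "value": "@pipeline().RunId",
                            "type": "Expression"
                        }
                    }
                ]
            },
            "sink": {
                "type": "DelimitedTextSink",
                "formatSettings": {
                    "type": "DelimitedTextWriteSettings",
                    "fileExtension": "before.txt"
                }
            }
        }
    }
    ```

    The "after" copy activity is identical except for "fileExtension": "after.txt", and the "wait on completion" flag from step 5 is simply "waitOnCompletion": true in the execute pipeline activity's typeProperties.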