Azure Machine Learning
An Azure machine learning service for building and deploying models.
1,846 questions
I have a successful pipeline that writes files to Blob storage. If I delete the files in blob storage and try to rerun the pipeline, the pipeline run is cached so the pipeline doesn't actually rerun. Is there a way to force rerun the pipeline?
If a step's inputs (data and parameters) haven't changed, the system reuses the cached results from the previous run, speeding up pipeline execution. You can force the step to re-run by setting the allow_reuse parameter to False in the step definition.
from azureml.pipeline.steps import PythonScriptStep

step1 = PythonScriptStep(
    script_name="your_script.py",
    arguments=["--arg1", arg1],
    inputs=[dataset.as_named_input('input1')],
    outputs=[output_dir],
    compute_target=compute_target,
    source_directory=project_folder,
    allow_reuse=False)  # disable result caching so this step always re-runs
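Alternatively, if you want to keep allow_reuse=True in the step definitions but force a fresh run for a single submission, the SDK v1 Pipeline.submit method accepts a regenerate_outputs flag. A minimal sketch, assuming a workspace config.json is available and step1 is the step defined above (the experiment name "force-rerun-demo" is just a placeholder):

```python
from azureml.core import Workspace
from azureml.pipeline.core import Pipeline

# Load the workspace from a local config.json (assumed to exist)
ws = Workspace.from_config()

# Build the pipeline from the step defined earlier
pipeline = Pipeline(workspace=ws, steps=[step1])

# regenerate_outputs=True bypasses cached step results for this
# submission only; subsequent submissions can still reuse outputs
run = pipeline.submit("force-rerun-demo", regenerate_outputs=True)
run.wait_for_completion()
```

This is useful for cases like yours where the step inputs are unchanged but the outputs were deleted from Blob storage, since the cache check does not notice the missing output files.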