PipelineParameter Class

Defines a parameter in a pipeline execution.

Use PipelineParameters to construct versatile Pipelines that can be resubmitted later with varying parameter values.

Initialize pipeline parameters.

Inheritance
builtins.object
PipelineParameter

Constructor

PipelineParameter(name, default_value)

Parameters

name
str
Required

The name of the pipeline parameter.

default_value
Union[int, str, bool, float, DataPath, PipelineDataset, FileDataset, TabularDataset]
Required

The default value of the pipeline parameter.


Remarks

PipelineParameters can be added to any step when constructing a Pipeline. When the Pipeline is submitted, the values of these parameters can be specified.

An example of adding a PipelineParameter to a step is as follows:


   from azureml.pipeline.steps import PythonScriptStep
   from azureml.pipeline.core import PipelineParameter

   pipeline_param = PipelineParameter(name="pipeline_arg", default_value="default_val")
   train_step = PythonScriptStep(script_name="train.py",
                                 arguments=["--param1", pipeline_param],
                                 compute_target=compute_target,
                                 source_directory=project_folder)

In this example, a PipelineParameter with the name "pipeline_arg" was added to the arguments of a PythonScriptStep. When the Python script is run, the value of the PipelineParameter will be provided through the command line arguments. This PipelineParameter can also be added to other steps in the Pipeline to provide common values to multiple steps in the Pipeline. Pipelines can have multiple PipelineParameters specified.
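Inside the training script, the parameter arrives as an ordinary command-line argument. A minimal sketch of what a hypothetical train.py could look like (the argument name "--param1" matches the arguments list above; the literal list passed to parse_args is for illustration only):

```python
import argparse

# Sketch of how a hypothetical train.py could read the value passed
# for the "pipeline_arg" PipelineParameter at run time.
parser = argparse.ArgumentParser()
parser.add_argument("--param1")

# In an actual pipeline run the value comes from the command line,
# i.e. args = parser.parse_args(); a literal list is used here so the
# snippet is self-contained.
args = parser.parse_args(["--param1", "default_val"])
print(args.param1)
```
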

To submit this Pipeline and specify the value for the "pipeline_arg" PipelineParameter, use:


   pipeline = Pipeline(workspace=ws, steps=[train_step])
   pipeline_run = Experiment(ws, 'train').submit(pipeline, pipeline_parameters={"pipeline_arg": "test_value"})

Note: if "pipeline_arg" is not specified in the pipeline_parameters dictionary, the default value of the PipelineParameter provided when the Pipeline was constructed is used (in this case, "default_val").
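The fallback behavior can be pictured as a dictionary lookup with a default. A sketch of the semantics only, not the SDK's actual implementation:

```python
# Sketch of PipelineParameter resolution semantics (not azureml code).
defaults = {"pipeline_arg": "default_val"}   # defaults from PipelineParameter constructors
submitted = {}                               # pipeline_parameters passed to submit()

# Each parameter takes the submitted value if present, else its default.
resolved = {name: submitted.get(name, default)
            for name, default in defaults.items()}
print(resolved["pipeline_arg"])
```

With an empty pipeline_parameters dictionary, "pipeline_arg" resolves to its constructor default.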

Multi-line parameters can't be used as PipelineParameters.

PipelineParameters can also be used with DataPath and DataPathComputeBinding to specify step inputs. This enables a Pipeline to be run with varying input data.

An example of using DataPath with PipelineParameters is as follows:


   from azureml.core.datastore import Datastore
   from azureml.data.datapath import DataPath, DataPathComputeBinding
   from azureml.pipeline.steps import PythonScriptStep
   from azureml.pipeline.core import PipelineParameter

   datastore = Datastore(workspace=workspace, name="workspaceblobstore")
   datapath = DataPath(datastore=datastore, path_on_datastore='input_data')
   data_path_pipeline_param = (PipelineParameter(name="input_data", default_value=datapath),
                               DataPathComputeBinding(mode='mount'))

   train_step = PythonScriptStep(script_name="train.py",
                                 arguments=["--input", data_path_pipeline_param],
                                 inputs=[data_path_pipeline_param],
                                 compute_target=compute_target,
                                 source_directory=project_folder)

In this case, the default value of the "input_data" parameter references a path named "input_data" on the "workspaceblobstore" datastore. If the Pipeline is submitted without specifying a value for this PipelineParameter, the default value is used. To submit this Pipeline and specify a new value for the "input_data" PipelineParameter, use:


   from azureml.pipeline.core import Pipeline
   from azureml.data.datapath import DataPath

   pipeline = Pipeline(workspace=ws, steps=[train_step])
   new_data_path = DataPath(datastore=datastore, path_on_datastore='new_input_data')
   pipeline_run = experiment.submit(pipeline,
                                    pipeline_parameters={"input_data": new_data_path})