dsl Package
Functions
pipeline
Build a pipeline that contains all component nodes defined in this function.
Note
The following pseudo-code shows how to create a pipeline using this decorator.
    # Define a pipeline with decorator
    @pipeline(name="sample_pipeline", description="pipeline description")
    def sample_pipeline_func(pipeline_input, pipeline_str_param):
        # component1 and component2 will be added into the current pipeline
        component1 = component1_func(input1=pipeline_input, param1="literal")
        # feed component1's output into component2
        component2 = component2_func(input1=component1.outputs.output1, param1=pipeline_str_param)
        # A decorated pipeline function needs to return outputs.
        # In this case, the pipeline has two outputs: component1's output1 and
        # component2's output1, renamed to 'pipeline_output1' and 'pipeline_output2'.
        return {
            "pipeline_output1": component1.outputs.output1,
            "pipeline_output2": component2.outputs.output1,
        }

    # E.g.: this call returns a pipeline job with nodes=[component1, component2]
    pipeline_job = sample_pipeline_func(
        pipeline_input=Input(type="uri_folder", path="./local-data"),
        pipeline_str_param="literal",
    )
    ml_client.jobs.create_or_update(pipeline_job, experiment_name="pipeline_samples")
pipeline(
    func=None,
    *,
    name: str | None = None,
    version: str | None = None,
    display_name: str | None = None,
    description: str | None = None,
    experiment_name: str | None = None,
    tags: Dict[str, str] | None = None,
    **kwargs,
)
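The mechanism behind this signature is the factory-decorator pattern: because `func` defaults to None, the decorator works both as bare `@pipeline` and as `@pipeline(name=...)`, and calling the decorated function builds a job description rather than executing components directly. The sketch below is a toy illustration of that pattern only, not the Azure ML implementation; the `pipeline` stand-in, the `sample` function, and the returned dict shape are all hypothetical.

```python
import functools

def pipeline(func=None, *, name=None, description=None):
    # Hypothetical stand-in for azure.ai.ml.dsl.pipeline: calling the
    # decorated function returns a job description instead of running it.
    def decorator(f):
        @functools.wraps(f)
        def build(*args, **kwargs):
            outputs = f(*args, **kwargs)  # run the body to collect outputs
            return {
                "name": name or f.__name__,
                "description": description,
                "outputs": outputs,
            }
        return build
    # Support both @pipeline and @pipeline(name=..., description=...).
    return decorator if func is None else decorator(func)

@pipeline(name="sample_pipeline", description="demo")
def sample(x):
    # In the real SDK, component calls in the body become pipeline nodes;
    # here we just compute a value to show the renamed-output dict.
    return {"out": x + 1}

job = sample(1)
print(job["name"], job["outputs"]["out"])
```

Note that `sample(1)` does not return `{"out": 2}` directly: the decorator intercepts the call and wraps the outputs in a job dict, which mirrors how the real decorator returns a pipeline job rather than the function's raw result.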
Parameters
- func (types.FunctionType)
  The user pipeline function to be decorated. Default value: None.
- experiment_name (str)
  Name of the experiment the job will be created under; if None is provided, the experiment is set to the name of the current directory.