Hi,
The machine learning model workflow generally follows this sequence:
Train - Develop machine learning training scripts in Python, R, or with the visual designer; create and configure a compute target; then submit the scripts to that compute target to run in that environment. During training, the scripts can read from or write to datastores. The logs and output produced during training are saved as runs in the workspace and grouped under experiments.
Package - After a satisfactory run is found, register the persisted model in the model registry.
Validate - Query the experiment for logged metrics from the current and past runs. If the metrics don't indicate a desired outcome, loop back to step 1 and iterate on your scripts.
Deploy - Develop a scoring script that uses the model, and deploy the model as a web service in Azure or to an IoT Edge device.
Monitor - Monitor for data drift between the training dataset and inference data of a deployed model. When necessary, loop back to step 1 to retrain the model with new training data.
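As a rough illustration, the train/validate/package/deploy loop above might look like the following with the Azure Machine Learning Python SDK (v1). The workspace `config.json`, the `train.py` and `score.py` scripts, the conda file, and names such as `my-experiment`, `my-compute`, and `my-model` are placeholders for your own setup, so treat this as a sketch rather than a ready-to-run recipe:

```python
# Sketch of the train -> validate -> package -> deploy loop with the
# Azure ML Python SDK (v1). Assumes a config.json for an existing workspace
# and a local train.py that logs a metric named "accuracy" -- placeholders.
from azureml.core import Workspace, Experiment, ScriptRunConfig, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()                      # reads config.json
experiment = Experiment(workspace=ws, name="my-experiment")

# Train: submit the script to a configured compute target.
config = ScriptRunConfig(source_directory=".",
                         script="train.py",
                         compute_target="my-compute")  # placeholder target
run = experiment.submit(config)
run.wait_for_completion(show_output=True)

# Validate: query the logged metrics; loop back to the script if unsatisfied.
metrics = run.get_metrics()
if metrics.get("accuracy", 0) >= 0.9:
    # Package: register the persisted model in the model registry.
    model = run.register_model(model_name="my-model",
                               model_path="outputs/model.pkl")

    # Deploy: wrap the model with a scoring script and serve it on
    # Azure Container Instances.
    env = Environment.from_conda_specification(name="infer-env",
                                               file_path="env.yml")
    inference_config = InferenceConfig(entry_script="score.py",
                                       environment=env)
    deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
                                                           memory_gb=1)
    service = Model.deploy(ws, "my-service", [model],
                           inference_config, deployment_config)
    service.wait_for_deployment(show_output=True)
```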
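To make the Monitor step concrete: this is not the built-in Azure ML data-drift monitor, just a minimal hand-rolled sketch of the idea, flagging drift when the inference data's mean moves too far (in training standard deviations) from the training data's mean:

```python
# Minimal illustration of a data-drift check between training and inference
# data, using a standardized mean difference as the drift signal.
# This is a sketch of the concept, not the Azure ML drift monitor itself.
from statistics import mean, stdev

def drift_score(train_values, inference_values):
    """Absolute difference of the sample means, in training std devs."""
    spread = stdev(train_values)
    if spread == 0:
        return 0.0 if mean(train_values) == mean(inference_values) else float("inf")
    return abs(mean(inference_values) - mean(train_values)) / spread

def has_drifted(train_values, inference_values, threshold=0.5):
    """Flag drift when the means differ by more than `threshold` std devs."""
    return drift_score(train_values, inference_values) > threshold

train = [10.0, 11.0, 9.5, 10.5, 10.2]      # feature seen during training
stable = [10.1, 10.4, 9.8, 10.6, 10.0]     # inference data, similar
shifted = [14.0, 15.2, 14.8, 15.5, 14.3]   # inference data, drifted

print(has_drifted(train, stable))   # False
print(has_drifted(train, shifted))  # True
```

When a check like this fires, that is the cue to loop back to step 1 and retrain with newer data.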
For your scenario, I would highly recommend trying Azure Machine Learning designer, which works well with Azure Notebooks and is easy to use.
If you still want to stick with notebooks, I think creating pipelines would be a good fit for you: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline?view=azure-devops
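If you go the pipeline route from that article, a one-step pipeline with the v1 SDK might look roughly like this; the compute target, experiment name, and `train.py` script are placeholders for your own resources:

```python
# Sketch of a one-step Azure ML pipeline (SDK v1); names are placeholders
# and an existing workspace config.json is assumed.
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

train_step = PythonScriptStep(name="train",
                              script_name="train.py",
                              source_directory=".",
                              compute_target="my-compute")

pipeline = Pipeline(workspace=ws, steps=[train_step])
run = Experiment(ws, "my-pipeline-experiment").submit(pipeline)
run.wait_for_completion(show_output=True)
```

You can add more `PythonScriptStep` objects to the `steps` list as your notebook workflow grows.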
Thanks,
Yutong