@Kiran Purushotham Thanks. The customer is using Databricks to build models and MLflow to track them, then wants to deploy the model via the MLflow -> AML Service integration and monitor it. To work around the limitations of MLflow deployment, you can switch to AML deployment while still using the model that MLflow created and registered in AML.
First, add mlflow to the conda dependencies so it is available in your scoring script. Then, in the init method, load the model using the MLflow API, for example:
model = mlflow.pytorch.load_model(model_dir)
You need to check the artifact structure of the model registered in AML to construct model_dir correctly, because the model was created with the MLflow API.
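Putting the pieces together, here is a minimal sketch of such a scoring script. The artifact sub-folder name ("model") and the request format are assumptions to verify against the registered model's actual artifact layout in the AML workspace:

```python
# score.py -- sketch of an AML scoring script that loads an
# MLflow-registered PyTorch model.
import json
import os


def build_model_dir(root, artifact_subdir="model"):
    # MLflow typically registers artifacts under a sub-folder such as
    # "model"; confirm the actual name from the registered artifacts.
    return os.path.join(root, artifact_subdir)


def init():
    global model
    import mlflow.pytorch  # requires mlflow (and torch) in the conda deps
    # AML mounts the registered model under AZUREML_MODEL_DIR.
    model_dir = build_model_dir(os.environ["AZUREML_MODEL_DIR"])
    model = mlflow.pytorch.load_model(model_dir)


def run(raw_data):
    import torch
    # Assumes the request body looks like {"data": [[...], ...]}.
    data = torch.tensor(json.loads(raw_data)["data"], dtype=torch.float32)
    with torch.no_grad():
        return model(data).tolist()
```

If load_model fails with a path error in init, print the contents of AZUREML_MODEL_DIR to see where the MLmodel file actually sits and adjust the sub-folder accordingly.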
You may implement MLOps with a hybrid setup:
Cloud Part:
• Azure DevOps can orchestrate Azure ML Service for MLOps practices.
• Azure ML Service can be used to train models and orchestrate model development; see the MLOps manual in the link.
On-premises:
• We can train models on-premises, using local data and CPU power.
• We can run Azure DevOps pipelines on-premises, with Azure DevOps Server running on on-premises hardware.