Rebuild a Studio (classic) web service in Azure Machine Learning
Important
Support for Machine Learning Studio (classic) will end on 31 August 2024. We recommend you transition to Azure Machine Learning by that date.
Beginning 1 December 2021, you will not be able to create new Machine Learning Studio (classic) resources (workspace and web service plan). Through 31 August 2024, you can continue to use the existing Machine Learning Studio (classic) experiments and web services.
- See information on moving machine learning projects from ML Studio (classic) to Azure Machine Learning.
- Learn more about Azure Machine Learning
ML Studio (classic) documentation is being retired and may not be updated in the future.
In this article, you learn how to rebuild an ML Studio (classic) web service as an endpoint in Azure Machine Learning.
Use Azure Machine Learning pipeline endpoints to make predictions, retrain models, or run any generic pipeline. The REST endpoint lets you run pipelines from any platform.
This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see the migration overview article.
Note
This migration series focuses on the drag-and-drop designer. For more information on deploying models programmatically, see Deploy machine learning models in Azure.
Prerequisites
- An Azure account with an active subscription. Create an account for free.
- An Azure Machine Learning workspace. Create workspace resources.
- An Azure Machine Learning training pipeline. For more information, see Rebuild a Studio (classic) experiment in Azure Machine Learning.
Real-time endpoint vs pipeline endpoint
Studio (classic) web services have been replaced by endpoints in Azure Machine Learning. Use the following table to choose which endpoint type to use:
| Studio (classic) web service | Azure Machine Learning replacement |
| --- | --- |
| Request/Respond web service (real-time prediction) | Real-time endpoint |
| Batch web service (batch prediction) | Pipeline endpoint |
| Retraining web service (retraining) | Pipeline endpoint |
Deploy a real-time endpoint
In Studio (classic), you used a Request/Respond web service to deploy a model for real-time predictions. In Azure Machine Learning, you use a real-time endpoint.
There are multiple ways to deploy a model in Azure Machine Learning. One of the simplest ways is to use the designer to automate the deployment process. Use the following steps to deploy a model as a real-time endpoint:
1. Run your completed training pipeline at least once.

2. After the job completes, at the top of the canvas, select Create inference pipeline > Real-time inference pipeline.

    The designer converts the training pipeline into a real-time inference pipeline. A similar conversion also occurs in Studio (classic).

    In the designer, the conversion step also registers the trained model to your Azure Machine Learning workspace.

3. Select Submit to run the real-time inference pipeline, and verify that it runs successfully.

4. After you verify the inference pipeline, select Deploy.

5. Enter a name for your endpoint and a compute type.
The following table describes your deployment compute options in the designer:
| Compute target | Used for | Description | Creation |
| --- | --- | --- | --- |
| Azure Kubernetes Service (AKS) | Real-time inference | Large-scale, production deployments. Fast response time and service autoscaling. | User-created. For more information, see Create compute targets. |
| Azure Container Instances | Testing or development | Small-scale, CPU-based workloads that require less than 48 GB of RAM. | Automatically created by Azure Machine Learning. |
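If you prefer to script the deployment rather than use the designer's Deploy button, the following is a minimal sketch using the azureml-core (v1) Python SDK. The model name, entry script, environment, and endpoint name are assumptions for illustration; the designer performs the equivalent steps for you.

```python
# A minimal sketch of an equivalent scripted deployment with the azureml-core (v1) SDK.
# The model name, entry script, environment, and endpoint name are hypothetical.
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads the workspace's config.json

model = Model(ws, name="my-registered-model")        # hypothetical registered model
env = Environment.get(ws, name="my-inference-env")   # hypothetical environment in the workspace
inference_config = InferenceConfig(entry_script="score.py", environment=env)  # hypothetical scoring script

# Azure Container Instances: small-scale testing, created automatically by Azure Machine Learning.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

service = Model.deploy(ws, "my-realtime-endpoint", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```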
Test the real-time endpoint
After deployment completes, you can see more details and test your endpoint:
1. Go to the Endpoints tab.

2. Select your endpoint.

3. Select the Test tab.
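Outside the Test tab, you can also send requests to the endpoint's scoring URI from any HTTP client. The following is a minimal sketch; the scoring URI, key, and input schema are hypothetical placeholders, and you can copy the real values from the endpoint's Consume tab.

```python
# A minimal sketch of calling the real-time endpoint from an HTTP client.
# The scoring URI, key, and input schema are hypothetical placeholders.
import json
import requests

scoring_uri = "https://<your-endpoint>/score"   # copy from the endpoint's Consume tab
api_key = "<endpoint-key>"                      # only needed if key-based auth is enabled

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
payload = {"Inputs": {"input1": [{"feature_1": 1.0, "feature_2": "A"}]}}  # hypothetical schema

response = requests.post(scoring_uri, headers=headers, data=json.dumps(payload))
print(response.json())
```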
Publish a pipeline endpoint for batch prediction or retraining
You can also use your training pipeline to create a pipeline endpoint instead of a real-time endpoint. Use pipeline endpoints to perform either batch prediction or retraining.
Pipeline endpoints replace Studio (classic) batch execution endpoints and retraining web services.
Publish a pipeline endpoint for batch prediction
Publishing a pipeline endpoint for batch prediction is similar to deploying a real-time endpoint.
Use the following steps to publish a pipeline endpoint for batch prediction:
1. Run your completed training pipeline at least once.

2. After the job completes, at the top of the canvas, select Create inference pipeline > Batch inference pipeline.

    The designer converts the training pipeline into a batch inference pipeline. A similar conversion also occurs in Studio (classic).

    In the designer, this step also registers the trained model to your Azure Machine Learning workspace.

3. Select Submit to run the batch inference pipeline, and verify that it completes successfully.

4. After you verify the inference pipeline, select Publish.

5. Create a new pipeline endpoint or select an existing one.

    A new pipeline endpoint creates a new REST endpoint for your pipeline.

    If you select an existing pipeline endpoint, you don't overwrite the existing pipeline. Instead, Azure Machine Learning versions each pipeline in the endpoint. You can specify which version to run in your REST call. You must also set a default pipeline, which runs if the REST call doesn't specify a version.
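To illustrate how this versioning plays out when you call the endpoint programmatically, here is a sketch using the azureml-pipeline-core (v1) SDK; the endpoint and experiment names are assumptions.

```python
# A sketch of submitting a specific pipeline version through the endpoint,
# using the azureml-pipeline-core (v1) SDK. Names are hypothetical.
from azureml.core import Workspace
from azureml.pipeline.core import PipelineEndpoint

ws = Workspace.from_config()
endpoint = PipelineEndpoint.get(workspace=ws, name="batch-scoring-endpoint")  # hypothetical name

# Omit pipeline_version to run the endpoint's default pipeline.
run = endpoint.submit("batch-scoring-experiment", pipeline_version="0")
run.wait_for_completion(show_output=True)
```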
Publish a pipeline endpoint for retraining
To publish a pipeline endpoint for retraining, you must already have a pipeline draft that trains a model. For more information on building a training pipeline, see Rebuild a Studio (classic) experiment.
To reuse your pipeline endpoint for retraining, you must create a pipeline parameter for your input dataset. This lets you dynamically set your training dataset, so that you can retrain your model.
Use the following steps to publish a retraining pipeline endpoint:
1. Run your training pipeline at least once.

2. After the run completes, select the dataset module.

3. In the module details pane, select Set as pipeline parameter.

4. Provide a descriptive name like "InputDataset".

    This creates a pipeline parameter for your input dataset. When you call your pipeline endpoint for retraining, you can specify a new dataset to retrain the model.

5. Select Publish.
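To show how the pipeline parameter is used at retraining time, here is a hedged sketch using the azureml (v1) SDK. The endpoint, experiment, and dataset names are assumptions, and it assumes the parameter was named "InputDataset" and that the endpoint accepts a registered dataset as that parameter's value.

```python
# A sketch of triggering retraining on a new dataset through the pipeline endpoint.
# Endpoint, experiment, parameter, and dataset names are hypothetical.
from azureml.core import Dataset, Workspace
from azureml.pipeline.core import PipelineEndpoint

ws = Workspace.from_config()
new_data = Dataset.get_by_name(ws, name="training-data-2023")              # hypothetical registered dataset
endpoint = PipelineEndpoint.get(workspace=ws, name="retraining-endpoint")  # hypothetical endpoint

# Pass the new dataset as the value of the "InputDataset" pipeline parameter created above.
run = endpoint.submit("retraining-experiment",
                      pipeline_parameters={"InputDataset": new_data})
run.wait_for_completion(show_output=True)
```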
Call your pipeline endpoint from the studio
After you create your batch inference or retraining pipeline endpoint, you can call your endpoint directly from your browser.
1. Go to the Pipelines tab, and select Pipeline endpoints.

2. Select the pipeline endpoint you want to run.

3. Select Submit.
You can specify any pipeline parameters after you select Submit.
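You can trigger the same run from any platform through the endpoint's REST interface. The following is a minimal sketch in Python; the endpoint URL, experiment name, and parameter assignments are placeholders, and you can copy the real REST URL from the pipeline endpoint's details page.

```python
# A minimal sketch of calling the pipeline endpoint over REST from any platform.
# The endpoint URL, experiment name, and parameter assignments are hypothetical.
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

headers = InteractiveLoginAuthentication().get_authentication_header()  # Bearer token header

response = requests.post(
    "https://<pipeline-endpoint-REST-URL>",  # copy the real URL from the endpoint's details page
    headers=headers,
    json={
        "ExperimentName": "pipeline-endpoint-run",              # hypothetical experiment name
        "ParameterAssignments": {"my_parameter": "my_value"},   # hypothetical pipeline parameters
    },
)
response.raise_for_status()
print(response.json())  # includes the ID of the submitted run
```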
Next steps
In this article, you learned how to rebuild a Studio (classic) web service in Azure Machine Learning. The next step is to integrate your web service with client apps.
See the other articles in the Studio (classic) migration series:
- Migration overview.
- Migrate dataset.
- Rebuild a Studio (classic) training pipeline.
- Rebuild a Studio (classic) web service.
- Integrate an Azure Machine Learning web service with client apps.
- Migrate Execute R Script.