model Module

Contains functionality for managing machine learning models in Azure Machine Learning.

With the Model class, you can accomplish the following main tasks:

  • register your model with a workspace
  • profile your model to understand deployment requirements
  • package your model for use with Docker
  • deploy your model to an inference endpoint as a web service

For more information on how models are used, see How Azure Machine Learning works: Architecture and concepts.
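The first of the tasks above, registering a model, can be sketched as follows. This is a minimal illustration, not a complete recipe: the workspace configuration file, the model path `outputs/model.pkl`, and the name `my-model` are all assumed placeholders.

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Connect to an existing workspace; assumes a config.json file
# (downloaded from the Azure portal) is present locally.
ws = Workspace.from_config()

# Register a locally saved model file with the workspace.
# The path and name are illustrative placeholders.
model = Model.register(workspace=ws,
                       model_path="outputs/model.pkl",
                       model_name="my-model")
print(model.name, model.id, model.version)
```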

Classes

InferenceConfig

Represents configuration settings for a custom environment used for deployment.

Inference configuration is an input parameter for Model deployment-related actions:

  • Model.deploy
  • Model.profile
  • Model.package

The constructor initializes the configuration object.
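A minimal sketch of creating an inference configuration, assuming a conda specification file `environment.yml` and an entry script `score.py` (both placeholder names) exist alongside the code:

```python
from azureml.core import Environment
from azureml.core.model import InferenceConfig

# Build an environment from a conda specification file; the file
# name and environment name here are assumptions.
env = Environment.from_conda_specification(name="inference-env",
                                           file_path="environment.yml")

# The entry script (an assumed score.py) must define init() and
# run() functions that load the model and handle scoring requests.
inference_config = InferenceConfig(entry_script="score.py",
                                   environment=env)
```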

Model

Represents the result of machine learning training.

A model is the result of an Azure Machine Learning training Run or some other model training process outside of Azure. Regardless of how the model is produced, it can be registered in a workspace, where it is represented by a name and a version. With the Model class, you can package models for use with Docker and deploy them as a real-time endpoint that can be used for inference requests.

For an end-to-end tutorial showing how models are created, managed, and consumed, see Train image classification model with MNIST data and scikit-learn using Azure Machine Learning.

Model constructor.

The Model constructor is used to retrieve a cloud representation of a Model object associated with the provided workspace. You must provide either a name or an ID.
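A sketch of both retrieval styles, assuming a workspace configuration file is available locally and a model named `my-model` (a placeholder) has already been registered:

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

# Retrieve by name; returns the latest registered version.
model = Model(ws, name="my-model")

# Or retrieve a specific version by ID, of the form "<name>:<version>".
model_v1 = Model(ws, id="my-model:1")
```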

ModelPackage

Represents a packaging of one or more models and their dependencies into either a Docker image or Dockerfile.

A ModelPackage object is returned from the package method of the Model class. The generate_dockerfile parameter of the package method determines if a Docker image or Dockerfile is created.

The constructor initializes a package created from one or more models and their dependencies.
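A minimal sketch of producing a Dockerfile-based package, assuming `ws`, `model`, and `inference_config` were created as in the earlier class descriptions (all three are assumed to already exist):

```python
from azureml.core.model import Model

# With generate_dockerfile=True, package() produces a Dockerfile
# and build context instead of building an image in the cloud.
package = Model.package(ws, [model], inference_config,
                        generate_dockerfile=True)
package.wait_for_creation(show_output=True)

# Save the Dockerfile and its build context locally so the image
# can be built with a local docker build.
package.save("./imagefiles")
```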