Data, privacy, and security for use of models through the Model Catalog

This article describes how data you provide is processed, used, and stored when you deploy models from the Model Catalog. Also see the Microsoft Products and Services Data Protection Addendum, which governs data processing by Azure services.

What data is processed for models deployed in Azure Machine Learning?

When you deploy models in Azure Machine Learning, the following types of data are processed to provide the service:

  • Prompts and generated content. Prompts are submitted by the user, and content (output) is generated by the model via the operations supported by the model. Prompts may include content added via retrieval-augmented generation (RAG), metaprompts, or other functionality included in an application.

  • Uploaded data. For models that support fine-tuning, customers can upload their data to an Azure Machine Learning datastore for use in fine-tuning.

Generate inferencing outputs with managed compute

Deploying models to managed compute places the model weights on dedicated virtual machines and exposes a REST API for real-time inference. Learn more about deploying models from the Model Catalog to managed compute. You manage the infrastructure for these managed computes, and Azure's data, privacy, and security commitments apply. Learn more about Azure compliance offerings applicable to Azure Machine Learning.
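
As a concrete illustration, here's a minimal sketch of such a deployment using the azure-ai-ml Python SDK (v2). The model URI, endpoint name, and VM SKU are placeholders; pick an instance type that the model's details page lists as supported.

```python
# A minimal sketch, assuming the azure-ai-ml (SDK v2) and azure-identity
# packages and an existing workspace; angle-bracket values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Create the endpoint that exposes the REST API for real-time inference.
endpoint = ManagedOnlineEndpoint(name="catalog-model-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy a catalog model (referenced from the azureml registry) onto
# dedicated virtual machines that you manage.
deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name=endpoint.name,
    model="azureml://registries/azureml/models/<model-name>/versions/<version>",
    instance_type="Standard_DS3_v2",  # VM SKU; choose one the model supports
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```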

Although containers for models "Curated by Azure AI" are scanned for vulnerabilities that could exfiltrate data, not all models available through the model catalog have been scanned. To reduce the risk of data exfiltration, you can protect your deployment using virtual networks; learn more about network isolation for managed compute. You can also use Azure Policy to regulate which models your users can deploy.
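
For example, here's a minimal sketch of assigning such a policy with the azure-mgmt-resource Python SDK. The scope and the policy definition ID are placeholders; you'd substitute whichever built-in or custom definition your organization uses to restrict model deployments.

```python
# A minimal sketch, assuming the azure-identity and azure-mgmt-resource
# packages. The policy definition ID is a hypothetical placeholder; look up
# the actual definition you want to enforce in the Azure Policy catalog.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"
scope = f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"

client = PolicyClient(DefaultAzureCredential(), subscription_id)

assignment = client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="restrict-model-catalog-deployments",
    parameters=PolicyAssignment(
        # Placeholder ID: substitute the built-in or custom policy definition
        # that restricts which registry models can be deployed.
        policy_definition_id="/providers/Microsoft.Authorization/policyDefinitions/<policy-definition-id>",
        display_name="Restrict model catalog deployments",
    ),
)
print(f"Assigned policy: {assignment.name}")
```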

Diagram: the platform service life cycle.

Generate inferencing outputs with serverless APIs (Models-as-a-Service)

When you deploy a model from the model catalog (base or fine-tuned) as a serverless API for inferencing, an API is provisioned that gives you access to the model, which is hosted and managed by the Azure Machine Learning service. Learn more about Models-as-a-Service. The model processes your input prompts and generates outputs based on its functionality, as described in the model details. While the model is provided by the model provider, and your use of the model (and the model provider's accountability for the model and its outputs) is subject to the license terms provided with the model, Microsoft provides and manages the hosting infrastructure and API endpoint. Models hosted in Models-as-a-Service are subject to Azure's data, privacy, and security commitments. Learn more about Azure compliance offerings applicable to Azure Machine Learning.

Important

This feature is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.

For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
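
Once provisioned, the serverless API can be called like any REST endpoint. Here's a minimal sketch using Python's requests library; the endpoint URL, route, authentication header, and payload shape are assumptions that vary by model, so check your deployment's details page.

```python
# A minimal sketch of calling a serverless API deployment; angle-bracket
# values are placeholders, and the chat-completions route and Bearer auth
# scheme are assumptions that depend on the specific model.
import requests

endpoint_url = "https://<deployment-name>.<region>.models.ai.azure.com/v1/chat/completions"
api_key = "<endpoint-key>"  # from the deployment's details page

response = requests.post(
    endpoint_url,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "messages": [{"role": "user", "content": "What is Models-as-a-Service?"}],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```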

Microsoft acts as the data processor for prompts and outputs sent to, and generated by, a model deployed for pay-as-you-go inferencing (MaaS). Microsoft doesn't share these prompts and outputs with the model provider, and Microsoft doesn't use them to train or improve Microsoft's, the model provider's, or any third party's models. Models are stateless: no prompts or outputs are stored in the model. If content filtering (preview) is enabled, prompts and outputs are screened for certain categories of harmful content by the Azure AI Content Safety service in real time; learn more about how Azure AI Content Safety processes data. Prompts and outputs are processed within the geography specified during deployment, but might be processed between regions within the geography for operational purposes (including performance and capacity management).

Diagram: the model publisher service cycle.

As explained during the deployment process for Models-as-a-Service, Microsoft may share customer contact information and transaction details (including the usage volume associated with the offering) with the model publisher so that the publisher can contact customers about the model. Learn more about the information available to model publishers.

Fine-tune a model with serverless APIs (Models-as-a-Service)

If a model available for serverless API deployment supports fine-tuning, you can upload data to (or designate data already in) an Azure Machine Learning datastore to fine-tune the model. You can then create a serverless API for the fine-tuned model. The fine-tuned model can't be downloaded, but it:

  • Is available exclusively for your use.

  • Can be double encrypted at rest (by default with Microsoft's AES-256 encryption and optionally with a customer-managed key).

  • Can be deleted by you at any time.

Training data uploaded for fine-tuning isn't used to train, retrain, or improve any Microsoft or third-party model except as directed by you within the service.
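
As an illustration, here's a minimal sketch of uploading training data with the azure-ai-ml Python SDK (v2). Registering a local file as a data asset uploads it to the workspace's default datastore; the file path, asset name, and workspace identifiers are placeholders.

```python
# A minimal sketch, assuming the azure-ai-ml (SDK v2) and azure-identity
# packages; angle-bracket values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Register a local JSONL file as a data asset; the file is uploaded to the
# workspace's datastore, where a fine-tuning job can reference it.
training_data = Data(
    name="fine-tune-training-data",
    path="./train.jsonl",  # local file; uploaded to the workspace datastore
    type=AssetTypes.URI_FILE,
    description="Training examples for fine-tuning a catalog model.",
)
registered = ml_client.data.create_or_update(training_data)
print(registered.id)  # pass this asset ID to the fine-tuning job
```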

Data processing for downloaded models

If you download a model from the model catalog, you choose where to deploy the model, and you're responsible for how data is processed when you use the model.

Next steps