Cannot deploy a fine-tuned model

Michael Benisti 25 Reputation points
2025-06-02T13:50:34.53+00:00

Hi,

One of my users cannot deploy a fine-tuned model on Azure OpenAI.

However, he managed to do it only a few weeks ago, and his permissions on the resource haven't changed: he has the Cognitive Services OpenAI Contributor role.

He gets the following error when trying to deploy his fine-tuned model:

Failed to create Azure OpenAI deployment

LinkedAuthorizationFailed: The client '******@y.fr' with object id 'xxx' has permission to perform action 'Microsoft.CognitiveServices/accounts/xxx'; however, it does not have permission to perform action(s) 'Microsoft.MachineLearningServices/workspaces/models/read' on the linked scope(s) '/subscriptions/xxx' (respectively) or the linked scope(s) are invalid.

Azure OpenAI Service

Accepted answer
  1. Alex Burlachenko 9,780 Reputation points
    2025-06-02T15:26:34.34+00:00

    Hi Michael,

    Thanks for posting this on Q&A! Even though your user has the Cognitive Services OpenAI Contributor role, the error is about a missing permission: Microsoft.MachineLearningServices/workspaces/models/read. That's because fine-tuned models are stored in an Azure Machine Learning (AML) workspace, and the Azure OpenAI service needs to read from that workspace in order to deploy them.

    You need to grant the user (or their group) the Reader role (or another role that includes that permission) on the AML workspace linked to the OpenAI resource; this is standard Azure role-based access control for AML. In the Azure portal, open the AML workspace, go to Access control (IAM), and add the role assignment there.
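    The same role assignment can also be done from the Azure CLI. A sketch only — the subscription ID, resource group, workspace name, and user identifier below are placeholders, not values from this thread:

    ```shell
    # Sketch: grant Reader on the linked AML workspace via Azure CLI.
    # <SUB_ID>, <RG_NAME>, <WS_NAME>, and <USER_UPN_OR_OBJECT_ID> are placeholders.
    az role assignment create \
      --assignee "<USER_UPN_OR_OBJECT_ID>" \
      --role "Reader" \
      --scope "/subscriptions/<SUB_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.MachineLearningServices/workspaces/<WS_NAME>"
    ```

    This requires an authenticated `az login` session with rights to create role assignments on that scope.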

    One more thing: role assignments can take a little while to propagate. If it still doesn't work after 10-15 minutes, have the user sign out and back in.

    Microsoft's documentation on deploying fine-tuned models covers this exact scenario and is worth a read when untangling these permission issues.

    Hope this sorts it out! If the error is still being stubborn, let me know.

    Best regards,

    Alex

    

    https://ctrlaltdel.blog/


1 additional answer

  1. Jerald Felix 1,630 Reputation points
    2025-06-02T16:15:57.3866667+00:00

    Hello Michael,

    When you fine-tune in Azure OpenAI, the model artefacts are stored in a linked Azure Machine Learning (AML) workspace. Deploying the custom model is effectively a two-step call:

    1. Azure OpenAI control plane – checks that you have the Cognitive Services OpenAI Contributor role on the OpenAI resource (you already do).
    2. AML control plane – reads the model (…/models/read) from the workspace that holds the fine-tuned weights.

    Because the OpenAI RBAC roles do not grant any rights on that AML workspace, the call fails with LinkedAuthorizationFailed.

    Fix – give the user read access to the AML workspace

    1. Locate the workspace – in the portal, open Machine Learning → Workspaces. It is usually created automatically with a name like openai-aml-<region> in the same subscription / resource group.
    2. Assign a role that contains Microsoft.MachineLearningServices/workspaces/models/read – Reader (built-in) is enough for deployment only; use AzureML Data Scientist or Contributor if the user also needs to create or update models.
    3. Wait a few minutes and re-sign in – RBAC propagation can take up to 15 minutes.

    Tip: If you prefer to keep things simple, grant the role at the resource-group level that contains both the OpenAI account and the AML workspace; that covers all future fine-tuned models as well.
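    That resource-group-level grant can be expressed in the Azure CLI as a single assignment. A sketch with placeholder names:

    ```shell
    # Sketch: one Reader assignment at resource-group scope covers both the
    # OpenAI account and the AML workspace in that group.
    # <SUB_ID>, <RG_NAME>, and <USER_UPN_OR_OBJECT_ID> are placeholders.
    az role assignment create \
      --assignee "<USER_UPN_OR_OBJECT_ID>" \
      --role "Reader" \
      --scope "/subscriptions/<SUB_ID>/resourceGroups/<RG_NAME>"
    ```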

    The built-in permission list shows that Reader includes …/models/read, which is all that is required.
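    You can confirm this from the CLI by inspecting the built-in role definition; Reader's action list is the wildcard */read, which covers Microsoft.MachineLearningServices/workspaces/models/read:

    ```shell
    # Print the actions granted by the built-in Reader role.
    # Requires an authenticated az session.
    az role definition list --name "Reader" --query "[].permissions[].actions" -o json
    ```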

    Alternative work-arounds

    Auto-deploy during fine-tuning – enable Auto-deployment when you start the fine-tune job; the service uses its own service principal so the caller doesn’t need AML rights later.

    Use a service principal – create an Entra app registration, give it the two roles, and let users call the deployment through that principal.
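    A rough sketch of the service-principal route with the Azure CLI — the app name, subscription ID, resource names, and scopes below are all placeholders, and the exact roles you grant should match your deployment workflow:

    ```shell
    # Sketch: create a service principal and grant it the two roles
    # (OpenAI Contributor on the OpenAI account, Reader on the AML workspace).
    # All <...> values are placeholders.
    az ad sp create-for-rbac --name "<APP_NAME>"

    az role assignment create \
      --assignee "<APP_ID>" \
      --role "Cognitive Services OpenAI Contributor" \
      --scope "/subscriptions/<SUB_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.CognitiveServices/accounts/<OPENAI_ACCOUNT>"

    az role assignment create \
      --assignee "<APP_ID>" \
      --role "Reader" \
      --scope "/subscriptions/<SUB_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.MachineLearningServices/workspaces/<WS_NAME>"
    ```

    Users (or a pipeline) then authenticate as that principal when calling the deployment API, so their individual accounts never need AML rights.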

    Best Regards,

    Jerald Felix

