How to force Azure AI model deployment only point to single AI resource

Pulin Gupta 0 Reputation points
2026-02-05T12:14:56.5033333+00:00

Deploying a model from the model catalog under a Foundry is not using the correct parent Foundry: the deployment points to, and displays, a different Foundry. Because that selected Foundry does not belong to the project, the deployment fails with a quota error.

Azure AI services

A group of Azure services, SDKs, and APIs designed to make apps more intelligent, engaging, and discoverable.

2 answers

  1. Manas Mohanty 14,745 Reputation points Microsoft External Staff Moderator
    2026-02-11T11:35:24.98+00:00

    Hi Pulin Gupta

    It seems the pointers shared earlier could not resolve the issue.

    However, I agree with the point that "if you try to deploy a model in an unsupported region, Azure's deployment logic will trigger resource creation in a supported region; you can submit quota increase requests in other regions."

    I have attached the Azure AI Foundry region and model availability documentation.

    I also want to note that the official Azure Bicep/ARM templates can sometimes differ from the Terraform templates used on the customer side.

    Could you please share the requested details in a private message so we can create a support ticket, connect on a call, and resolve this?

    Looking forward to hearing from you.

    Thank you.


  2. Q&A Assist
    2026-02-05T12:15:08.95+00:00
    AI generated content. This answer was created with AI from Q&A Assist.

    To ensure that your Azure AI model deployment points to a single AI resource within the correct foundry, you should verify the following steps:

    1. Check Policy Assignment: Ensure that the policy assignment scope includes your project's resource group, and that the policy effect is set to Deny so deployments targeting any resource other than the intended one are blocked.
    2. Model ID Verification: Confirm that the model ID you are using is an exact match, including the version number. This ensures that the deployment is referencing the correct model in the intended foundry.
    3. Role Assignment: Make sure that you have the necessary permissions, such as the Owner or Resource Policy Contributor role, at the target scope to deploy models correctly.
    4. Deployment Configuration: When deploying the model, ensure that you are using the correct deployment name and refer to the model properly in your API calls. Azure OpenAI requires the deployment name rather than just the model name.
    5. Quota Management: If you encounter quota errors, check your quota settings and ensure that you are within the limits for the selected foundry. You may need to manage or increase your quotas as necessary.
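
    Step 4 can be illustrated with a minimal sketch. Azure OpenAI routes requests by the deployment name embedded in the request path, not by the model name; the endpoint and deployment values below are placeholders, not values from this thread:

    ```python
    def chat_completions_url(endpoint: str, deployment: str,
                             api_version: str = "2024-06-01") -> str:
        """Build an Azure OpenAI chat-completions request URL.

        The path segment after /openai/deployments/ is the *deployment
        name* chosen when the model was deployed, not the model name.
        """
        return (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
                f"/chat/completions?api-version={api_version}")

    # Placeholder resource endpoint and deployment name:
    url = chat_completions_url("https://my-resource.openai.azure.com",
                               "my-gpt4o-deployment")
    ```

    If a call fails with a quota or not-found error, checking which resource endpoint this URL actually targets is a quick way to confirm the deployment landed on the intended AI resource.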

    By following these steps, you should be able to direct your model deployment to the appropriate AI resource and avoid errors related to incorrect foundry references.


