
Akshant Tyagi 0 Reputation points
2026-03-24T07:23:48.5+00:00

We are currently working on a development use case using Azure OpenAI within a Canada-based Azure tenant, with a requirement to deploy and test models in the Canada Central region.

At present, we are unable to deploy any models in Canada Central due to zero quota availability in our subscription. Specifically, we attempted to deploy the gpt-4o-mini model, but encountered a restriction indicating that the required SKU is not supported or quota is unavailable in this region.

Our primary requirement is to deploy a cost-efficient model for a lightweight email classification task (SPAM, PHISHING, LEGITIMATE). We would prefer to use gpt-4o-mini due to its cost efficiency and suitability for this use case. If gpt-4o-mini is not supported in Canada Central, we request guidance and quota allocation for the most cost-effective alternative model available in this region (such as gpt-35-turbo).

We kindly request:

Quota allocation for a supported model in Canada Central

If possible, access to deploy gpt-4o-mini or equivalent low-cost model

Minimum required capacity (e.g., 5K–10K TPM) sufficient for development and testing

This is a non-production, low-volume workload intended strictly for testing and evaluation purposes.

We would appreciate your guidance on supported models and SKU availability in Canada Central, along with enabling the necessary quota to proceed.

Thank you for your support.

Azure OpenAI Service

An Azure service that provides access to OpenAI's models with enterprise capabilities.


2 answers

  1. Anshika Varshney 9,740 Reputation points Microsoft External Staff Moderator
    2026-03-24T18:32:51.33+00:00

    Hi Akshant Tyagi,

    Thanks for sharing the details.

    In Microsoft Foundry and Azure OpenAI, quota is assigned per subscription, per region, and per model in tokens per minute. Also, different deployment types can have different availability and quota behavior. So, when you see zero quota or a message that the required SKU is not supported in Canada Central, it usually means one of these two things.

    First, the model you picked is not offered in that region for the deployment type you selected.

    Second, the model is offered but your subscription has no quota assigned for that model in that region.
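    As a rough illustration of how these per-subscription, per-region, per-model allocations read, here is a minimal sketch. The `OpenAI.<DeploymentType>.<model>` usage-name convention and the sample numbers are assumptions for illustration, not values from your subscription.

```python
# Sketch: reading per-region, per-model TPM quota entries.
# Assumes usage names follow the "OpenAI.<DeploymentType>.<model>"
# pattern (e.g. "OpenAI.Standard.gpt-4o-mini"); the numbers below are
# hypothetical, not real subscription values.

def parse_quota_name(name):
    """Split 'OpenAI.Standard.gpt-4o-mini' into (deployment_type, model)."""
    _, deployment_type, model = name.split(".", 2)
    return deployment_type, model

def remaining_tpm(usage):
    """Tokens per minute still unallocated for one quota entry."""
    return usage["limit"] - usage["current_value"]

usages = [  # hypothetical entries for one region
    {"name": "OpenAI.Standard.gpt-4o-mini", "current_value": 0, "limit": 0},
    {"name": "OpenAI.Standard.gpt-35-turbo", "current_value": 10000, "limit": 30000},
]
for u in usages:
    deployment_type, model = parse_quota_name(u["name"])
    print(f"{model} ({deployment_type}): {remaining_tpm(u)} TPM unallocated")
```

    A `limit` of zero for a model in the target region corresponds to the "no quota assigned" case described above.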

    Here is a simple way to troubleshoot it.

    1. Confirm the deployment type you are trying to use. In Foundry, the same model can show different options depending on whether you pick Standard, Global Standard, or Provisioned. If the portal says the SKU is not supported, switch the deployment type and see whether the model becomes selectable. You mentioned gpt-4o-mini in Canada Central, and the behavior you describe often maps to a deployment-type mismatch in that region.
    2. Check quota in Foundry for Canada Central and for that specific model. Open Microsoft Foundry, go to Operate, then Quota. This view shows your tokens-per-minute allocations per model and region, and it also explains the shared quota pool option for short-term testing. If quota is truly zero for the model in Canada Central, the deployment wizard will not let you proceed.
    3. Make sure your account can read quota at subscription scope. If your role is assigned only at resource-group scope, the portal can fail to show quota correctly, because quota is subscription- and region-scoped. The Foundry quota documentation lists the roles used to view and manage quota, including the Cognitive Services Usages Reader role at subscription level for viewing allocations.
    4. Verify regional support before you lock in your architecture. Foundry is available in Canada Central, but feature and model availability can still vary by region. The official region support documentation recommends confirming both model availability and quota in the target region before deployment.
    5. If Canada Central must be used, pick the lowest-cost model that is actually deployable there. For a lightweight email-classification task, open the model catalog filtered to Canada Central and choose any model that is deployable in that region with non-zero quota. If gpt-4o-mini is not deployable there for your chosen deployment type, try the closest available option that appears in the catalog for Canada Central.
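    The selection logic in step 5 can be sketched as follows. The catalog entries, cost ranks, and quota figures here are placeholders; real availability must come from the Foundry model catalog for your subscription.

```python
# Sketch of step 5: from a (hypothetical) catalog snapshot, keep models
# that are deployable in the target region for the chosen deployment type
# with non-zero quota, then take the cheapest. All data is illustrative.

def pick_model(catalog, region, deployment_type):
    """Return the cheapest deployable model for a region, or None."""
    candidates = [
        m for m in catalog
        if region in m["regions"]
        and deployment_type in m["deployment_types"]
        and m["quota_tpm"] > 0
    ]
    return min(candidates, key=lambda m: m["cost_rank"], default=None)

catalog = [  # placeholder snapshot, not real availability data
    {"name": "gpt-4o-mini", "regions": ["eastus"],
     "deployment_types": ["Standard"], "quota_tpm": 10000, "cost_rank": 1},
    {"name": "gpt-4o", "regions": ["canadacentral", "eastus"],
     "deployment_types": ["Provisioned"], "quota_tpm": 0, "cost_rank": 3},
]
best = pick_model(catalog, "canadacentral", "Standard")
print(best["name"] if best else "no deployable model; try another region")
```

    With this placeholder data the Canada Central query comes back empty, which mirrors the situation described in the question: the fallback is either another region or another deployment type.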

    Azure OpenAI quotas and limits overview for Foundry Models: https://learn.microsoft.com/en-us/azure/foundry/openai/quotas-limits

    If you can share the exact text of the deployment error you see in the portal, and which deployment type you selected, I can help map it to the right fix path quickly.

    I hope this helps. Do let me know if you have any further queries.

    Thank you!

    1 person found this answer helpful.

  2. Q&A Assist
    2026-03-24T07:24:17.86+00:00
    AI-generated content. This answer was created with AI from Q&A Assist.

    In Canada Central, gpt-4o-mini is not available for standard deployments. The model availability tables show that in Canada Central the following are available:

    • Standard (Foundry Models sold directly by Azure): neither gpt-4o (2024-05-13, 2024-08-06, 2024-11-20) nor gpt-4o-mini is listed for Canada Central.
    • Provisioned managed (Foundry Models sold directly by Azure): Canada Central has gpt-4o (2024-05-13, 2024-08-06) and gpt-4o-mini (2024-07-18) available, but only as provisioned deployments, not standard pay‑as‑you‑go.
    • For classic Azure OpenAI GPT‑3.5 models, Canada Central is not listed in the model summary table, so gpt-35-turbo standard deployments are not available there either.

    Given the context, the supported options for Canada‑based workloads are:

    1. Use a different region that supports low‑cost models
      • Regions such as East US, East US 2, North Central US, and others have standard gpt-4o-mini and/or gpt-35-turbo available. For example, gpt-35-turbo (0125) is available in West Europe and multiple other regions, and gpt-4o-mini is widely available in US and European regions.
      • For a development/test workload, the most practical approach is to deploy the resource in a nearby region that supports gpt-4o-mini or gpt-35-turbo and run the email classification there.
    2. Use provisioned throughput in Canada Central
      • Canada Central supports provisioned managed deployments for gpt-4o (2024‑05‑13, 2024‑08‑06) and gpt-4o-mini (2024‑07‑18) according to the provisioned model availability table.
      • To use these, provisioned throughput must be requested and configured. Provisioned throughput is capacity‑based and generally intended for higher, predictable volumes; it may not be the most cost‑efficient option for very low‑volume dev/test, but it is the only way in the context to use gpt-4o-mini in Canada Central.
    3. Quota and SKU issues
      • The zero‑quota and “SKU not supported” errors in Canada Central are consistent with the fact that standard SKUs for gpt-4o-mini and gpt-35-turbo are not available in that region.
      • For quota increases where a model is supported, the documented pattern is to submit a quota request (for example, Azure OpenAI in Azure Government uses a dedicated quota request form). For commercial regions, the equivalent is to request quota through the Azure portal or the documented quota request channel for Azure OpenAI.

    Practical guidance based on the context:

    • For the most cost‑efficient dev/test path for email classification:
      • Create an Azure OpenAI resource in a region that supports standard gpt-4o-mini (for example, East US or East US 2) and run the workload there.
    • If strict data residency in Canada Central is mandatory and the workload can justify provisioned capacity:
      • Use provisioned throughput for gpt-4o-mini in Canada Central by requesting provisioned quota for that model and region as described in the provisioned throughput documentation.

    The minimum TPM (for example, 5K–10K TPM) is not specified in the context for standard or provisioned deployments, so the exact quota level must be requested and negotiated through the normal Azure OpenAI quota request process for the chosen model and region.
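    Once quota is granted in some region, the classification task itself is small. Below is a minimal sketch using the `openai` Python package's `AzureOpenAI` client; the endpoint, key, and the deployment name `gpt-4o-mini` are placeholders, so substitute whichever model and deployment you end up with.

```python
# Sketch: lightweight email classification against an Azure OpenAI
# deployment. Endpoint, API key, and deployment name are placeholders;
# requires `pip install openai`.

LABELS = ("SPAM", "PHISHING", "LEGITIMATE")

def parse_label(text):
    """Map the model's free-text reply onto one of the three labels."""
    reply = text.strip().upper()
    for label in LABELS:
        if label in reply:
            return label
    return "LEGITIMATE"  # conservative fallback for unexpected replies

def classify_email(body, endpoint, api_key, deployment="gpt-4o-mini"):
    # Imported inside the function so parse_label stays usable
    # without the SDK installed.
    from openai import AzureOpenAI
    client = AzureOpenAI(azure_endpoint=endpoint, api_key=api_key,
                         api_version="2024-06-01")
    resp = client.chat.completions.create(
        model=deployment,  # the *deployment* name, not the base model name
        temperature=0,
        max_tokens=5,
        messages=[
            {"role": "system",
             "content": "Classify the email as SPAM, PHISHING, or "
                        "LEGITIMATE. Reply with one word."},
            {"role": "user", "content": body},
        ],
    )
    return parse_label(resp.choices[0].message.content)
```

    At 5K–10K TPM, a prompt-plus-reply budget of a few hundred tokens per email comfortably covers a dev/test volume of this kind.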


