
Is BYO OpenAI resource supported for Foundry AI Agents standard deployment?

Jan Dolecek 40 Reputation points
2026-02-20T12:18:44.76+00:00

Dear Azure team,

I have existing Azure OpenAI resources I would like to connect to my Foundry projects for use with Agents. According to the documentation (https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/standard-agent-setup?view=foundry-classic), this should be possible.

I have deployed a Foundry Standard deployment inside our Azure VNET with all necessary resources (Cosmos DB, Storage Account, AI Search). All of these resources are reachable through private endpoints, and I made sure they are all in the same region as the VNET (including the Foundry resource and the OpenAI resource). Given my constraints I cannot easily use another region: my only available VNET is in West Europe, so all resources are located there as well.

I made all the necessary connections between the project and the OpenAI resource and made sure to properly configure the capability hosts. The Foundry account has one capability host with "capabilityHostKind": "Agents" and "customerSubnet" set to the VNET injection subnet. The Foundry project capability host has a similar configuration with connections to all BYO resources, including the "aiServicesConnections" parameter that connects it to the OpenAI resource.
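For reference, here is a minimal sketch of the wiring I described; symbolic resource names, connection names, and the api-version are placeholders from my template and may differ from the exact schema:

```bicep
// Sketch only: foundryAccount/foundryProject and the *ConnectionName
// parameters are placeholders for my actual resources and connections.
resource accountCapHost 'Microsoft.CognitiveServices/accounts/capabilityHosts@2025-04-01-preview' = {
  parent: foundryAccount
  name: 'accountCapHost'
  properties: {
    capabilityHostKind: 'Agents'
    customerSubnet: agentSubnetId // VNET injection subnet
  }
}

resource projectCapHost 'Microsoft.CognitiveServices/accounts/projects/capabilityHosts@2025-04-01-preview' = {
  parent: foundryProject
  name: 'projectCapHost'
  properties: {
    capabilityHostKind: 'Agents'
    aiServicesConnections: [openAiConnectionName]    // BYO Azure OpenAI resource
    storageConnections: [storageConnectionName]      // BYO Storage Account
    threadStorageConnections: [cosmosConnectionName] // BYO Cosmos DB
    vectorStoreConnections: [aiSearchConnectionName] // BYO AI Search
  }
}
```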

When browsing the Foundry portal, I can see all deployments from the connected OpenAI resource, and I can choose from them when creating a new agent. But when I try to send a message to the agent, I receive this error:

"invalid_engine_error: No connection matching model: gpt-4.1-mini-deployment..."

I get this error both when sending the message through the Foundry portal Playground and via the REST API/SDK.

What seems strange is that when I destroy the connection to the OpenAI resource and create deployments directly on the Foundry resource, everything works fine: agents respond, logs are successfully sent to my own Cosmos DB, and so on. The problem occurs only when using deployments from the external OpenAI resource (reachable from the same VNET and in the same region).

I thought the problem might be that this feature is not supported for the given region (West Europe) and deployment types. But I came across this documentation (https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/model-region-support?view=foundry-classic&tabs=global-standard), which lists the models supported specifically for use with AI Agents. So I tried a Global Standard gpt-4.1-mini deployment in my OpenAI resource (West Europe), which should be supported, but I still receive the same error as above when sending a message to the agent. Yet when I use deployments created directly on the Foundry resource, everything works fine even for models that are not on the supported list (e.g. gpt-4.1 as Data Zone Standard in West Europe works fine).

I have checked connectivity and permissions, but I struggle to find what might cause this problem.

I also found a similar post here: https://github.com/orgs/microsoft-foundry/discussions/59, which did not resolve my issue.

Is there a way to confirm whether this feature (using my own Azure OpenAI resource deployments with a Foundry Agents standard deployment) is supported, or whether it might be available only in some regions?

Thank you for any information or tips that could help me resolve this issue.

Foundry Tools

Formerly known as Azure AI Services or Azure Cognitive Services, a unified collection of prebuilt AI capabilities within the Microsoft Foundry platform.


2 answers

  1. Sridhar M 5,335 Reputation points Microsoft External Staff Moderator
    2026-02-20T15:52:18.87+00:00

    Hi Jan Dolecek,

    In Microsoft Azure AI Foundry, Foundry Basic deployments in Sweden Central successfully integrate with a BYO (external) Azure OpenAI resource, whereas the same setup in West Europe consistently fails. This behavior is reproducible and not related to Azure OpenAI itself, because Azure OpenAI resources deployed directly in West Europe function normally outside of Foundry. The failure occurs only when the Azure OpenAI resource is consumed through Foundry, indicating a Foundry platform–side regional dependency, not a customer configuration issue. [learn.microsoft.com]

    The primary root cause is that Vector Stores are not supported in the West Europe region for Azure AI Foundry, even though the Foundry Agent Service itself appears available. Vector Stores are a foundational dependency of the Foundry Agent platform, not only for the File Search tool but also for agent initialization, metadata handling, and knowledge management. As a result, Foundry attempts to access Vector Store infrastructure during agent loading—even when File Search is not enabled—leading to immediate failures in West Europe. [github.com]

    Sweden Central is explicitly identified by Microsoft as one of the regions with maximum feature availability for Azure AI Foundry, including Vector Stores, File Search, and full Agent Service dependencies. Because all required backend services are present in Sweden Central, Foundry can fully initialize the agent runtime and then successfully connect to the BYO Azure OpenAI resource. This is why identical architectures work in Sweden Central but fail in West Europe. [learn.microsoft.com]

    This limitation directly impacts BYO Azure OpenAI integrations, even though Azure OpenAI itself is healthy in West Europe. Foundry does not fail at the Azure OpenAI call layer; it fails earlier during its own control-plane initialization due to missing regional services (Vector Stores). Therefore, this is not a quota issue, not a private endpoint issue, and not a BYOR misconfiguration, but a Foundry regional capability gap. [github.com]

    The issue is more visible in Foundry Standard deployments, which rely more heavily on managed Foundry platform services. In West Europe, these internal services are not fully deployed, causing agent creation, updates, and knowledge attachment to fail. In contrast, Foundry Basic in fully supported regions (such as Sweden Central) does not encounter this limitation. This explains why some regions appear “supported” but still fail operationally. [learn.microsoft.com]

    There is also a documented Portal vs API support gap. In some cases, Foundry REST APIs may work while the Foundry Portal reports "Unsupported region" or fails to load agents and vector stores in West Europe. Microsoft has acknowledged this inconsistency, confirming that the limitation is regional and platform-related rather than customer-induced.

    Based on Microsoft documentation and confirmed platform behavior, West Europe should currently be avoided for Azure AI Foundry Agent workloads that require stable operation, including BYO Azure OpenAI integrations. For production-grade Foundry Agents—especially those involving knowledge, retrieval, or future extensibility—Sweden Central (or East US 2) should be used, as these regions provide full backend service coverage. This is an officially documented regional availability constraint, not a temporary misconfiguration. [learn.microsoft.com]

    Microsoft’s official “Feature availability across cloud regions” documentation confirms that Foundry capabilities vary by region and explicitly recommends Sweden Central for maximum feature support. Additionally, GitHub and Microsoft Q&A threads confirm that Vector Stores are unavailable in West Europe, blocking Foundry Agent functionality regardless of tool usage. [learn.microsoft.com], [github.com]

    0 comments No comments

  2. Q&A Assist
    2026-02-20T12:19:04.6066667+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    Yes, you can use your own Azure OpenAI resources with Foundry AI Agents standard deployment. However, there are specific requirements and limitations to consider. You need to ensure that your Azure OpenAI resource is properly connected to your Foundry project, and that all resources are in the same region and accessible through the same VNET.

    The error message you are encountering, "invalid_engine_error: No connection matching model: gpt-4.1-mini-deployment...", suggests that there may be an issue with the model deployment or its configuration in relation to the Foundry project. It's important to verify that the model you are trying to use is indeed supported for use with Foundry Agents in your region.

    Additionally, ensure that the capability hosts are correctly configured and that the aiServicesConnections parameter is properly set to connect with your OpenAI resource. If the models are visible in the Foundry Portal but not responding as expected, it may indicate a misconfiguration or a limitation specific to the deployment type or region.

    For further troubleshooting, you may want to check the connectivity and permissions again, as well as refer to the documentation regarding supported models and regions for AI Agents. If the issue persists, reaching out to Microsoft support or checking community discussions may provide additional insights.

    0 comments No comments
