You need to deploy an embedding model to support vectorization.
The error "No deployments available with a supported model" occurs when the Azure OpenAI resource you are connecting to in Foundry has no deployed model compatible with the operation you are trying to perform. For RAG (retrieval-augmented generation) and text vectorization, Foundry expects a text embedding model deployed in the same region as your Azure AI Search/Microsoft Foundry project. This commonly happens when you create an Azure OpenAI resource but deploy only a chat or completion model such as gpt-4 or gpt-35-turbo; those models do not support embedding operations.
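For context, this is roughly where the dependency shows up: an Azure AI Search vectorizer of kind azureOpenAI points at one specific embedding deployment in your resource. A minimal sketch of that index fragment follows; the resource name, deployment name, and model shown are placeholders, not values from your setup:

```json
"vectorizers": [
  {
    "name": "my-vectorizer",
    "kind": "azureOpenAI",
    "azureOpenAIParameters": {
      "resourceUri": "https://my-openai-resource.openai.azure.com",
      "deploymentId": "my-embedding-deployment",
      "modelName": "text-embedding-3-small"
    }
  }
]
```

If `deploymentId` names a deployment that does not exist in that resource, or points at a chat model rather than an embedding model, the portal has nothing supported to bind to and reports the error above.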
To fix this, verify which models are deployed in your Azure OpenAI resource and ensure an embedding-capable model is deployed in the correct Azure region. Azure OpenAI provides dedicated embedding models such as text-embedding-3-small and text-embedding-3-large. Check your deployments in the Azure portal under the OpenAI resource; if none exist, create a new deployment with one of these embedding models. Once deployed, make sure the deployment name matches exactly what you reference in Foundry when configuring the vectorization or RAG pipeline.
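As a quick sanity check on the name-matching step, here is a small sketch (plain Python, no SDK) that builds the embeddings REST URL Azure OpenAI exposes for a deployment. The resource name, deployment name, and API version below are placeholder assumptions you would replace with your own:

```python
def embeddings_url(resource: str, deployment: str,
                   api_version: str = "2024-02-01") -> str:
    """Build the Azure OpenAI embeddings endpoint for a given deployment.

    The deployment segment of this path must match the deployment name
    you reference in Foundry, character for character.
    """
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/embeddings?api-version={api_version}"
    )

# Placeholder names; substitute your actual resource and deployment.
print(embeddings_url("my-openai-resource", "my-embedding-deployment"))
```

To confirm the deployment actually exists, you can POST a small JSON body like {"input": "test"} with your api-key header to this URL; a 404 DeploymentNotFound response means the name does not match any deployment in that resource.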
For details, including the list of supported embedding models, refer to https://learn.microsoft.com/en-au/azure/search/search-get-started-portal-import-vectors?tabs=sample-data-storage%2Cmodel-catalog%2Cconnect-data-storage%2Cvectorize-text-aoai%2Cvectorize-images#supported-embedding-models
If the above response helps answer your question, remember to "Accept Answer" so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.
hth
Marcin