Azure APIM azure-openai-emit-token-metric policy throws an error on requests to an embedding model

Asked by Tamilarasan Ramachandran (Microsoft Employee) on 2024-08-27T11:19:47+00:00

I have implemented the azure-openai-emit-token-metric policy to log custom token metrics from APIM. This works fine for other models such as gpt-4, but requests to the text-embedding-ada-002 model fail with the error below.

LastErrorSource: language-model-request-handler

LastErrorSection: inbound

LastErrorReason: LanguageModelTokenLimitCannotEstimatePromptTokens

LastErrorMessage: Unable to parse and estimate tokens for incoming request.

ReponseBody: { "statusCode": 400, "message": "Unable to parse and estimate tokens from incoming request. Please ensure incoming request is of one of the following types: 'Chat Completion', 'Completion', 'Embeddings' and works with current prompt estimation mode of 'Auto'." }

