APPLIES TO: Developer | Basic | Basic v2 | Standard | Standard v2 | Premium | Premium v2
The `azure-openai-token-limit` policy prevents Azure OpenAI Service API usage spikes on a per-key basis by limiting consumption of language model tokens to a specified rate (number per minute), a quota over a specified period, or both. When a specified token rate limit is exceeded, the caller receives a `429 Too Many Requests` response status code. When a specified quota is exceeded, the caller receives a `403 Forbidden` response status code.
By relying on token usage metrics returned from the OpenAI endpoint, the policy can accurately monitor and enforce limits in real time. The policy also enables precalculation of prompt tokens by API Management, minimizing unnecessary requests to the OpenAI backend if the limit is already exceeded.
Note
Set the policy's elements and child elements in the order provided in the policy statement. Learn more about how to set or edit API Management policies.
The policy is used with APIs added to API Management from the Azure OpenAI Service of the following types:
| API type | Supported models |
|---|---|
| Chat completion | gpt-3.5, gpt-4, gpt-4o¹ |
| Completion | gpt-3.5-turbo-instruct |
| Embeddings | text-embedding-3-large, text-embedding-3-small, text-embedding-ada-002 |

¹ The gpt-4o model is multimodal (accepts text or image inputs and generates text).
For more information, see Azure OpenAI Service models.
```xml
<azure-openai-token-limit counter-key="key value"
    tokens-per-minute="number"
    token-quota="number"
    token-quota-period="Hourly | Daily | Weekly | Monthly | Yearly"
    estimate-prompt-tokens="true | false"
    retry-after-header-name="custom header name, replaces default 'Retry-After'"
    retry-after-variable-name="policy expression variable name"
    remaining-quota-tokens-header-name="header name"
    remaining-quota-tokens-variable-name="policy expression variable name"
    remaining-tokens-header-name="header name"
    remaining-tokens-variable-name="policy expression variable name"
    tokens-consumed-header-name="header name"
    tokens-consumed-variable-name="policy expression variable name" />
```
| Attribute | Description | Required | Default |
|---|---|---|---|
| counter-key | The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A |
| tokens-per-minute | The maximum number of tokens consumed by prompt and completion per minute. | Either a rate limit (`tokens-per-minute`), a quota (`token-quota` over a `token-quota-period`), or both must be specified. | N/A |
| token-quota | The maximum number of tokens allowed during the time interval specified in the `token-quota-period`. Policy expressions aren't allowed. | Either a rate limit (`tokens-per-minute`), a quota (`token-quota` over a `token-quota-period`), or both must be specified. | N/A |
| token-quota-period | The length of the fixed window after which the `token-quota` resets. The value must be one of the following: `Hourly`, `Daily`, `Weekly`, `Monthly`, `Yearly`. The start time of a quota period is calculated using the UTC timestamp truncated to the unit (hour, day, and so on) used for the period. | Either a rate limit (`tokens-per-minute`), a quota (`token-quota` over a `token-quota-period`), or both must be specified. | N/A |
| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: `true`: estimate the number of tokens based on the prompt schema in the API; may reduce performance. `false`: don't estimate prompt tokens. When set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the response of the model. This could result in prompts being sent to the model that exceed the token limit. In such a case, the overage is detected in the response, and all succeeding requests are blocked by the policy until the token limit frees up again. | Yes | N/A |
| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
| retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | N/A |
| remaining-quota-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to the `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
| remaining-quota-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens corresponding to the `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
| remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
| remaining-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens corresponding to `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
| tokens-consumed-header-name | The name of a response header whose value is the number of tokens consumed by both prompt and completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed. | No | N/A |
| tokens-consumed-variable-name | The name of a variable initialized to the estimated number of tokens in the prompt in the `backend` section of the pipeline if `estimate-prompt-tokens` is `true`, and zero otherwise. The variable is updated with the reported count upon receiving the response in the `outbound` section. | No | N/A |
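For instance, the following fragment is a minimal sketch of how the retry and remaining-token attributes can surface rate-limit state to callers. The limit value and the header names `x-retry-after-seconds` and `x-remaining-tokens` are illustrative assumptions, not required values.

```xml
<!-- Sketch: expose rate-limit state to callers via response headers.
     The header names below are illustrative choices, not required values. -->
<azure-openai-token-limit
    counter-key="@(context.Subscription.Id)"
    tokens-per-minute="5000"
    estimate-prompt-tokens="true"
    retry-after-header-name="x-retry-after-seconds"
    remaining-tokens-header-name="x-remaining-tokens" />
```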
Usage notes

- When `estimate-prompt-tokens` is set to `false`, values in the usage section of the response from the Azure OpenAI Service API are used to determine token usage.
- When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute. Completion tokens are also estimated when responses are streamed.
- When `estimate-prompt-tokens` is set to `true`, the policy currently over-counts each image as a maximum count of 1200 tokens.
- A single counter is used for each `counter-key` value that you specify in the policy. The counter is updated at all scopes at which the policy is configured with that key value. If you want to configure separate counters at different scopes (for example, a specific API or product), specify different key values at the different scopes. For example, append a string that identifies the scope to the value of an expression, as shown in the sketch after the first example below.

In the following example, the token rate limit of 5,000 per minute is keyed by the caller IP address. The policy doesn't estimate the number of tokens required for a prompt. After each policy execution, the remaining tokens allowed for that caller IP address in the time period are stored in the variable `remainingTokens`.
```xml
<policies>
    <inbound>
        <base />
        <azure-openai-token-limit
            counter-key="@(context.Request.IpAddress)"
            tokens-per-minute="5000" estimate-prompt-tokens="false" remaining-tokens-variable-name="remainingTokens" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```
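The usage notes above suggest appending a string that identifies the scope to the counter key when you want separate counters at different scopes. The following fragment is a minimal sketch of that approach; the `my-product` label is a hypothetical scope identifier, not a required value.

```xml
<!-- Sketch: a counter distinct to one scope, created by appending an
     identifying string ("my-product", illustrative) to the subscription ID. -->
<azure-openai-token-limit
    counter-key="@("my-product:" + context.Subscription.Id)"
    tokens-per-minute="5000"
    estimate-prompt-tokens="false" />
```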
In the following example, the token quota of 100,000 is keyed by the subscription ID and resets monthly. After each policy execution, the number of remaining tokens allowed for that subscription ID in the time period is stored in the variable `remainingQuotaTokens`.
```xml
<policies>
    <inbound>
        <base />
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            token-quota="100000" token-quota-period="Monthly" remaining-quota-tokens-variable-name="remainingQuotaTokens" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```
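Because the attributes table allows a rate limit, a quota, or both, the two preceding examples can be combined in a single policy. The following is a minimal sketch of that combination; the limit values are illustrative.

```xml
<!-- Sketch: enforce both a per-minute rate limit and a monthly quota with
     one policy. The limit values are illustrative. -->
<policies>
    <inbound>
        <base />
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            token-quota="100000" token-quota-period="Monthly"
            estimate-prompt-tokens="false"
            remaining-tokens-variable-name="remainingTokens"
            remaining-quota-tokens-variable-name="remainingQuotaTokens" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```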
For more information about working with policies, see how to set or edit API Management policies and the API Management policy reference.