Hi @Peter Jansen, I have received an update that token counting is currently not available in the SDK. This feature is in the backlog, but unfortunately no ETA can be provided at this time.
A possible workaround is to keep track of the token usage yourself.
Use the following method to calculate the prompt tokens:
/// <summary>
/// Calculate the number of tokens that the messages would consume.
/// Based on: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
/// </summary>
/// <param name="messages">Messages to calculate the token count for.</param>
/// <returns>Number of tokens.</returns>
public int GetTokenCount(IEnumerable<Azure.AI.OpenAI.ChatMessage> messages)
{
    const int TokensPerMessage = 3; // framing overhead for each message
    const int TokensPerRole = 1;    // overhead for the role name
    const int BaseTokens = 3;       // every reply is primed with a few extra tokens
    var disallowedSpecial = new HashSet<string>();

    var tokenCount = BaseTokens;
    var encoding = SharpToken.GptEncoding.GetEncoding("cl100k_base");
    foreach (var message in messages)
    {
        tokenCount += TokensPerMessage;
        tokenCount += TokensPerRole;
        tokenCount += encoding.Encode(message.Content, disallowedSpecial).Count;
    }

    return tokenCount;
}
Then approximate the completion tokens by counting the messages you receive while consuming the response stream; each streamed delta corresponds to roughly one token:
// ...
OpenAIClient client = new(new Uri(endpoint), new AzureKeyCredential(key));
StreamingChatCompletions completions = await client.GetChatCompletionsStreamingAsync("gpt-4", input);
StreamingChatChoice choice = await completions.GetChoicesStreaming().FirstAsync();

int responseTokenCount = 0;
await foreach (var message in choice.GetMessageStreaming())
{
    responseTokenCount++; // each streamed delta is roughly one token
    yield return message.Content;
}
// ...
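Putting the two together, you can approximate the total usage by summing the two counts, mirroring the fields of CompletionsUsage. This is a minimal sketch; the values below are illustrative placeholders for the results of GetTokenCount and the streaming loop above:

```csharp
// Minimal sketch: combine the prompt and completion counts into a usage
// summary similar to CompletionsUsage. The values below are placeholders
// standing in for GetTokenCount(messages) and the streaming loop's
// responseTokenCount.
int promptTokens = 46;       // e.g. GetTokenCount(messages)
int completionTokens = 120;  // e.g. responseTokenCount from the stream
int totalTokens = promptTokens + completionTokens;

Console.WriteLine($"Prompt: {promptTokens}, Completion: {completionTokens}, Total: {totalTokens}");
```

Keep in mind this is an approximation: a streamed delta can occasionally contain more than one token, so the count may differ slightly from what the service actually records.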
Reference: [FEATURE REQ] Add access to CompletionsUsage in StreamingChatCompletions
Please let me know if you have any other questions.
Please 'Accept as answer' and upvote if this helped, so that others in the community looking for help on similar topics can find it.