To estimate the cost of a 30-minute conversation using GPT-4o-Mini-Realtime-Preview-2024-12-17-Global in Azure OpenAI, we need to calculate the approximate token usage and apply the per-million token pricing.
A typical conversation involves both user input tokens and AI-generated output tokens. On average, a person speaks 125-150 words per minute, which translates to roughly 185-220 tokens per minute. If the AI generates responses of a similar length, output tokens would also be around 200-250 per minute. Because each request also carries the accumulated conversation context, input token usage is closer to 400-500 tokens per minute.
For a 30-minute conversation, this works out to approximately 12,500-15,000 input tokens (0.0125M-0.015M tokens) and 6,000-7,500 output tokens (0.006M-0.0075M tokens). Since Azure OpenAI pricing is quoted per million tokens, the upper-bound cost can be estimated as:
Total Cost = (Input Cost per Million * 0.015) + (Output Cost per Million * 0.0075)
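As a rough sketch, the same arithmetic can be expressed in a few lines of Python. The per-million prices below are placeholders, not actual Azure rates; substitute the current values from the Azure OpenAI pricing page for your region and deployment type:

```python
# Rough cost estimate for a 30-minute GPT-4o-Mini-Realtime conversation.
# The per-million-token prices are PLACEHOLDERS -- replace them with the
# current values from the Azure OpenAI pricing page.

INPUT_PRICE_PER_MILLION = 0.60    # USD per 1M input tokens (placeholder)
OUTPUT_PRICE_PER_MILLION = 2.40   # USD per 1M output tokens (placeholder)

def estimate_cost(minutes: int = 30,
                  input_tokens_per_min: int = 500,
                  output_tokens_per_min: int = 250) -> float:
    """Return the estimated cost in USD using the upper-bound token rates."""
    input_tokens = minutes * input_tokens_per_min      # 30 * 500 = 15,000
    output_tokens = minutes * output_tokens_per_min    # 30 * 250 = 7,500
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION

print(f"Estimated cost for a 30-minute conversation: ${estimate_cost():.4f}")
```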
To get an exact cost, you’ll need to check the latest pricing on the Azure OpenAI pricing page. If responses are longer or include additional system instructions, the token usage may be slightly higher.
The pricing calculator screenshot below shows the average pricing details; adjust the values in the calculator to match your requirements.
I hope this helps! Thank you.