How to set/check database-level throughput for CosmosDB with MongoDB API?

pstark 20 Reputation points
2024-03-04T14:52:48.63+00:00

Hi

I'm using Cosmos DB with the MongoDB API, and I currently use a single database to store IoT time-series data from multiple machines, with a dedicated collection per machine.

Recently, all of our Azure resources were moved (through a service provider) from one tenant to another, and since then our Azure Function (which processes and saves the machine data) fails to create new collections with the following error:

  • Error: Command insert failed: Error=2, Details='Response status code does not indicate success: BadRequest (400); Substatus: 1028; ActivityId: /REMOVED/; Reason: (Message: {"Errors":["Your account is currently configured with a total throughput limit of 3200 RU/s. This operation failed because it would have increased the total throughput to 3600 RU/s. See https://aka.ms/cosmos-tp-limit for more information."]}

Unfortunately, this doesn't make sense to me since:

  • I initially configured the database to use «Database-level throughput».
  • The database already has 25 collections (which would require 10'000 RU/s if each had its own dedicated throughput - but currently only 3200 RU/s are provisioned).

It's also weird that I can't even manually create a new collection for this database.


I assume that the "database-level throughput" setting was lost on the database during the resource migration. So my two questions are:

  1. Where can I check (and if necessary set) the database-level throughput setting for an existing database?
  2. If the above setting is already set: why is my Azure Function no longer able to create new collections? Neither the code nor the database settings have changed; the only thing that changed is that the resources were moved from one Azure tenant to another.

Answer accepted by question author
  1. Oury Ba-MSFT 21,126 Reputation points Microsoft Employee Moderator
    2024-03-08T19:07:32.4966667+00:00

    @pstark

    Looks like there was a total throughput limit set, which is a cost management feature meant to enforce that no more than a certain number of RU/s (in this case 3,200) is used by all billable resources - databases or collections - created within the account. Typically, a FinOps person or an enterprise admin would set such a limit to prevent unexpected costs or exceeding a budget.

     

    You are using shared database throughput and creating containers underneath the shared-throughput database. In that case, the containers themselves do not incur cost; rather, the database does. In other words, adding an extra container to a shared-throughput database does not result in a cost increase. Dozens or hundreds of containers can be created underneath any database, but only up to 25 containers can share the throughput of the database containing them. The 26th container and any additional ones will have to have their own dedicated throughput and will carry cost.

     

    Since you are trying to create the 26th container, that container will need its own RU/s and, as mentioned, will carry cost. However, given the total throughput limit set on the account, you are unable to create it because the sum of all RU/s used in the account would exceed that limit.
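
    To make the difference concrete, here is roughly what the two cases look like from a MongoDB client such as pymongo. This is only a sketch: the connection string and names are placeholders, and the customAction document is Cosmos DB's extension command for the MongoDB API, so verify it against your own setup.

    # Sketch only: connection string, database and collection names are placeholders.
    from pymongo import MongoClient

    client = MongoClient("<cosmosdb-mongodb-connection-string>")
    db = client["iot-telemetry"]  # hypothetical shared-throughput database

    # A plain create_collection() call lets the new collection share the
    # database-level throughput - but only the first 25 collections can do this.
    # Collection number 26 onwards gets its own dedicated RU/s (400 minimum),
    # which is what pushed the account from 3200 to 3600 RU/s in your error.
    db.create_collection("machine-026")

    # Creating a collection with explicit dedicated throughput uses Cosmos DB's
    # customAction extension command; offerThroughput is the dedicated RU/s.
    db.command({
        "customAction": "CreateCollection",
        "collection": "machine-027",
        "offerThroughput": 400,
    })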

     

    You can raise this self-imposed limit or remove it altogether by navigating to your Cosmos DB account and opening the Cost Management tab.

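    If you prefer to check or change this limit from code rather than the portal, something along these lines should work with the azure-mgmt-cosmosdb management SDK. This is a sketch under assumptions: the resource names are placeholders, and the Capacity / total_throughput_limit names follow the management REST API but may differ between SDK versions, so verify them against the version you have installed.

    # Sketch, assuming a recent azure-mgmt-cosmosdb and azure-identity.
    # Subscription, resource group and account names are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cosmosdb import CosmosDBManagementClient
    from azure.mgmt.cosmosdb.models import Capacity, DatabaseAccountUpdateParameters

    client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Check the currently configured account-wide limit.
    account = client.database_accounts.get("<resource-group>", "<account-name>")
    print(account.capacity)

    # Raise the limit (or pass total_throughput_limit=-1 to remove it entirely,
    # matching the portal's Cost Management behaviour).
    poller = client.database_accounts.begin_update(
        "<resource-group>",
        "<account-name>",
        DatabaseAccountUpdateParameters(capacity=Capacity(total_throughput_limit=4000)),
    )
    poller.result()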

    Regards,

    Oury


2 additional answers

  1. Pinaki Ghatak 5,690 Reputation points Microsoft Employee Volunteer Moderator
    2024-04-24T08:40:25.7366667+00:00

    Hello @pstark

    Regarding your first question, you can check and set the database-level throughput for an existing database in the Azure portal. Here are the steps:

    1. Go to your Cosmos DB account in the Azure portal.
    2. Click on the "Data Explorer" tab.
    3. Select the database you want to modify.
    4. Click on the "Scale & Settings" tab.
    5. Under "Throughput", select "Database throughput" and set the desired throughput value.

    Regarding your second question, it seems like the error message you received is related to the total throughput limit of your Cosmos DB account. According to the error message, your account is currently configured with a total throughput limit of 3200 RU/s, and the operation you are trying to perform would increase the total throughput to 3600 RU/s, which is not allowed. To resolve this issue, you can either increase the total throughput limit of your Cosmos DB account or reduce the throughput of some of your existing collections to free up throughput for new collections. You can also consider using partitioning to distribute your data across multiple partitions and increase your account's throughput capacity.
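
    For the first question, if you would rather check (and adjust) the database-level throughput from code instead of Data Explorer, here is a rough sketch using pymongo and Cosmos DB's MongoDB extension commands. The connection string and database name are placeholders, and the customAction commands are Cosmos-specific extensions, so test them against your account's API version.

    from pymongo import MongoClient

    client = MongoClient("<cosmosdb-mongodb-connection-string>")
    db = client["iot-telemetry"]  # placeholder database name

    # Returns the database's provisioned (shared) throughput, if it has any.
    print(db.command({"customAction": "GetDatabase"}))

    # Change the shared RU/s provisioned at the database level.
    db.command({"customAction": "UpdateDatabase", "offerThroughput": 4000})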

    I hope this helps! Let me know if you have any further questions.


  2. pstark 20 Reputation points
    2024-04-24T08:55:25.3266667+00:00

    The main problem seemed to be that I didn't know that database-level throughput is capped at 25 collections.

    I turned on autoscaling and it worked again.
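
    In case it helps others, the same switch can apparently also be made outside the portal with the azure-mgmt-cosmosdb management SDK. The resource names below are placeholders and the operation name may differ between SDK versions, so double-check before using it.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cosmosdb import CosmosDBManagementClient

    client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Migrate the database's shared throughput from manual (fixed RU/s) to autoscale.
    poller = client.mongo_db_resources.begin_migrate_mongo_db_database_to_autoscale(
        "<resource-group>", "<account-name>", "<database-name>"
    )
    print(poller.result())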

