Choosing Cosmos DB for enterprise use: underlying problems?

Sudip Patil
2022-02-22T10:21:52.483+00:00

Referring to the blog post below, here are some of the questions it raises. I am not sure whether these issues have been resolved today with respect to Cosmos DB:

https://weblogs.asp.net/morteza/lesson-learned-after-2-years-working-with-cosmos-db

Q1) 429 error code problem:
If the load on the database is more than what is provisioned, Cosmos DB starts rejecting requests with a 429 HTTP status code instead of increasing throughput. Is this fixed in a newer version?

Q2) Does a collection's size have to be fixed, or can it be dynamic?

Q3) Is dynamic throughput provisioning offered by Cosmos DB?
. Only a couple of times a day you receive a big load of data and need to store it in the database.
. What we need here is to increase the throughput as much as needed and lower it when it is no longer needed.

Q4) Is there an API to bump up the provisioned throughput?

Q5) Is the following statement about partitioning true?

Q6) While creating a collection in Cosmos DB, you need to assign a PartitionKey.
The PartitionKey should be a property of a document that we store in the database.
Needless to say, documents in Cosmos DB are just plain JSON objects.
After you create a collection (again, it's also referred to as a container), it's NOT possible to change the PartitionKey.
The maximum size of a logical partition (and consequently a physical partition) cannot exceed 20 GB.
If it does, the database crashes, the whole thing goes down, and there is no remedy for that!
Is this problem fixed now?

Q7) What is the process to change a partition key?
It's worth recapping that after creating a collection, it's not possible to change the PartitionKey.
So we need to create a new collection, choose the right PartitionKey, and possibly migrate the data from the old collection to the new one. Sounds painful? Yes, it is.
It gets worse: there is no backup/restore functionality to move the data in an atomic fashion.
So you either need to write your own tool for data migration or use an ETL tool like Azure Data Factory in Azure to orchestrate the data migration from the old collection to the new one.
Is there an easier process now?


Accepted answer
Anurag Sharma
2022-02-22T12:13:51.08+00:00

    Hi @Sudip Patil, welcome to the Microsoft Q&A forum.

    Please find the answers to your queries in the same sequence below:

    1) There is no automatic way of increasing manually provisioned throughput; however, in case of 429 errors, we can increase it programmatically using the SDKs, as mentioned in the article below:

    Container.ReplaceThroughputAsync on the .NET SDK.
    CosmosContainer.replaceThroughput on the Java SDK.

    Changing the provisioned throughput
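
    To illustrate, here is a minimal sketch using the Java SDK v4; the endpoint, key, and the "mydb"/"orders" database and container names are placeholder assumptions, and the container is assumed to use manual (not autoscale) throughput:

```java
// Minimal sketch (Java SDK v4): bump manual throughput on a container after 429s.
// Endpoint, key, and the "mydb"/"orders" names are placeholders.
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.ThroughputProperties;
import com.azure.cosmos.models.ThroughputResponse;

public class BumpThroughput {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("<your-account-endpoint>")
                .key("<your-account-key>")
                .buildClient();

        CosmosContainer container = client.getDatabase("mydb").getContainer("orders");

        // Read the current manual throughput, then replace it with a higher value,
        // for example after observing sustained 429 (request rate too large) responses.
        ThroughputResponse current = container.readThroughput();
        int currentRu = current.getProperties().getManualThroughput();
        int newRu = currentRu + 1000;

        container.replaceThroughput(ThroughputProperties.createManualThroughput(newRu));
        System.out.println("Throughput raised from " + currentRu + " to " + newRu + " RU/s");

        client.close();
    }
}
```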

    2) There is no need to define a collection size, as it keeps growing based on the number of documents added. Currently, the maximum storage per container is unlimited. More details are in the article below:

    Maximum storage per container

    3) We can use autoscale throughput to achieve dynamic throughput provisioning. Autoscale provisioned throughput in Azure Cosmos DB allows you to scale the throughput (RU/s) of your database or container automatically and instantly. The throughput is scaled based on usage, without impacting the availability, latency, throughput, or performance of the workload. The entry point for the autoscale maximum throughput (Tmax) starts at 4000 RU/s, which scales between 400 and 4000 RU/s. You can set Tmax in increments of 1000 RU/s and change the value at any time.

    Create Azure Cosmos containers and databases with autoscale throughput

    Frequently asked questions about autoscale provisioned throughput in Azure Cosmos DB
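
    A minimal sketch of creating a container with autoscale throughput using the Java SDK v4 follows; the account credentials, the "mydb"/"orders" names, the "/customerId" partition key path, and the 4000 RU/s maximum are illustrative assumptions:

```java
// Minimal sketch (Java SDK v4): create a container with autoscale throughput.
// Names, the partition key path, and the 4000 RU/s maximum are placeholders.
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ThroughputProperties;

public class CreateAutoscaleContainer {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("<your-account-endpoint>")
                .key("<your-account-key>")
                .buildClient();

        CosmosDatabase database = client.getDatabase("mydb");

        // Autoscale with a 4000 RU/s maximum: the container scales automatically
        // between 400 and 4000 RU/s based on usage, so spiky loads need no manual bumps.
        ThroughputProperties autoscale = ThroughputProperties.createAutoscaledThroughput(4000);
        CosmosContainerProperties containerProperties =
                new CosmosContainerProperties("orders", "/customerId");

        database.createContainerIfNotExists(containerProperties, autoscale);

        client.close();
    }
}
```

    With autoscale, you are billed for the highest RU/s the container scaled to in each hour, which typically suits the "big load a couple of times a day" pattern described in question 3 better than a permanently high manual setting.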

    4) Yes. As mentioned in point 1, we can use the methods below:

    Container.ReplaceThroughputAsync on the .NET SDK.  
    CosmosContainer.replaceThroughput on the Java SDK.  
    

    5) Technically, it is not possible to “update” the partition key of an existing container; partition keys are immutable. To change to a new key, we need to create a new collection and migrate the data.

    How to change your partition key in Azure Cosmos DB

    6) In this case, when a logical partition grows to 20 GB, the database does not crash, but we start receiving the error message 'Partition key reached maximum size of 20 GB'. The recommended long-term solution is to re-architect the application with a different partition key. To give yourself time for this, you can request a temporary increase in the logical partition key size limit for your existing application: file an Azure support ticket and select the quota type 'Temporary increase in container's logical partition key size'.

    The practical solution is to design a more granular partition key, for example a synthetic key that combines one field with other properties, so that no single logical partition ever crosses the 20 GB limit.

    To make that possible, we need to migrate the existing container to a new one created with the partition key described above, as sketched below.
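
    As an illustration of such a synthetic key, the sketch below (Java SDK v4) writes a document whose partition key value combines two properties; the Order shape, the property names, and the new container using "/partitionKey" as its key path are assumptions for the example:

```java
// Minimal sketch (Java SDK v4): write a document with a synthetic partition key
// combining two properties, so no single logical partition approaches 20 GB.
// The Order shape, property names, and the "/partitionKey" key path are assumptions.
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.PartitionKey;

public class SyntheticPartitionKeyExample {

    // Simple document type; the target container is assumed to use "/partitionKey".
    public static class Order {
        public String id;
        public String customerId;
        public String orderMonth;
        public String partitionKey; // e.g. "customer-42-2022-02"
    }

    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("<your-account-endpoint>")
                .key("<your-account-key>")
                .buildClient();

        CosmosContainer container = client.getDatabase("mydb").getContainer("orders-v2");

        Order order = new Order();
        order.id = "order-1001";
        order.customerId = "customer-42";
        order.orderMonth = "2022-02";
        // Combine two properties so one busy customer is spread across many logical partitions.
        order.partitionKey = order.customerId + "-" + order.orderMonth;

        container.createItem(order, new PartitionKey(order.partitionKey), new CosmosItemRequestOptions());

        client.close();
    }
}
```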

    7) We can use the Data migration tool to migrate data from one container to another.
    Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB
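
    If a small custom migration is preferred over the tool or Azure Data Factory, a minimal sketch with the Java SDK v4 could look like the following; the container names, the source property names, and the target container's "/partitionKey" key path are assumptions:

```java
// Minimal sketch (Java SDK v4): copy documents from an old container to a new one
// that has a different (synthetic) partition key. Names and property fields are placeholders.
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class MigrateContainer {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("<your-account-endpoint>")
                .key("<your-account-key>")
                .buildClient();

        CosmosContainer source = client.getDatabase("mydb").getContainer("orders");
        CosmosContainer target = client.getDatabase("mydb").getContainer("orders-v2");

        // Read every document from the old container and re-create it in the new one,
        // stamping the new synthetic partition key property on the way through.
        for (JsonNode node : source.queryItems(
                "SELECT * FROM c", new CosmosQueryRequestOptions(), JsonNode.class)) {
            ObjectNode doc = (ObjectNode) node;

            // Drop system properties; the service regenerates them on create.
            doc.remove("_rid");
            doc.remove("_self");
            doc.remove("_etag");
            doc.remove("_attachments");
            doc.remove("_ts");

            String newKey = doc.get("customerId").asText() + "-" + doc.get("orderMonth").asText();
            doc.put("partitionKey", newKey);

            target.createItem(doc, new PartitionKey(newKey), new CosmosItemRequestOptions());
        }

        client.close();
    }
}
```

    For anything beyond small containers, the change feed or a bulk-enabled client would be a better fit; the loop above is only meant to show the shape of the migration.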

    Please let us know if this helps, or if you would like to discuss this further.

    ----------

    If the answer helps, you can accept it and upvote it.

