Getting an error while bulk deleting Cosmos DB documents using a stored procedure through the REST API

kumar sanu 0 Reputation points
2023-03-21T06:50:43.4833333+00:00

I've created a database and a collection. When creating the collection, the partition key was a required field, so I used the 'Date' field as the partition key.

I have created a stored procedure to perform a bulk delete of documents from the collection in Cosmos DB.

My Stored Procedure definition
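The definition itself isn't included in the post. Cosmos DB stored procedures are written in JavaScript; a minimal sketch in the style of the well-known Azure bulk-delete sample (an assumption about what the asker's procedure looks like) is:

function bulkDelete(query) {
    var collection = getContext().getCollection();
    var link = collection.getSelfLink();
    var response = getContext().getResponse();
    var deleted = 0;

    queryAndDelete();

    function queryAndDelete() {
        // Query for a batch of matching documents, then delete them one by one.
        var accepted = collection.queryDocuments(link, query, {}, function (err, docs) {
            if (err) throw err;
            if (docs.length > 0) {
                deleteNext(docs, 0);
            } else {
                response.setBody(deleted); // nothing left to delete
            }
        });
        if (!accepted) response.setBody(deleted); // out of time: report progress so far
    }

    function deleteNext(docs, i) {
        if (i >= docs.length) {
            queryAndDelete(); // batch done; look for more matches
            return;
        }
        var accepted = collection.deleteDocument(docs[i]._self, {}, function (err) {
            if (err) throw err;
            deleted++;
            deleteNext(docs, i + 1);
        });
        if (!accepted) response.setBody(deleted);
    }
}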

Below is my code to call the stored procedure (for bulk delete) in Azure Cosmos DB:

url="https://xxxxxxx.documents.azure.com/dbs/database_name/colls/collection_name/sprocs/delete_stored_procedure_name"

data=["SELECT * FROM collection_name c where c.SomeField='SomeValue'"]

headers = {
    'Authorization': uri_encoded,
    'x-ms-date': date,
    'x-ms-version': '2018-12-31',
    'Content-Type': 'application/query+json',
    'Accept':'application/json',
    'x-ms-documentdb-isquery': 'True' ,
    'x-ms-query-enable-crosspartition':'True',
    'x-ms-documentdb-query-enablecrosspartition': 'True', 
   
}
response=requests.post(url,data=json.dumps(data), headers=headers)
print(response.text)
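The uri_encoded and date values above come from the master-key signature step, which the post doesn't show. A minimal sketch of that step, following Microsoft's documented signing scheme for the Cosmos DB REST API (the helper name is mine):

import base64
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def build_auth_headers(verb, resource_type, resource_link, master_key):
    # x-ms-date must be an RFC 1123 timestamp and must match the signed payload.
    date = datetime.now(timezone.utc).strftime('%a, %d %b %Y %H:%M:%S GMT')
    # Payload format per the REST docs: lowercase verb, resource type, and date.
    payload = f"{verb.lower()}\n{resource_type.lower()}\n{resource_link}\n{date.lower()}\n\n"
    key = base64.b64decode(master_key)
    signature = base64.b64encode(
        hmac.new(key, payload.encode('utf-8'), hashlib.sha256).digest()
    ).decode()
    # The token itself must be URL-encoded before being sent.
    auth = urllib.parse.quote(f"type=master&ver=1.0&sig={signature}", safe='')
    return auth, date

uri_encoded, date = build_auth_headers(
    "post", "sprocs",
    "dbs/database_name/colls/collection_name/sprocs/delete_stored_procedure_name",
    "<master-key>",
)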

When I call the REST API endpoint, I get the following error:

{\"Errors\":[\"The partition key supplied in x-ms-partitionkey header has fewer components than defined in the the collection.\"]}

After searching online, I added the following header:

'x-ms-documentdb-partitionkey':"[\"2023-03-10\"]",

Where "2023-03-10" is the one value of partition key(Date Column). In this case, I am getting 200 OK and documents are getting deleted from that particular partition.

But my requirement is to delete all documents matching the supplied query,

data = ["SELECT * FROM collection_name c WHERE c.SomeField='SomeValue'"]

which I pass in the API body, irrespective of which partitions the returned documents lie in. I don't know which partition a given document is in, so I don't know the partition key value.

My questions:

  1. What value should I give for the partition key when it is a field whose values I don't know? Or
  2. How can I skip passing 'x-ms-documentdb-partitionkey' to avoid the above error?

2 answers

  1. Sajeetharan 1,566 Reputation points Microsoft Employee
    2023-03-21T08:25:51.2366667+00:00

    The error is somewhat misleading; however, to fix it, you need to specify the partition key for your collection using the x-ms-documentdb-partitionkey header with the request.
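    For example, in the Python call from the question (the value is illustrative and must be a JSON array matching the collection's partition key definition):

    # "2023-03-10" is an example partition key value from the question.
    headers['x-ms-documentdb-partitionkey'] = json.dumps(["2023-03-10"])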


  2. ShaktiSingh-MSFT 13,271 Reputation points Microsoft Employee
    2023-03-21T10:12:18.84+00:00

    Hi @kumar sanu,

    Welcome to Microsoft Q&A forum and thanks for using Azure services.

    As I understand it, you want to perform a bulk delete of documents in a Cosmos DB collection using a stored procedure.

    It works partition by partition when you supply a key, but not for all documents across partitions at once.

    Since stored procedures are scoped to a single partition, this stored procedure can only delete documents from one partition per execution, which makes it unsuitable for cross-partition bulk operations.

    • The only way you can do a cross-partition bulk operation on this data is to copy it to a new container with a different partition key.
    • Or you could set the container's "time to live" (TTL) to 1 second; that will remove all documents (see the sketch after the caution below).

    But be aware that this purge takes some time, so if you set "time to live" back to unlimited too soon, the documents that haven't been removed yet will reappear.
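    A minimal sketch of the TTL approach with the azure-cosmos Python SDK (the account details and the /Date partition key path are assumptions based on the question):

    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient("https://xxxxxxx.documents.azure.com", credential="<master-key>")
    database = client.get_database_client("database_name")

    # default_ttl=1 expires every document one second after its last write;
    # Cosmos DB then purges the expired documents in the background.
    database.replace_container(
        "collection_name",
        partition_key=PartitionKey(path="/Date"),
        default_ttl=1,
    )

    # Once the purge has finished, replace the container again without
    # default_ttl (or with default_ttl=-1, i.e. "on, no default") so new
    # documents stop expiring.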

    How long the deletion takes depends on the number of documents and the throughput of the Cosmos DB account.

    You can set "time to live" under "Settings" for the container:

    • Configure time to live in Azure Cosmos DB
    • Configure Time to Live in the Azure portal

    References:

    • Remove all documents for Cosmos DB in the portal
    • Deleting documents from a Cosmos collection from the Azure portal

    Hope this helps. Do let us know if you have further queries. Thank you.
