question

saudsiraj-2030 asked · KranthiPakala-MSFT commented

Copying from Azure Cosmos DB API for MongoDB to another account using Data Factory: getting an error

Hi, I need help with Azure Data Factory. I set up a pipeline to move data from one Azure Cosmos DB API for MongoDB account to another, but only a few of the records across the four collections copied properly. For the remaining ones, the copy failed with this error:

{
  "dataRead": 704614,
  "dataWritten": 0,
  "sourcePeakConnections": 1,
  "sinkPeakConnections": 4,
  "rowsRead": 1800,
  "rowsCopied": 0,
  "copyDuration": 13,
  "throughput": 52.931,
  "errors": [
    {
      "Code": 23403,
      "Message": "Failure happened on 'Sink' side. ErrorCode=MongoDbFailedWithMongoDbServerError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The operation failed with server error. ConnectionId: '{ ServerId : { ClusterId : 1, EndPoint : \"Unspecified/procheck-reporting-westus.mongo.cosmos.azure.com:10255\" }, LocalValue : 6, ServerValue : \"1404717474\" }'.,Source=Microsoft.DataTransfer.Runtime.MongoDbAtlasConnector,''Type=MongoDB.Driver.MongoBulkWriteException`1[[MongoDB.Bson.BsonDocument, MongoDB.Bson, Version=2.10.4.0, Culture=neutral, PublicKeyToken=15b1115599983c50]]

azure-data-factory, azure-cosmos-db

1 Answer

KranthiPakala-MSFT answered · KranthiPakala-MSFT commented

Hello @saudsiraj-2030,

Thanks for the question and using MS Q&A platform.


Looking at the error message, there can be several causes for this error. One common cause is that the currently allocated request units (RU) capacity is not sufficient to complete the request. This can be solved by increasing the request units of that collection or database. In other cases, the error can be worked around by splitting a large request into smaller ones.

Could you please try either of the following two solutions and see if that helps resolve the issue:

  1. Increase the container's RU allocation to a greater value in Azure Cosmos DB. This will improve copy activity performance, but it will incur more cost in Azure Cosmos DB.

  2. Decrease writeBatchSize to a smaller value, such as 1000, and decrease parallelCopies to a smaller value, such as 1. This will reduce copy run performance, but it won't incur more cost in Azure Cosmos DB.
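
For the second option, these settings go in the copy activity definition of your pipeline JSON. Below is a minimal sketch of where they might sit, assuming the built-in Azure Cosmos DB API for MongoDB connector; the activity and dataset names here are placeholders, not from your pipeline:

```json
{
  "name": "CopyMongoCollection",
  "type": "Copy",
  "inputs": [ { "referenceName": "SourceMongoDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "SinkMongoDataset", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": {
      "type": "CosmosDbMongoDbApiSource"
    },
    "sink": {
      "type": "CosmosDbMongoDbApiSink",
      "writeBehavior": "insert",
      "writeBatchSize": 1000
    },
    "parallelCopies": 1
  }
}
```

Here writeBatchSize caps how many documents each bulk insert sends per batch, and parallelCopies limits the number of concurrent writers, so each request consumes fewer RUs at a time on the sink collection.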

Reference: Troubleshoot the Azure Cosmos DB connector in Azure Data Factory and Azure Synapse

Hope this info helps. Do let us know how it goes.

Thank you



Hello @saudsiraj-2030,

Following up to see if the above suggestion was helpful. If you have any further queries, do let us know.

saudsiraj-2030 ·

Thanks for replying. Yes, it was helpful.


Hello @saudsiraj-2030, thanks for confirming, and glad to know the above information was helpful. If it answers your query, please click "Accept Answer" and/or Up-Vote, as it might benefit other community members reading this thread.

Thank you & Have a good day! :)
