Hi @Anonymous,
Thank you for posting your query on the Microsoft Q&A platform.
I understand you are using an ADF pipeline here to load data from Cosmos DB to Azure DWH.
If you are facing issues with a single bulk load, then yes, processing the data in batches with an incremental, iterative approach is a good idea. You could also try increasing the batch size from 3,000 records and see whether that makes processing quicker. A minimal sketch of this pattern is shown below.
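For illustration only, here is a minimal Python sketch of reading Cosmos DB in pages and loading each page separately, which is roughly what the batched/incremental approach does. It assumes the azure-cosmos SDK; the account URL, key, database/container names, the `_ts` watermark filter, and the `load_batch_into_dwh` helper are all placeholders for your own setup, not something from your pipeline.

```python
from azure.cosmos import CosmosClient

# Assumed connection details -- replace with your own values.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# Incremental filter on the Cosmos DB system timestamp (_ts), using a stored watermark.
last_watermark = 0  # e.g. read from a control table before each run
query = "SELECT * FROM c WHERE c._ts > @watermark"

pages = container.query_items(
    query=query,
    parameters=[{"name": "@watermark", "value": last_watermark}],
    enable_cross_partition_query=True,
    max_item_count=3000,  # batch size; try raising this and compare throughput
).by_page()

for page in pages:
    batch = list(page)
    if batch:
        load_batch_into_dwh(batch)  # hypothetical helper that writes one batch to the DWH
```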
The error you are getting is a little confusing to me: it does not say the size is too large, it reports a null pointer error. So I am not sure the data size is really the concern here; it may be something else, such as null or unexpected values in particular rows.
Could you please try enabling the "Skip incompatible rows" (fault tolerance) setting in the copy activity and see how it behaves? A configuration sketch follows.
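As a rough sketch, the fault-tolerance part of the copy activity definition looks like the fragment below (shown here as a Python dict for readability). The linked service name and redirect path are assumptions for illustration; logging the skipped rows is optional but makes it easier to see which records are causing the failure.

```python
# Sketch of the fault-tolerance fragment of a Copy activity's typeProperties.
copy_activity_fault_tolerance = {
    "typeProperties": {
        "enableSkipIncompatibleRow": True,  # skip rows that fail conversion or sink constraints
        "redirectIncompatibleRowSettings": {  # optional: write skipped rows to Blob storage for inspection
            "linkedServiceName": {
                "referenceName": "AzureBlobStorageLinkedService",  # assumed linked service name
                "type": "LinkedServiceReference",
            },
            "path": "copyactivity-skipped-rows",  # assumed container/folder for the skipped-row log
        },
    }
}
```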
Hope this helps. Please let me know how it goes.
-------
Please consider hitting the Accept Answer button. Accepted answers help the community as well.