In an ADF pipeline, add a Mapping Data Flow activity. Inside the data flow, add two Cosmos DB source datasets (one for each container) and configure the linked service, database, and container names for each. Then add a Join transformation, connect the two sources to it, choose the appropriate join type, and specify the join key.
You can use a derived column transformation to concatenate the "value" field from container 1 with the fields from container 2. If you need to exclude unwanted fields, use a select transformation.
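To make the join, derived-column, and select steps concrete, here is a minimal Python sketch of the equivalent logic. The field names (`id` as the join key, `value` in both containers) and the `combined_value` output column are assumptions for illustration, not fields from your actual containers:

```python
def transform(container1_docs, container2_docs, join_key="id"):
    """Illustrative equivalent of join -> derived column -> select."""
    # Join: index container 2 documents by the join key (inner-join semantics).
    lookup = {doc[join_key]: doc for doc in container2_docs}
    results = []
    for doc in container1_docs:
        match = lookup.get(doc[join_key])
        if match is None:
            continue  # inner join: drop rows with no match in container 2
        # Derived column: concatenate "value" from container 1 with the
        # matching "value" from container 2.
        combined = dict(doc)
        combined["combined_value"] = f'{doc["value"]}-{match["value"]}'
        # Select: keep only the fields the sink needs.
        results.append({k: combined[k] for k in (join_key, "combined_value")})
    return results
```

For example, joining `[{"id": 1, "value": "a"}]` with `[{"id": 1, "value": "x"}]` yields a single document whose `combined_value` is `"a-x"`, and unmatched documents are dropped.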
Add a Cosmos DB output (sink) dataset and connect the mapping data flow to it.
You can select the Upsert write behavior on the sink so that existing documents are updated and new ones are inserted, which matches the incremental-load logic.
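The upsert semantics can be sketched in a few lines of Python. This models the sink as a plain dict keyed by document id; it is an illustration of the update-or-insert behavior, not the Cosmos DB SDK:

```python
def upsert(sink, docs, key="id"):
    """Update a document if its key already exists in the sink; insert otherwise."""
    for doc in docs:
        # Merge incoming fields over any existing document with the same key.
        sink[doc[key]] = {**sink.get(doc[key], {}), **doc}
    return sink
```

Running this twice with overlapping ids leaves one document per id, with the latest values winning, which is exactly why upsert suits repeated incremental loads.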