Hi Jason. I'm going to paste the answers I sent you on Twitter here and also answer your follow-up questions. Better to reply here, as this may help others.
The simplest way to do this is with change feed. Live Data Migrator is certainly an option, but in my opinion it may be a bit of overkill here, as it is intended for live migration of production workloads.
First, you will need to provision your Cosmos resources in your staging environment, either with scripts using PowerShell or the Azure CLI, or with an ARM template.
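If you'd rather not maintain separate scripts, the .NET SDK can also do the provisioning. Here's a minimal sketch of that alternative; the connection string, database/container names, partition key path, and throughput are all placeholders, so substitute your own values:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class ProvisionStaging
{
    static async Task Main()
    {
        // Placeholder connection string for the staging account.
        using CosmosClient client = new CosmosClient("<stage-connection-string>");

        // Create the database and container if they don't already exist.
        Database database = await client.CreateDatabaseIfNotExistsAsync("MyDatabase");
        await database.CreateContainerIfNotExistsAsync(
            new ContainerProperties(id: "MyContainer", partitionKeyPath: "/pk"),
            throughput: 4000); // size to roughly match your production RU/s
    }
}
```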
Once provisioned, write a change feed routine in a console application using the Change Feed Processor, with the start time set far enough back that it copies everything from your production container to your staging container. Be sure the Cosmos client connects in bulk mode to better saturate throughput on writes. Once it has caught up, you can stop the console app.
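Here's a minimal sketch of that console app. The connection strings, database/container names, and the processor/instance names are placeholders; it assumes a lease container named "leases" (partitioned on /id) already exists in staging:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class ChangeFeedCopy
{
    static async Task Main()
    {
        // Read from production with a plain client...
        using CosmosClient prodClient = new CosmosClient("<prod-connection-string>");
        // ...and write to staging in bulk mode to better saturate RU/s on writes.
        using CosmosClient stageClient = new CosmosClient(
            "<stage-connection-string>",
            new CosmosClientOptions { AllowBulkExecution = true });

        Container source = prodClient.GetContainer("MyDatabase", "MyContainer");
        Container target = stageClient.GetContainer("MyDatabase", "MyContainer");
        Container leases = stageClient.GetContainer("MyDatabase", "leases");

        ChangeFeedProcessor processor = source
            .GetChangeFeedProcessorBuilder<dynamic>(
                processorName: "prodToStageCopy",
                onChangesDelegate: async (IReadOnlyCollection<dynamic> changes, CancellationToken ct) =>
                {
                    // Upsert so updates and re-runs overwrite instead of conflicting.
                    List<Task> writes = new List<Task>();
                    foreach (object item in changes)
                        writes.Add(target.UpsertItemAsync(item, cancellationToken: ct));
                    await Task.WhenAll(writes);
                })
            .WithInstanceName("consoleHost")
            .WithLeaseContainer(leases)
            // A start time far in the past reads the container from the beginning.
            .WithStartTime(DateTime.MinValue.ToUniversalTime())
            .Build();

        await processor.StartAsync();
        Console.WriteLine("Copying... press Enter to stop once caught up.");
        Console.ReadLine();
        await processor.StopAsync();
    }
}
```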
Depending on what you're doing with the data in your staging deployment, you can either turn the change feed on again to copy any new or updated data from prod into stage, or, if you're destroying data in staging, drop and recreate the container with your scripts and rehydrate it with change feed again. You can keep the account and database resources, though, so you will not need to modify the connection strings in your test apps.
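For the drop-and-recreate path, here's a sketch of the reset, again with the same placeholder names. One caveat worth calling out: the Change Feed Processor checkpoints its progress in the lease container, so if you want the next run to re-read everything from the start time rather than resume where it left off, reset the leases too:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class ResetStaging
{
    static async Task Main()
    {
        using CosmosClient stageClient = new CosmosClient("<stage-connection-string>");
        Database database = stageClient.GetDatabase("MyDatabase");

        // Drop and recreate only the data container; the account and database
        // stay put, so test-app connection strings don't change.
        await database.GetContainer("MyContainer").DeleteContainerAsync();
        await database.CreateContainerIfNotExistsAsync(
            new ContainerProperties(id: "MyContainer", partitionKeyPath: "/pk"),
            throughput: 4000);

        // Reset the leases as well, or the next run resumes from its old checkpoint.
        await database.GetContainer("leases").DeleteContainerAsync();
        await database.CreateContainerIfNotExistsAsync(
            new ContainerProperties(id: "leases", partitionKeyPath: "/id"));
    }
}
```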
Other follow-up questions:
Q: What's the difference between a container and a collection?
A: They are synonymous to us.
Q: Can I use continuous backup to do this?
A: Yes, that is certainly a possibility and easier than what I've described. However, continuous backup has a fair number of limitations, so you may want to explore those before deciding on it as an option. Also, because a restore lands in a new account, you will have to modify the connection information for any clients used for testing in your staging environment. This may or may not be an issue for you.