The Azure Cosmos DB for MongoDB migration extension helps you migrate your MongoDB workloads to Azure Cosmos DB. This article answers commonly asked questions about the migration extension.
How do I run my assessment if the "Run Validation" step is failing?
Refer to the error displayed in the extension to see why the validation is failing. Typically, the issue is an inability to connect to the MongoDB endpoint. The issue could also be that the connected user lacks sufficient privileges on the server to run the assessment.
To run an assessment, the user connected to MongoDB must have the `readAnyDatabase` and `clusterMonitor` roles assigned on the source instance. Use `grantRolesToUser` to configure the appropriate roles for the currently connected user.
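For example, a minimal sketch run in `mongosh` against the source's `admin` database might look like the following; the user name is a placeholder:

```javascript
// Switch to the admin database on the source instance.
use admin

// "migrationUser" is hypothetical; substitute the user the extension connects as.
db.grantRolesToUser("migrationUser", [
  { role: "readAnyDatabase", db: "admin" },
  { role: "clusterMonitor", db: "admin" }
])
```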
How do I see collection names and database names for assessments in the "Feature Compatibility" category?
The assessment uses the `serverStatus` command to perform the feature compatibility assessment. Since this command doesn't provide the details of database or collection names, the extension is unable to report the resource names.
For more granular assessment details, rerun the assessment providing the folder path containing the MongoDB profiler logs in the Log Folder Path field.
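If the source isn't already capturing query activity, one way to generate richer logs for the assessment (a sketch, not official extension guidance) is to lower the slow-operation threshold so `mongod` temporarily logs every operation:

```javascript
// Profiling level 0 with slowms 0 writes every operation to the mongod log.
// Restore the original threshold (for example, 100 ms) after collecting logs.
db.setProfilingLevel(0, { slowms: 0 })
```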
How do I collect log messages?
You can locate the log file at the following path: `/var/log/mongodb/mongodb.log`. If the log file isn't found, check the location in the MongoDB config file.
For more information, see MongoDB log messages.
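If you can connect to the instance with `mongosh`, one quick way to confirm the effective log location is to read the server's runtime configuration; the output shown is illustrative:

```javascript
// getCmdLineOpts returns the configuration mongod is actually running with,
// including the systemLog destination and file path.
db.adminCommand({ getCmdLineOpts: 1 }).parsed.systemLog
// Example: { destination: 'file', path: '/var/log/mongodb/mongodb.log', ... }
```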
Once the migration starts, why do I see an estimate instead of the exact count of documents migrated?
To reduce resource utilization on the source during migration, the extension estimates the number of documents in each collection to be moved from the source to the target, instead of retrieving the exact count.
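This trade-off mirrors the two counting approaches MongoDB itself offers; the collection name below is hypothetical:

```javascript
// estimatedDocumentCount() reads collection metadata, so it's cheap but approximate.
db.orders.estimatedDocumentCount()

// countDocuments() is exact but scans the collection, adding load on the source.
db.orders.countDocuments({})
```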
Why are some collections missing or disabled on the collection mapping step?
vCore-based Azure Cosmos DB for MongoDB doesn't support time series or clustered collections. As a result, these collection types are either missing or disabled in the collection mapping step.
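To find affected collections on the source ahead of time, a per-database `mongosh` sketch like the following can help; detecting clustered collections through the `clusteredIndex` option is an assumption about how they appear in `listCollections` output:

```javascript
// List collections the extension is expected to skip or disable:
// time series collections and collections created with a clustered index.
db.getCollectionInfos()
  .filter(c => c.type === "timeseries" || (c.options && c.options.clusteredIndex))
  .map(c => c.name)
```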
Why are views missing or disabled on the collection mapping step when vCore-based Azure Cosmos DB for MongoDB supports views?
vCore-based Azure Cosmos DB for MongoDB supports the creation of new views. However, the migration extension doesn't provide support for migrating existing views.
After the migration is finished, you can always recreate the views.
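For example, you can capture each view's definition from the source with `db.getCollectionInfos({ type: "view" })` and replay it on the target; the view name, source collection, and pipeline below are hypothetical:

```javascript
// Recreate a view on the target using the definition saved from the source.
db.createView("activeUsers", "users", [
  { $match: { status: "active" } }
])
```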
How much storage should I expect to use in the target account after migration?
vCore-based Azure Cosmos DB for MongoDB doesn't compress data on disk. As a rough estimate, double the storage size consumed by the collections on the source MongoDB instance to approximate the storage needed in the target vCore-based Azure Cosmos DB for MongoDB account.
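As a sizing sketch (the database name is hypothetical), you could read the source database's reported storage size and double it:

```javascript
// storageSize reports the on-disk bytes the database consumes on the source;
// doubling it approximates the uncompressed footprint on the target.
const bytes = db.getSiblingDB("mydb").stats().storageSize;
print(`Estimated target storage: ~${(2 * bytes / 1024 ** 3).toFixed(1)} GiB`);
```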
Which collections and databases are skipped when migrating from MongoDB to vCore-based Azure Cosmos DB for MongoDB?
The following databases and collections are considered internal for MongoDB:
Resource | Names
---|---
Databases | `admin`, `local`, `config`
Collections | Any collection with the `system.` prefix
Since the internal databases and collections aren't required in vCore-based Azure Cosmos DB for MongoDB, the extension doesn't enable the migration of these databases.
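To preview the databases the extension can migrate, a quick `mongosh` sketch is to list all databases and filter out the internal ones:

```javascript
// listDatabases enumerates every database; excluding MongoDB's internal
// ones leaves the set eligible for migration.
db.adminCommand({ listDatabases: 1 }).databases
  .map(d => d.name)
  .filter(name => !["admin", "local", "config"].includes(name))
```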
Is it possible to migrate databases and collections whose names start with numbers?
This is a known limitation. The migration extension doesn't support databases and collections whose names start with numbers.
If I select multiple collections to migrate, do they get migrated in parallel?
Each migration task in Azure Database Migration Service provides two trains for migration, and each train migrates one collection at any point in time. Hence, two collections are typically migrated in parallel. Once the migration for a collection completes, the next collection is automatically picked up. If you have many collections to migrate, create multiple migration tasks, each scoped to a limited number of collections, to make the migrations more efficient.
How many databases and collections can I migrate in a single migration?
There are no limits on the number of databases and collections that can be included in a single migration. However, the selected collections are split into batches of 50 when the migration tasks are created on Azure Database Migration Service. For large numbers of collections, you'll see multiple migration tasks in the migrations list.
How should I plan the order and quantity of collections to migrate?
When you select multiple collections to migrate, the order in which the collections are migrated isn't configurable. If you wish to control the order of migration, migrate the collections in smaller batches based on your desired sequence. For best performance, avoid combining larger collections with smaller collections in a batch.
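One way to plan batches (a per-database sketch) is to rank collections by on-disk size so that large and small collections land in separate batches:

```javascript
// Rank collections by storage size, largest first, to group similarly
// sized collections into the same migration batch.
db.getCollectionNames()
  .map(name => ({ name, bytes: db.getCollection(name).stats().storageSize }))
  .sort((a, b) => b.bytes - a.bytes)
```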
How should I configure the firewall on my vCore-based Azure Cosmos DB for MongoDB target account to avoid connectivity issues?
Add firewall exceptions to the vCore-based Azure Cosmos DB for MongoDB target account to accept connections from global Azure datacenters. For more information, see Azure Cosmos DB firewall configuration.
How should I configure my source server firewalls to avoid connectivity issues?
Configure the source MongoDB instance to allow connections from global Azure datacenters. For more information, see global Azure IP address ranges.
Warning
The extension doesn't support Private Endpoint enabled source or target MongoDB instances. The extension doesn't support the Azure Database Migration Service's self-hosted integration runtime.
Do the migration jobs run locally on my machine?
The database, collections, and indexes are created directly using commands from the local Azure Data Studio client. This functionality requires connectivity between the client running Azure Data Studio and both the source and target environments.
The data migration tasks are run on Azure Database Migration Service. The migration service is an Azure service instance that orchestrates and performs data movement activities. Once the data migration tasks are created, you aren't required to be connected to the source and target environments.
How many migrations can I run simultaneously?
There are no limits on the number of migrations you can create simultaneously.
Can I rename databases and collections during migration?
The extension doesn't support database and collection renaming during migration.
Can I migrate the collections via multiple migration iterations?
It's possible to create multiple migration jobs, each having a limited number of collections. This approach is a best practice to optimize the speed of migrations.
What is included in an assessment report?
The initial part of the report has the key details of the assessment run, including a summary of the source MongoDB environment, such as the source MongoDB version, license type, and instance type. This part also contains a list of the databases and collections assessed, with their respective assessment summaries and migration readiness.
The findings are grouped into Critical, Warning, and Informational categories. These categories help you prioritize the findings by importance.
The assessment checks include:
Category | Description
---|---
Collection Options | Findings related to unsupported collection settings. Examples include time series and collations.
Features | Findings related to unsupported database commands, query syntax, or operators, including aggregation pipeline queries. The extra details column shows how often the particular feature is used on the source endpoint.
Limits and Quotas | Findings related to vCore-based Azure Cosmos DB for MongoDB specific quotas and limits.
Indexes | Findings related to unsupported MongoDB index types or properties.
Shard Keys | Findings related to unsupported shard key configurations.
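For example, to get ahead of the Indexes check, you can inspect index definitions on a source collection (the collection name is hypothetical) and compare them against the index types supported by vCore-based Azure Cosmos DB for MongoDB:

```javascript
// getIndexes() lists each index's key pattern and properties, where
// unsupported types (for example, text or wildcard indexes) show up.
db.orders.getIndexes()
```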
What type of logs does the extension create?
The extension stores errors, warnings, and other diagnostic logs in the default log directory:
- Windows: `C:\Users\<username>\.dmamongo\logs\`
- Linux: `~/.dmamongo/logs`
- macOS: `/Users/<username>/.dmamongo/logs`
Note
A separate log file is created for each day. By default, the extension stores the last seven log files.