FAQ about using Azure Database Migration Service

This article lists commonly asked questions about using Azure Database Migration Service, together with related answers.

Overview

What is Azure Database Migration Service?

Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure Data platforms with minimal downtime. The service is currently in General Availability, with ongoing development efforts focused on:

  • Reliability and performance.
  • Iterative addition of source-target pairs.
  • Continued investment in friction-free migrations.

What source/target pairs does Azure Database Migration Service currently support?

The service currently supports various source/target pairs, or migration scenarios. For a complete listing of the status of each available migration scenario, see the article Status of migration scenarios supported by the Azure Database Migration Service.

What versions of SQL Server does Azure Database Migration Service support as a source?

When migrating from SQL Server, supported sources for Azure Database Migration Service are SQL Server 2008 and later versions. If you use Azure Data Studio with SQL Migration extension, supported sources are SQL Server 2008 through SQL Server 2022.
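
If you're unsure which version a source instance is running, one quick way to confirm it before migration (a minimal sketch using standard SQL Server properties) is to run the following T-SQL on the source:

SELECT SERVERPROPERTY('ProductVersion') AS product_version,  -- for example, 15.0.x for SQL Server 2019
       SERVERPROPERTY('ProductLevel') AS product_level,
       SERVERPROPERTY('Edition') AS edition;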

When using Azure Database Migration Service, what's the difference between an offline and an online migration?

You can use Azure Database Migration Service to perform offline and online migrations. With an offline migration, application downtime starts when the migration starts. With an online migration, downtime is limited to the time to cut over at the end of migration. We suggest that you test an offline migration to determine whether the downtime is acceptable; if not, do an online migration.

Note

Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier. For more information, see the Azure Database Migration Service pricing page.

How does Azure Database Migration Service compare to other Microsoft database migration tools such as the Database Migration Assistant (DMA) or SQL Server Migration Assistant (SSMA)?

Azure Database Migration Service is the preferred method for database migration to Microsoft Azure at scale. For more information on how Azure Database Migration Service compares to other Microsoft database migration tools, and for recommendations on using the service for various scenarios, see Differentiating Microsoft's Database Migration Tools and Services.

How does Azure Database Migration Service compare to the Azure Migrate offering?

Azure Migrate assists with migration of on-premises virtual machines to Azure IaaS. The service assesses migration suitability and performance-based sizing, and it provides cost estimates for running your on-premises virtual machines in Azure. Azure Migrate is useful for lift-and-shift migrations of on-premises VM-based workloads to Azure IaaS VMs. However, unlike Azure Database Migration Service, Azure Migrate isn't a specialized database migration service offering for Azure PaaS relational database platforms such as Azure SQL Database or Azure SQL Managed Instance.

Does Database Migration Service store customer data?

No. Database Migration Service doesn't store customer data.

How can I ensure that DMS has migrated all the data from the source database to Azure SQL Targets?

For Azure SQL VM and Azure SQL MI targets, DMS performs a physical migration, that is, a backup and restore. As described below, the migration mode you choose determines how data is kept consistent between source and target.

  • Offline migration: During offline migration to Azure SQL VM and Azure SQL MI targets, application downtime starts when the migration starts. DMS restores all the backup files to the target, provided the latest backup files from the source have been transferred to the SMB network share or to the Azure blob container (depending on the migration configuration). If the backup is taken with the CHECKSUM option, the DMS restore operation automatically performs the validation; in the absence of a checksum, the integrity of the backup is explicitly checked before restoring. This ensures that the restored file is identical to the backup file and therefore contains the same data. You can compare all the backup files from the source, including the name of the last backup file, with the backup files applied/restored on the target shown on the DMS migration monitoring page, and validate their respective checksums.

  • Online migration: During online migration to Azure SQL VM and Azure SQL MI targets, downtime starts once you initiate the migration cutover and stop the application. DMS restores all the backup files to the target, provided the latest backup files from the source have been transferred to the SMB network share or to the Azure blob container (depending on the migration configuration). After you select the cutover button, DMS shows the count of pending backup files (if any) that are present on the SMB network share or Azure blob container but not yet applied/restored on the target. If the backup is taken with the CHECKSUM option, the DMS restore operation automatically performs the validation; in the absence of a checksum, the integrity of the backup is explicitly checked before restoring. This ensures that the restored file is identical to the backup file and therefore contains the same data. You can compare all the backup files from the source, including the name of the last backup file, with the backup files applied/restored on the target shown on the DMS migration monitoring page, and validate their respective checksums. A T-SQL sketch for taking checksummed backups appears after this list.
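
As a minimal sketch (the database name and file paths are placeholders), taking source backups with the CHECKSUM option lets the DMS restore validate them automatically, and RESTORE VERIFYONLY can confirm a backup file's integrity before it's transferred:

BACKUP DATABASE [YourDatabase]
TO DISK = N'\\fileshare\backups\YourDatabase.bak'
WITH CHECKSUM, COMPRESSION;  -- CHECKSUM enables automatic validation during restore

-- Optionally verify the backup file's integrity without restoring it
RESTORE VERIFYONLY
FROM DISK = N'\\fileshare\backups\YourDatabase.bak'
WITH CHECKSUM;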

For Azure SQL DB targets, DMS performs a logical migration: it copies the data from the source SQL database's tables and writes it to the target Azure SQL DB's tables. Because DMS only supports offline migration to Azure SQL DB, application downtime starts when the migration starts. You can monitor and validate the number of rows read (from the source database table) and copied (written to the target Azure SQL DB table) from the migration monitoring page. As additional confirmation, you can run the following T-SQL to compute a checksum in both the source and destination databases and validate that the source and migrated data are identical.

  "SELECT CHECKSUM_AGG(CHECKSUM(*)) FROM <table_name>"
  

Note: This comparison is only valid provided no applications are writing to the source or target database. You can also use tools like the Database Comparison tool for data comparison.
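
For a quicker, row-count-only comparison across many tables, a minimal sketch (run in both the source and target databases; it reports approximate counts from partition statistics) is:

-- Approximate row counts per user table, from partition statistics
SELECT o.name AS table_name,
       SUM(p.row_count) AS row_count
FROM sys.dm_db_partition_stats AS p
JOIN sys.objects AS o
    ON p.object_id = o.object_id
WHERE o.type = 'U'            -- user tables only
  AND p.index_id IN (0, 1)    -- heap or clustered index
GROUP BY o.name
ORDER BY o.name;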

Security

What services are created and consumed when an instance of DMS (Classic) is created and run?

The following list contains the Azure resources that may be created behind the scenes to perform a data migration. The services used may vary by migration scenario.

  • Azure Monitor
  • Azure VM
  • Azure Storage
  • Azure Service Bus
  • Azure Data Factory

How are metadata and client data extracted from the source and written to the target?

Internally, DMS maintains a metadata store that includes information about network locations, credentials, backup files, and completed tasks. Credentials and selected fields, like account keys, are encrypted. Data that may be included in telemetry, like table names, is hashed. Usernames may appear in plain text in service logs, but passwords never will. Telemetry is siloed by region, governed by retention policies, and available only to authorized personnel within Microsoft for valid troubleshooting purposes. Azure resource names, like server and database names, follow the rules for Azure resources.

DMS (Classic) utilizes Azure Service Bus topics to facilitate communication between compute layers. The Azure Service Bus topics are unique to each DMS instance and all personal data is encrypted.

Azure SQL Managed Instance and SQL Server on Azure Virtual Machines

Schema and data are migrated using backup and restore. Customers have the choice of restoring from a network share or directly from a storage container. A file containing Windows performance data may be consumed to provide optional (but highly recommended) workload sizing recommendations.

Azure SQL Database

Migrations to Azure SQL Database are performed in two steps: a schema migration followed by the actual data migration. DMS (Classic) uses SQL Management Objects (SMO) for the schema migration and SqlBulkCopy for the data migration. DMS doesn't support schema migration; it migrates data using Azure Data Factory. Optionally, but highly recommended, a file containing Windows performance data may be consumed to provide workload sizing recommendations.

Azure Database for PostgreSQL

In this scenario, the end-user extracts the metadata, in this case the schema, using the pg_dump and pg_restore command line utilities. When configuring change data capture for PostgreSQL, DMS internally uses pg_dump and pg_restore to perform the initial seeding for CDC. The data is stored in an encrypted temporary storage that is only accessible to your DMS instance. A file containing Windows performance data may be consumed to provide optional (but highly recommended) workload sizing recommendations.

Azure Database for MySQL

In this scenario, schema extraction and migration are done by DMS (Classic) using the mysql-net/MySqlConnector. Where possible, MySQL binlog replication is used to replicate both data and schema changes. Custom code is used to synchronize changes where binlog replication can't be used.

MongoDB to Azure Cosmos DB

DMS extracts data from a MongoDB instance and inserts it into Cosmos DB. It also offers the option to extract the data from a BSON or JSON dump.

For BSON dumps, DMS uses data in bsondump format placed in the same folder within a blob container. DMS only looks for metadata files using the format collection.metadata.json.

For JSON dumps, DMS will read the files in the blob container folders named after the containing databases. Within each database folder, DMS only uses data files placed in the data subfolder. DMS only looks at files placed in the metadata subfolder and named using the format collection.json for metadata.

Oracle to Azure SQL Database

In this scenario, the AWR report or a Windows perfmon file is consumed to provide optional (but highly recommended) workload sizing recommendations. The user performing the migration uses the Database Schema Conversion Toolkit to perform a schema migration to prepare the target database.

Oracle to Azure Database for PostgreSQL

Much like Oracle to Azure SQL Database, in this scenario, the AWR report or a Windows perfmon file is consumed to provide optional (but highly recommended) workload sizing recommendations. The user performing the migration uses the ora2pg library to extract the schema and migrate the data manually.

Are there any public endpoints used?

DMS (Classic) relies on the customer network configuration. If the migration source uses private endpoints, we use private endpoints, which is the preferred configuration. We use public endpoints if they are the only option.

DMS uses ADF heavily behind the scenes for scheduling and coordination of data movement. Additionally, the Self-Hosted Integration Runtime is no different from the one you probably already use for your own ADF pipelines. For more information about firewall and proxy server issues, see Create and configure a self-hosted integration runtime.

Is all data in transit and at rest encrypted?

All customer data is encrypted at rest. Some metadata, including but not limited to logical server names and database names, as well as migration status and migration progress, appears in service logs, which aren't encrypted.

All data in transit is protected with TLS 1.2 encryption by default. Legacy clients requiring older versions of TLS need those versions enabled in the DMS (Classic) portal page. For DMS, the machine on which the Self-Hosted Integration Runtime is installed can be configured to allow the required TLS settings to accommodate legacy clients. For more information on TLS configuration for SQL Server, see KB3135244 - TLS 1.2 support for Microsoft SQL Server.

Do all Azure services that underpin DMS and DMS (Classic) use private endpoints?

Wherever possible, private endpoints are used. If private endpoints aren't an option, public endpoints are used for communication between service layers. Regardless of endpoint type, all resources are dedicated/scoped to the particular instance of DMS and secured with unique credentials.

Do all Azure services that underpin DMS and DMS (Classic) make use of CMK for data at rest?

We don't support Customer Managed Keys for encryption of data within our data plane or control plane. However, all customer data is encrypted at rest using keys managed by the service. Some metadata, including but not limited to logical server names and database names, as well as migration status and progress, appears in service logs in unencrypted form.

What type of encryption is used for data in transit?

All data in transit is encrypted with TLS 1.2 encryption by default. The DMS (Classic) portal page allows older versions of TLS to be used for legacy clients. For DMS, the machine on which the Self-Hosted Integration Runtime is installed can be configured to allow management of TLS settings to accommodate legacy clients. For more information on TLS configuration for SQL Server, see KB3135244 - TLS 1.2 support for Microsoft SQL Server.

Is there any data that isn't protected by CMK, and what type of data? For instance, metadata, logs, and so on.

We don't expose the capability to encrypt data on our control or data plane with Customer Managed Keys. All customer data is deleted the moment the DMS instance is deleted, except service logs, which are kept for only 30 days.

How does DMS support Customer Managed Keys (CMK)?

TDE

DMS supports the migration of Customer Managed Keys (CMK) to Azure SQL for Transparent Database Encryption (TDE). For step by step instructions to migrate your TDE keys, see Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio.
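
Before and after migrating TDE keys, one quick way to confirm which databases are TDE-enabled and how their database encryption keys are protected (a minimal sketch; it requires permission to view server state) is:

-- List each database's TDE state and the encryptor protecting its key
SELECT d.name,
       d.is_encrypted,
       dek.encryptor_type           -- for example, CERTIFICATE or ASYMMETRIC KEY
FROM sys.databases AS d
LEFT JOIN sys.dm_database_encryption_keys AS dek
    ON d.database_id = dek.database_id;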

Cell encryption

Cell-level encryption is handled at the schema level. The schema migration tools migrate all schema objects, including the functions and stored procedures necessary to implement cell-level encryption.
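
To illustrate the kind of schema objects involved, here is a hypothetical sketch (the key, certificate, table, and column names are examples, not anything DMS-specific) of cell-level encryption built from a certificate-protected symmetric key, all of which travels with the schema:

-- A database master key must exist to protect the certificate
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE SsnCert WITH SUBJECT = 'Protects the SSN symmetric key';
CREATE SYMMETRIC KEY SsnKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE SsnCert;
GO

OPEN SYMMETRIC KEY SsnKey DECRYPTION BY CERTIFICATE SsnCert;
-- Encrypt on write...
UPDATE dbo.Employees
SET ssn_encrypted = ENCRYPTBYKEY(KEY_GUID('SsnKey'), ssn_plain);
-- ...and decrypt on read
SELECT CONVERT(varchar(11), DECRYPTBYKEY(ssn_encrypted)) AS ssn_plain
FROM dbo.Employees;
CLOSE SYMMETRIC KEY SsnKey;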

Always Encrypted

DMS doesn't currently support migrating Always Encrypted data in scenarios where individual data rows are migrated between source and target. Columns encrypted with Always Encrypted are migrated as expected in scenarios that use backup/restore, like moving to Azure SQL VM or Azure SQL Managed Instance from an existing SQL Server instance.
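
To inventory which columns use Always Encrypted before choosing a migration path, a minimal sketch querying the column metadata (SQL Server 2016 and later) is:

-- Columns protected by Always Encrypted, with their encryption type
SELECT OBJECT_SCHEMA_NAME(c.object_id) AS schema_name,
       OBJECT_NAME(c.object_id) AS table_name,
       c.name AS column_name,
       c.encryption_type_desc       -- RANDOMIZED or DETERMINISTIC
FROM sys.columns AS c
WHERE c.column_encryption_key_id IS NOT NULL;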

Does DMS ensure that access to data is controlled with Location Aware Access Control?

We don't implement any location aware access control beyond what is already available in Azure. All data associated with a DMS instance resides in the same region as the DMS resource.

How does DMS ensure that data in one environment can't be moved to another using DMS?

Our services are used in various environments with different internal controls and business processes. DMS moves data from and to anywhere that the account it uses has access to. It's the user's responsibility to understand the permissions and internal controls of the environment they are working in. It is especially important to ensure that the account DMS uses to connect to the source has access to all of the data that is intended to be migrated from the source.

How is VNET injection in DMS (Classic) used? Does it give Microsoft access to my network?

VNET injection is the action of adding an Azure resource that resides within the Microsoft tenant to a subnet in a VNet under the customer tenant. This approach was taken with DMS to allow us to manage the compute on behalf of the customer while still maintaining access to customer resources. Since the network is in the customer's subscription, Microsoft can't manage the VM beyond issuing Start, Stop, Delete, or Deploy commands. All other management actions that need access to the VM require a customer-initiated support request and approval.

Setup

What are the prerequisites for using Azure Database Migration Service?

There are several prerequisites required to ensure that Azure Database Migration Service runs smoothly when performing database migrations. Some of the prerequisites apply across all scenarios (source-target pairs) supported by the service, while other prerequisites are unique to a specific scenario.

Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to:

  • Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either ExpressRoute or VPN.
  • Ensure that your virtual network Network Security Group rules don't block port 443 for the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article Filter network traffic with network security groups.
  • When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.

For a list of all the prerequisites required to complete specific migration scenarios using Azure Database Migration Service, see the related tutorials in the Azure Database Migration Service documentation.

How do I find the IP address for Azure Database Migration Service so that I can create an allowlist for the firewall rules used to access my source database for migration?

You may need to add firewall rules allowing Azure Database Migration Service access to your source database for migration. The IP address for the service is dynamic, but if you're using ExpressRoute, this address is privately assigned by your corporate network. The easiest way to identify the appropriate IP address is to look in the same resource group as your provisioned Azure Database Migration Service resource to find the associated Network Interface. Usually the name of the Network Interface resource begins with the NIC prefix, followed by a unique character and number sequence, for example, `NIC-jj6tnztnmarpsskr82rbndyp`. By selecting this network interface resource, you can see the IP address that needs to be included in the allowlist on the resource overview page in the Azure portal.

You may also need to include in the allowlist the port that SQL Server is listening on. By default, it's port 1433, but the source SQL Server may be configured to listen on other ports as well; in that case, include those ports in the allowlist too. You can determine the port that SQL Server is listening on by using a Dynamic Management View query:

SELECT DISTINCT
    local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;

You can also determine the port that SQL Server is listening on by querying the SQL Server error log:

USE master;
GO
xp_readerrorlog 0, 1, N'Server is listening on';
GO

How do I set up a Microsoft Azure Virtual Network?

While there are multiple Microsoft tutorials that can walk you through the process of setting up a virtual network, the official documentation appears in the article Azure Virtual Network.

Usage

What is a summary of the steps required to use Azure Database Migration Service to perform a database migration?

During a typical, simple database migration, you:

  1. Create the target database(s).
  2. Assess your source database(s).
    • For homogenous migrations, assess your existing database(s) by using DMA.
    • For heterogeneous migrations (from competing sources), assess your existing database(s) with SSMA. You also use SSMA to convert database objects and migrate the schema to your target platform.
  3. Create an instance of Azure Database Migration Service.
  4. Create a migration project specifying the source database(s), target database(s), and the tables to migrate.
  5. Start the full load.
  6. Run the subsequent validation.
  7. Perform a manual switchover of your production environment to the new cloud-based database.

Troubleshooting and optimization

I'm setting up a migration project in DMS, and I'm having difficulty connecting to my source database. What should I do?

If you have trouble connecting to your source database system while working on a migration, create a virtual machine in the same subnet of the virtual network with which you set up your DMS instance. In the virtual machine, you should be able to run a connection test, such as using a UDL file to test a connection to SQL Server or downloading Robo 3T to test MongoDB connections. If the connection test succeeds, you shouldn't have an issue with connecting to your source database. If the connection test doesn't succeed, contact your network administrator.

Why is my Azure Database Migration Service unavailable or stopped?

If the user explicitly stops Azure Database Migration Service (DMS), or if the service is inactive for 24 hours, the service enters a stopped or auto-paused state. In either case, the service is unavailable and shows a stopped status. To resume active migrations, restart the service.

Are there any recommendations for optimizing the performance of Azure Database Migration Service?

You can do a few things to speed up your database migration using the service:

For DMS (Classic):

  • Use the multi-vCPU General Purpose pricing tier when you create your service instance to allow the service to take advantage of multiple vCPUs for parallelization and faster data transfer.
  • Temporarily scale up your Azure SQL Database target instance to the Premium tier SKU during the data migration operation to minimize Azure SQL Database throttling that may affect data transfer activities when using lower-level SKUs.

For DMS:

  • If you're copying backups from a local file share to Azure blob storage, or performing a migration to a target Azure SQL DB, DMS uses the Self-Hosted Integration Runtime (SHIR) node as its compute, so be sure to monitor the resource usage of that SHIR node.
  • Temporarily scale up your Azure SQL Database target instance to the Premium tier SKU during the data migration operation to minimize Azure SQL Database disk throttling that may affect data transfer activities when using lower-level SKUs.
  • For more detailed information, refer to the blog post.