Applies to:
Azure SQL Managed Instance
You should periodically test and validate that applications are ready for a recovery workflow. Verifying the application's behavior, the implications of data loss, and the disruption that failover involves is good engineering practice. It's also a requirement of most industry standards as part of business continuity certification.
Performing a disaster recovery drill consists of:
- Simulating a data tier outage
- Recovering
- Validating application integrity post recovery
- Failing back to the original instance (optional)
Depending on how you designed your application for business continuity, the workflow to execute the drill can vary. This article describes the best practices for conducting a disaster recovery drill in the context of Azure SQL Managed Instance.
Geo-restore
To avoid potential data loss when conducting a disaster recovery drill, use a test environment: create a copy of the production environment and use it to verify the application's failover workflow.
Outage simulation
To simulate the outage, you can rename the source database. This name change causes application connectivity failures.
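The following is a minimal sketch of issuing that rename with Python and pyodbc. The server address, database names, login, password, and ODBC driver version are placeholders and assumptions, not values from this article.

```python
# Hedged sketch: rename the test copy's database to simulate a data tier outage.
# Server address, database names, login, and password are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<test-instance>.<dns-zone>.database.windows.net;"
    "DATABASE=master;UID=drill_admin;PWD=<password>",
    autocommit=True,  # ALTER DATABASE can't run inside an open transaction
)
# The rename breaks existing connection strings, so the application starts failing.
conn.cursor().execute("ALTER DATABASE [MyAppDb] MODIFY NAME = [MyAppDb_drill];")
conn.close()
```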
Recovery
- Perform a geo-restore of the database to a different instance as described in disaster recovery guidance (a sketch of this step follows this list).
- Change the application configuration to connect to the recovered instance and follow the Configure a database after recovery guide to complete the recovery.
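A minimal sketch of the geo-restore step, assuming the azure-identity and azure-mgmt-sql Python packages. The Recovery create mode and recoverable database ID follow the SDK's managed database model, but all subscription, resource group, instance, and database names below are placeholders, and the exact operation and field names should be verified against your installed SDK version.

```python
# Hedged sketch: geo-restore a managed database to a different (recovery) instance.
# All subscription, resource group, instance, and database names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import ManagedDatabase

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Resource ID of the geo-replicated (recoverable) backup of the source database.
recoverable_db_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<source-rg>"
    "/providers/Microsoft.Sql/managedInstances/<source-instance>"
    "/recoverableDatabases/MyAppDb"
)

poller = client.managed_databases.begin_create_or_update(
    resource_group_name="<recovery-rg>",
    managed_instance_name="<recovery-instance>",
    database_name="MyAppDb",
    parameters=ManagedDatabase(
        location="<recovery-region>",
        create_mode="Recovery",
        recoverable_database_id=recoverable_db_id,
    ),
)
restored_db = poller.result()  # blocks until the geo-restore completes
```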
Validation
Complete the drill by verifying the application integrity post recovery (including connection strings, logins, basic functionality testing, or other validations part of standard application sign off procedures).
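As a hedged illustration, a basic post-recovery check might look like the following. The connection details are placeholders, and the queries are only a starting point for your own sign-off procedures.

```python
# Hedged sketch: basic post-recovery validation against the recovered database.
# Connection details are placeholders; extend with application-specific checks.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<recovery-instance>.<dns-zone>.database.windows.net;"
    "DATABASE=MyAppDb;UID=app_user;PWD=<password>"
)
cur = conn.cursor()

# Confirm the database is online and that the expected user objects exist.
cur.execute("SELECT state_desc FROM sys.databases WHERE name = DB_NAME();")
print("Database state:", cur.fetchone()[0])   # expect ONLINE
cur.execute("SELECT COUNT(*) FROM sys.objects WHERE type = 'U';")
print("User tables:", cur.fetchone()[0])
conn.close()
```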
Fail back
With geo-restore, fail back consists of repointing the application to the original instance.
Failover groups
For an instance protected by failover groups, the drill exercise involves planned failover to the secondary instance. The planned failover ensures that the primary and the secondary instances in the failover group remain in sync when the roles are switched. Unlike the unplanned failover, this operation doesn't result in data loss, so the drill can be performed in a production environment.
Configure your failover group with the failover policy that suits your business need, and test failover regardless of how your failover policy is configured. For more information, review test failover. A customer-managed failover policy is recommended to give you control over the failover process.
Important
Because system databases aren't replicated between instances in a failover group, manually recreate system objects (for example, logins and SQL Agent jobs) on the secondary instance, and then test any workloads that depend on them to confirm they keep functioning after a failover.
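For example, SQL logins live in the master database and aren't replicated. The following is a hedged sketch of recreating one on the secondary instance; every name, the password, and the SID are placeholders, and matching the SID from the primary keeps database users mapped after failover.

```python
# Hedged sketch: recreate a SQL login in master on the secondary instance so that
# database users keep working after failover. Names, password, and SID are
# placeholders; copy the SID from sys.sql_logins on the primary instance.
import pyodbc

secondary = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<secondary-instance>.<dns-zone>.database.windows.net;"
    "DATABASE=master;UID=drill_admin;PWD=<password>",
    autocommit=True,
)
secondary.cursor().execute(
    "CREATE LOGIN [app_login] WITH PASSWORD = '<password>', SID = <sid-from-primary>;"
)
secondary.close()
```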
Outage simulation
To simulate the outage, you can disable the web application or virtual machine connected to the database. This outage simulation results in connectivity failures for the web clients.
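If the application tier runs on an Azure virtual machine, one way to take it offline for the drill is sketched below with the azure-mgmt-compute Python package; the subscription, resource group, and VM names are placeholders.

```python
# Hedged sketch: stop (deallocate) the application VM to simulate the outage.
# Subscription, resource group, and VM names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
compute.virtual_machines.begin_deallocate("<app-rg>", "<app-vm>").result()
```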
Recovery
- Make sure the application configuration in the DR region points to the former secondary, which becomes the fully accessible new primary after failover.
- Initiate a planned failover of the failover group from the secondary instance (a sketch follows this list).
- Follow the Configure a database after recovery guide to complete the recovery.
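A minimal sketch of initiating the planned failover with the azure-mgmt-sql Python package. The operation is issued against the region of the current secondary instance; all names are placeholders, and the operation name should be verified against your installed SDK version.

```python
# Hedged sketch: planned (no data loss) failover of an instance failover group.
# The call targets the region of the current secondary instance; all resource
# names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.instance_failover_groups.begin_failover(
    resource_group_name="<dr-rg>",
    location_name="<dr-region>",              # region of the secondary instance
    failover_group_name="<failover-group-name>",
)
poller.result()  # completes when the secondary has become the new primary
```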
Validation
Complete the drill by verifying the application integrity post recovery (including connectivity, basic functionality testing, or other validations required for the drill signoffs).
Fail back
To fail back, perform a planned failover of the failover group back to the original primary instance. Since the application is already configured to point to the failover group endpoint, no further changes are needed. The failover group endpoint automatically routes traffic to the new primary instance after the failover.
Fail back is optional. If you don't need to fail back, you can keep the secondary instance as the new primary instance.
Managed Instance link
It's possible to use a Managed Instance link for disaster recovery. Two-way failover is supported only with SQL Server 2022 and instances configured with the SQL Server 2022 update policy. SQL Server 2019 and earlier versions support one-way failover only, and fail back to SQL Server isn't supported.
This section describes how to perform a disaster recovery drill with SQL Server 2022. When using a Managed Instance link for disaster recovery, the drill exercise involves planned failover to the secondary instance. The planned failover ensures that the primary and the secondary instances in the Managed Instance link remain in sync when the roles are switched. Unlike the unplanned failover, this operation doesn't result in data loss, so the drill can be performed in a production environment.
Outage simulation
To simulate the outage, disable client connections to the primary replica of the database replicated via the link. This outage simulation results in connectivity failures for the database clients (applications).
Recovery
For recovery, do the following:
- Initiate a planned link failover to the secondary instance.
- Repoint the impacted applications to the new primary instance.
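Repointing is usually a configuration change. The following is a hypothetical sketch of swapping the server name in an application's connection settings; the file name and key names are illustrative only and not part of any specific framework.

```python
# Hypothetical sketch: repoint the application by updating the server name in its
# configuration. The file name and key names are illustrative placeholders.
import json

with open("appsettings.json") as f:
    config = json.load(f)

config["Database"]["Server"] = "<new-primary-instance>.<dns-zone>.database.windows.net"

with open("appsettings.json", "w") as f:
    json.dump(config, f, indent=2)
```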
Validation
For validation, do the following:
- Perform application connectivity and read/write tests on the new primary instance (a sketch follows this list).
- Optionally, validate that the test data written during the drill is replicated to the secondary instance.
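A hedged sketch of such a read/write probe; the connection details and the probe table name are placeholders.

```python
# Hedged sketch: write/read probe against the new primary after the link failover.
# Connection details and the probe table name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<new-primary-instance>.<dns-zone>.database.windows.net;"
    "DATABASE=MyAppDb;UID=app_user;PWD=<password>",
    autocommit=True,
)
cur = conn.cursor()

# Write a marker row, then read it back to confirm the new primary accepts writes.
cur.execute("IF OBJECT_ID('dbo.dr_drill_probe') IS NULL "
            "CREATE TABLE dbo.dr_drill_probe (ran_at DATETIME2 NOT NULL);")
cur.execute("INSERT INTO dbo.dr_drill_probe VALUES (SYSUTCDATETIME());")
cur.execute("SELECT MAX(ran_at) FROM dbo.dr_drill_probe;")
print("Last probe written at:", cur.fetchone()[0])
conn.close()
```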
Fail back
To fail back, perform a planned failover of the Managed Instance link back to the original primary instance. After failover, the application must be repointed to the original primary instance.
Fail back is optional. If you don't need to fail back, you can keep the secondary instance as the new primary instance.
Related content
To learn more, review: