Business continuity and database recovery - SQL Server on Linux

Applies to: SQL Server

This article provides an overview of business continuity solutions for high availability and disaster recovery in SQL Server, on Windows and Linux.

A common task for everyone deploying SQL Server is making sure that all mission-critical SQL Server instances, and the databases within them, are available when the business and end users need them, whether that is 9 to 5 or around the clock. The goal is to keep the business up and running with minimal or no interruption. This concept is also known as business continuity.

SQL Server 2017 (14.x) introduced many new features or enhancements to existing ones, some of which are for availability. The biggest addition to SQL Server 2017 (14.x) was support for SQL Server on Linux distributions. For a full list of new features, see the What's New articles in the SQL Server documentation.

This article focuses on the availability scenarios in SQL Server 2017 (14.x) and later versions, as well as the new and enhanced availability features. The scenarios include hybrid ones that span SQL Server deployments on both Windows Server and Linux, and ones that can increase the number of readable copies of a database.

While this article doesn't cover availability options external to SQL Server, such as those provided by virtualization, everything discussed here applies to SQL Server installations inside a guest virtual machine whether in the public cloud or hosted by an on-premises hypervisor server.

SQL Server scenarios using the availability features

Always On availability groups, Always On failover cluster instances, and log shipping can be used in various ways, and not necessarily just for availability purposes. There are four main ways the availability features can be used:

  • High availability
  • Disaster recovery
  • Migrations and upgrades
  • Scaling out readable copies of one or more databases

The following sections discuss the relevant features that can be used for each scenario. The one feature not covered is SQL Server replication. While not officially designated as an availability feature under the Always On umbrella, SQL Server replication is often used for making data redundant in certain scenarios. Merge replication isn't supported for SQL Server on Linux. For more information, see SQL Server Replication on Linux.

Important

The SQL Server availability features don't replace the requirement to have a robust, well-tested backup and restore strategy, which is the most fundamental building block of any availability solution.

High availability

It's important to ensure that SQL Server instances or databases are available if a problem occurs that is local to a data center or single region in the cloud. This section covers how the SQL Server availability features can help with that task. All of the features described are available both on Windows Server and on Linux.

Availability groups

Introduced in SQL Server 2012 (11.x), availability groups (AGs) provide database-level protection by sending each transaction of a database to another instance, or replica, which contains a copy of that database in a special state. An AG can be deployed on Standard or Enterprise editions. The instances participating in an AG can be either standalone or failover cluster instances (FCIs, described in the next section). Because the transactions are sent to a replica as they happen, AGs are recommended where there are requirements for lower recovery point and recovery time objectives. Data movement between replicas can be synchronous or asynchronous, with Enterprise edition allowing up to three replicas (including the primary) to be synchronous. An AG has one fully read/write copy of the database on the primary replica, while secondary replicas can't receive transactions directly from end users or applications.
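
For illustration, the following is a minimal Transact-SQL sketch of creating a two-replica, synchronous-commit AG; the server names, database name, and endpoint URLs are hypothetical:

    -- Minimal sketch: two synchronous replicas with automatic failover.
    -- Server names, database name, and endpoint URLs are hypothetical.
    CREATE AVAILABILITY GROUP [SalesAG]
    FOR DATABASE [SalesDB]
    REPLICA ON
        N'SQLNODE1' WITH (
            ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC),
        N'SQLNODE2' WITH (
            ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC);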

Note

Always On is an umbrella term for the availability features in SQL Server and covers both AGs and FCIs. Always On is not the name of the AG feature.

Before SQL Server 2022 (16.x), AGs provided only database-level, and not instance-level, protection. Anything not captured in the transaction log or configured in the database needs to be manually synchronized for each secondary replica. Some examples of objects that must be synchronized manually are logins at the instance level, linked servers, and SQL Server Agent jobs.

Starting with SQL Server 2022 (16.x), you can manage metadata objects, including users, logins, permissions, and SQL Server Agent jobs, at the AG level in addition to the instance level. For more information, see Contained availability groups.
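
As a sketch, a contained AG is created by adding the CONTAINED option; the AG, database, and server names here are hypothetical:

    -- Sketch: CONTAINED keeps users, logins, and Agent jobs with the AG itself.
    -- The AG, database, and server names are hypothetical.
    CREATE AVAILABILITY GROUP [ContainedSalesAG]
    WITH (CONTAINED)
    FOR DATABASE [SalesDB]
    REPLICA ON
        N'SQLNODE1' WITH (
            ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC);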

An AG also has another component called the listener, which allows applications and end users to connect without needing to know which SQL Server instance is hosting the primary replica. Each AG has its own listener. While the implementations of the listener are slightly different on Windows Server versus Linux, the functionality it provides and how it's used are the same. The diagram below shows a Windows Server-based AG that is using a Windows Server Failover Cluster (WSFC). An underlying cluster at the OS layer is required for availability, whether it's on Linux or Windows Server. The example shows a simple configuration with two servers, or nodes, with a WSFC as the underlying cluster.

Diagram of a simple availability group.
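
A hedged sketch of adding a listener to an existing AG follows; the listener name, IP address, and subnet mask are hypothetical:

    -- Sketch: add a listener so applications don't need to know which
    -- instance hosts the primary replica. Name and IP are hypothetical.
    ALTER AVAILABILITY GROUP [SalesAG]
    ADD LISTENER N'SalesAG-L' (
        WITH IP ((N'10.0.0.25', N'255.255.255.0')),
        PORT = 1433);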

Standard and Enterprise edition have different maximums when it comes to replicas. An AG in Standard edition, known as a basic availability group, supports two replicas (one primary and one secondary) with only a single database in the AG. Enterprise edition not only allows multiple databases to be configured in a single AG, but also can have up to nine total replicas (one primary, eight secondary). Enterprise edition also provides other optional benefits, such as readable secondary replicas, the ability to take backups from a secondary replica, and more.

Note

Database mirroring, which was deprecated in SQL Server 2012 (11.x), is not available on the Linux version of SQL Server, nor will it be added. Customers still using database mirroring should plan to migrate to AGs, which is the replacement for database mirroring.

When it comes to availability, AGs can provide either automatic or manual failover. Automatic failover can occur if synchronous data movement is configured and the databases on the primary and secondary replica are in a synchronized state. As long as the listener is used and the application uses a recent version of .NET Framework (3.5 with an update, or 4.0 and above), failover is handled with minimal to no effect on end users. Failing over to a secondary replica to make it the new primary replica can be configured to be automatic or manual, and is generally measured in seconds.
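
As a sketch, a manual failover is a single Transact-SQL statement (the AG name is hypothetical):

    -- Sketch: run on the target synchronized secondary replica to make it
    -- the new primary. The AG name is hypothetical.
    ALTER AVAILABILITY GROUP [SalesAG] FAILOVER;

    -- If no secondary replica is synchronized (for example, in a disaster),
    -- a forced failover is possible, at the risk of data loss:
    -- ALTER AVAILABILITY GROUP [SalesAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;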

The list below highlights some differences with AGs on Windows Server versus Linux:

  • Owing to differences in the way the underlying cluster works on Linux and Windows Server, all failovers (manual or automatic) of AGs are done via the cluster on Linux. On Windows Server-based AG deployments, manual failovers must be done via SQL Server. Automatic failovers are handled by the underlying cluster on both Windows Server and Linux.
  • For SQL Server on Linux, the recommended configuration for AGs is a minimum of three replicas. This is due to the way that the underlying clustering works.
  • On Linux, the common name used by each listener is defined in DNS and not in the cluster like it is on Windows Server.

Starting with SQL Server 2017 (14.x), there are some new features and enhancements to AGs:

  • Cluster types
  • REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT
  • Enhanced Microsoft Distributed Transaction Coordinator (DTC) support for Windows Server-based configurations
  • Additional scale-out scenarios for read-only databases (described later in this article)

Availability group cluster types

The built-in availability form of clustering in Windows Server is enabled via a feature named Failover Clustering. It allows you to build a WSFC to be used with an AG or FCI. Integration for AGs and FCIs is provided by cluster-aware resource DLLs shipped by SQL Server.

SQL Server on Linux supports multiple clustering technologies. Microsoft supports the SQL Server components, while our partners support the relevant clustering technology. For example, along with Pacemaker, SQL Server on Linux supports HPE Serviceguard and DH2i DxEnterprise as a cluster solution.

A Windows-based failover cluster and a Linux cluster solution are more similar than different. Both provide a way to take individual servers and combine them in a configuration to provide availability, and both have concepts of things like resources, constraints (even if implemented differently), failover, and so on.

For example, to support Pacemaker for both AG and FCI configurations including things like automatic failover, Microsoft provides the mssql-server-ha package, which is similar to, but not exactly the same as the resource DLLs in a WSFC, for Pacemaker. One of the differences between a WSFC and Pacemaker is that there's no network name resource in Pacemaker, which is the component that helps to abstract the name of the listener (or the name of the FCI) on a WSFC. DNS provides that name resolution on Linux.

Because of the difference in the cluster stack, some changes needed to be made for AGs because SQL Server has to handle some of the metadata that is natively handled by a WSFC. One such significant change is the introduction of a cluster type for an availability group. This is stored in sys.availability_groups in the cluster_type and cluster_type_desc columns. There are three cluster types:

  • WSFC
  • External
  • None
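
A minimal query against the metadata mentioned above shows how each AG is configured:

    -- Inspect the cluster type of each AG on an instance.
    SELECT name, cluster_type, cluster_type_desc
    FROM sys.availability_groups;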

All AGs that require high availability must use an underlying cluster, which in the case of SQL Server 2017 (14.x) and later versions means WSFC or a Linux clustering agent. For Windows Server-based AGs that use an underlying WSFC, the default cluster type is WSFC and doesn't need to be set. For Linux-based AGs, when creating the AG, the cluster type must be set to External. The integration with an external cluster solution in Linux is configured after the AG is created, whereas on a WSFC, it's done at creation time.

A cluster type of None can be used with both Windows Server and Linux AGs. Setting the cluster type to None means that the AG doesn't require an underlying cluster, making SQL Server 2017 (14.x) the first version of SQL Server to support AGs without a cluster. The tradeoff is that this configuration isn't supported as a high availability solution.
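
For illustration, the cluster type is declared when the AG is created. On Linux, replicas are also set to FAILOVER_MODE = EXTERNAL so that the external cluster manages failover; the names below are hypothetical:

    -- Sketch: a Linux-based AG managed by an external cluster solution.
    -- Use CLUSTER_TYPE = NONE instead for read-scale or migration scenarios.
    CREATE AVAILABILITY GROUP [LinuxAG]
    WITH (CLUSTER_TYPE = EXTERNAL)
    FOR DATABASE [SalesDB]
    REPLICA ON
        N'LINUXNODE1' WITH (
            ENDPOINT_URL = N'TCP://linuxnode1.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = EXTERNAL);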

Important

Starting with SQL Server 2017 (14.x), you can't change a cluster type for an AG after it is created. This means that an AG cannot be switched from None to External or WSFC, or vice versa.

For those who are looking to add additional read-only copies of a database, or who like what an AG provides for migrations and upgrades but don't want to be tied to the additional complexity of an underlying cluster or even the replication, an AG with a cluster type of None is a perfect solution. For more information, see the sections on migrations and upgrades and on read-scale.

The screenshot below shows the support for the different kinds of cluster types in SQL Server Management Studio (SSMS); you must be running version 17.1 or later. The screenshot is from version 17.2.

Screenshot of SSMS AG options.

REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT

SQL Server 2016 (13.x) increased support for the number of synchronous replicas from two to three in Enterprise edition. However, if one secondary replica was synchronized but the other was having a problem, there was no way to control the behavior to tell the primary to either wait for the misbehaving replica or to allow it to move on. This meant that the primary replica would at some point continue to receive write traffic even though a secondary replica wasn't in a synchronized state, so a failover to that replica would mean data loss. Starting with SQL Server 2017 (14.x), you can control this behavior with REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT. This option works as follows:

  • There are three possible values: 0, 1, and 2
  • The value is the number of secondary replicas that must be synchronized, which has implications for data loss, AG availability, and failover
  • For WSFCs and a cluster type of None, the default value is 0, and can be manually set to 1 or 2
  • For a cluster type of External, by default, the cluster mechanism will set this value, and it can be overridden manually. For three synchronous replicas, the default value will be 1.

On Linux, the value for REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT is configured on the AG resource in the cluster. On Windows, it's set via Transact-SQL.
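
On Windows, a hedged sketch of that Transact-SQL looks like the following (the AG name is hypothetical):

    -- Sketch: require one synchronized secondary replica before commits
    -- on the primary can complete. The AG name is hypothetical.
    ALTER AVAILABILITY GROUP [SalesAG]
    SET (REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 1);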

A value higher than 0 ensures stronger data protection, because if the required number of secondary replicas isn't available, the primary won't be available until that's resolved. REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT also affects failover behavior, since automatic failover can't occur if the required number of secondary replicas aren't in the proper state. On Linux, a value of 0 won't allow automatic failover, so when using synchronous mode with automatic failover on Linux, the value must be set higher than 0. A value of 0 on Windows Server gives the same behavior as SQL Server 2016 (13.x) and earlier versions.

Enhanced Microsoft Distributed Transaction Coordinator support

Before SQL Server 2016 (13.x), the only way to get availability in SQL Server for applications that require distributed transactions, which use DTC under the covers, was to deploy FCIs. A distributed transaction can be done in one of two ways:

  • A transaction that spans more than one database in the same SQL Server instance
  • A transaction that spans more than one SQL Server instance or possibly involves a non-SQL Server data source

SQL Server 2016 (13.x) introduced partial support for DTC with AGs that covered the latter scenario. SQL Server 2017 (14.x) completes the story by supporting both scenarios with DTC.

In SQL Server 2017 (14.x) and later versions, DTC support can also be added to an AG after it's created. In SQL Server 2016 (13.x), DTC support could only be enabled when the AG was created.
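
As a sketch, enabling per-database DTC support on an existing AG is a single statement (the AG name is hypothetical):

    -- Sketch: enable per-database DTC support on an existing AG.
    ALTER AVAILABILITY GROUP [SalesAG]
    SET (DTC_SUPPORT = PER_DB);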

Failover cluster instances

Clustered installations have been a feature of SQL Server since version 6.5. FCIs are a proven method of providing availability for the entire installation of SQL Server, known as an instance. This means that everything inside the instance, including databases, SQL Server Agent jobs, linked servers, and so on, moves to another server should the underlying server encounter a problem. All FCIs require some sort of shared storage, even if it's provided via networking. The FCI's resources can only be running and owned by one node at any given time. In the diagram below, the first node of the cluster owns the FCI, which also means it owns the shared storage resources associated with it, denoted by the solid line to the storage.

Diagram of a Failover Cluster Instance.

After a failover, ownership changes as is seen in the diagram below.

Diagram of a Failover Cluster Instance, post failover.

There's zero data loss with an FCI, but the underlying shared storage is a single point of failure since there's one copy of the data. FCIs are often combined with another availability method, such as an AG or log shipping, to have redundant copies of databases. The additional method deployed should use physically separate storage from the FCI. When the FCI fails over to another node, it stops on one node and starts on another, not unlike powering off a server and turning it on. An FCI goes through the normal recovery process, meaning any transactions that need to be rolled forward will be, and any transactions that are incomplete will be rolled back. Therefore, the database is consistent as of the time of the failure or manual failover, hence no data loss. Databases are only available after recovery is complete, so recovery time depends on many factors and will generally be longer than failing over an AG. The tradeoff is that when you fail over an AG, there may be extra tasks required to make a database usable, such as enabling a SQL Server Agent job.

Like an AG, FCIs abstract which node of the underlying cluster is hosting it. An FCI always retains the same name. Applications and end users never connect to the nodes; the unique name assigned to the FCI is used. An FCI can participate in an AG as one of the instances hosting either a primary or secondary replica.

The list below highlights some differences with FCIs on Windows Server versus Linux:

  • On Windows Server, an FCI is part of the installation process. An FCI on Linux is configured after installing SQL Server.
  • Linux only supports a single installation of SQL Server per host, so all FCIs will be a default instance. Windows Server supports up to 25 FCIs per WSFC.
  • The common name used by FCIs in Linux is defined in DNS, and should be the same as the resource created for the FCI.

Log shipping

If recovery point and recovery time objectives are more flexible, or databases aren't considered to be highly mission critical, log shipping is another proven availability feature in SQL Server. Based on SQL Server's native backups, log shipping automatically generates transaction log backups, copies them to one or more instances known as warm standbys, and automatically applies the transaction log backups to those standbys. Log shipping uses SQL Server Agent jobs to automate the process of backing up, copying, and applying the transaction log backups.

Diagram of Log Shipping.
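
A hedged sketch of the cycle those jobs automate follows; the database name and file paths are hypothetical:

    -- 1. On the primary: back up the transaction log to a shared location.
    BACKUP LOG [SalesDB]
    TO DISK = N'\\backupserver\logship\SalesDB_0001.trn';

    -- 2. The copy job moves the file to the warm standby server.

    -- 3. On the warm standby: apply the backup, and stay in a restoring
    --    state so further log backups can be applied.
    RESTORE LOG [SalesDB]
    FROM DISK = N'E:\logship\SalesDB_0001.trn'
    WITH NORECOVERY;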

Arguably the biggest advantage of using log shipping in some capacity is that it accounts for human error. The application of transaction logs can be delayed, so if someone issues something like an UPDATE without a WHERE clause, the standby may not yet have the change and you can switch to the standby while you repair the primary system. While log shipping is easy to configure, switching from the primary to a warm standby, known as a role change, is always manual. A role change is initiated via Transact-SQL, and as with an AG, all objects not captured in the transaction log must be manually synchronized. Log shipping also needs to be configured per database, whereas a single AG can contain multiple databases.

Unlike an AG or FCI, log shipping has no abstraction for a role change, which applications must be able to handle. Techniques such as a DNS alias (CNAME) could be employed, but there are pros and cons, such as the time it takes for DNS to refresh after the switch.

Disaster recovery

When your primary availability location experiences a catastrophic event like an earthquake or flood, the business must be prepared to have its systems come online elsewhere. This section will cover how the SQL Server availability features can assist with business continuity.

Availability groups

One of the benefits of AGs is that both high availability and disaster recovery can be configured using a single feature. Without the requirement for ensuring that shared storage is also highly available, it's much easier to have replicas that are local in one data center for high availability, and remote ones in other data centers for disaster recovery each with separate storage. Having extra copies of the database is the tradeoff for ensuring redundancy. An example of an AG that spans multiple data centers is shown below. One primary replica is responsible for keeping all secondary replicas synchronized.

Diagram of an availability group spanning data centers.

Outside of an AG with a cluster type of None, an AG requires that all replicas are part of the same underlying cluster, whether it's a WSFC or an external cluster solution. This means that in the diagram above, the WSFC is stretched to work across two different data centers, which adds complexity regardless of the platform (Windows Server or Linux).

Introduced in SQL Server 2016 (13.x), a distributed availability group allows an AG to span AGs configured on different clusters. Distributed AGs decouple the requirement to have the nodes all participate in the same cluster, which makes configuring disaster recovery much easier. For more information on distributed AGs, see Distributed availability groups.

Diagram of a Distributed Availability Group.

Failover cluster instances

FCIs can be used for disaster recovery. As with a normal AG, the underlying cluster mechanism must also be extended to all locations, which adds complexity. There's an additional consideration for FCIs: the shared storage. The same disks need to be available in the primary and secondary sites. An external method, such as functionality provided by the storage vendor at the hardware layer or using Storage Replica in Windows Server, is required to ensure that the disks used by the FCI exist elsewhere.

Diagram of an FCI spanning data centers.

Log shipping

Log shipping is one of the oldest methods of providing disaster recovery for SQL Server databases. Log shipping is often used with AGs and FCIs to provide cost-effective and simpler disaster recovery where other options may be challenging due to environment, administrative skills, or budget. Similar to the high availability story for log shipping, many environments will delay the loading of a transaction log to account for human error.

Migrations and upgrades

When deploying new instances or upgrading old ones, a business can't tolerate long outages. This section will discuss how the availability features of SQL Server can be used to minimize the downtime in a planned architecture change, server switch, platform change (such as Windows Server to Linux or vice versa), or during patching.

Note

Other methods, such as using backups and restoring them elsewhere, can also be used for migrations and upgrades. They are not discussed in this article.

Availability groups

An existing instance containing one or more AGs can be upgraded in place to later versions of SQL Server. While this will require some amount of downtime, with the right amount of planning, it can be minimized.

If the goal is to migrate to new servers and not change the configuration (including the operating system or SQL Server version), those servers could be added as nodes to the existing underlying cluster and added to the AG. Once the replica or replicas are in the right state, a manual failover could occur to a new server, and then the old ones could be removed from the AG, and ultimately, decommissioned.

Distributed AGs are another method to migrate to a new configuration or upgrade SQL Server. Because a distributed AG supports different underlying AGs on different architectures, you could, for example, change from SQL Server 2016 (13.x) running on Windows Server 2012 R2 to SQL Server 2017 (14.x) running on Windows Server 2016.

Diagram of a distributed availability group mixing WSFC and Pacemaker.

Finally, AGs with a cluster type of None can also be used for migration or upgrading. You can't mix and match cluster types in a typical AG configuration, so all replicas would need to be a type of None. A distributed AG can be used to span AGs configured with different cluster types. This method is also supported across the different OS platforms.

All variants of AGs for migrations and upgrades allow the most time-consuming portion of the work, data synchronization, to be done over time. When it comes time to initiate the switch to the new configuration, the cutover is a brief outage, versus one long period of downtime where all the work, including data synchronization, would need to be completed.

AGs can provide minimal downtime during patching of the underlying OS by manually failing over the primary to a secondary replica while the patching is being completed. From an operating system perspective, doing this would be more common on Windows Server since often, but not always, servicing the underlying OS may require a restart. Patching Linux sometimes needs a restart, but it can be infrequent.

Patching SQL Server instances participating in an availability group can also minimize downtime depending on how complex the AG architecture is. To patch servers participating in an AG, a secondary replica is patched first. Once the right number of replicas are patched, the primary replica is manually failed over to another node to do the upgrade. Any remaining secondary replicas at that point can be upgraded, too.

Failover cluster instances

FCIs on their own can't assist with a traditional migration or upgrade; an AG or log shipping would have to be configured for the databases in the FCI and all other objects accounted for. However, FCIs under Windows Server are still a popular option for when the underlying Windows Servers need to be patched. A manual failover can be initiated, which means a brief outage instead of having the instance unavailable for the entire time that Windows Server is being patched. An FCI can be upgraded in place to later versions of SQL Server. For information, see Upgrade a SQL Server Failover Cluster Instance.

Log shipping

Log shipping is still a popular option to both migrate and upgrade databases. Similar to AGs, but this time using the transaction log as the synchronization method, the data propagation can be started well in advance of the server switch. At the time of the switch, once all traffic is stopped at the source, a final transaction log backup needs to be taken, copied, and applied to the new configuration. At that point, the database can be brought online. Log shipping is often more tolerant of slower networks, and while the switch may be slightly longer than one done using an AG or a distributed AG, it's usually measured in minutes - not hours, days, or weeks.
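
As a sketch, the final cutover comes down to a tail-log backup and a restore; the database name and paths are hypothetical:

    -- On the source, after stopping all traffic: back up the tail of the
    -- log, leaving the old database in a restoring state.
    BACKUP LOG [SalesDB]
    TO DISK = N'\\backupserver\logship\SalesDB_tail.trn'
    WITH NORECOVERY;

    -- On the new server: apply the final backup and bring the database online.
    RESTORE LOG [SalesDB]
    FROM DISK = N'E:\logship\SalesDB_tail.trn'
    WITH RECOVERY;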

Similar to AGs, log shipping can provide a way to switch to another server in the event of patching.

Other SQL Server deployment methods and availability

There are two other deployment methods for SQL Server on Linux: containers and using Azure (or another public cloud provider). The general need for availability as presented throughout this article exists regardless of how SQL Server is deployed. These two methods have some special considerations when it comes to making SQL Server highly available.

SQL Server containers and HA/DR options

SQL Server container deployment is a new way of deploying SQL Server on Linux. A container is a complete image of SQL Server that is ready to run.

Depending on the container platform you use, for example a container orchestrator like Kubernetes, if the container is lost, it can be deployed again and attached to the shared storage that was used. While this does provide some resiliency, there will be some downtime associated with database recovery, and the deployment isn't truly highly available the way it would be with an availability group or FCI.

If you're looking to configure high availability for SQL Server containers deployed on Kubernetes or non-Kubernetes platforms, you can use DH2i DxEnterprise as one of the clustering solutions, on top of which you can configure an AG in high availability mode. This option provides you with the recovery point objective (RPO) and recovery time objective (RTO) expected from a high availability solution.

Linux-based IaaS deployment

Linux IaaS virtual machines can be deployed with SQL Server installed using Azure. As with on-premises installations, a supported installation requires the use of STONITH (Shoot the Other Node in the Head), which is external to the cluster agent itself. STONITH is provided via fencing availability agents. Some distributions ship them as part of the platform, while others rely on external hardware and software vendors. Check with your preferred Linux distribution to see what forms of STONITH are provided so that a supported solution can be deployed in the public cloud.

Guides for installing SQL Server on Linux are available for each supported distribution.

Read-scale

Since their introduction in SQL Server 2012 (11.x), secondary replicas have had the ability to be used for read-only queries. There are two ways that can be achieved with an AG: by allowing direct access to the secondary, or by configuring read-only routing, which requires the use of the listener. SQL Server 2016 (13.x) introduced the ability to load balance read-only connections via the listener using a round-robin algorithm, allowing read-only requests to be spread across all readable replicas.
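
A hedged sketch of configuring read-only routing follows; the server names and URLs are hypothetical, and connections must specify ApplicationIntent=ReadOnly to be routed:

    -- Sketch: tell the AG where read-only requests for this replica land.
    -- Repeat for each readable secondary (for example, SQLNODE3).
    ALTER AVAILABILITY GROUP [SalesAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (
        READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.contoso.com:1433'));

    -- Sketch: the nested parentheses load balance read-only connections
    -- round robin across SQLNODE2 and SQLNODE3; SQLNODE1 is the fallback.
    ALTER AVAILABILITY GROUP [SalesAG]
    MODIFY REPLICA ON N'SQLNODE1'
    WITH (PRIMARY_ROLE (
        READ_ONLY_ROUTING_LIST = ((N'SQLNODE2', N'SQLNODE3'), N'SQLNODE1')));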

Note

Readable secondary replicas are a feature of Enterprise edition only, and each instance hosting a readable replica needs a SQL Server license.

Scaling readable copies of a database via AGs was first introduced with distributed AGs in SQL Server 2016 (13.x). This allows companies to have read-only copies of the database not only locally, but regionally and globally, with a minimal amount of configuration, and reduces network traffic and latency by having queries executed locally. Each primary replica of an AG can seed two other AGs, even if it isn't the fully read/write copy, so each distributed AG can support up to 27 copies of the data that are readable.

Diagram showing a distributed availability group related to read-scale.

Starting with SQL Server 2017 (14.x), it's possible to create a near-real-time, read-only solution with AGs configured with a cluster type of None. If the goal is to use AGs for readable secondary replicas and not availability, doing this removes the complexity of using a WSFC or an external cluster solution on Linux, and gives the readable benefits of an AG in a simpler deployment method.

The only major caveat is that, because there's no underlying cluster with a cluster type of None, configuring read-only routing is a little different. From a SQL Server perspective, a listener is still required to route the requests even though there's no cluster. Instead of configuring a traditional listener, the IP address or name of the primary replica is used. The primary replica is then used to route the read-only requests.
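
As a hedged sketch of that approach, the listener is simply created with the primary replica's own IP address, since there's no cluster to manage one (the names and IP are hypothetical):

    -- Sketch: with CLUSTER_TYPE = NONE, there's no cluster-managed IP, so
    -- the listener reuses the primary replica's IP address (hypothetical).
    ALTER AVAILABILITY GROUP [ReadScaleAG]
    ADD LISTENER N'ReadScaleL' (
        WITH IP ((N'10.0.0.10', N'255.255.255.0')),
        PORT = 1433);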

A log shipping warm standby can technically be configured for readable usage by restoring the database WITH STANDBY. However, because applying transaction logs requires exclusive use of the database, users can't access the database while a restore happens. This makes log shipping a less-than-ideal solution, especially if near-real-time data is required.
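
For illustration, a restore WITH STANDBY looks like the following sketch; the file paths are hypothetical:

    -- Sketch: apply a log backup but keep the database readable between
    -- restores via an undo file. Paths are hypothetical.
    RESTORE LOG [SalesDB]
    FROM DISK = N'E:\logship\SalesDB_0002.trn'
    WITH STANDBY = N'E:\logship\SalesDB_undo.tuf';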

One thing to note for all read-scale scenarios with AGs is that, unlike transactional replication where all of the data is live, each secondary replica isn't in a state where unique indexes can be applied; the replica is an exact copy of the primary. If any indexes are required for reporting, or data needs to be manipulated, they must be created on the database(s) on the primary replica. If you need that flexibility, replication is a better solution for readable data.

Cross-platform and Linux distribution interoperability

With SQL Server now supported on both Windows Server and Linux, this section covers the scenarios of how they can work together for availability in addition to other purposes, and the story for solutions that will incorporate more than one Linux distribution.

Note

There are no scenarios where a WSFC-based FCI or AG will work with a Linux-based FCI or AG directly. A WSFC can't be extended by a Pacemaker node and vice versa.

Distributed availability groups

Distributed AGs are designed to span AG configurations, whether the underlying clusters beneath the AGs are two different WSFCs, Linux distributions, or one on a WSFC and the other on Linux. A distributed AG is the primary method of having a cross-platform solution. A distributed AG is also the primary solution for migrations, such as converting from a Windows Server-based SQL Server infrastructure to a Linux-based one if that is what your company wants to do. As noted above, AGs, and especially distributed AGs, minimize the time that an application is unavailable for use. An example of a distributed AG that spans a WSFC and Pacemaker is shown below.

Diagram showing a distributed availability group that spans a WSFC and Pacemaker.

If an AG is configured with a cluster type of None, it can span Windows Server and Linux, as well as multiple Linux distributions. Since this isn't a true high availability configuration, it shouldn't be used for mission critical deployments, but for read-scale or migration/upgrade scenarios.

Log shipping

Log shipping is based on backup and restore, and there are no differences in the databases, file structures, and so on, between SQL Server on Windows Server and SQL Server on Linux. This means that log shipping can be configured between a Windows Server-based SQL Server installation and a Linux one, as well as between distributions of Linux. Everything else remains the same. The only caveat is that log shipping, just like an AG, can't work when the source is at a higher SQL Server major version than a target at a lower version of SQL Server.

Summary

Instances and databases of SQL Server 2017 (14.x) and later versions can be made highly available using the same features on both Windows Server and Linux. Besides standard availability scenarios of local high availability and disaster recovery, downtime associated with upgrades and migrations can be minimized with the availability features in SQL Server. AGs can also provide additional copies of a database as part of the same architecture to scale out readable copies. Whether you're deploying a new solution or considering an upgrade, SQL Server has the availability and reliability you require.