How can I build a stretch cluster with 2 Servers and 2 Storage appliances

Danny Arroyo 61 Reputation points
2025-06-24T20:54:28.29+00:00

I have 2 x HPE Gen 12 servers and 2 x Synology RS4021xs+ iSCSI storage appliances. I would like to set up a stretched file share cluster spanning two datacenters that are blocks apart. The plan was to connect one HPE server to one Synology and the other HPE server to the other Synology in the same datacenter, install Windows, and create the stretch cluster over iSCSI. Then I would shut down one of the server/Synology pairs, move it to the other datacenter, and reconnect it to the network so it can continue to serve as a member of the stretch cluster. This has worked for us for the past 5 years with Server 2019 and Fujitsu servers with Fujitsu fiber-attached storage.

I recently learned that Microsoft does not recommend this setup because there is no intra-site fault tolerance and it could cause issues with quorum (we resolved that with a file share witness server). In addition, the RS4021xs+ does not support persistent reservations over iSCSI, so I will need to get two fiber storage adapters for our Synology units.

Has anyone tried to set up something like this?

Are there any alternatives for creating a stretch cluster between two datacenters using Server 2022 and the hardware that I mentioned above?

Any advice is appreciated.

Thank you


Accepted answer
  Henry Mai 1,965 Reputation points Independent Advisor
    2025-06-26T02:50:00.05+00:00

    Hello, I am Henry, and I would like to share my insight on this issue.

    Based on the setup and constraints you've described, I recommend using Storage Replica. This feature is designed to support scenarios like yours.

    Instead of both servers connecting to a single shared disk, each server connects to its own dedicated disk. Storage Replica then performs real-time, block-level replication between the two disks over the network (the "stretch" link). The Failover Cluster manages this replicated volume as a single resource.

    1. Site A: HPE Server 1 connects to a LUN on Synology 1.
    2. Site B: HPE Server 2 connects to a LUN on Synology 2.
    3. Windows Server: You install the Storage Replica feature and configure it to replicate the volume from Site A to Site B (a PowerShell sketch of this step follows this list).
    4. Failover Cluster: You create the cluster. Instead of adding a "disk" in the traditional sense, the cluster manages the replicated volume. If Site A fails, the cluster automatically reverses the replication direction and brings the storage online in Site B using its local, up-to-date copy.
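
    To make step 3 concrete, here is a minimal PowerShell sketch of the Storage Replica configuration. The server names (SRV-A in Site A, SRV-B in Site B), the E: data and F: log volumes, and the replication group names are placeholders I invented for illustration; adjust them to your environment.

    ```powershell
    # Validate the servers, volumes, and inter-site link before enabling replication
    # (writes an HTML report to the ResultPath folder)
    Test-SRTopology -SourceComputerName SRV-A -SourceVolumeName E: -SourceLogVolumeName F: `
        -DestinationComputerName SRV-B -DestinationVolumeName E: -DestinationLogVolumeName F: `
        -DurationInMinutes 30 -ResultPath C:\Temp

    # Create the partnership; replication is synchronous by default, which suits
    # two sites that are only blocks apart
    New-SRPartnership -SourceComputerName SRV-A -SourceRGName RG-SiteA `
        -SourceVolumeName E: -SourceLogVolumeName F: `
        -DestinationComputerName SRV-B -DestinationRGName RG-SiteB `
        -DestinationVolumeName E: -DestinationLogVolumeName F:

    # Check replication mode and health
    Get-SRGroup | Select-Object Name, ReplicationMode
    (Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationStatus
    ```

    In a stretch cluster the volumes are typically added to the cluster before the partnership is created, so follow Microsoft's stretch cluster walkthrough for the exact ordering, and note that Windows Server Standard edition limits Storage Replica to a single 2 TB volume (Datacenter does not).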

    I've included an analysis table that maps your concerns to how Storage Replica addresses them.

    Concern | How Storage Replica addresses it
    --- | ---
    SCSI-3 PR | No SCSI-3 PR is required, because Storage Replica completely bypasses the need for shared storage and persistent reservations.
    Your current hardware | It is a fit for your 2-server, 2-storage-appliance topology.
    Microsoft support | This is the official, recommended way to build a stretch cluster without a high-end SAN that supports stretched configurations.
    Quorum | Your plan to use a File Share Witness (ideally in a third location, such as Azure or another office) is the correct way to maintain quorum in a two-node cluster.
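
    On the quorum point, the File Share Witness is a one-line change once the cluster exists; the cluster name and witness share below are placeholders.

    ```powershell
    # Point the two-node cluster at a file share witness hosted at a third location
    Set-ClusterQuorum -Cluster FSCLUSTER -FileShareWitness \\WITNESS-SRV\ClusterWitness
    ```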

    Action Plan:

    1. Configure each HPE server to connect to its local Synology via iSCSI. Create a dedicated LUN on each.
    2. Install Windows Server 2022 and the Failover Clustering & Storage Replica features on both servers.
    3. Create your cluster and configure the File Share Witness.
    4. Use PowerShell or Windows Admin Center to configure Storage Replica between the two volumes (the New-SRPartnership sketch earlier in this answer shows this step).
    5. Create your File Server role on top of the replicated storage (a rough end-to-end sketch of the remaining steps follows this list).
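
    As a rough end-to-end sketch of steps 1-3 and 5 (step 4 is the New-SRPartnership example above), the commands below use placeholder names, IP addresses, and an invented Synology target IQN. Treat it as an outline to adapt rather than a copy-and-paste script, and follow Microsoft's stretch cluster documentation for the exact disk-to-role wiring.

    ```powershell
    # --- Step 1: connect each server to its local Synology over iSCSI (run on each node) ---
    Start-Service MSiSCSI
    Set-Service MSiSCSI -StartupType Automatic
    New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50      # local Synology IP (placeholder)
    Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.synology:rs4021.target-1" -IsPersistent $true

    # --- Step 2: install the required features on both servers ---
    Install-WindowsFeature -Name Failover-Clustering, Storage-Replica, FS-FileServer -IncludeManagementTools -Restart

    # --- Step 3: validate and create the cluster, then configure the witness (as shown above) ---
    Test-Cluster -Node SRV-A, SRV-B
    New-Cluster -Name FSCLUSTER -Node SRV-A, SRV-B -StaticAddress 192.168.10.60 -NoStorage
    Set-ClusterQuorum -Cluster FSCLUSTER -FileShareWitness \\WITNESS-SRV\ClusterWitness

    # --- Step 5: add the iSCSI disks to the cluster and create the file server role ---
    Get-ClusterAvailableDisk -Cluster FSCLUSTER | Add-ClusterDisk
    Add-ClusterFileServerRole -Cluster FSCLUSTER -Name FS-STRETCH -Storage "Cluster Disk 1" -StaticAddress 192.168.10.61
    ```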

    I hope this information and these keywords help point you in the right direction for your research.

