Question

suicidegybe asked:

S2D - Moving devices to different PCIe slots

I have a 2-node S2D cluster that has been up and running for a few months now. Each node has an NVMe HBA with 3 Intel DC P4600 2TB NVMe drives as cache, a dual-port 10GbE network adapter for VM and storage access, a 40GbE network adapter for cluster RDMA communication, and a SAS HBA with 28 3TB HDDs for capacity storage.

My question is: can I move devices to different PCIe slots without reconfiguring everything? I need to move the 10GbE card to a different slot. I would also like to remove 1 NVMe drive completely, move the remaining 2 onto PCIe x4 to U.2 adapters for cache, and add 4 NVMe drives to the NVMe HBA as a performance tier. Will Windows recognise the current NVMe drives if they are moved, or do I need to retire all 3 NVMe drives from each server and then add them back after the move? What about the 10GbE network adapter? I also realize I will need to add the new NVMe drives that I plan to attach to the HBA manually, otherwise they will just become cache drives. Thanks for the help.

windows-server-clustering

1 Answer

jiayaozhu-MSFT answered:

Hi,

Thank you for posting!

Firstly, regarding the 10GbE network adapter: moving it to another PCIe slot may change its IP address and the related DNS registration. If you use a static IP address, you may have to configure the new IP address manually; if you use a dynamic IP address (DHCP enabled), DHCP will assign an address automatically. The change in IP address does not necessarily affect your cluster configuration, but you should check traffic flow within the cluster afterwards, because a changed IP address can lead to a scenario where another node in the cluster can no longer reach the original address.
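As a rough sketch (these are general built-in cmdlets, nothing here is specific to your environment), you could verify the adapter and the cluster networks after the move like this:

    # On each node, after moving the adapter: confirm it is present, Up, and has the expected IP/DNS settings
    Get-NetAdapter | Sort-Object Name
    Get-NetIPConfiguration -Detailed

    # On any one node: confirm the cluster still lists its networks and interfaces as Up
    Get-ClusterNetwork
    Get-ClusterNetworkInterface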

Therefore, I suggest you refer to the document below:

https://docs.microsoft.com/zh-cn/troubleshoot/windows-server/high-availability/change-network-adapters-ip-address-cluster-node

Secondly, regarding moving the remaining 2 NVMe drives: you do not need to retire them and add them back; S2D will automatically recognize eligible drives and add them to the storage pool. But as you have realized, you will need to add the new NVMe drives that you plan to attach to the HBA manually.
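For example (just a sketch; the pool name, serial number, and cache model string below are placeholders for the values Get-StoragePool and Get-PhysicalDisk report in your cluster), you could list the unclaimed drives and add a new one by hand:

    # List drives that are eligible for pooling but not yet claimed
    Get-PhysicalDisk -CanPool $true | Select-Object FriendlyName, SerialNumber, MediaType, Size

    # Add one of the new NVMe drives to the S2D pool manually (serial number and pool name are placeholders)
    $disk = Get-PhysicalDisk | Where-Object SerialNumber -eq "XXXXXXXX"
    Add-PhysicalDisk -StoragePoolFriendlyName "S2D on Cluster1" -PhysicalDisks $disk

    # Optionally, restrict the cache to a specific device model so the new NVMe drives are not claimed as cache
    Set-ClusterStorageSpacesDirect -CacheDeviceModel "<cache drive model string>"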

More information about the S2D storage pool:

https://techcommunity.microsoft.com/t5/storage-at-microsoft/deep-dive-the-storage-pool-in-storage-spaces-direct/ba-p/425959

Thank you for your time!

Best regards
Joann



suicidegybe commented:

Thank you, I went ahead and moved the NICs and relocated the NVMe cache drives, and everything ended up working fine. Since I use a DHCP server with static assignments, the NICs sorted themselves out without issue. As for removing 1 cache drive and relocating the other 2 to direct PCIe slots instead of the HBA, all I needed to do was restart the server and retire the missing cache drive, and the cluster self-repaired.
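For reference, this is roughly what I ran to clean up the missing cache drive (paraphrased from memory; the filter and the pool name will depend on how the lost drive shows up in your pool):

    # Find the cache drive that is no longer present and retire it
    $missing = Get-PhysicalDisk | Where-Object OperationalStatus -eq "Lost Communication"
    $missing | Set-PhysicalDisk -Usage Retired

    # Watch the repair jobs and virtual disk health until everything is back to Healthy
    Get-StorageJob
    Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus

    # Once repairs finish, remove the retired drive from the pool (pool name is a placeholder)
    Remove-PhysicalDisk -StoragePoolFriendlyName "S2D on Cluster1" -PhysicalDisks $missing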

As for adding the new NVMe drives that I have attached to the HBA, I am slightly stuck. They are connected to the systems and are seen by the OS with no problem; they are listed in the primordial pool with CanPool status true. How can I add these to a performance tier? Do I need to scrap the whole pool and start over? I can do that: if I can set up a 2-node mirror-accelerated parity layout with 2 DC P4600 2TB drives as cache, 4 Micron 9100 2.2TB SSDs as performance, and the 28 3TB SAS HDDs as capacity per node, I would be more than willing to back up my VM VHDs and rebuild the cluster.
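In case it helps anyone answering, this is the sort of thing I was expecting to run (pool name, tier names, and sizes are just my guesses and I have not tried any of it; I am also not sure a 2-node cluster supports a parity tier at all):

    # Define tier templates on the existing pool (pool and tier names are placeholders)
    New-StorageTier -StoragePoolFriendlyName "S2D on Cluster1" -FriendlyName "Performance" -MediaType SSD -ResiliencySettingName Mirror
    New-StorageTier -StoragePoolFriendlyName "S2D on Cluster1" -FriendlyName "Capacity" -MediaType HDD -ResiliencySettingName Parity

    # Create a mirror-accelerated parity volume that uses both tiers (sizes are examples only)
    New-Volume -StoragePoolFriendlyName "S2D on Cluster1" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames "Performance","Capacity" -StorageTierSizes 1TB, 20TB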
