Windows Server 2019 Hyper-V guest VM iSCSI MPIO disconnects

Gary Jones 6 Reputation points
2023-06-08T14:25:05.1766667+00:00

Hi,
Could someone please help me with an issue I'm having on a Windows Server 2019 Hyper-V guest.

The Windows Server 2019 Hyper-V host has three NIC teams: one for Production (active/active) and two for the Fault Domains (active/passive). The Hyper-V host can connect to the iSCSI device without any problem.

I have configured two dedicated iSCSI virtual switches using spare network ports (so that only the Production NIC team is shared with the VM).

MPIO and the iSCSI Initiator are configured on the guest OS, and I can connect to an iSCSI volume within the guest.
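
For reference, the guest-side setup was done roughly like this (the portal addresses are placeholders, not my real ones, and this assumes a single target):

# Guest OS: install MPIO and let MSDSM claim iSCSI devices (needs a reboot).
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Start the initiator service and add one portal per fault domain.
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.50

# Connect the target once per path so MPIO has two paths to work with.
$t = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $t.NodeAddress -TargetPortalAddress 192.168.10.50 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $t.NodeAddress -TargetPortalAddress 192.168.20.50 -IsMultipathEnabled $true -IsPersistent $true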

However, when I try to do anything with the device volume, I get errors in the event log:
Connection to the target was lost. The initiator will attempt to retry the connection.

It does let me format the volume (eventually) and copy some files, but on large files it usually drops the connection and doesn't reconnect, halting the copy with the estimated end time gradually increasing (throughput at 0 Mbps).
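
The drops are easy to spot in the System log; I have been pulling them with something like this (iScsiPrt is the initiator's event provider, and event ID 20 is the "connection lost" message above):

# Show the most recent iSCSI initiator events from the System log.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'iScsiPrt' } -MaxEvents 50 |
    Format-Table TimeCreated, Id, Message -AutoSize -Wrap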

If I ping the iSCSI device I can reach it on all paths, and the device's management interface says all paths are active. The iSCSI Initiator says the device is connected and active.
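
Session and path state from inside the guest also look healthy; roughly what I check:

# Sessions and TCP connections as the initiator sees them.
Get-IscsiSession | Format-Table TargetNodeAddress, IsConnected, IsPersistent -AutoSize
Get-IscsiConnection | Format-Table InitiatorAddress, TargetAddress, TargetPortNumber -AutoSize

# MPIO's view of the claimed disks and their paths.
mpclaim -s -d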

I don't particularly want the guest to connect directly to the iSCSI device; I'd rather use a VHD. But I need an initial connection to copy the files off the volume into a VHD.

Jumbo frames are not enabled on any device. Windows Firewall is not enabled. The fault domains use separate IP ranges (192.168.FD.x) from the Production network. There are no VLANs configured on the host/guest, as the switch itself is split. The NICs are a mixture of Broadcom and Intel, but I've tried disabling each one and it still drops the connection. It doesn't seem to be specific to the iSCSI device, as I have two with the same issue: an ME5024 and an SCv2020.

Could someone give me some pointers, please? (This question is also posted on SpiceWorks.)

Thanks

Gary


3 answers

  1. Gary Jones 6 Reputation points
    2023-11-20T16:27:20.0733333+00:00

    Hi,

    I thought I would post an update on this, as I have finally managed to get it working. The changes I made were:

    Production vSwitch (3-NIC team) - available to VMs only
    Fault Domain vSwitches (1 NIC each, one per FD) - available to VMs only

    The Hyper-V host has its own migration/management NICs and Fault Domain connections. All in all I have 11 cables plugged in across three network cards. I could probably drop two connections, as the Hyper-V host's FD NIC teams don't really need a standby adapter.

    On both the Hyper-V host and the VMs I ran the following PowerShell:

    # Disable VMQ, receive segment coalescing, RSS and large send offload.
    Disable-NetAdapterVmq *
    Disable-NetAdapterRsc *
    Disable-NetAdapterRss *
    Disable-NetAdapterLso *

    # Disable the UDP and IP checksum offloads (TCP offload left alone).
    Disable-NetAdapterChecksumOffload -Name * -UdpIPv4
    Disable-NetAdapterChecksumOffload -Name * -UdpIPv6
    Disable-NetAdapterChecksumOffload -Name * -IpIPv4

    # Register both arrays with MSDSM so MPIO claims them
    # (IDs copied verbatim, trailing space in "ME5 " included).
    New-MSDSMSupportedHW -VendorId "COMPELNT" -ProductId "Compellent vol"
    New-MSDSMSupportedHW -VendorId "Dell EMC" -ProductId "ME5 "
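
    To double-check that the changes took, I ran something along these lines afterwards (just the read-only counterparts of the commands above):

    # Confirm the offloads are reported as disabled on every adapter.
    Get-NetAdapterVmq
    Get-NetAdapterRsc
    Get-NetAdapterRss
    Get-NetAdapterLso
    Get-NetAdapterChecksumOffload

    # Confirm MSDSM now lists both arrays.
    Get-MSDSMSupportedHW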

    I think, if I were to put my finger on the one thing that resolved the issue, it was Disable-NetAdapterLso * within the Hyper-V guest. That seemed to really kick it into life. I can now connect the guest OSes directly to the SAN without having to create a CSV and a VHD and then copy the files into the virtual hard disk.

    Gary


  2. Limitless Technology 44,776 Reputation points
    2023-06-09T12:02:33.9366667+00:00

    Hello Gary,

    Thank you for reaching out with your question today.

    Based on the information provided, it seems you are experiencing intermittent connection issues with the iSCSI volume within the Hyper-V guest. Here are a few troubleshooting steps you can try:

    1. Check network connectivity: Ensure that the network connections between the Hyper-V host, virtual switches, and the iSCSI device are stable and without any packet loss. Verify the network configurations, such as IP addresses, subnet masks, and default gateways, to ensure they are correctly set up.
    2. Update NIC drivers and firmware: Ensure that you have the latest drivers and firmware installed for the network interface cards (NICs) on both the Hyper-V host and the guest OS. Outdated drivers or firmware can sometimes cause connectivity issues.
    3. Verify MPIO configuration: Double-check the MPIO (Multipath I/O) configuration on both the Hyper-V host and the guest OS. Make sure you have properly configured MPIO with the correct paths and load balancing settings. You may also want to review any MPIO-specific settings provided by your iSCSI device manufacturer.
    4. Adjust iSCSI timeout settings: The default timeout settings for iSCSI sessions might be too short for your environment. Try increasing the timeout values on the Hyper-V host and the guest OS to allow for longer connection retries; you can modify them through the iSCSI Initiator properties (see the sketch after this list).
    5. Monitor network traffic: Use network monitoring tools to observe network traffic between the Hyper-V host and the iSCSI device. Look for any patterns or anomalies that may indicate network issues, such as high latency, dropped packets, or congestion.
    6. Test with a different iSCSI device: If possible, try connecting to a different iSCSI device or target to see if the issue persists. This can help determine if the problem is specific to the current iSCSI devices or if it's a more general configuration issue.
    7. Consider contacting the iSCSI device support: If the issue persists after trying the above steps, it may be beneficial to reach out to the support team of your iSCSI device manufacturer. They can provide further guidance and assistance in troubleshooting the specific device and configuration.
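
    For steps 3 and 4, a rough PowerShell sketch of what to inspect and change follows. The timeout values shown are illustrative examples only, and the GUID is the standard storage-adapter class key under which the Microsoft initiator keeps its Parameters:

    # Step 3 - current MPIO defaults on host/guest.
    Get-MSDSMGlobalDefaultLoadBalancePolicy
    Get-MPIOSetting

    # Step 4 - the initiator's timeouts live under its Parameters registry key.
    # MaxRequestHoldTime (default 60 s) is how long I/O is held while the
    # initiator retries; LinkDownTime (default 15 s) drives MPIO failover.
    $class = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}'
    Get-ChildItem $class | Where-Object {
        (Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue).DriverDesc -eq 'Microsoft iSCSI Initiator'
    } | ForEach-Object {
        $params = Join-Path $_.PSPath 'Parameters'
        Set-ItemProperty -Path $params -Name MaxRequestHoldTime -Value 120  # example value
        Set-ItemProperty -Path $params -Name LinkDownTime -Value 30         # example value
    }
    # A reboot (or initiator service restart) is needed for these to apply.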

    Remember to back up any important data before making changes to your configuration or attempting further troubleshooting steps.

    I used AI provided by ChatGPT to formulate part of this response. I have verified that the information is accurate before sharing it with you.

    If the reply was helpful, please don't forget to upvote it or accept it as the answer.

    Best regards.


  3. Alex Bykovskyi 2,241 Reputation points
    2023-06-12T17:04:54.7533333+00:00

    Hey,

    I think your issue might be related to your network configuration. Have you configured the MTU (jumbo frames) on your adapters/team? A mismatch there can cause exactly this kind of issue; a quick way to check is sketched below. In addition, I prefer running iSCSI over separate NICs (not in a team) and using MPIO instead of teaming. You could try removing your team and testing the connection via a single interface to see how it behaves. As another option, you can try StarWind VSAN to test whether other iSCSI targets work in your configuration.
    https://www.starwindsoftware.com/starwind-virtual-san
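
    A quick MTU sanity check might look like this (the SAN address is a placeholder):

    # See whether jumbo frames are enabled on any adapter.
    Get-NetAdapterAdvancedProperty -RegistryKeyword '*JumboPacket' |
        Format-Table Name, DisplayName, DisplayValue -AutoSize

    # Probe the largest payload that passes unfragmented to the SAN:
    # 1472 = 1500-byte MTU minus 28 bytes of IP/ICMP headers; 8972 for 9000.
    ping 192.168.10.50 -f -l 1472
    ping 192.168.10.50 -f -l 8972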

    Cheers,  
    Alex Bykovskyi  
    StarWind Software  
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

