Hyper-V

Hyper-V Cluster MPIO iSCSI Installation and Configuration

A look at Hyper-V Cluster MPIO iSCSI Installation and Configuration including Windows requirements, iSCSI portal configuration, and connecting MPIO targets

When configuring storage for Windows Server Hyper-V clusters backed by a SAN, you always want multiple paths to your storage for both redundancy and performance. With iSCSI SAN storage, however, you do not want to rely on link aggregation technologies such as LACP switch LAGs or Switch Embedded Teaming. Instead, you want to use Multipath I/O, or MPIO in Microsoft terms, to connect your Hyper-V hosts to storage over multiple paths. First, let's see why MPIO is preferred over LACP and other link aggregation technologies, and which Windows components need to be installed in Windows Server so that Hyper-V hosts can make use of multiple paths to iSCSI storage. Let's look at Hyper-V Cluster MPIO iSCSI installation and configuration and why it is important.

Why Use MPIO instead of Link Aggregation?

Why do you use MPIO instead of link aggregation technologies like NIC teaming or LACP? Link aggregation technologies certainly have their place and would solve the redundancy problem, but they would not improve performance. This is because link aggregation improves throughput only when there are unique I/O flows originating from different sources. iSCSI flows do not appear unique to the aggregation technologies, so they gain no performance boost from link aggregation. MPIO, on the other hand, handles multi-pathing at the initiator and target level, that is, between client and server. Multiple initiators can connect to the same or different targets and benefit from the multipathing capabilities that MPIO brings to the mix.

For the reasons stated above, Hyper-V benefits from MPIO, and it is the way you want to configure your iSCSI connections when connecting the individual Hyper-V hosts to the iSCSI SAN shared storage. What is involved in configuring MPIO for Hyper-V? Let's look at what is required on the Windows Server side of things.

Hyper-V Cluster MPIO Installation and Configuration

Configuring MPIO in Windows Server requires just a few steps. They include:

  1. Adding the MPIO Windows Feature
  2. Reboot
  3. Configuring the Add support for iSCSI devices option for your particular SAN device
  4. Reboot
  5. Configure iSCSI connections with Enable multi-path
  6. Add all the connections for each target, enabling multi-path
  7. Check multi-path MPIO connections

Configuring MPIO Windows Feature

The first thing you need to do in a Windows Server installation is add the Multipath I/O Windows Feature. To do that, simply add the feature in Server Manager or in PowerShell with the Add-WindowsFeature cmdlet.
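As a sketch, the same step can be run from an elevated PowerShell prompt (Install-WindowsFeature is the current cmdlet name; Add-WindowsFeature is an alias for it):

```powershell
# Install the Multipath I/O feature; a restart is required afterwards
Install-WindowsFeature -Name Multipath-IO

# Verify the feature shows as Installed
Get-WindowsFeature -Name Multipath-IO
```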

Enable Multi-path IO Windows Feature for MPIO

After the installation of Multipath I/O, you will be prompted to restart Windows to finalize the installation. After restarting, you are ready to enable the Add support for iSCSI devices option for your selected SAN device. To do this, after the server reboot, launch the MPIO configuration utility by typing mpiocpl in the Run dialog or at a command prompt.

After launching the utility, navigate to the Discover Multi-Paths tab, select your SPC-3 compliant device, and check the Add support for iSCSI devices box. Afterwards, click the Add button. You will be prompted for a reboot.
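If you prefer to skip the mpiocpl GUI, the MPIO PowerShell module exposes an equivalent for claiming iSCSI devices (a sketch; a reboot is still required for the claim to take effect):

```powershell
# Have the Microsoft DSM automatically claim iSCSI-attached devices for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# List hardware IDs currently available to / claimed by MPIO
Get-MPIOAvailableHW
```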

Add support for iSCSI devices to configure MPIO in Hyper-V

After this reboot, you are ready to start adding your iSCSI connections with multi-path enabled.

Adding Hyper-V iSCSI Connections with MPIO Multi-path

To begin adding target portals and then the targets themselves, launch the iSCSI Initiator Properties configuration utility. This can be done by typing iscsicpl in the Run dialog or at a command prompt. Navigate to the Discovery tab and then click the Discover Portal button.

Discover ports in the iSCSI initiator properties

Enter your first iSCSI portal address, then click the Advanced button.

Discover Target Portal in configuring MPIO in Hyper-V for iSCSI

Under the Advanced Settings, you can set a specific Local adapter and Initiator IP.

Discover target portal advanced settings MPIO configuration

Do this for each of your target portals. As you can see below, I have added the different target portal IPs, each using the specific network adapter that aligns with that target portal's network address/subnet.
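Portal discovery can also be scripted with the iSCSI PowerShell module. The addresses below are placeholders; substitute your own target portal and initiator IPs:

```powershell
# Register each target portal, binding it to the matching initiator NIC IP
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10" -InitiatorPortalAddress "10.10.10.21"
New-IscsiTargetPortal -TargetPortalAddress "10.10.20.10" -InitiatorPortalAddress "10.10.20.21"

# Confirm both portals are registered
Get-IscsiTargetPortal
```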

After adding both target ports for MPIO configuration in Hyper-V

After adding the portals, click the Targets tab. You will see the discovered targets, most likely in the Inactive status. Click the Connect button.

Connecting the iSCSI discovered targets

On the Connect to Target dialog box, check the Enable multi-path checkbox and then click the Advanced button.

Connecting the first target with multi-path and advanced settings for MPIO in Hyper-V

Here is where the actual magic of the MPIO connections comes into play. Below, I am setting the Local adapter, then the Initiator IP and Target portal IP. The end goal is to set up one connection for each portal/adapter IP pair.

Setting the Initiator Initiator IP and Target Portal IP for first MPIO connection in Hyper-V iSCSI

Click the Connect button again on the same discovered target. This time, I use the second Initiator IP and Target portal IP for the second connection to the same target.

Setting the Initiator Initiator IP and Target Portal IP for first MPIO connection in Hyper-V iSCSI for second connection
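The same two connections can be made with Connect-IscsiTarget, once per path. The IQN and IP addresses below are placeholders for your own values:

```powershell
# First session: path over the first storage network
Connect-IscsiTarget -NodeAddress "iqn.2005-10.org.example:target1" `
    -IsMultipathEnabled $true `
    -InitiatorPortalAddress "10.10.10.21" -TargetPortalAddress "10.10.10.10"

# Second session to the SAME target: path over the second storage network
Connect-IscsiTarget -NodeAddress "iqn.2005-10.org.example:target1" `
    -IsMultipathEnabled $true `
    -InitiatorPortalAddress "10.10.20.21" -TargetPortalAddress "10.10.20.10"
```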

Now we see the Connected status. Click the Properties button to verify the MPIO status.

First Hyper-V MPIO iSCSI connection

Notice you have two identifiers. Click each one and you will see the respective Target portal group tag.

Verifying the first portal group ID for MPIO in Hyper-V iSCSI

Click the second Identifier and notice that its Target portal group tag is the second group tag.

Verifying the second portal group ID for MPIO in Hyper-V iSCSI

Click the Portal Groups tab and you should see both Associated network portals.

Verifying portal group connections

For each Identifier, you can configure Multiple Connected Session (MCS) settings by clicking the MCS button.

Configuring Multiple Connected Session

Note the MCS policy options that control the multi-pathing behavior. The options are:

  • Fail Over Only
  • Round Robin
  • Round Robin With Subset
  • Least Queue Depth
  • Weighted Paths
Configuring the MCS policy for Hyper-V MPIO connections
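The load-balancing behavior can also be set globally for the Microsoft DSM from PowerShell. The policy codes are abbreviations of the options listed above (FOO = Fail Over Only, RR = Round Robin, LQD = Least Queue Depth, WP = Weighted Paths):

```powershell
# Set Round Robin as the default MPIO load-balancing policy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Confirm the current global default
Get-MSDSMGlobalDefaultLoadBalancePolicy
```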

You will want to complete the above configuration on every Hyper-V host in the Windows Server Hyper-V Failover Cluster to ensure you have multi-path connections to shared storage.
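Since every cluster node needs an identical configuration, the PowerShell steps can be pushed to all hosts at once. A sketch, with hypothetical node names:

```powershell
# Hypothetical Hyper-V cluster node names; run the iSCSI claim step on each host
$nodes = "HV-HOST1", "HV-HOST2", "HV-HOST3"
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Enable-MSDSMAutomaticClaim -BusType iSCSI
}
```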

Verifying Hyper-V iSCSI MPIO Multi-path Connections

There is a powerful little built-in command-line utility that makes it easy to see the state of your MPIO connections in Hyper-V: mpclaim. Below, you can see your MPIO-enabled devices with the command:

  • mpclaim -s -d
Using mpclaim to check MPIO devices

To look at the multipath information for a specific device use:

  • mpclaim -s -d <device ID>

You can see below that I have two active paths to the specific device ID queried. The state is Active/Optimized.

Using mpclaim to verify path ID state SCSI Address and Weight for specific MPIO device in Hyper-V
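Alongside mpclaim, the iSCSI PowerShell cmdlets offer a quick cross-check that each target has a session per initiator portal (a sketch; property names per the iSCSI module):

```powershell
# List every iSCSI session with the initiator portal it uses
Get-IscsiSession |
    Select-Object TargetNodeAddress, InitiatorPortalAddress, IsConnected
```

With two paths configured correctly, each target IQN should appear twice, once per initiator portal address.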

Takeaways

Hyper-V Cluster MPIO iSCSI installation and configuration involves a few steps, as shown above, but is relatively easy. After adding the Multipath I/O Windows Feature and enabling iSCSI multi-path support for your specific storage array, you can add your iSCSI targets and connect to them using the multiple portal IP addresses, enabling multi-path connectivity to your storage. This provides not only a redundant connection to your storage devices but also optimum performance for iSCSI traffic to the SAN storage.




Brandon Lee

Brandon Lee is the Senior Writer, Engineer and owner at Virtualizationhowto.com, and a 7-time VMware vExpert, with over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies as well as in various industries, He has extensive experience in various IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications, loves the outdoors and spending time with family. Also, he goes through the effort of testing and troubleshooting issues, so you don't have to.


One Comment

  1. The MCS (multiple connected sessions) area is not actually used in this setup at all. Changing the MPIO policy there will have no impact. There are no Multiple Connected Sessions per portal. In your setup you are using MPIO between multiple portals.

    You can set the policy per device by clicking on Devices rather than MCS.

    You can also set the global policy by using the powershell cmdlet (for example):
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO
    See: https://learn.microsoft.com/en-us/powershell/module/mpio/set-msdsmglobaldefaultloadbalancepolicy?view=windowsserver2016-ps
    Note that the commands are different between 2016 and 2022 so be sure to select the correct version of windows on the microsoft page above.

    I hope this helps,
    Will
