
Configure Software iSCSI Port Bindings

What is Software iSCSI port binding?

Software iSCSI port binding is the process of creating multiple paths between iSCSI adapters and an iSCSI storage target. By default, ESXi does not set up multipathing for iSCSI adapters, so all targets are accessible by only a single path. This is true regardless of whether teaming is configured for the NICs on the VMkernel port used for iSCSI. To ensure that your storage remains accessible in the event of a path failure, and to take advantage of load-balancing features, software iSCSI port binding is required.
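
A quick way to verify the result from the command line is sketched below (assuming ESXi 5.x esxcli; the software iSCSI adapter name vmhba33 is a hypothetical example):

  • esxcli iscsi networkportal list --adapter vmhba33   (shows the VMkernel ports bound to the adapter)
  • esxcli storage core path list   (confirms each target is now reachable over more than one path)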


With the software-based iSCSI implementation, you can use standard NICs to connect your host to a remote iSCSI target on the IP network. The software iSCSI adapter that is built into ESXi facilitates this connection by communicating with the physical NICs through the network stack.

Before you can use the software iSCSI adapter, you must

  • Set up networking
  • Activate the adapter
  • Configure parameters such as discovery addresses and CHAP

Set Up Networking

Software and dependent hardware iSCSI adapters depend on VMkernel networking. If you use the software or dependent hardware iSCSI adapters, you must configure connections for the traffic between the iSCSI component and the physical network adapters. Configuring the network connection involves creating a virtual VMkernel interface for each physical network adapter and associating the interface with an appropriate iSCSI adapter.

If you use a single vSphere standard switch to connect the VMkernel to multiple network adapters, change the port group policy so that it is compatible with the iSCSI network requirements.

By default, for each virtual adapter on the vSphere standard switch, all network adapters appear as active. You must override this port group policy so that each VMkernel interface maps to only one corresponding active NIC. For example:

  • vmk1 maps to vmnic1
  • vmk2 maps to vmnic2

Procedure

  • Create a vSphere standard switch that connects VMkernel with physical network adapters designated for iSCSI traffic. The number of VMkernel adapters must correspond to the number of physical adapters on the vSphere standard switch
  • Log in to the vSphere Client and select the host from the inventory panel.
  • Click the Configuration tab and click Networking
  • Select the vSphere standard switch that you use for iSCSI and click Properties.
  • On the Ports tab, select an iSCSI VMkernel adapter and click Edit.
  • Click the NIC Teaming tab and select Override switch failover order.


  • Designate only one physical adapter as active and move all remaining adapters to the Unused Adapters category. You will see a warning triangle against your iSCSI VMkernel port if you don't.
  • Repeat Step 4 through Step 6 for each iSCSI VMkernel interface on the vSphere standard switch.
  • Next, go to the switch properties, click Add and choose VMkernel


  • Type a name, e.g. VMkernel-iSCSI


  • Enter an IP address for this adapter


  • Click Finish and check the Summary page
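
For reference, the networking steps above can also be scripted with esxcli. This is only a sketch, assuming ESXi 5.x syntax; vSwitch1, iSCSI-1, vmnic1, vmk1 and the IP details are hypothetical values:

  • esxcli network vswitch standard add --vswitch-name vSwitch1
  • esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic1
  • esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name iSCSI-1
  • esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-1 --active-uplinks vmnic1
  • esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-1
  • esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.1.11 --netmask 255.255.255.0 --type static
  • Repeat the port group, failover override and VMkernel interface steps for each additional NIC (for example vmnic2/vmk2)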

Set Up the Software iSCSI Adapter

  • Within the Host View, click the Configuration tab > Storage Adapters
  • Click Add to add a Software iSCSI Adapter
  • Right-click the new Software iSCSI Adapter and select Properties


  • Enable the adapter if it is not already
  • Open the Network Configuration tab
  • Add the new port group(s) associated with the iSCSI network


  • Click the Dynamic Discovery tab


  • Add the IP addresses of the iSCSI targets
  • Click Static Discovery and check the details here


  • Click Close
  • Rescan the attached disks
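
The same adapter setup can also be sketched with esxcli (assuming ESXi 5.x; vmhba33, vmk1/vmk2 and the target address are hypothetical):

  • esxcli iscsi software set --enabled true
  • esxcli iscsi adapter list   (note the name of the software adapter, e.g. vmhba33)
  • esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
  • esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
  • esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 192.168.1.100
  • esxcli storage core adapter rescan --adapter vmhba33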

What if you have multiple adapters?

  • If your host has more than one physical network adapter for software and dependent hardware iSCSI, use the adapters for multipathing.
  • You can connect the software iSCSI adapter with any physical NICs available on your host. The dependent iSCSI adapters must be connected only with their own physical NICs.
  • Physical NICs must be on the same subnet as the iSCSI storage system they connect to.

The iSCSI adapter and physical NIC connect through a virtual VMkernel adapter, also called virtual network adapter or VMkernel port. You create a VMkernel adapter (vmk) on a vSphere switch (vSwitch) using 1:1 mapping between each virtual and physical network adapter.

One way to achieve the 1:1 mapping when you have multiple NICs, is to designate a separate vSphere switch for each virtual-to-physical adapter pair. The following examples show configurations that use vSphere standard switches, but you can use distributed switches as well.


If you use separate vSphere switches, you must connect them to different IP subnets. Otherwise, VMkernel adapters might experience connectivity problems and the host will fail to discover iSCSI LUNs.

An alternative is to add all NICs and VMkernel adapters to a single vSphere standard switch. In this case, you must override the default network setup and make sure that each VMkernel adapter maps to only one corresponding active physical adapter.


General Information on iSCSI Adapters

http://www.electricmonk.org.uk/2012/04/18/using-esxi-with-iscsi-sans/

Change a Multipath Policy


Changing Path Policies

You can change path policies with

  • esxcli
  • vicfg-mpath

What Path Policies are there?

  • Most Recently Used (MRU)

Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESXi/ESX host switches to an alternative path and continues to use the new path while it is available. This is the default policy for Logical Unit Numbers (LUNs) presented from an Active/Passive array. ESXi/ESX does not revert to the previous path if, or when, it returns; it remains on the working path until that path fails for any reason.

Note: The preferred flag, while sometimes visible, is not applicable to the MRU pathing policy and can be disregarded

  • Fixed (Fixed)

Uses the designated preferred path flag, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESXi/ESX host cannot use the preferred path or it becomes unavailable, the ESXi/ESX host selects an alternative available path. The host automatically returns to the previously-defined preferred path as soon as it becomes available again. This is the default policy for LUNs presented from an Active/Active storage array.

  • Round Robin (RR)

Uses an automatic path selection rotating through all available paths, enabling the distribution of load across the configured paths. For Active/Passive storage arrays, only the paths to the active controller will be used in the Round Robin policy. For Active/Active storage arrays, all paths will be used in the Round Robin policy.

Note: This policy is not currently supported for Logical Units that are part of a Microsoft Cluster Service (MSCS) virtual machine.

  • Fixed path with Array Preference

The VMW_PSP_FIXED_AP policy was introduced in ESXi/ESX 4.1. It works for both Active/Active and Active/Passive storage arrays that support Asymmetric Logical Unit Access (ALUA). This policy queries the storage array for the preferred path based on the array’s preference. If no preferred path is specified by the user, the storage array selects the preferred path based on specific criteria.

Note: The VMW_PSP_FIXED_AP policy has been removed from ESXi 5.0. For ALUA arrays in ESXi 5.0, the MRU Path Selection Policy (PSP) is normally selected but some storage arrays need to use Fixed. To check which PSP is recommended for your storage array, see the Storage/SAN section in the VMware Compatibility Guide or contact your storage vendor.

Notes:

  • These pathing policies apply to VMware’s Native Multipathing (NMP) Path Selection Plug-ins (PSP). Third-party PSPs have their own restrictions.
  • Round Robin is not supported on all storage arrays. Please check with your array documentation or storage vendor to verify that Round Robin is supported and/or recommended for your array and configuration. Switching to an unsupported or undesirable pathing policy can result in connectivity issues to the LUNs (in a worst-case scenario, this can cause an outage).

Changing Path Policies with ESXCLI

  • Ensure your device is claimed by the NMP plugin. Only NMP devices allow you to change the path policy.
  • esxcli storage nmp device list


  • Retrieve the list of path selection policies on the system to see which values are valid for the --psp option when you set the path policy.
  • esxcli storage core plugin registration list


  • Set the path policy using esxcli.
  • esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR


(Optional) If you specified the VMW_PSP_FIXED policy, you must make sure the preferred path is set correctly.

  • Check which path is the preferred path for a device.
  • esxcli storage nmp psp fixed deviceconfig get --device naa.xxx
  • If necessary, change the preferred path. For example, set the preferred path to vmhba32:C0:T0:L0.
  • esxcli storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba32:C0:T0:L0


  • Run the command with --default to clear the preferred path selection.
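
Putting that together, clearing an explicitly set preferred path would look something like this (hypothetical device ID):

  • esxcli storage nmp psp fixed deviceconfig set --device naa.xxx --default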

Perform command line configuration of multipathing options


Multipathing Considerations

Specific considerations apply when you manage storage multipathing plug-ins and claim rules. The following considerations help you with multipathing

  • If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
  • When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched. If no match occurs, NMP selects a default SATP for the device.
  • If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device. The device is claimed by the default SATP based on the device’s transport type.
  • The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if there is no active/optimized path. This path is used until a better path is available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
  • If you enable VMW_PSP_FIXED with VMW_SATP_ALUA, the host initially makes an arbitrary selection of the preferred path, regardless of whether the ALUA state is reported as optimized or unoptimized. As a result, VMware does not recommend enabling VMW_PSP_FIXED when VMW_SATP_ALUA is used for an ALUA-compliant storage array. The exception is when you assign the preferred path to one of the redundant storage processor (SP) nodes within an active-active storage array; in that case the ALUA state is irrelevant.
  • By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices.
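
To review the SATP claim rules that the considerations above refer to, you can list them on the host, for example:

  • esxcli storage nmp satp rule list
  • esxcli storage nmp satp rule list --satp VMW_SATP_ALUA   (rules for a single SATP)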

What can we use to configure Multipath Options?

  • vCLI
  • vMA
  • PuTTY (SSH) into the host's ESXi Shell

What we can view and adjust

  • You can display all multipathing plugins available on your host
  • You can list any third-party MPPs, as well as your host's PSPs and SATPs, and review the paths they claim
  • You can also define new paths and specify which multipathing plugin should claim the path
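
For example, the registered plugins and the paths they have claimed can be displayed with:

  • esxcli storage core plugin list
  • esxcli storage core path list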

The ESXCLI Commands

Click the link to take you to the vSphere 5 Documentation Center for each command

These are the two commands you need to use to configure multipathing: esxcli storage nmp and esxcli storage core.

esxcli storage nmp psp Namespaces


Display NMP PSPs

  • esxcli storage nmp psp list

This command lists all the PSPs controlled by the VMware NMP.


More complicated commands with esxcli storage nmp psp namespace

  • esxcli storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba3:C0:T5:L3

The command sets the preferred path to vmhba3:C0:T5:L3. Run the command with --default to clear the preferred path selection.

esxcli storage nmp satp Namespaces


Display SATPs for the Host

  • esxcli storage nmp satp list

For each SATP, the output displays information that shows the type of storage array or system this SATP supports and the default PSP for any LUNs using this SATP. Placeholder (plugin not loaded) in the Description column indicates that the SATP is not loaded.


More complicated commands with esxcli storage nmp satp namespaces

  • esxcli storage nmp satp rule add -V NewVend -M NewMod -s VMW_SATP_INV

The command assigns the VMW_SATP_INV plug-in to manage storage arrays with vendor string NewVend and model string NewMod.
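
If the rule needs to be removed later, the matching rule remove command takes the same options (a sketch):

  • esxcli storage nmp satp rule remove -V NewVend -M NewMod -s VMW_SATP_INV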

esxcli storage nmp device Namespaces


Display NMP Storage Devices

  • esxcli storage nmp device list

This command lists all storage devices controlled by the VMware NMP and displays SATP and PSP information associated with each device.


More complicated commands with esxcli storage nmp device namespaces

  • esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_FIXED

This command sets the path policy for the specified device to VMW_PSP_FIXED.

esxcli storage nmp path Namespaces


Display NMP Paths

  • esxcli storage nmp path list

This command lists all the paths controlled by the VMware NMP and displays SATP and PSP information associated with each device.


More complicated commands with esxcli storage nmp path namespaces

There is really only the list command associated with this namespace.
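
The list command does take a device filter, which is useful when a LUN has many paths (hypothetical device ID):

  • esxcli storage nmp path list --device naa.xxx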

esxcli storage core Command Namespaces


esxcli storage core adapter Command Namespaces


esxcli storage core device Command Namespaces


esxcli storage core path Command Namespaces


esxcli storage core plugin Command Namespaces


esxcli storage core claiming Command Namespaces


The esxcli storage core claiming namespace includes a number of troubleshooting commands. These commands are not persistent and are useful only to developers who are writing PSA plugins or troubleshooting a system. If I/O is active on the path, unclaim and reclaim actions fail.

The help for esxcli storage core claiming includes the autoclaim command. Do not use this command unless instructed to do so by VMware support staff

esxcli storage core claimrule Command Namespaces


The PSA uses claim rules to determine which multipathing module should claim the paths to a particular device and to manage the device. esxcli storage core claimrule manages claim rules.

Claim rule modification commands do not operate on the VMkernel directly. Instead they operate on the configuration file by adding and removing rules

To change the current claim rules in the VMkernel:

1. Run one or more of the esxcli storage core claimrule modification commands (add, remove, or move).
2. Run esxcli storage core claimrule load to replace the current rules in the VMkernel with the modified rules from the configuration file.

Claim rules are numbered as follows.

  • Rules 0–100 are reserved for internal use by VMware.
  • Rules 101–65435 are available for general use. Any third-party multipathing plugins installed on your system use claim rules in this range. By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not remove this rule unless you want to unmask these devices.
  • Rules 65436–65535 are reserved for internal use by VMware.

When claiming a path, the PSA runs through the rules starting from the lowest number and determines whether a path matches the claim rule specification. If the PSA finds a match, it gives the path to the corresponding plugin. This is worth noting because a given path might match several claim rules.

The following examples illustrate adding claim rules.  

  • Add rule 321, which claims the path on adapter vmhba0, channel 0, target 0, LUN 0 for the NMP plugin.
  • esxcli storage core claimrule add -r 321 -t location -A vmhba0 -C 0 -T 0 -L 0 -P NMP
  • Add rule 429, which claims all paths provided by an adapter with the mptscsi driver for the MASK_PATH plugin.
  • esxcli storage core claimrule add -r 429 -t driver -D mptscsi -P MASK_PATH
  • Add rule 914, which claims all paths with vendor string VMWARE and model string Virtual for the NMP plugin.
  • esxcli storage core claimrule add -r 914 -t vendor -V VMWARE -M Virtual -P NMP
  • Add rule 1015, which claims all paths provided by FC adapters for the NMP plugin.
  • esxcli storage core claimrule add -r 1015 -t transport -R fc -P NMP

Example: Masking a LUN

In this example, you mask the LUN 20 on targets T1 and T2 accessed through storage adapters vmhba2 and vmhba3.

  • esxcli storage core claimrule list
  • esxcli storage core claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
  • esxcli storage core claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 1 -L 20
  • esxcli storage core claimrule add -P MASK_PATH -r 111 -t location -A vmhba2 -C 0 -T 2 -L 20
  • esxcli storage core claimrule add -P MASK_PATH -r 112 -t location -A vmhba3 -C 0 -T 2 -L 20
  • esxcli storage core claimrule load
  • esxcli storage core claimrule list
  • esxcli storage core claiming unclaim -t location -A vmhba2
  • esxcli storage core claiming unclaim -t location -A vmhba3
  • esxcli storage core claimrule run

Install and Configure PSA Plugins


Methods of Installing PSA Plugins

  • Using vCenter Update Manager
  • Using vCLI (use the esxcli software vib install command; see the sketch after this list)
  • Using Vendor recommended Installation Guides
  • Using EMC’s Powerpath Installer
  • Using Dell's EqualLogic setup.pl script for their multipathing extension module
  • Using vihostupdate --server esxihost --install --bundle=Powerpath.5.4.SP2.zip
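
For the esxcli software vib install route, the command typically looks something like this (a sketch; the datastore path and bundle name are hypothetical):

  • esxcli software vib install --depot /vmfs/volumes/datastore1/PowerPath-offline-bundle.zip
  • Reboot the host if the plugin requires it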

Checking Registration and Adding a Plugin

  • esxcli storage core plugin registration list will check if it is registered
  • esxcli storage core plugin registration add -m class_satp_va -N SATP -P class_satp_VA
  • Reboot the host(s) in order for the new PSP to take effect

Changing the VMW_SATP_CX default PSP from VMW_PSP_MRU to VMW_PSP_RR

  • esxcli storage nmp satp set -s VMW_SATP_CX -P VMW_PSP_RR
  • Reboot the host(s) in order for the new PSP to take effect

VMware Document

vSphere Command-Line Interface Concepts and Examples ESXi 5.0

Understanding different multipathing Policy Functionalities


Types of Multipathing explained

  • VMW_PSP_FIXED
  • The host uses the designated preferred path, if it has been configured. Otherwise, the host selects the first working path discovered at system boot time.
  • If you want the host to use a particular preferred path, specify it through the vSphere Client or by using esxcli storage nmp psp fixed deviceconfig set.
  • The default policy for active-active storage devices is VMW_PSP_FIXED; however, VMware does not recommend using VMW_PSP_FIXED for devices that have the VMW_SATP_ALUA storage array type policy assigned to them.
  • Fixed is the default policy for most active-active storage devices.
  • If the host uses a default preferred path and the path’s status turns to Dead, a new path is selected as preferred. However, if you explicitly designate the preferred path, it will remain preferred even when it becomes inaccessible
  • VMW_PSP_MRU
  • The host selects the path that it used most recently.
  • When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again.
  • There is no preferred path setting with the MRU policy.
  • MRU is the default policy for active‐passive storage devices.
  • VMW_PSP_RR
  • The host uses an automatic path selection algorithm that rotates through all active paths when connecting to active‐passive arrays, or through all available paths when connecting to active‐active arrays.
  • Automatic path selection implements load balancing across the physical paths available to your host.
  • Load balancing is the process of spreading I/O requests across the paths. The goal is to optimize throughput performance such as I/O per second, megabytes per second, or response times.
  • VMW_PSP_RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs (see the command-line sketch after this list).
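
For the Round Robin policy, the PSP also has its own deviceconfig namespace under esxcli storage nmp psp that lets you view and tune how the path rotation behaves. A sketch, assuming ESXi 5.x and a hypothetical device ID; check your storage vendor's recommendation before changing the IOPS value:

  • esxcli storage nmp psp roundrobin deviceconfig get --device naa.xxx
  • esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxx --type iops --iops 1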

View Datastore Paths

Use the vSphere Client to review the paths that connect to storage devices the datastores are deployed on.

  • Log in to the vSphere Client and select a host from the inventory panel.
  • Click the Configuration tab and click Storage in the Hardware panel.
  • Click Datastores under View.
  • From the list of configured datastores, select the datastore whose paths you want to view, and click Properties.
  • Under Extents, select the storage device whose paths you want to view and click Manage Paths.
  • In the Paths panel, select the path to view
  • The panel underneath displays the path's name. The name includes parameters describing the path: adapter ID, target ID, and device ID.
  • (Optional) To extract the path’s parameters, right-click the path and select Copy path to clipboard.

View Storage Device Paths

Use the vSphere Client to view which SATP and PSP the host uses for a specific storage device and the status of all available paths for this storage device.

  • Log in to the vSphere Client and select a server from the inventory panel.
  • Click the Configuration tab and click Storage in the Hardware panel.
  • Click Devices under View.
  • Select the storage device whose paths you want to view and click Manage Paths.
  • In the Paths panel, select the path to view
  • The panel underneath displays the path's name. The name includes parameters describing the path: adapter ID, target ID, and device ID.
  • (Optional) To extract the path’s parameters, right-click the path and select Copy path to clipboard.