Migrate a vSS network to a Hybrid or vDS Solution


Hybrid vSS/vDS/Nexus Virtual Switch Environments

Each ESX host can concurrently operate a mixture of virtual switches as follows:

  • One or more vNetwork Standard Switches
  • One or more vNetwork Distributed Switches
  • A maximum of one Cisco Nexus 1000V (Virtual Ethernet Module, or VEM).

Note that physical NICs (vmnics) cannot be shared between virtual switches (i.e. each vmnic can be assigned to only one switch at any one time).

Examples of Distributed switch configurations

Single vDS

Migrating the entire vSS environment to a single vDS represents the simplest deployment and administration model, as shown in the picture below. All VM networking plus VMkernel and service console ports are migrated to the vDS. The NIC teaming policies configured on the DV Port Groups can isolate and direct traffic down the appropriate dvUplinks (which map to individual vmnics on each host).

[Image: single vDS configuration]

Hybrid vDS and vSS

The picture below shows an example environment where the VM networking is migrated to a vDS, but the Service Console and VMkernel ports remain on a vSS. This scenario might be preferred in environments where the NIC teaming policies for the VMs must be kept separate from those of the VMkernel and Service Console ports. For example, in the picture, the vmnics and VM networks on vSS-1 could be migrated to vDS-0 while vSS-0 remains intact and in place. In this scenario, VMs can still take advantage of Network vMotion, as they are located on DV Port Groups on the vDS.

[Image: hybrid vDS and vSS configuration]

Multiple vDS

Hosts can be added to multiple vDSs, as shown below (two are shown, but more could be added, with or without vmnic-to-dvUplink assignments). This configuration might be used to:

  • Retain traffic separation when attached to access ports on physical switches (i.e. no VLAN tagging and switchports are assigned to a single VLAN).
  • Retain switch separation but use advanced vDS features for all ports and traffic types.

[Image: multiple vDS configuration]

Planning the Migration to vDS

Migration from a vNetwork Standard Switch only environment to one featuring one or more vNetwork Distributed Switches can be accomplished in either of two ways:

  • Using only the vDS User Interface (vDS UI) — Hosts are migrated one by one by following the New vNetwork Distributed Switch process under the Home > Inventory > Network view of the Datacenter from the vSphere Client.
  • Using a combination of the vDS UI and Host Profiles — The first host is migrated to vDS and the remaining hosts are migrated to vDS using a Host Profile of the first host.

High Level Overview

The steps involved in a vDS UI migration of an existing Standard Switch environment to a vDS are as follows:

  1. Create the vDS (without any associated hosts).
  2. Create Distributed Virtual Port Groups on the vDS to match the existing or required environment.
  3. Add a host to the vDS and migrate its vmnics to dvUplinks and its Virtual Ports to DV Port Groups.
  4. Repeat Step 3 for the remaining hosts.

[Image: vSS to vDS migration steps]

Create a vSphere Distributed Switch

If you have decided that you need to perform a vSS to vDS migration, a vDS needs to be created first. If you prefer the command line, a PowerCLI equivalent is sketched after the steps.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Networking (Ctrl+Shift+N)
  3. Highlight the datacenter in which the vDS will be created.
  4. With the Summary tab selected, under Commands, click New vSphere Distributed Switch
  5. On the Switch Version screen, select the appropriate vDS version, e.g. 5.0.0, and click Next.
  6. On the General Properties screen, enter a name and select the number of uplink ports, click Next.
  7. On the Add Hosts and Physical Adapters screen, select Add later, click Next.
  8. On the Completion screen, ensure that Automatically create a default port group is selected, click Finish.
  9. Verify that the vDS and associated port group were created successfully.
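
If you prefer to script this, the same switch can be created with PowerCLI. Below is a minimal sketch, assuming PowerCLI 5.1 or later (which introduced the distributed switch cmdlets); the vCenter name, datacenter name, switch name, and uplink count are placeholders for your own values:

# Connect to vCenter first
Connect-VIServer -Server vcenter.example.local

# Create a version 5.0.0 vDS with four uplink ports in the chosen datacenter
$dc = Get-Datacenter -Name "Datacenter1"
$vds = New-VDSwitch -Name "dvSwitch0" -Location $dc -NumUplinkPorts 4 -Version "5.0.0"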

Create DV Port Groups

You now need to create vDS port groups. Port groups should be created for each of the traffic types in your environment, such as VM traffic, iSCSI, FT, Management, and vMotion, as required. A PowerCLI sketch follows the steps.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Networking (Ctrl+Shift+N).
  3. Highlight the vDS created in the previous section.
  4. Under Commands, click New Port Group.
  5. On the Properties screen, enter an appropriate Name (e.g. IPStorage), the Number of Ports, and the VLAN type and ID (if required), then click Next. Note: if the port group is associated with a VLAN, it is recommended to include the VLAN ID in the port group name.
  6. On the completion screen, verify the port group settings, click Finish.
  7. Repeat steps for all required port groups.
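
The same port groups can be created with PowerCLI. A sketch, reusing the $vds object from the previous step; the names, VLAN IDs, and port count are example values:

# One distributed port group per traffic type, with the VLAN ID in the name
New-VDPortgroup -VDSwitch $vds -Name "IPStorage-VLAN20" -VlanId 20 -NumPorts 128
New-VDPortgroup -VDSwitch $vds -Name "vMotion-VLAN30" -VlanId 30
New-VDPortgroup -VDSwitch $vds -Name "VMTraffic-VLAN40" -VlanId 40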

Add ESXi Host(s) to vSphere Distributed Switch

After successfully creating a vDS and configuring the required port groups, we now need to add an ESXi host to the vDS. A PowerCLI equivalent is sketched after the steps.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Networking (Ctrl+Shift+N).
  3. Highlight the vDS created previously.
  4. Under Commands, click Add Host.
  5. On the Select Hosts and Physical Adapters screen, select the appropriate host(s) and any physical adapters (uplinks) which are not currently in use on your vSS, then click Next. Note: depending on the number of physical NICs in your host, it is a good idea to leave at least one connected to the vSS until the migration is complete. This is particularly relevant if your vCenter Server is a VM.
  6. On the Network Connectivity screen, migrate virtual NICs as required, selecting the associated destination port group on the vDS, click Next.
  7. On the Virtual Machine Networking screen, click Migrate virtual machine networking. Select the VMs to be migrated and the appropriate destination port group(s), then click Next.
  8. On the Completion screen, verify your settings, click Finish.
  9. Ensure that the task completes successfully.
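
A PowerCLI sketch of this step; the host name is a placeholder, and vmnic3 is assumed to be an adapter not currently in use on the vSS:

# Add the host to the vDS
$vmhost = Get-VMHost -Name "esxi01.example.local"
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Attach a spare physical adapter as a dvUplink
$nic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic3"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic -Confirm:$false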

Migrate Existing Virtual Adapters (VMkernel ports)

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Select the appropriate ESXi host, click Configuration > Networking (Hardware) > vSphere Distributed Switch.
  4. Click Manage Virtual Adapters.
  5. On the Manage Virtual Adapters screen, click Add.
  6. On the Creation Type screen, select Migrate existing virtual adapters, click Next.
  7. On the Network Connectivity screen, select the appropriate virtual adapter(s) and destination port group(s), Click Next.
  8. On the Ready to Complete screen, verify the dvSwitch settings, click Finish.

Create New Virtual Adapters (VMkernel ports)

Perform the following steps to create new virtual adapters for any new port groups which were created previously. A PowerCLI sketch follows the steps.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Select the appropriate ESXi host, click Configuration > Networking (Hardware) > vSphere Distributed Switch.
  4. Click Manage Virtual Adapters.
  5. On the Manage Virtual Adapters screen, click Add.
  6. On the Creation Type screen, select New virtual adapter, click Next.
  7. On the Virtual Adapter Type screen, ensure that VMkernel is selected, click Next.
  8. On the Connection Settings screen, ensure that Select port group is selected. Click the dropdown and select the appropriate port group, e.g. vMotion. Select Use this virtual adapter for vMotion, then click Next.
  9. On the VMkernel – IP Connection Settings screen, ensure that Use the following IP settings is selected. Input IP settings appropriate for your environment, click Next.
  10. On the Completion screen, verify your settings, click Finish.
  11. Repeat for remaining virtual adapters, as required.
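
In PowerCLI, a new VMkernel adapter can be created in a single call. A sketch, assuming the cmdlet is pointed at the vDS created earlier; the port group name and IP settings are placeholders:

# New VMkernel port on the vDS port group, enabled for vMotion
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup "vMotion-VLAN30" `
    -IP 192.168.30.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true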

Migrate Remaining VMs

Follow the steps below to migrate any VMs which remain on your vSS; a PowerCLI sketch for bulk moves follows the steps.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Right-click the appropriate VM, click Edit Settings.
  4. With the Hardware tab selected, highlight the network adapter. Under Network Connection, click the dropdown associated with Network label. Select the appropriate port group, e.g. VMTraffic (dvSwitch). Click OK.
  5. Ensure the task completes successfully.
  6. Repeat for any remaining VMs.
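
Bulk VM migrations are much quicker with PowerCLI. A sketch; "VM Network" and the destination port group name are placeholders for your own source and target networks:

# Move every VM NIC still on the old vSS port group to the vDS port group
$dvpg = Get-VDPortgroup -VDSwitch $vds -Name "VMTraffic-VLAN40"
Get-VM | Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq "VM Network" } |
    Set-NetworkAdapter -Portgroup $dvpg -Confirm:$false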

Migrate Remaining Uplinks

It’s always a good idea to leave a physical adapter or two connected to the vSS until the end, especially when your vCenter Server is a VM, as migrating the management network can sometimes cause issues. Assuming all your VMs have been migrated at this point, perform the following steps to migrate any remaining physical adapters (uplinks) to the newly created vDS (a PowerCLI sketch follows the steps).

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Select the appropriate ESXi host, click Configuration > Networking (Hardware) > vSphere Distributed Switch.
  4. Click Manage Physical Adapters.
  5. Click Click to Add NIC within the DVUplinks port group.
  6. Select the appropriate physical adapter, click OK.
  7. Click Yes on the remove and reconnect screen.
  8. Click OK.
  9. Ensure that the task completes successfully.
  10. Repeat for any remaining physical adapters
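
The final uplink move can also be scripted. A sketch, assuming vmnic0 is the last adapter left on the vSS:

# Migrate the last uplink to the vDS once nothing on the vSS depends on it
$last = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $last -Confirm:$false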

Identify common virtual switch configurations


vSphere Standard Switch Architecture

You can create abstracted network devices called vSphere standard switches. A standard switch can:

  • Route traffic internally between virtual machines and link to external networks.
  • Combine the bandwidth of multiple network adapters and balance communications traffic among them.
  • Handle physical NIC failover.

Some further points on standard switches:

  • A standard switch has 120 logical ports by default. You can connect one network adapter of a virtual machine to each port, and each uplink adapter associated with the switch also uses one port.
  • Each logical port on the standard switch is a member of a single port group, and a switch can have one or more port groups assigned to it.
  • When two or more virtual machines are connected to the same standard switch, network traffic between them is routed locally. If an uplink adapter is attached to the standard switch, each virtual machine can access the external network that the adapter is connected to.
  • vSphere standard switch settings control switch-wide defaults for ports, which can be overridden by port group settings for each standard switch. You can edit standard switch properties, such as the uplink configuration and the number of available ports.

Standard Switch

[Image: vSphere standard switch architecture]

vSphere Distributed Switch Architecture

A vSphere distributed switch functions as a single switch across all associated hosts. This enables you to set network configurations that span all member hosts, and allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts.

Like a vSphere standard switch, each vSphere distributed switch is a network hub that virtual machines can use.

  • Enterprise Plus Licensed feature only
  • VMware vCenter owns the configuration of the distributed switch
  • Distributed switches can support up to 350 hosts
  • You configure a Distributed switch on vCenter rather than individually on each host
  • Provides support for Private VLANs
  • Enable networking statistics and policies to migrate with VMs during vMotion
  • A distributed switch can forward traffic internally between virtual machines or link to an external network by connecting to physical Ethernet adapters, also known as uplink adapters.
  • Each distributed switch can also have one or more distributed port groups assigned to it.
  • Distributed port groups group multiple ports under a common configuration and provide a stable anchor point for virtual machines connecting to labeled networks.
  • Each distributed port group is identified by a network label, which is unique to the current datacenter. A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional.
  • Network resource pools allow you to manage network traffic by type of network traffic.
  • In addition to vSphere distributed switches, vSphere 5 also provides support for third-party virtual switches.

[Image: vSphere distributed switch architecture]

TCP/IP Stack at the VMkernel Level

The VMware VMkernel TCP/IP networking stack provides networking support in multiple ways for each of the services it handles.

The VMkernel TCP/IP stack supports the following services, on both Standard and Distributed Virtual Switches:

  • iSCSI as a virtual machine datastore
  • iSCSI for the direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
  • NFS as a virtual machine datastore.
  • NFS for the direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
  • Migration with vMotion.
  • Fault Tolerance logging.
  • Port-binding for vMotion interfaces.
  • Provides networking information to dependent hardware iSCSI adapters.
  • If you have two or more physical NICs for iSCSI, you can create multiple paths for the software iSCSI by configuring iSCSI Multipathing.

Networking Policies

Policies set at the standard switch or distributed port group level apply to all of the port groups on the standard switch or to ports in the distributed port group. The exceptions are the configuration options that are overridden at the standard port group or distributed port level.

  • Load Balancing and Failover Policy
  • VLAN Policy
  • Security Policy
  • Traffic Shaping Policy
  • Resource Allocation Policy
  • Monitoring Policy
  • Port Blocking Policies
  • Manage Policies for Multiple Port Groups on a vSphere Distributed Switch

Networking Best Practices

  • Separate network services from one another to achieve greater security and better performance. Put a set of virtual machines on a separate physical NIC. This separation allows for a portion of the total networking workload to be shared evenly across multiple CPUs. The isolated virtual machines can then better serve traffic from a Web client, for example
  • Keep the vMotion connection on a separate network devoted to vMotion. When migration with vMotion occurs, the contents of the guest operating system’s memory are transmitted over the network. You can do this either by using VLANs to segment a single physical network or by using separate physical networks (the latter is preferable).
  • When using passthrough devices with a Linux kernel version 2.6.20 or earlier, avoid MSI and MSI-X modes because these modes have significant performance impact.
  • To physically separate network services and to dedicate a particular set of NICs to a specific network service, create a vSphere standard switch or vSphere distributed switch for each service. If this is not possible, separate network services on a single switch by attaching them to port groups with different VLAN IDs. In either case, confirm with your network administrator that the networks or VLANs you choose are isolated in the rest of your environment and that no routers connect them.
  • You can add and remove network adapters from a standard or distributed switch without affecting the virtual machines or the network service that is running behind that switch. If you remove all the running hardware, the virtual machines can still communicate among themselves. If you leave one network adapter intact, all the virtual machines can still connect with the physical network.
  • To protect your most sensitive virtual machines, deploy firewalls in virtual machines that route between virtual networks with uplinks to physical networks and pure virtual networks with no uplinks.
  • For best performance, use vmxnet3 virtual NICs.
  • Every physical network adapter connected to the same vSphere standard switch or vSphere distributed switch should also be connected to the same physical network.
  • Configure all VMkernel network adapters to the same MTU. When several VMkernel network adapters are connected to vSphere distributed switches but have different MTUs configured, you might experience network connectivity problems.

How Many NIC Ports should I use?

Whether you are purchasing new servers or trying to reuse existing servers, you need to determine how many NIC ports you want or need and what speed of NIC: 10Gb, 1Gb, fibre, etc. I would try to install as many NICs as possible and combine NIC ports across switches.

  • Redundancy

You want to be able to remove all single points of failure in your network. You can team NICs together to achieve redundancy and use Link Aggregation or EtherChannel on your physical switches to complement this.

  • Throughput

The speed of your NICs is extremely important, depending on the amount of network traffic you anticipate creating on your networks. NFS is a consideration, along with backup and replication traffic, let alone normal network traffic.

  • Flexibility

You can provision more (or fewer) NICs as demand for certain services increases or decreases.

NIC Considerations

  • Jumbo Frames
  • TOE (TCP Offload Engine)
  • Boot from SAN
  • iSCSI or Fibre
  • 1Gb or 10Gb Ethernet, or fibre

Data Plane and Control Planes

vSphere network switches can be broken into two logical sections: the data plane and the management plane.

  • The data plane implements the actual packet switching, filtering, tagging, etc.
  • The management plane is the control structure used to allow the operator to configure the data plane functionality.
  • With the vSphere Standard Switch (VSS), the data plane and management plane are each present on each standard switch. In this design, the administrator configures and maintains each VSS on an individual basis.

Virtual Standard Switch Control and Data Plane

[Image: standard switch control and data planes]

With the release of vSphere 4.0, VMware introduced the vSphere Distributed Switch (VDS). The VDS eases the management burden of per-host virtual switch configuration by treating the network as an aggregated resource. Individual host-level virtual switches are abstracted into a single large VDS that spans multiple hosts at the Datacenter level. In this design, the data plane remains local to each host, but the management plane is centralized, with vCenter Server acting as the control point for all configured VDS instances.

Virtual Distributed Switch Control and Data Plane

[Image: distributed switch control and data planes]

Limits

[Image: virtual switch configuration maximums]

VBScript to get Active Directory User Logon Information, Disable and Move


What does this VBScript do?

  • Checks all accounts to determine what needs to be disabled.
  • If LastLogonTimeStamp is Null and object is older than specified date, it is disabled and moved.
  • If account has been used, but not within duration specified, it is disabled and moved.
  • If account is already disabled it is left where it is.

Please adjust the variables according to your AD, copy the script into Notepad, save it with a .vbs extension, and run it.

  • ADVBScript_Script.vbs

'===========================================================================
' Checks all accounts to determine what needs to be disabled.
' If LastLogonTimeStamp is Null and object is older than specified date, it is disabled and moved.
' If account has been used, but not within duration specified, it is disabled and moved.
' If account is already disabled it is left where it is.
' Created 23/7/09 by Grant Brunton
'===========================================================================

'===========================================================================
' BEGIN USER VARIABLES
'===========================================================================

' * Change this to your domain *
'DSEroot="DC=domain,DC=local"

' Flag to enable the disabling and moving of unused accounts
' 1 - Will disable and move accounts
' 0 - Will create output log only
bDisable=0

' Number of days before an account is deemed inactive
' Accounts that haven't been logged in for this amount of days are selected
iLogonDays=30

' LDAP location of OUs to search for accounts
' LDAP location format eg: "OU=Users,OU=Test"
strSearchOU="OU=Users"

' Search depth to find users
' Use "OneLevel" for the specified OU only or "Subtree" to search all child OUs as well.
strSearchDepth="OneLevel"

' Location of new OU to move disabled user accounts to
' eg: "OU=Disabled Users,OU=Test"
strNewOU="OU=_Disabled"

' Log file path (include trailing \ )
' Use either full directory path or relational to script directory
strLogPath=".\logs\"

' Error log file name prefix (tab delimited text file. Name will be appended with date and .err extension)
strErrorLog="DisabledAccounts_"

' Output log file name prefix (tab delimited text file. Name will be appended with date and .log extension)
strOutputLog="DisabledAccounts_"

'===========================================================================
' END USER VARIABLES
'===========================================================================

'===========================================================================
' MAIN CODE BEGINS
'===========================================================================
sDate = Year(Now()) & Right("0" & Month(Now()), 2) & Right("0" & Day(Now()), 2)
Set oFSO=CreateObject("Scripting.FileSystemObject")
If Not oFSO.FolderExists(strLogPath) Then CreateFolder(strLogPath)
Set output=oFSO.CreateTextFile(strLogPath & strOutputLog & sDate & ".log")
Set errlog=oFSO.CreateTextFile(strLogPath & strErrorLog & sDate & ".err")
output.WriteLine "Sam Account Name" &vbTab& "LDAP Path" &vbTab& "Last Logon Date" &vbTab& "Date Created" &vbTab& "Home Directory"
errlog.WriteLine "Sam Account Name" &vbTab& "LDAP Path" &vbTab& "Problem" &vbTab& "Error"

Set rootDSE = GetObject("LDAP://rootDSE")
Set objConnection = CreateObject("ADODB.Connection")
objConnection.Open "Provider=ADsDSOObject;"
Set ObjCommand = CreateObject("ADODB.Command")
ObjCommand.ActiveConnection = objConnection
ObjCommand.Properties("Page Size") = 10
DSEroot=rootDSE.Get("DefaultNamingContext")

Set objNewOU = GetObject("LDAP://" & strNewOU & "," & DSEroot)
' Query the search OU for user objects, returning their adspath
ObjCommand.CommandText = "<LDAP://" & strSearchOU & "," & DSEroot & ">;(&(objectClass=User)(objectcategory=Person));adspath;" & strSearchDepth

' Debug: display the query that will be run
msgbox ObjCommand.CommandText

Set objRecordset = ObjCommand.Execute

On Error Resume Next

While Not objRecordset.EOF
    LastLogon = Null
    intLogonTime = Null

    Set objUser=GetObject(objRecordset.fields("adspath"))

    ' Only consider accounts older than the inactivity threshold
    If DateDiff("d",objUser.WhenCreated,Now) > iLogonDays Then
        Set objLogon=objUser.Get("lastlogontimestamp")
        If Err.Number <> 0 Then
            ' No lastLogonTimeStamp attribute - the account has never been used
            WriteError objUser, "Get LastLogon Failed"
            DisableAccount objUser, "Never"
        Else
            ' Convert the 64-bit FILETIME value (100ns intervals since 1/1/1601) to a date
            intLogonTime = objLogon.HighPart * (2^32) + objLogon.LowPart
            intLogonTime = intLogonTime / (60 * 10000000)
            intLogonTime = intLogonTime / 1440
            LastLogon=intLogonTime+#1/1/1601#

            If DateDiff("d",LastLogon,Now) > iLogonDays Then
                DisableAccount objUser, LastLogon
            End If
        End If
    End If
    WriteError objUser, "Unknown Error"
    objRecordset.MoveNext
Wend
‘===========================================================================
‘ MAIN CODE ENDS
‘===========================================================================

‘===========================================================================
‘ SUBROUTINES
‘===========================================================================
Sub CreateFolder( strPath )
    If Not oFSO.FolderExists( oFSO.GetParentFolderName(strPath) ) Then Call CreateFolder( oFSO.GetParentFolderName(strPath) )
    oFSO.CreateFolder( strPath )
End Sub

Sub DisableAccount( objUser, lastLogon )
    On Error Resume Next
    If bDisable <> 0 Then
        If objUser.accountdisabled=False Then
            objUser.accountdisabled=True
            objUser.SetInfo
            WriteError objUser, "Disable Account Failed"
            objNewOU.MoveHere objUser.adspath, "CN=" & objUser.CN
            WriteError objUser, "Account Move Failed"
        Else
            Err.Raise 1,,"Account already disabled. User not moved."
            WriteError objUser, "Disable Account Failed"
        End If
    End If
    output.WriteLine objUser.samaccountname &vbTab& objUser.adspath &vbTab& lastLogon &vbTab& objUser.whencreated &vbTab& objUser.homedirectory
End Sub

Sub WriteError( objUser, strProblem )
    If Err.Number <> 0 Then
        errlog.WriteLine objUser.samaccountname &vbTab& objUser.adspath &vbTab& strProblem &vbTab& Replace(Err.Description,vbCrlf,"")
        Err.Clear
    End If
End Sub

‘===========================================================================
‘ END SUBROUTINES
‘===========================================================================

Powershell Script to get Active Directory User Logon Information


To get the last logon Date/Time of Users in AD

$Domain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
$ADSearch = New-Object System.DirectoryServices.DirectorySearcher
$ADSearch.PageSize = 100
$ADSearch.SearchScope = "subtree"
$ADSearch.SearchRoot = "LDAP://$Domain"
$ADSearch.Filter = "(objectClass=user)"
# [void] suppresses the index that Add() would otherwise emit to the pipeline
[void]$ADSearch.PropertiesToLoad.Add("distinguishedName")
[void]$ADSearch.PropertiesToLoad.Add("sAMAccountName")
[void]$ADSearch.PropertiesToLoad.Add("lastLogonTimeStamp")
$userObjects = $ADSearch.FindAll()

foreach ($user in $userObjects)
{
    $dn = $user.Properties.Item("distinguishedName")
    $sam = $user.Properties.Item("sAMAccountName")
    $logon = $user.Properties.Item("lastLogonTimeStamp")
    if ($logon.Count -eq 0)
    {
        $lastLogon = "Never"
    }
    else
    {
        # lastLogonTimeStamp is a FILETIME (ticks since 1/1/1601); casting to
        # DateTime counts ticks from year 1, so add 1600 years to correct it
        $lastLogon = [DateTime]$logon[0]
        $lastLogon = $lastLogon.AddYears(1600)
    }

    """$dn"",$sam,$lastLogon"
}

Script explained by David Hoelzer

Many Thanks for this excellent explanation

Scripting Video

 

Configure SNMP on VMware

What is SNMP?

Simple Network Management Protocol (SNMP) is an "Internet-standard protocol for managing devices on IP networks". Devices that typically support SNMP include routers, switches, servers, workstations, printers, and modem racks. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.

SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried (and sometimes set) by managing applications.

[Image: SNMP architecture]

SNMP Agents

vCenter Server and ESXi systems include different SNMP agents.

  • vCenter Server SNMP agent

The SNMP agent included with vCenter Server can send traps when the vCenter Server system is started or when an alarm is triggered on vCenter Server. The vCenter Server SNMP agent functions only as a trap emitter and does not support other SNMP operations (for example, GET).

You can manage the vCenter Server agent with the vSphere Client or the vSphere Web Client, but not with vCLI commands.

  • Host-based embedded SNMP agent

ESXi 4.0 and later includes an SNMP agent embedded in the host daemon (hostd) that can send traps and receive polling requests such as GET requests.
You can manage SNMP on ESXi hosts with the vicfg-snmp vCLI command or, in ESXi 5.1, with esxcli commands.

  • Net-SNMP-based agent

Versions of ESX released before ESX/ESXi 4.0 include a Net-SNMP-based agent. You can continue to use this Net-SNMP-based agent in ESX 4.x with MIBs supplied by your hardware vendor and other third-party management applications. However, to use the VMware MIB files, you must use the host-based embedded SNMP agent.

Configure SNMP Settings on a vCenter Server

You can configure up to four receivers to receive SNMP traps from vCenter Server. For each receiver, specify a host name, port, and community.

  • If necessary, select Administration > vCenter Server Settings to display the vCenter Server Settings dialog box.
  • If the vCenter Server system is part of a connected group, select the server you want to configure from the Current vCenter Server drop-down menu.
  • In the settings list, select SNMP.
  • In Receiver URL, enter the host name or IP address of the SNMP receiver.
  • In the field next to the Receiver URL field, enter the port number of the receiver.
  • The port number must be a value between 1 and 65535.
  • In Community, enter the community identifier.

[Image: vCenter Server SNMP settings]

Configure SNMP for ESXi

ESXi includes an SNMP agent that can

  • Send notifications (traps and informs)
  • Receive GET, GETBULK, and GETNEXT requests

In ESXi 5.1 and later releases, the SNMP agent adds support for version 3 of the SNMP protocol, offering increased security and improved functionality, including the ability to send informs. You can use esxcli commands to enable and configure the SNMP agent. You configure the agent differently depending on whether you want to use SNMP v1/v2c or SNMP v3.

As an alternative to configuring SNMP manually using esxcli commands, you can use host profiles to configure SNMP for an ESXi host.

Procedure

  • Configure SNMP Communities.

Configure the SNMP Agent. You have the following two choices:

  • Configuring the SNMP Agent to Send Traps
  • Configuring the SNMP Agent for Polling

Instructions for Sending Traps

  • Configure at least one community for the agent

An SNMP community defines a group of devices and management systems. Only devices and management systems that are members of the same community can exchange SNMP messages. A device or management system can be a member of multiple communities. In the example below you can see Public and Internal

  • Log into vMA
  • Type vifp addserver <esxi-host> (where <esxi-host> is a placeholder for each of your host names)
  • Type vifptarget -s <esxi-host>
  • Type vicfg-snmp -c public,Internal for each host that you have.

[Image: vicfg-snmp community configuration]

  • Each time you specify a community with this command, the settings that you specify overwrite the previous configuration.
  • Next configure the SNMP Agent to Send Traps

You can use the SNMP agent embedded in ESXi to send virtual machine and environmental traps to management systems. To configure the agent to send traps, you must specify a target (receiver) address, the community, and an optional port. If you do not specify a port, the SNMP agent sends traps to UDP port 162 on the target management system by default

Each time you specify a target with this command, the settings you specify overwrite all previously specified settings. To specify multiple targets, separate them with a comma. You can change the port that the SNMP agent sends data to on the target using the -t option; that port is UDP 162 by default. A PowerCLI alternative is sketched after the steps below.

  • Enable the SNMP agent if it is not yet running:
  • vicfg-snmp -E
  • (Optional) Send a test trap to verify that the agent is configured correctly:
  • vicfg-snmp <conn_options> --test
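
If you use PowerCLI rather than vMA, the host-based agent can be configured with the *-VMHostSnmp cmdlets. A sketch; these cmdlets work only when connected directly to an ESXi host (not to vCenter), and the host name, community, and receiver are placeholders:

# Connect to the host itself, not vCenter
Connect-VIServer -Server esxi01.example.local

# Enable the agent, set a community, and add a trap target on UDP 162
$snmp = Get-VMHostSnmp
Set-VMHostSnmp -HostSnmp $snmp -Enabled:$true -ReadOnlyCommunity "public" `
    -AddTarget -TargetCommunity "public" -TargetHost "mgmt.example.local" -TargetPort 162

# Send a test trap (the equivalent of vicfg-snmp --test)
Test-VMHostSnmp -HostSnmp $snmp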

Instructions for Polling

  • Configure at least one community for the agent

An SNMP community defines a group of devices and management systems. Only devices and management systems that are members of the same community can exchange SNMP messages. A device or management system can be a member of multiple communities.

  • Type vicfg-snmp -c public,internal
  • Each time you specify a community with this command, the settings that you specify overwrite the previous configuration
  • (Optional) Specify a port for listening for polling requests
  • vicfg-snmp <conn_options> -p 162
  • (Optional) If the SNMP agent is not enabled, enable it
  • vicfg-snmp -E
  • Run vicfg-snmp -T to validate the configuration.

The following example shows how the commands are run in sequence.

  • vicfg-snmp <conn_options> -c public -t example.com@162/private -E
  • Next, validate your configuration with the following:
  • vicfg-snmp <conn_options> -T
  • snmpwalk -v1 -c public esx-host

SNMP Diagnostics

  • Type esxcli system snmp test to prompt the SNMP agent to send a test warmStart trap.
  • Type esxcli system snmp get to display the current configuration of the SNMP agent.

Configure SNMP Management Client Software

After you have configured a vCenter Server system or an ESXi host to send traps, you must configure your management client software to receive and interpret those traps.

To configure your management client software

  • Specify the communities for the managed device
  • Configure the port settings
  • Load the VMware MIB files. See the documentation for your management system for specific instructions for these steps.

Instructions

  • Download the VMware MIB files from the VMware Web site: http://communities.vmware.com/community/developer/managementapi.
  • In your management software, specify the vCenter Server or ESXi host as an SNMP-based managed device.
  • If you are using SNMP v1 or v2c, set up appropriate community names in the management software.
  • These names must correspond to the communities set for the SNMP agent on the vCenter Server system or ESXi host.
  • If you are using SNMP v3, configure users and authentication and privacy protocols to match those configured on the ESXi host.
  • If you configured the SNMP agent to send traps to a port on the management system other than the default UDP port 162, configure the management client software to listen on the port you configured.
  • Load the VMware MIBs into the management software so you can view the symbolic names for the vCenter Server or host variables.
  • To prevent lookup errors, load these MIB files in the following order before loading other MIB files:

VMWARE-ROOT-MIB.mib
VMWARE-TC-MIB.mib
VMWARE-PRODUCTS-MIB.mib

  • The management software can now receive and interpret traps from vCenter Server or ESXi hosts.

ESXCLI in vSphere 5 for managing SNMP

You can also now use esxcli commands to set up and manage SNMP, as per the screenprints below.

[Image: esxcli system snmp commands]

Configure Software iSCSI Port Bindings

What is Software iSCSI port binding?

Software iSCSI port binding is the process of creating multiple paths between iSCSI adapters and an iSCSI storage target. By default, ESXi does not set up multipathing for iSCSI adapters; as a result, all targets are accessible by only a single path, regardless of whether teaming was set up for the NICs on the VMkernel port used for iSCSI. To ensure that your storage is still accessible in the event of a path failure, or to take advantage of load balancing features, software iSCSI port binding is required.

[Image: software iSCSI port binding overview]

With the software-based iSCSI implementation, you can use standard NICs to connect your host to a remote iSCSI target on the IP network. The software iSCSI adapter that is built into ESXi facilitates this connection by communicating with the physical NICs through the network stack.

Before you can use the software iSCSI adapter, you must

  • Set up networking
  • Activate the adapter
  • Configure parameters such as discovery addresses and CHAP

Setup Networking

Software and dependent hardware iSCSI adapters depend on VMkernel networking. If you use the software or dependent hardware iSCSI adapters, you must configure connections for the traffic between the iSCSI component and the physical network adapters. Configuring the network connection involves creating a virtual VMkernel interface for each physical network adapter and associating the interface with an appropriate iSCSI adapter.

If you use a single vSphere standard switch to connect VMkernel to multiple network adapters, change the port group policy so that it is compatible with the iSCSI network requirements.

By default, for each virtual adapter on the vSphere standard switch, all network adapters appear as active. You must override this port group policy so that each VMkernel interface maps to only one corresponding active NIC (a PowerCLI sketch of this override follows the list below). For example:

  • vmk1 maps to vmnic1
  • vmk2 maps to vmnic2
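
A PowerCLI sketch of this failover-order override; the port group and vmnic names are placeholders for your own iSCSI port groups:

# For the iSCSI-1 port group, make vmnic1 the only active uplink
$vmhost = Get-VMHost -Name "esxi01.example.local"
$active = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
$unused = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic2"
Get-VirtualPortGroup -VMHost $vmhost -Name "iSCSI-1" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $active -MakeNicUnused $unused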

Procedure

  • Create a vSphere standard switch that connects VMkernel with physical network adapters designated for iSCSI traffic. The number of VMkernel adapters must correspond to the number of physical adapters on the vSphere standard switch
  • Log in to the vSphere Client and select the host from the inventory panel.
  • Click the Configuration tab and click Networking
  • Select the vSphere standard switch that you use for iSCSI and click Properties.
  • On the Ports tab, select an iSCSI VMkernel adapter and click Edit.
  • Click the NIC Teaming tab and select Override switch failover order.

[Image: NIC teaming failover order override]

  • Designate only one physical adapter as active and move all remaining adapters to the Unused Adapters category. You will see a warning triangle against your iSCSI VMkernel port if you don't.
  • Repeat Step 4 through Step 6 for each iSCSI VMkernel interface on the vSphere standard switch.
  • Next go to the switch properties and click Add and choose VMkernel

[Image: Add Network Wizard, VMkernel connection type]

  • Type a name, e.g. VMkernel-iSCSI

[Image: VMkernel port group properties]

  •  Enter an IP Address for this adapter

[Image: VMkernel IP settings]

  • Finish and check Summary Page

Setup Software iSCSI Adapter

  • Within the Host View, click the Configuration tab > Storage Adapters
  • Click Add to add a Software iSCSI Adapter
  • Right click the new Software iSCSI Adapter and select Properties

[Image: software iSCSI adapter properties]

  • Enable the adapter if it is not already
  • Open the Network Configuration tab
  • Add the new port group(s) associated with the iSCSI network

[Image: iSCSI adapter network configuration]

  • Click the Dynamic Discovery tab

[Image: dynamic discovery tab]

  • Add the IP addresses of the iSCSI targets
  • Click Static Discovery and check the details in here

[Image: static discovery tab]

  • Click Close
  • Rescan the attached disks. (A PowerCLI sketch of these adapter steps follows.)
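
Most of the adapter setup above can be scripted with PowerCLI. A sketch; the host name and target address are placeholders, and note that the per-port-group binding shown in the Network Configuration tab still needs the GUI or esxcli:

# Enable the software iSCSI adapter on the host
$vmhost = Get-VMHost -Name "esxi01.example.local"
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled:$true

# Add a dynamic (Send Targets) discovery address to the software adapter
$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -match "Software" }
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.20.100" -Type Send

# Rescan the attached disks
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null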

What if you have multiple adapters?

  • If your host has more than one physical network adapter for software and dependent hardware iSCSI, use the adapters for multipathing.
  • You can connect the software iSCSI adapter with any physical NICs available on your host. The dependent iSCSI adapters must be connected only with their own physical NICs.
  • Physical NICs must be on the same subnet as the iSCSI storage system they connect to.

The iSCSI adapter and physical NIC connect through a virtual VMkernel adapter, also called virtual network adapter or VMkernel port. You create a VMkernel adapter (vmk) on a vSphere switch (vSwitch) using 1:1 mapping between each virtual and physical network adapter.

One way to achieve the 1:1 mapping when you have multiple NICs, is to designate a separate vSphere switch for each virtual-to-physical adapter pair. The following examples show configurations that use vSphere standard switches, but you can use distributed switches as well.

[Image: 1:1 mapping using separate vSwitches]

If you use separate vSphere switches, you must connect them to different IP subnets. Otherwise, VMkernel adapters might experience connectivity problems and the host will fail to discover iSCSI LUNs.

An alternative is to add all NICs and VMkernel adapters to a single vSphere standard switch. In this case, you must override the default network setup and make sure that each VMkernel adapter maps to only one corresponding active physical adapter.

[Image: single vSwitch with overridden failover order]

General Information on iSCSI Adapters

http://www.electricmonk.org.uk/2012/04/18/using-esxi-with-iscsi-sans/

Change a Multipath Policy

[Image: Manage Paths dialog]

Changing Path Policies

You can change path policies with:

  • esxcli
  • vicfg-mpath

What Path Policies are there?

  • Most Recently Used (MRU)

Selects the first working path, discovered at system boot time. If this path becomes unavailable, the ESXi/ESX host switches to an alternative path and continues to use the new path while it is available. This is the default policy for Logical Unit Numbers (LUNs) presented from an Active/Passive array. ESXi/ESX does not return to the previous path if, or when, it returns; it remains on the working path until it, for any reason, fails.

Note: The preferred flag, while sometimes visible, is not applicable to the MRU pathing policy and can be disregarded

  • Fixed (Fixed)

Uses the designated preferred path flag, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESXi/ESX host cannot use the preferred path or it becomes unavailable, the ESXi/ESX host selects an alternative available path. The host automatically returns to the previously-defined preferred path as soon as it becomes available again. This is the default policy for LUNs presented from an Active/Active storage array.

  • Round Robin (RR)

Uses an automatic path selection rotating through all available paths, enabling the distribution of load across the configured paths. For Active/Passive storage arrays, only the paths to the active controller will be used in the Round Robin policy. For Active/Active storage arrays, all paths will be used in the Round Robin policy.

Note: This policy is not currently supported for Logical Units that are part of a Microsoft Cluster Service (MSCS) virtual machine.

  • Fixed path with Array Preference

The VMW_PSP_FIXED_AP policy was introduced in ESXi/ESX 4.1. It works for both Active/Active and Active/Passive storage arrays that support Asymmetric Logical Unit Access (ALUA). This policy queries the storage array for the preferred path based on the array’s preference. If no preferred path is specified by the user, the storage array selects the preferred path based on specific criteria.

Note: The VMW_PSP_FIXED_AP policy has been removed from ESXi 5.0. For ALUA arrays in ESXi 5.0, the MRU Path Selection Policy (PSP) is normally selected but some storage arrays need to use Fixed. To check which PSP is recommended for your storage array, see the Storage/SAN section in the VMware Compatibility Guide or contact your storage vendor.

Notes:

  • These pathing policies apply to VMware’s Native Multipathing (NMP) Path Selection Plug-ins (PSP). Third-party PSPs have their own restrictions.
  • Round Robin is not supported on all storage arrays. Please check with your array documentation or storage vendor to verify that Round Robin is supported and/or recommended for your array and configuration. Switching to an unsupported or undesirable pathing policy can result in connectivity issues to the LUNs (in a worst-case scenario, this can cause an outage).

Changing Path Policies with ESXCLI

  • Ensure your device is claimed by the NMP plugin. Only NMP devices allow you to change the path policy.
  • esxcli storage nmp device list

[Image: esxcli storage nmp device list output]

  • Retrieve the list of path selection policies on the system to see which values are valid for the --psp option when you set the path policy.
  • esxcli storage core plugin registration list

[Image: esxcli storage core plugin registration list output]

  • Set the path policy using esxcli.
  • esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR

[Image: esxcli storage nmp device set output]

(Optional) If you specified the VMW_PSP_FIXED policy, you must make sure the preferred path is set correctly.

  • Check which path is the preferred path for a device:
  • esxcli storage nmp psp fixed deviceconfig get --device naa.xxx
  • If necessary, change the preferred path. For example, set the preferred path to vmhba32:C0:T0:L0:
  • esxcli storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba32:C0:T0:L0

[Image: preferred path configuration output]

  • Run the command with --default to clear the preferred path selection. (A PowerCLI alternative for setting the path policy is sketched below.)
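
PowerCLI can make the same change per LUN. A sketch; confirm that your array supports Round Robin first, and treat the host name as a placeholder:

# Set Round Robin on every disk LUN presented to the host
$vmhost = Get-VMHost -Name "esxi01.example.local"
Get-ScsiLun -VmHost $vmhost -LunType disk |
    Set-ScsiLun -MultipathPolicy "RoundRobin"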

Perform command line configuration of multipathing options


Multipathing Considerations

Specific considerations apply when you manage storage multipathing plug-ins and claim rules. The following considerations help you with multipathing:

  • If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
  • When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched. If no match occurs, NMP selects a default SATP for the device.
  • If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device. The device is claimed by the default SATP based on the device’s transport type.
  • The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if there is no active/optimized path. This path is used until a better path is available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
  • If you enable VMW_PSP_FIXED with VMW_SATP_ALUA, the host initially makes an arbitrary selection of the preferred path, regardless of whether the ALUA state is reported as optimized or unoptimized. As a result, VMware does not recommend enabling VMW_PSP_FIXED when VMW_SATP_ALUA is used for an ALUA-compliant storage array. The exception is when you assign the preferred path to one of the redundant storage processor (SP) nodes within an active-active storage array. The ALUA state is irrelevant.
  • By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices.

What can we use to configure Multipath Options

  • vCLI
  • vMA
  • PuTTY (SSH) into the ESXi shell

What we can view and adjust

  • You can display all multipathing plugins available on your host
  • You can list any 3rd party MPPs as well as your host's PSPs and SATPs and review the paths they claim
  • You can also define new paths and specify which multipathing plugin should claim the path

The ESXCLI Commands

Click the link to take you to the vSphere 5 Documentation Center for each command

These are the two commands you need to use to perform configuration of multipathing:

[Image: esxcli storage nmp command namespaces]

esxcli storage nmp psp Namespaces

[Image: esxcli storage nmp psp namespace commands]

Display NMP PSPs

  • esxcli storage nmp psp list

This command lists all the PSPs controlled by the VMware NMP.

[Image: esxcli storage nmp psp list output]

More complicated commands with esxcli storage nmp psp namespace

  • esxcli storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba3:C0:T5:L3

The command sets the preferred path to vmhba3:C0:T5:L3. Run the command with --default to clear the preferred path selection.

esxcli storage nmp satp Namespaces

[Image: esxcli storage nmp satp namespace commands]

Display SATPs for the Host

  • esxcli storage nmp satp list

For each SATP, the output displays information that shows the type of storage array or system the SATP supports and the default PSP for any LUNs using the SATP. Placeholder (plugin not loaded) in the Description column indicates that the SATP is not loaded.

[Image: esxcli storage nmp satp list output]

More complicated commands with esxcli storage nmp satp namespaces

  • esxcli storage nmp satp rule add -V NewVend -M NewMod -s VMW_SATP_INV

The command assigns the VMW_SATP_INV plug-in to manage storage arrays with vendor string NewVend and model string NewMod.

esxcli storage nmp device NameSpaces

[Image: esxcli storage nmp device namespace commands]

Display NMP Storage Devices

  • esxcli storage nmp device list

This command lists all storage devices controlled by the VMware NMP and displays the SATP and PSP information associated with each device.

[Image: esxcli storage nmp device list output]

More complicated commands with esxcli storage nmp device namespaces

  • esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_FIXED

This command sets the path policy for the specified device to VMW_PSP_FIXED.

esxcli storage nmp path Namespaces

[Image: esxcli storage nmp path namespace commands]

Display NMP Paths

  • esxcli storage nmp path list

This command lists all the paths controlled by the VMware NMP and displays the SATP and PSP information associated with each device.

[Image: esxcli storage nmp path list output]

More complicated commands with esxcli storage nmp path namespaces

There is really only the list command associated with this namespace.

esxcli storage core Command Namespaces

[Image: esxcli storage core command namespaces]

esxcli storage core adapter Command Namespaces

[Image: esxcli storage core adapter namespace commands]

esxcli storage core device Command Namespaces

[Image: esxcli storage core device namespace commands]

esxcli storage core path Command Namespaces

[Image: esxcli storage core path namespace commands]

esxcli storage core plugin Command Namespaces

[Image: esxcli storage core plugin namespace commands]

esxcli storage core claiming Command Namespaces

[Image: esxcli storage core claiming namespace commands]

The esxcli storage core claiming namespace includes a number of troubleshooting commands. These commands are not persistent and are useful only to developers who are writing PSA plugins or troubleshooting a system. If I/O is active on the path, unclaim and reclaim actions fail.

The help for esxcli storage core claiming includes the autoclaim command. Do not use this command unless instructed to do so by VMware support staff

esxcli storage core claimrule Command Namespaces

[Image: esxcli storage core claimrule namespace commands]

The PSA uses claim rules to determine which multipathing module should claim the paths to a particular device and to manage the device. esxcli storage core claimrule manages claim rules.

Claim rule modification commands do not operate on the VMkernel directly. Instead, they operate on the configuration file by adding and removing rules.

To change the current claim rules in the VMkernel:

  1. Run one or more of the esxcli storage core claimrule modification commands (add, remove, or move).
  2. Run esxcli storage core claimrule load to replace the current rules in the VMkernel with the modified rules from the configuration file.

Claim rules are numbered as follows.

  • Rules 0–100 are reserved for internal use by VMware.
  • Rules 101–65435 are available for general use. Any third party multipathing plugins installed on your system use claim rules in this range. By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not remove this rule, unless you want to unmask these devices.
  • Rules 65436–65535 are reserved for internal use by VMware.

When claiming a path, the PSA runs through the rules starting from the lowest number and determines whether the path matches the claim rule specification. If the PSA finds a match, it gives the path to the corresponding plugin. This is worth noting because a given path might match several claim rules.

The following examples illustrate adding claim rules.  

  • Add rule 321, which claims the path on adapter vmhba0, channel 0, target 0, LUN 0 for the NMP plugin.
  • esxcli storage core claimrule add -r 321 -t location -A vmhba0 -C 0 -T 0 -L 0 -P NMP
  • Add rule 429, which claims all paths provided by an adapter with the mptscsi driver for the MASK_PATH plugin.
  • esxcli storage core claimrule add -r 429 -t driver -D mptscsi -P MASK_PATH
  • Add rule 914, which claims all paths with vendor string VMWARE and model string Virtual for the NMP plugin.
  • esxcli storage core claimrule add -r 914 -t vendor -V VMWARE -M Virtual -P NMP
  • Add rule 1015, which claims all paths provided by FC adapters for the NMP plugin.
  • esxcli storage core claimrule add -r 1015 -t transport -R fc -P NMP

Example: Masking a LUN

In this example, you mask the LUN 20 on targets T1 and T2 accessed through storage adapters vmhba2 and vmhba3.

  • esxcli storage core claimrule list
  • esxcli storage core claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
  • esxcli storage core claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 1 -L 20
  • esxcli storage core claimrule add -P MASK_PATH -r 111 -t location -A vmhba2 -C 0 -T 2 -L 20
  • esxcli storage core claimrule add -P MASK_PATH -r 112 -t location -A vmhba3 -C 0 -T 2 -L 20
  • esxcli storage core claimrule load
  • esxcli storage core claimrule list
  • esxcli storage core claiming unclaim -t location -A vmhba2
  • esxcli storage core claiming unclaim -t location -A vmhba3
  • esxcli storage core claimrule run

Install and Configure PSA Plugins


Methods of Installing PSA Plugins

  • Using vCenter Update Manager
  • Using vCLI (use the esxcli software vib install command)
  • Using Vendor recommended Installation Guides
  • Using EMC’s Powerpath Installer
  • Using Dell’s Equalogic setup.pl script for their multipathing extension module
  • Using vihostupdate --server esxihost --install --bundle=Powerpath.5.4.SP2.zip

Checking Registration and Adding a Plugin

  • esxcli storage core plugin registration list will check if it is registered
  • esxcli storage core plugin registration add -m class_satp_va -N SATP -P class_satp_VA
  • Reboot the host(s) in order for the new PSP to take effect

Changing the VMW_SATP_CX default PSP from VMW_PSP_MRU to VMW_PSP_RR

  • esxcli storage nmp satp set -s VMW_SATP_CX -P VMW_PSP_RR
  • Reboot the host(s) in order for the new PSP to take effect

VMware Document

vSphere Command-Line Interface Concepts and Examples ESXi 5.0

Understanding different multipathing Policy Functionalities


Types of Multipathing explained

  • VMW_PSP_FIXED
  • The host uses the designated preferred path, if it has been configured. Otherwise, the host selects the first working path discovered at system boot time.
  • If you want the host to use a particular preferred path, specify it through the vSphere Client or by using esxcli storage nmp psp fixed deviceconfig set.
  • Fixed is the default policy for most active-active storage devices; however, VMware does not recommend using VMW_PSP_FIXED for devices that have the VMW_SATP_ALUA storage array type policy assigned to them.
  • If the host uses a default preferred path and the path's status turns to Dead, a new path is selected as preferred. However, if you explicitly designate the preferred path, it will remain preferred even when it becomes inaccessible.
  • VMW_PSP_MRU
  • The host selects the path that it used most recently.
  • When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again.
  • There is no preferred path setting with the MRU policy.
  • MRU is the default policy for active-passive storage devices.
  • VMW_PSP_RR
  • The host uses an automatic path selection algorithm that rotates through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays.
  • Automatic path selection implements load balancing across the physical paths available to your host. Load balancing is the process of spreading I/O requests across the paths; the goal is to optimize throughput performance such as I/O per second, megabytes per second, or response times.
  • VMW_PSP_RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs.

View Datastore Paths

Use the vSphere Client to review the paths that connect to the storage devices on which your datastores are deployed.

  • Log in to the vSphere Client and select a host from the inventory panel.
  • Click the Configuration tab and click Storage in the Hardware panel.
  • Click Datastores under View.
  • From the list of configured datastores, select the datastore whose paths you want to view, and click Properties.
  • Under Extents, select the storage device whose paths you want to view and click Manage Paths.
  • In the Paths panel, select the path to view
  • The panel underneath displays the path's name. The name includes parameters describing the path: adapter ID, target ID, and device ID.
  • (Optional) To extract the path’s parameters, right-click the path and select Copy path to clipboard.

View Storage Device Paths

Use the vSphere Client to view which SATP and PSP the host uses for a specific storage device and the status of all available paths for that storage device. A PowerCLI alternative for listing paths is sketched after the steps.

  • Log in to the vSphere Client and select a server from the inventory panel.
  • Click the Configuration tab and click Storage in the Hardware panel.
  • Click Devices under View.
  • Select the storage device whose paths you want to view and click Manage Paths.
  • In the Paths panel, select the path to view
  • The panel underneath displays the path's name. The name includes parameters describing the path: adapter ID, target ID, and device ID.
  • (Optional) To extract the path’s parameters, right-click the path and select Copy path to clipboard.
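
For bulk checks across many LUNs, the same path information can be pulled with PowerCLI. A sketch; the host name is a placeholder:

# List every path for each disk LUN with its state and preferred flag
$vmhost = Get-VMHost -Name "esxi01.example.local"
Get-ScsiLun -VmHost $vmhost -LunType disk |
    Get-ScsiLunPath |
    Select-Object Name, State, Preferred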