
Understand the use of command line tools to configure appropriate vDS settings on an ESXi host


Distributed Switches Overview

A distributed switch functions as a single virtual switch across all associated hosts. A distributed switch allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts.

Like a vSphere standard switch, each distributed switch is a network hub that virtual machines can use. A distributed switch can forward traffic internally between virtual machines or link to an external network by connecting to uplink adapters.

Each distributed switch can have one or more distributed port groups assigned to it. Distributed port groups group multiple ports under a common configuration and provide a stable anchor point for virtual machines that are connecting to labeled networks. Each distributed port group is identified by a network label, which is unique to the current datacenter. A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional.

Valid Commands

You can create distributed switches by using the vSphere Client. After you have created a distributed switch, you can

  • Add hosts by using the vSphere Client
  • Create distributed port groups with the vSphere Client
  • Edit distributed switch properties and policies with the vSphere Client
  • Add and remove uplink ports by using vicfg-vswitch

You cannot

  • Create a distributed virtual switch with ESXCLI
  • Add or remove uplink ports with ESXCLI

Note: With the release of 5.0, the majority of the legacy esxcfg-*/vicfg-* commands have been migrated to esxcli. At some point, hopefully in the not-too-distant future, esxcli will reach full parity, and the esxcfg-*/vicfg-* commands will be deprecated and removed entirely, along with the esxupdate/vihostupdate utilities.
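For reference, esxcli in 5.x can already inspect distributed switches even though it cannot create them. A quick check from the ESXi shell:

  esxcli network vswitch dvs vmware list

This lists the distributed switches the host participates in, along with their uplinks and client ports.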

Managing Uplinks in Distributed Switches

Add an Uplink

  • vicfg-vswitch <conn_options> --add-dvp-uplink <adapter_name> --dvp <dvport_id> <dvswitch_name>

Remove an Uplink

  • vicfg-vswitch <conn_options> --del-dvp-uplink <adapter_name> --dvp <dvport_id> <dvswitch_name>

Examples

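A hedged illustration of the two commands above; the host, credentials, adapter, port ID, and switch name are hypothetical placeholders:

  vicfg-vswitch --server esx01.example.com --username root --add-dvp-uplink vmnic2 --dvp 256 dvSwitch01
  vicfg-vswitch --server esx01.example.com --username root --del-dvp-uplink vmnic2 --dvp 256 dvSwitch01

The first command joins vmnic2 to the distributed switch dvSwitch01 through DVPort 256; the second detaches it again.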

Given a set of network requirements, identify the appropriate distributed switch technology to use


Switch Options

The choice ultimately comes down to cost, manageability, familiarity, and the business requirements for the features each option provides. I have attached a link below to a very handy comparison of the vSphere switches against the Cisco Nexus 1000V; there are too many features to list in a blog post!

  • Standard Switch

The VMware vSphere Standard Switch (VSS) is the base-level virtual networking alternative. It extends the familiar appearance, configuration, and capabilities of the standard virtual switch (vSwitch) in VMware vSphere 5.

Standard, Enterprise and Enterprise Plus License

  • Distributed Switch

The VMware vSphere Distributed Switch (VDS) extends the feature set of the VMware Standard Switch, while simplifying network provisioning, monitoring, and management through an abstracted, single distributed switch representation of multiple VMware ESX and VMware ESXi™ servers in a VMware data center. VMware vSphere 5 includes significant advances in virtual switching by providing monitoring, troubleshooting, and enhanced Network I/O Control (NIOC) features. The VMware vSphere Distributed Switch adds flexibility to the I/O resource allocation process by introducing user-defined network resource pools. These features help network administrators manage and troubleshoot their virtual infrastructure using familiar tools, and provide advanced capabilities to manage traffic granularly.

Enterprise Plus License

  • Cisco Nexus 1000v

Cisco Nexus 1000V Series Switches are the result of a Cisco and VMware collaboration building on the VMware vNetwork third-party vSwitch API of VMware VDS and the industry-leading switching technology of the Cisco Nexus Family of switches. Featuring the Cisco® NX-OS Software data center operating system, the Cisco Nexus 1000V Series extends the virtual networking feature set to a level consistent with physical Cisco switches and brings advanced data center networking, security, and operating capabilities to the VMware vSphere environment. It provides end-to-end physical and virtual network provisioning, monitoring, and administration with virtual machine-level granularity using common and existing network tools and interfaces. The Cisco Nexus 1000V Series transparently integrates with VMware vCenter™ Server and VMware vCloud™ Director to provide a consistent virtual machine provisioning workflow while offering features well suited for data center-class applications, VMware View, and other mission-critical virtual machine deployments.

The Cisco Nexus 1000v is generally used in large enterprises where the management of firewalls, core switches, and access switches is under the control of the network administrators. While the management of the VMware vSphere Distributed Switch falls within the domain of the vSphere administrators, a Cisco Nexus 1000v makes it possible to completely separate the management of the virtual switches and hand it over to the network administrators, all without giving them access to the rest of the vSphere platform.

Cisco Licensed

  • IBM 5000V

The IBM System Networking Distributed Virtual Switch 5000V is an advanced, feature-rich distributed virtual switch for VMware environments with policy-based virtual machine (VM) connectivity. The IBM Distributed Virtual Switch (DVS) 5000V enables network administrators familiar with IBM System Networking switches to manage it just like a physical IBM switch, using advanced networking, troubleshooting, and management features, so the virtual switch is no longer hidden and difficult to manage.

Support for Edge Virtual Bridging (EVB) based on the IEEE 802.1Qbg standard enables scalable, flexible management of networking configuration and policy requirements per VM and eliminates many of the networking challenges introduced with server virtualization. The IBM DVS 5000V works with VMware vSphere 5.0 and beyond and interoperates with any 802.1Qbg-compliant physical switch to enable switching of local VM traffic in the hypervisor or in the upstream physical switch.

IBM Licensed

Cisco document comparing the three switches above (the IBM 5000V is not included)

http://www.cisco.com

IBM 5000V Overview Document

http://www-03.ibm.com

Configure Live Port Moving


What is Live Port Moving?

The live port moving policy allows an active port to be migrated into a dvPortGroup without dropping the connection, acquiring the settings of the target dvPortGroup as it moves. Some people say that, as far as they can tell, this cannot be fully set from within the vSphere Client for a distributed port group, as is the case with many advanced features. There is little information on this subject, so the following is the best I can find at the moment.

Edit Advanced dvPort Group Properties

Use the dvPort Group Properties dialog box to configure advanced dvPort group properties such as port override settings.

  • In the vSphere Client, display the Networking inventory view and select the dvPort group.
  • From the Inventory menu, select Network > Edit Settings.
  • Select Advanced to edit the dvPort group properties.
  • Select Allow override of port policies to allow dvPort group policies to be overridden on a per-port level.
  • Click Edit Override Settings to select which policies can be overridden.
  • Choose whether to allow live port moving.
  • Select Configure reset at disconnect to discard per-port configurations when a dvPort is disconnected from a virtual machine.
  • Click OK.


PowerShell example

This is a rough example to show you where the settings for Live Port Moving can be found

  • Open PowerCLI as an Administrator
  • Connect to your vCenter
  • To see the properties of the distributed port group, retrieve it with Get-View (see the sketch after these steps)
  • We are interested in Config


  • In order to get into the config, assign the output of our previous command to a variable, for example $pg


  • Now we can delve deeper into the properties of our variable by typing $pg.config


  • We then need to access the Policy property so type $pg.config.policy


  • Now we can see the LivePortMovingAllowed property
  • To change this to true, type the following below

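A minimal PowerCLI sketch reconstructing the commands from the screenshots (the port group name is hypothetical, and the final change is applied through the vSphere API's ReconfigureDVPortgroup_Task method):

  # Retrieve the .NET view object for the distributed port group
  $pg = Get-View -ViewType DistributedVirtualPortgroup -Filter @{"Name" = "dvPortGroup01"}

  # Walk down the properties described in the steps above
  $pg.Config
  $pg.Config.Policy
  $pg.Config.Policy.LivePortMovingAllowed

  # Flip the policy to true by submitting a reconfigure task
  $spec = New-Object VMware.Vim.DVPortgroupConfigSpec
  $spec.ConfigVersion = $pg.Config.ConfigVersion
  $spec.Policy = $pg.Config.Policy
  $spec.Policy.LivePortMovingAllowed = $true
  $pg.ReconfigureDVPortgroup_Task($spec)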

Useful PowerShell Script

http://thefoglite.com/2012/07/18/configure-live-port-moving-vsphere-5/

Configure and Administer vSphere Network I/O Control


What is Network I/O Control?

Network I/O Control enables distributed switch traffic to be divided into different resource pools, using shares and limits to control traffic priority. It applies to a host's outbound network I/O traffic only.

Network resource pools determine the bandwidth that different network traffic types are given on a vSphere distributed switch.
When network I/O control is enabled, distributed switch traffic is divided into the following predefined network resource pools:

  • Fault Tolerance traffic
  • iSCSI traffic (Does not apply on a dependent hardware adapter)
  • vMotion traffic
  • Management traffic
  • vSphere Replication (VR) traffic
  • NFS traffic
  • Virtual machine traffic

You can also create custom user-defined network resource pools for virtual machine traffic.

vSphere Replication

vSphere Replication (VR) is a new alternative for replicating virtual machines, introduced with vSphere Site Recovery Manager. VR is an engine that replicates virtual machine disk files: it tracks changes to virtual machines and ensures that blocks that differ within a specified recovery point objective are replicated to a remote site.

Configuring System-Defined Network Resource Pools

  • Select Home > Inventory > Networking
  • Select the Distributed Switch in the inventory and click the Resource Allocation tab
  • Click the Properties link and select Enable Network I/O Control on this vDS

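The same enablement can be scripted from PowerCLI; a minimal sketch, assuming the vSphere API's EnableNetworkResourceManagement method and a hypothetical switch name:

  # Retrieve the vDS view and switch Network I/O Control on
  $dvs = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{"Name" = "dvSwitch01"}
  $dvs.EnableNetworkResourceManagement($true)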

To enable network resource pool settings

  • Select the vDS
  • On the Resource Allocation Tab, right click the network resource pool and click Edit
  • Modify the physical adapter shares value and host limit for the network resource pool


  • (Optional) Select the QoS priority tag from the drop-down menu. The QoS priority tag specifies an IEEE 802.1p tag, enabling Quality of Service at the MAC level


  • Click OK


Configuring User-Defined Network Resource Pools

  • Click New Network Resource Pool
  • Enter a name
  • Enter a description
  • Choose an option from the Physical Adapter Shares drop-down: High, Normal, Low, or a custom value
  • Select whether the host limit is Unlimited
  • Choose a level for the QoS Priority Tag

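A rough API-level sketch of the same operation from PowerCLI, assuming the DistributedVirtualSwitch managed object's AddNetworkResourcePool method and the DVSNetworkResourcePoolConfigSpec type (the switch name, pool name, and values are hypothetical):

  # Retrieve the vDS view
  $dvs = Get-View -ViewType VmwareDistributedVirtualSwitch -Filter @{"Name" = "dvSwitch01"}

  # Build the spec for a user-defined network resource pool
  $spec = New-Object VMware.Vim.DVSNetworkResourcePoolConfigSpec
  $spec.Name = "TestTraffic"
  $spec.Description = "User-defined pool for test VM traffic"
  $spec.AllocationInfo = New-Object VMware.Vim.DVSNetworkResourcePoolAllocationInfo
  $spec.AllocationInfo.Shares = New-Object VMware.Vim.SharesInfo
  $spec.AllocationInfo.Shares.Level = "normal"  # High, Normal, Low or Custom
  $spec.AllocationInfo.Limit = -1               # -1 means Unlimited
  $spec.AllocationInfo.PriorityTag = 0          # QoS priority tag (IEEE 802.1p)

  # Add the pool to the distributed switch
  $dvs.AddNetworkResourcePool(@($spec))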

Assign Port Groups to Network Resource Pools

  • Make sure you have created your own Network Resource Pool first
  • Click Manage Port Groups
  • Select a Network Resource Pool to associate with each Port Group


  • You can assign multiple Port Groups to the same Network Resource Pool


Describe the relationship between vDS and vSS


vSphere Standard Switch Architecture

You can create abstracted network devices called vSphere standard switches. A standard switch can:

  1. Route traffic internally between virtual machines and link to external networks.
  2. Combine the bandwidth of multiple network adapters and balance communications traffic among them.
  3. Handle physical NIC failover.
  4. Provide a default number of logical ports, which for a standard switch is 120. You can connect one network adapter of a virtual machine to each port, and each uplink adapter associated with a standard switch uses one port.
  5. Have one or more port groups assigned to it. Each logical port on the standard switch is a member of a single port group.

When two or more virtual machines are connected to the same standard switch, network traffic between them is routed locally. If an uplink adapter is attached to the standard switch, each virtual machine can access the external network that the adapter is connected to.

vSphere standard switch settings control switch-wide defaults for ports, which can be overridden by port group settings for each standard switch. You can edit standard switch properties, such as the uplink configuration and the number of available ports.

Standard Switch

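A rough PowerCLI sketch of creating a standard switch like the one described above; the host name, NIC, port group name, and VLAN ID are hypothetical:

  # Create a standard switch on one host with a single uplink
  $vss = New-VirtualSwitch -VMHost esx01.example.com -Name vSwitch1 -Nic vmnic2

  # Add a port group; port group settings can override the switch-wide defaults
  New-VirtualPortGroup -VirtualSwitch $vss -Name "VM Network 2" -VLanId 100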

vSphere Distributed Switch Architecture

A vSphere distributed switch functions as a single switch across all associated hosts. This enables you to set network configurations that span across all member hosts, and allows virtual machines to maintain consistent network configuration as they migrate across multiple hosts

Like a vSphere standard switch, each vSphere distributed switch is a network hub that virtual machines can use.

  • Enterprise Plus Licensed feature only
  • VMware vCenter owns the configuration of the distributed switch
  • Distributed switches can support up to 350 hosts
  • You configure a Distributed switch on vCenter rather than individually on each host
  • Provides support for Private VLANs
  • Enable networking statistics and policies to migrate with VMs during vMotion
  • A distributed switch can forward traffic internally between virtual machines or link to an external network by connecting to physical Ethernet adapters, also known as uplink adapters.
  • Each distributed switch can also have one or more distributed port groups assigned to it.
  • Distributed port groups group multiple ports under a common configuration and provide a stable anchor point for virtual machines connecting to labeled networks.
  • Each distributed port group is identified by a network label, which is unique to the current datacenter. A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional.
  • Network resource pools allow you to manage network traffic by type of network traffic.
  • In addition to vSphere distributed switches, vSphere 5 also provides support for third-party virtual switches.

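By contrast, a distributed switch is created once at the datacenter level; a hedged PowerCLI sketch, assuming the distributed switch cmdlets introduced with PowerCLI 5.1 and hypothetical names:

  # Create the vDS in a datacenter, then join a host to it
  $vds = New-VDSwitch -Name dvSwitch01 -Location (Get-Datacenter DC01)
  Add-VDSwitchVMHost -VDSwitch $vds -VMHost esx01.example.com

  # Create a distributed port group, identified by a network label and optional VLAN ID
  New-VDPortgroup -VDSwitch $vds -Name "VM Network" -VlanId 100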

TCP/IP Stack at the VMkernel Level

The VMware VMkernel TCP/IP networking stack provides networking support in multiple ways for each of the services it handles.

The VMkernel TCP/IP stack handles the following services, on both standard and distributed virtual switches:

  • iSCSI as a virtual machine datastore
  • iSCSI for the direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
  • NFS as a virtual machine datastore.
  • NFS for the direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
  • Migration with vMotion.
  • Fault Tolerance logging.
  • Port-binding for vMotion interfaces.
  • Provides networking information to dependent hardware iSCSI adapters.
  • If you have two or more physical NICs for iSCSI, you can create multiple paths for the software iSCSI by configuring iSCSI Multipathing.
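These services ride on VMkernel adapters; a minimal PowerCLI sketch of creating one for vMotion on a standard switch (host, switch, port group name, and addressing are hypothetical):

  # Create a VMkernel port and enable it for vMotion traffic
  New-VMHostNetworkAdapter -VMHost esx01.example.com -VirtualSwitch vSwitch0 -PortGroup vMotionPG `
    -IP 192.168.10.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true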

Data Plane and Control Plane

vSphere network switches can be broken into two logical sections. These are the data plane and the management plane.

  • The data plane implements the actual packet switching, filtering, tagging, etc.
  • The management plane is the control structure used to allow the operator to configure the data plane functionality.
  • With the vSphere Standard Switch (VSS), both the data plane and the management plane are present on each standard switch. In this design, the administrator configures and maintains each VSS on an individual basis.

Virtual Standard Switch Control and Data Plane


With the release of vSphere 4.0, VMware introduced the vSphere Distributed Switch. VDS eases the management burden of per host virtual switch configuration by treating the network as an aggregated resource. Individual host-level virtual switches are abstracted into a single large VDS that spans multiple hosts at the Datacenter level. In this design, the data plane remains local to each VDS, but the management plane is centralized with vCenter Server acting as the control point for all configured VDS instances.

Virtual Distributed Switch Control and Data Plane
