Archive for January 2013

Tune ESXi VM Storage Configuration

Tuning Configuration

  • Use the correct virtual hardware for the VM O/S
  • Use paravirtual hardware for I/O intensive applications
  • Use LSI Logic SAS for newer O/S’s
  • Size the Guest O/S Queue depth appropriately
  • Make sure Guest O/S partitions are aligned
  • Know which disk provisioning policy is best for the workload: Thick Provision Lazy Zeroed (the default), Thick Provision Eager Zeroed, or Thin Provision (see the example after this list)
  • Store swap file on a fast or SSD Datastore
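
For example, an eager-zeroed thick disk can be created, or an existing disk zeroed out in place, from the ESXi shell with vmkfstools. This is a minimal sketch; the datastore path, VM name and size are placeholder assumptions:

    # Create a new 40 GB eager-zeroed thick disk
    vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm_1.vmdk

    # Zero out an existing lazy-zeroed disk in place
    vmkfstools -k /vmfs/volumes/datastore1/myvm/myvm.vmdk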

  • When deploying a virtual machine, an administrator has a choice of three virtual disk modes: dependent, independent persistent and independent nonpersistent. For optimal performance, independent persistent is the best choice, as independent disks are excluded from snapshots and persistent mode avoids redo-log overhead. The virtual disk mode can only be modified while the VM is powered off.
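
As a sketch, the disk mode can also be set directly in the VM’s .vmx file while it is powered off; the controller and device numbers here are illustrative assumptions:

    # Mark the first disk on the first SCSI controller as independent persistent
    scsi0:0.mode = "independent-persistent"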

  • Choose whether to use VMFS or RDM disks. RDM disks are generally used by clustering software, which may require direct access to the underlying LUN.
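
As a sketch, an RDM mapping file can be created from the ESXi shell with vmkfstools; the device identifier and paths below are placeholder assumptions:

    # Create a physical compatibility RDM pointing at a raw LUN
    vmkfstools -z /vmfs/devices/disks/naa.600508b1001c3a1f /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk

    # Use -r instead of -z for a virtual compatibility RDM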

  • Use Disk Shares to configure more fine-grained resource control
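
Disk shares are normally set per virtual disk in the VM’s settings; as a sketch, the equivalent .vmx entries look like this (the values are illustrative assumptions):

    # Raise this disk from the default 1000 shares to 2000
    sched.scsi0:0.shares = "2000"

    # Optionally cap this disk's I/O rate
    sched.scsi0:0.throughputCap = "1000"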

  • In some cases, large I/O requests issued by applications can be split by the guest storage driver. Changing the VM’s registry settings to issue larger block sizes can eliminate this splitting and thus enhance performance. See http://kb.vmware.com/kb/9645697
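
As a hedged illustration only: on Windows guests, the maximum transfer size of a SCSI miniport driver is commonly governed by the MaximumSGList value under the driver’s Parameters\Device registry key. The driver name and value below are assumptions for illustration; consult the KB article above for the settings that apply to your storage driver:

    REM Hypothetical example: raise the scatter/gather list size for the LSI_SAS driver
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LSI_SAS\Parameters\Device" /v MaximumSGList /t REG_DWORD /d 255 /f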

Tune ESXi VM Network Configuration

Tuning Configuration

  • Use the VMXNET3 adapter; if the guest O/S does not support it, fall back to the VMXNET2 (Enhanced VMXNET) or original VMXNET adapter

  • Use a network adapter that supports TCP checksum offload, TSO, Jumbo Frames, multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery
  • Use the fastest Ethernet you can; 10GbE is preferable
  • Ensure the speed and duplex settings on the network adapters are correct. For 10/100 NICs, set the speed and duplex manually and make sure the duplex is set to full duplex (see the esxcli sketch after this list)
  • For Gigabit Ethernet or faster NICs, set the speed and duplex to auto-negotiate
  • DirectPath I/O (DPIO) provides a means of bypassing the vmkernel, giving a VM direct access to hardware devices by leveraging Intel VT-d and AMD IOMMU (AMD-Vi) hardware support. Specific to networking, DPIO allows a VM to connect directly to the host’s physical network adapter without the overhead associated with emulation or paravirtualization. The bandwidth increases associated with DPIO are nominal, but the savings in CPU cycles can be substantial for busy workloads. There are quite a few restrictions when utilizing DPIO; for example, unless using Cisco UCS hardware, DPIO is not compatible with hot-add, FT, HA, DRS or snapshots.
  • Use NIC teaming where possible, either VMware’s proprietary NIC teaming or EtherChannel
  • Virtual Machine Communications Interface (VMCI) is a virtual device that promotes enhanced communication between a virtual machine and the host on which it resides, and between VMs running on the same host. VMCI provides a high-speed alternative to standard TCP/IP sockets. The VMCI SDK enables engineers to develop applications which take advantage of the VMCI infrastructure. With VMCI, VM application traffic (of VMs on the same host) bypasses the network layer, reducing communication overhead. With VMCI, it’s not uncommon for inter-VM traffic to exceed 10 GB/s
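
The NIC speed and duplex settings mentioned above can be checked and set from the ESXi shell. A minimal sketch, assuming a physical NIC named vmnic0:

    # List physical NICs with their current link speed and duplex
    esxcli network nic list

    # Force a 10/100 NIC to 100 Mb/s full duplex
    esxcli network nic set -n vmnic0 -S 100 -D full

    # Return a Gigabit or faster NIC to auto-negotiation
    esxcli network nic set -n vmnic0 -a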

Unable to clear the Hardware Status warnings/errors in vCenter Server

Purpose

This article provides steps to clear the warnings/errors in the Hardware Status tab of a host within vCenter Server.

The Hardware Status tab shows warnings/errors related to hardware components. In some cases, these warnings and errors are not cleared even after the underlying hardware faults are resolved, and they continue to trigger vCenter Server alarms. In these cases, you may have to clear the warnings/errors manually.

Resolution

To clear the warnings/errors from the Hardware Status tab:
  1. Go to the Hardware Status tab and select the System event log view.
  2. Click Reset event log.
  3. Click Update. The error should now be cleared.
  4. Select the Alerts and warnings view.
  5. Click Reset sensors.
  6. Click Update. The error should now be cleared.
  7. If the error is still not cleared, connect to the host via SSH and restart the sfcbd service:
       • To restart the service in ESXi, run: services.sh restart
       • To restart the service in ESX, run: service sfcbd restart
  8. Click Update. The error should now be cleared.

Note: If the warning/error is cleared after the resets in Step 2 and Step 5, you need not restart the management agents.

Tune ESXi VM CPU Configuration

Tuning Configuration

  • When configuring multicore virtual CPUs, there are limitations and considerations to weigh, such as the ESXi host configuration, the VMware licence and guest O/S licensing restrictions. Only once these are understood can you decide on the number of virtual sockets and the number of cores per socket (see the sketch after this list)
  • CPU affinity is a technique that does not imply load balancing; it can be used to restrict a virtual machine to a particular set of processors. Affinity may not apply after a vMotion, and it can disrupt ESXi’s ability to apply and meet shares and reservations
  • Duncan Epping raises some good points in this link http://www.yellow-bricks.com/2009/04/28/cpu-affinity/
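
As a sketch, the socket/core topology and CPU affinity can be expressed directly in the VM’s .vmx file; all values below are illustrative assumptions:

    # Present 4 vCPUs as 2 sockets with 2 cores each
    numvcpus = "4"
    cpuid.coresPerSocket = "2"

    # Restrict the VM to host logical processors 0-3 (use with caution)
    sched.cpu.affinity = "0,1,2,3"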

  • You can use Hot Add to add vCPUs on the fly
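
CPU Hot Add must be enabled while the VM is powered off. As a sketch, the corresponding .vmx entry looks like this:

    # Allow vCPUs to be added while the VM is running
    vcpu.hotadd = "TRUE"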

  • Check that Hyper-Threading is enabled
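
Hyper-Threading status can be verified from the ESXi shell. A minimal sketch:

    # Reports whether Hyper-Threading is supported, enabled and active
    esxcli hardware cpu global get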

  • Generally keep CPU/MMU Virtualisation on Automatic

  • You can adjust Limits, Reservations and Shares to control CPU Resources
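
As a sketch, per-VM CPU shares, reservation and limit map to .vmx entries like the following; the values (shares, and MHz for reservation and limit) are illustrative assumptions:

    # Shares relative to other VMs
    sched.cpu.shares = "2000"
    # Reservation in MHz
    sched.cpu.min = "500"
    # Limit in MHz
    sched.cpu.max = "2000"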

Tune ESXi VM Memory Configuration

Tuning Configuration

  • Minimum memory size is 4MB for virtual machines that use BIOS firmware. Virtual machines that use EFI firmware require at least 96MB of RAM or they cannot power on.
  • The memory size must be a multiple of 4MB
  • vNUMA exposes the host’s NUMA topology to the Guest O/S. Hosts must have matching NUMA architectures, and VMs must be running hardware version 8

  • Size VMs so they align with physical NUMA boundaries. If you have a system with six cores per NUMA node, size your machines in multiples of six vCPUs
  • vNUMA can be enabled on smaller machines by adding numa.vcpu.maxPerVirtualNode=X, where X is the number of vCPUs per vNUMA node (see the sketch after this list)
  • Enable Memory Hot Add to be able to add memory to the VMs on the fly
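
As a sketch, the vNUMA override and Memory Hot Add settings map to the following .vmx entries; the node size is an illustrative assumption:

    # Expose vNUMA with 6 vCPUs per virtual NUMA node
    numa.vcpu.maxPerVirtualNode = "6"

    # Allow memory to be added while the VM is running
    mem.hotadd = "TRUE"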

  • Use operating systems that support large memory pages; by default, ESXi provides large pages to those O/S’s that request them
  • Store a VM’s swap file in a different, faster location than the working directory (see the sketch after this list)
  • Configure a special host cache on an SSD (if one is installed) to be used for the swap to host cache feature. Host cache is new in vSphere 5. If you have a datastore that lives on an SSD, you can designate space on that datastore as host cache. Host cache acts as a cache for all virtual machines on that particular host, serving as write-back storage for virtual machine swap files. This means that pages that need to be swapped to disk swap to host cache first and are then written back to the particular swap file for that virtual machine
  • Keep Virtual Machine Swap files on low latency, high bandwidth storage systems
  • Do not store swap files on thin provisioned LUNs. This can cause swap file growth to fail.
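
As a sketch, the swap file location can be overridden per VM in its .vmx file; the datastore path below is an illustrative assumption:

    # Place this VM's swap file on a faster (e.g. SSD-backed) datastore
    sched.swap.dir = "/vmfs/volumes/ssd_datastore/myvm"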

  • You can use Limits, Reservations and Shares to control Resources per VM
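
As a sketch, per-VM memory shares, reservation and limit map to .vmx entries like the following; the values (shares, and MB for reservation and limit) are illustrative assumptions:

    # Shares relative to other VMs
    sched.mem.shares = "2000"
    # Reservation in MB
    sched.mem.min = "1024"
    # Limit in MB
    sched.mem.max = "2048"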
