Objective 1: Storage

VAAI (vSphere Storage APIs for Array Integration)


What is VAAI?

VAAI allows storage vendors to provide hardware assistance, in the form of API components, that accelerates VMware I/O operations by running them within the storage array itself, which reduces CPU load on the host.

How do I know if my storage array supports VAAI?

To determine whether your storage array supports VAAI, see the Hardware Compatibility List or consult your storage vendor.
To enable hardware acceleration on the storage array, check with your storage vendor. Some storage arrays require explicit activation of hardware acceleration support.

vSphere 5

With vSphere 5.0, support for the VAAI capabilities has been enhanced and additional capabilities have been introduced:

  1. vSphere thin provisioning – Enabling the reclamation of unused space and monitoring of space usage for thin provisioned LUNs.
  2. Hardware Acceleration for NAS – Enables NAS arrays to integrate with VMware to offload operations such as offline cloning, cold migrations and cloning from templates
  3. SCSI standardisation – T10 compliancy for full copy, block zeroing and hardware assisted locking.
  4. Hardware assisted Full Copy – Enabling the storage to make full copies of data in the array
  5. Hardware assisted Block zeroing – Enabling the array to zero out large numbers of blocks
  6. Hardware assisted locking – Providing an alternative mechanism to protect VMFS metadata

vSphere thin provisioning

Historically, the two major challenges of thin provisioned LUNs have been the reclamation of dead space and the monitoring of space usage. VAAI thin provisioning introduces the following capabilities. Note that for thin provisioning, enabling/disabling occurs on the array, not on the ESXi host.

  • Dead Space Reclamation informs the array about datastore space that is freed when files or virtual disks are deleted or removed from the datastore, whether by general deletion or by Storage vMotion. The array can then reclaim the space (see the sketch after this list)

  • Out of Space Condition monitors the space on thin provisioned LUNs to prevent running out of space. A new advanced warning has been added to vSphere
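
As a hedged sketch of manual dead space reclamation (available from ESXi 5.0 Update 1; the datastore name and percentage are examples only), run vmkfstools from inside the datastore in the ESXi Shell:

cd /vmfs/volumes/MyDatastore
vmkfstools -y 60

The -y option asks the array to reclaim up to the given percentage of the datastore's free space.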


Hardware Acceleration for NAS

Hardware acceleration for NAS enables faster provisioning and the use of thick virtual disks through new VAAI capabilities:

  • Full File Clone – Similar to Full Copy. Enables virtual disks to be cloned by the NAS Device
  • Reserve Space – Enables creation of thick virtual disk files on NAS
  • Lazy File Clone – Emulates the Linked Clone functionality on VMFS datastores. Allows the NAS device to create native snapshots to conserve space for VDI environments.
  • Extended Statistics – Provides more accurate space reporting when using Lazy File Clone

Prior to vSphere 5, a virtual disk on an NFS datastore was always created as a thin provisioned disk; creating a thick disk was not possible. Starting with vSphere 5, the VAAI NAS extensions enable NFS vendors to reserve space for an entire virtual disk.
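
As a minimal sketch (assuming the vendor's NAS plug-in is installed; the datastore and path names are placeholders), a thick disk can then be requested on an NFS datastore from the ESXi Shell:

vmkfstools -c 10G -d zeroedthick /vmfs/volumes/NFSDatastore/VM1/VM1_1.vmdk

Without Reserve Space support, only the thin format is available on NFS.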

SCSI standardisation – T10 compliancy for full copy, block zeroing and hardware assisted locking

vSphere 4.1 introduced T10 compliancy for block zeroing, enabling vendors to utilise the T10 standards with the default shipped plug-in. vSphere 5 introduces enhanced T10 support, enabling the use of VAAI capabilities without the need to install a plug-in and enabling support for many additional storage devices.

Hardware-Accelerated Full Copy

Enables the storage array to make complete copies of a data set without involving the ESXi host, reducing traffic between the host and the array (including network traffic, depending on your configuration). The XCOPY command offloads the process of copying VMDK blocks, which can reduce the time taken by cloning, deploying from templates and Storage vMotion of VMs.
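
A simple way to exercise Full Copy is a vmkfstools clone from the ESXi Shell (the paths are placeholders); on a VAAI-capable array the data movement should be offloaded to the array rather than read and written through the host:

vmkfstools -i /vmfs/volumes/Datastore1/VM1/VM1.vmdk /vmfs/volumes/Datastore1/Clone/VM1-clone.vmdk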

Hardware-Accelerated Block Zeroing

This feature enables the storage array to zero out a large number of blocks, eliminating redundant host write commands. Performance improvements can be seen when creating VMs and formatting virtual disks.
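
Creating an eagerzeroedthick disk is a typical operation that benefits, since the zeroing is offloaded to the array via the WRITE SAME command; a sketch with placeholder size and path:

vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/Datastore1/VM1/VM1_2.vmdk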

Hardware Assisted Locking

Permits VM-level locking without the use of SCSI reservations, using the SCSI Compare and Write command (also known as Atomic Test and Set, or ATS). Enables disk locking per sector rather than locking the entire LUN, providing a more efficient way to alter metadata. This feature improves metadata-heavy operations, such as concurrently powering on multiple VMs.

How do I know if VAAI is enabled through the vSphere Client?

  • In the vSphere Client inventory panel, select the host
  • Click the Configuration tab, and click Advanced Settings under Software.
  • Click VMFS3
  • Select VMFS3.HardwareAcceleratedLocking
  • Check that this option is set to 1 (enabled)


  • Click DataMover
  • Check that DataMover.HardwareAcceleratedMove is set to 1 (enabled)
  • Check that DataMover.HardwareAcceleratedInit is set to 1 (enabled)


  • Note: These 3 options are enabled by default.


How do I know if VAAI is enabled through the command line?

  • Type the following commands
  • esxcli storage core device vaai status get -d naa.abcdefg
  • esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
  • esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
  • esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
  • esxcli storage core plugin list -N VAAI – displays the plug-ins for VAAI
  • esxcli storage core plugin list -N Filter – displays the VAAI filter
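
These options can also be changed from the same esxcli namespace; a minimal sketch re-enabling hardware accelerated Full Copy:

esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove --int-value=1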

Examples

To check the VAAI status of a device, run the esxcli storage core device vaai status get command shown above against the device's NAA identifier.

To determine whether a VAAI option is enabled, check that its Int Value is set to 1 in the output of the esxcli system settings advanced list commands.
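
The output is roughly of the following shape; the device identifier, plug-in name and support values are illustrative placeholders, not captured from a real array:

# esxcli storage core device vaai status get -d naa.abcdefg
naa.abcdefg
   VAAI Plugin Name: VMW_VAAIP_EXAMPLE
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported

# esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
   Path: /VMFS3/HardwareAcceleratedLocking
   Type: integer
   Int Value: 1
   Default Int Value: 1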

What happens if I have VAAI enabled on the host but some of my disk arrays do not support it?

When storage devices do not support or provide only partial support for the host operations, the host reverts to its native methods to perform the unsupported operations.

Hardware Acceleration Support Status

For each storage device and datastore, the vSphere Client displays the hardware acceleration support status in the Hardware Acceleration column of the Devices view and the Datastores view. The status values are:

  • Unknown
  • Supported
  • Not Supported

The initial value is Unknown. The status changes to Supported after the host successfully performs the offload operation. If the offload operation fails, the status changes to Not Supported. When storage devices do not support or provide only partial support for the host operations, your host reverts to its native methods to perform unsupported operations.

How to add Hardware Acceleration Claim Rules

To implement the hardware acceleration functionality, the Pluggable Storage Architecture (PSA) uses a combination of special array integration plug-ins, called VAAI plug-ins, and an array integration filter, called the VAAI filter. The PSA automatically attaches the VAAI filter and vendor-specific VAAI plug-ins to those storage devices that support hardware acceleration.

You need to add two claim rules, one for the VAAI filter and another for the VAAI plug-in. For the new claim rules to be active, you first define the rules and then load them into your system.

  • Define a new claim rule for the VAAI filter:
  • esxcli --server servername storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER
  • Define a new claim rule for the VAAI plug-in:
  • esxcli --server servername storage core claimrule add --claimrule-class=VAAI
  • Load both claim rules by running the following commands:
  • esxcli --server servername storage core claimrule load --claimrule-class=Filter
  • esxcli --server servername storage core claimrule load --claimrule-class=VAAI
  • Run the VAAI filter claim rule; only the Filter-class rule needs to be run:
  • esxcli --server servername storage core claimrule run --claimrule-class=Filter

Examples

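
To verify that the rules were defined and loaded, list the claim rules for each class; a hedged example (servername is a placeholder):

esxcli --server servername storage core claimrule list --claimrule-class=Filter
esxcli --server servername storage core claimrule list --claimrule-class=VAAI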

Installing a NAS plug-in

  • Place your host into maintenance mode
  • Get and set the host acceptance level:
  • esxcli software acceptance get
  • esxcli software acceptance set --level=value
  • The value can be one of the following: VMwareCertified, VMwareAccepted, PartnerSupported, CommunitySupported. The default is PartnerSupported
  • Install the VIB package
  • esxcli software vib install -v|--viburl=URL
  • The URL specifies the URL to the VIB package to install. http:, https:, ftp:, and file: are supported.
  • Verify the Plugin is installed
  • esxcli software vib list
  • Reboot your host
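
Put together, a hedged end-to-end run might look like the following, where the acceptance level, VIB URL and file name are placeholders for whatever your storage vendor supplies:

esxcli software acceptance set --level=PartnerSupported
esxcli software vib install -v http://vendor.example.com/VendorNASPlugin.vib
esxcli software vib list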

VMware RDMs

What is Raw Device Mapping?

A Raw Device Mapping (RDM) allows a special file in a VMFS volume to act as a proxy for a raw device. The mapping file contains metadata used to manage and redirect disk accesses to the physical device. The mapping file gives you some of the advantages of a virtual disk in the VMFS file system, while keeping some advantages of direct access to physical device characteristics. In effect, it merges VMFS manageability with raw device access.

A raw device mapping is effectively a symbolic link from a VMFS volume to a raw LUN. This makes LUNs appear as files in a VMFS volume. The mapping file, not the raw LUN, is referenced in the virtual machine configuration; the mapping file in turn contains a reference to the raw LUN.

Note that raw device mapping requires the mapped device to be a whole LUN; mapping to a partition only is not supported.

Uses for RDMs

  • Use RDMs when a VMFS virtual disk would become too large to manage effectively.

For example, a VM needing a partition greater than the 2 TB VMFS limit is a reason to use an RDM. Large file servers, if you choose to encapsulate them as VMs, are a prime example; a data warehouse application would be another. Alongside this, the time it would take to move a VMDK larger than this would be significant.

  • Use RDMs to leverage native SAN tools

SAN snapshots, direct backups, performance monitoring, and SAN management are all possible reasons to consider RDMs. Native SAN tools can snapshot the LUN and move the data around much more quickly.

  • Use RDMs for virtualized MSCS Clusters

Actually, this is not a choice. Microsoft Cluster Service (MSCS) running on VMware Virtual Infrastructure requires RDMs. Clustering VMs across ESX hosts is still commonly used when consolidating hardware. VMware now recommends that cluster data and quorum disks be configured as raw device mappings rather than as files on a shared VMFS.

Terminology

The following terms are used in this document or related documentation:

  • Raw Disk — A disk volume accessed by a virtual machine as an alternative to a virtual disk file; it may or may not be accessed via a mapping file.
  • Raw Device — Any SCSI device accessed via a mapping file. For ESX Server 2.5, only disk devices are supported.
  • Raw LUN — A logical disk volume located in a SAN.
  • LUN — Acronym for a logical unit number.
  • Mapping File — A VMFS file containing metadata used to map and manage a raw device.
  • Mapping — An abbreviated term for a raw device mapping.
  • Mapped Device — A raw device managed by a mapping file.
  • Metadata File — A mapping file.
  • Compatibility Mode — The virtualization type used for SCSI device access (physical or virtual).
  • SAN — Acronym for a storage area network.
  • VMFS — A high-performance file system used by VMware ESX Server.

Compatibility Modes

Physical Mode RDMs

  • Useful if you are using SAN-aware applications in the virtual machine
  • Useful to run SCSI target based software
  • Physical mode is useful to run SAN management agents or other SCSI target based software in the virtual machine
  • Physical mode for the RDM specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software. In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized, so that the VMkernel can isolate the LUN for the owning virtual machine. Otherwise, all physical characteristics of the underlying hardware are exposed.

Virtual Mode RDMs

  • Advanced file locking for data protection
  • VMware Snapshots
  • Allows for cloning
  • Redo logs for streamlining development processes
  • More portable across storage hardware, presenting the same behavior as a virtual disk file
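
Both compatibility modes described above can also be created from the command line with vmkfstools; a sketch with placeholder device and path names, where -r creates a virtual compatibility mapping and -z a physical compatibility (pass-through) mapping:

vmkfstools -r /vmfs/devices/disks/naa.abcdefg /vmfs/volumes/Datastore1/VM1/rdm-virtual.vmdk
vmkfstools -z /vmfs/devices/disks/naa.abcdefg /vmfs/volumes/Datastore1/VM1/rdm-physical.vmdk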

Setting up RDMs

  • Right-click the Virtual Machine and select Edit Settings
  • Under the Hardware Tab, click Add
  • Select Hard Disk
  • Click Next
  • Click Raw Device Mapping

If the option is greyed out, please check the following.

http://kb.vmware.com/RDM Greyed Out

  • From the list of SAN disks or LUNs, select a raw LUN for your virtual machine to access directly.
  • Select a datastore for the RDM mapping file. You can place the RDM file on the same datastore where your virtual machine configuration file resides, or select a different datastore.
  • Select a compatibility mode: Physical or Virtual
  • Select a virtual device node
  • Click Next.
  • In the Ready to Complete New Virtual Machine page, review your selections.
  • Click Finish to complete your virtual machine.

Note: To use vMotion for virtual machines with enabled NPIV, make sure that the RDM files of the virtual machines are located on the same datastore. You cannot perform Storage vMotion or vMotion between datastores when NPIV is enabled.

VMware vCLI for vSphere 5

VMware vCLI Instructions

The vSphere Command-Line Interface (vSphere CLI) command set allows you to run common system administration commands against ESX/ESXi systems from any machine with network access to those systems. You can also run most vSphere CLI commands against a vCenter Server system and target any ESX/ESXi system that vCenter Server system manages. vSphere CLI includes the ESXCLI command set, vicfg- commands, and some other commands.

  • Download and Install vCLI
  • http://www.vmware.com/support/developer/vcli/
  • Right click on the vCLI icon and select Run as Administrator
  • Navigate to c:\Program Files (x86)\VMware\VMware vSphere CLI\bin
  • You will see the vCLI commands listed (note the .pl extension on the end)


  • An example of running a command would be as below with vifs.pl
  • Type vifs.pl --help to see the associated switches for this command


  • Try typing vifs.pl --server esxihostserver --listdc


  • Another example of this command shows how you can create a folder on a datastore
  • vifs.pl --server esxiserver --mkdir "[Datastore] test"

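
vifs can also copy files to and from a datastore; a hedged example reusing the folder created above (the server name and local paths are placeholders):

vifs.pl --server esxiserver --put C:\temp\notes.txt "[Datastore] test/notes.txt"
vifs.pl --server esxiserver --get "[Datastore] test/notes.txt" C:\temp\notes-copy.txt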

Documentation

vSphere Command-Line Interface Documentation

Getting Started with vSphere Command-Line Interfaces

vSphere Command‐Line Interface Concepts and Examples

vSphere Command‐Line Interface Reference

YouTube Video


Running Commands on Windows

To avoid having to enter credentials every time you run a command, you can do the following:

save_session.pl --server esxiserver01 --username usera --password passwordxyz --savesessionfile c:\temp\vclisessionfile

The next time you run a command, you can type the following:

esxcli --server MyESXiHost --sessionfile c:\temp\vclisessionfile storage core filesystem list

vCLI Poster

http://blogs.vmware.com/tp/files/vmware-management-with-vcli-5.0.pdf