Archive for Storage

Configure FreeNAS 8.3 for sharing, iSCSI and VMware ESXi

Sharing Configuration

Once you have a volume, create at least one share so that the storage is accessible by the other computers in your network. The type of share you create depends upon the operating system(s) running in your network, your security requirements, and expectations for network transfer speeds. The following types of shares and services are available:

  • Apple (AFP) Shares

The Apple File Protocol (AFP) type of share is the best choice if all of your computers run Mac OS X.

  • Unix (NFS) Shares

The NFS type of share is accessible by Mac OS X, Linux, BSD, and the professional/enterprise versions (not the home editions) of Windows. It is a good choice if there are many different operating systems in your network. Depending upon the operating system, it may require the installation or configuration of client software on the desktop.

NFS is generally more accessible because it is a file-level protocol that sits higher up the network stack. This makes it very appealing when working with VMware virtual disks (VMDKs), which also live at the file level. NFS is ubiquitous across NAS vendors and can be served from many different implementations, physical or virtual, clustered or standalone. The file locking and share semantics of NFS also give it a wide range of configurable options that suit many applications.
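
As a quick illustration (the FreeNAS IP address and export path below are hypothetical), an NFS export can be mounted as a datastore from the ESXi command line as well as through the vSphere Client:

# Mount a FreeNAS NFS export as an ESXi datastore
esxcli storage nfs add --host=192.168.0.10 --share=/mnt/VMware/nfs01 --volume-name=FreeNAS-NFS
# Confirm the datastore is mounted
esxcli storage nfs list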

  • Windows (CIFS) Shares

This type of share is accessible by Windows, Mac OS X, Linux and BSD computers but it is slower than an NFS share due to the single-threaded design of Samba. It provides more configuration options than NFS and is a good choice on a network containing only Windows systems. However, it is a poor choice if the CPU on the FreeNAS™ system is limited; if your CPU is maxed out, you need to upgrade the CPU or consider another type of share.

http://www.freenas.org/images/resources/freenas8.2/freenas8.2_guide.pdf

iSCSI

iSCSI is a protocol standard for the consolidation of storage data. iSCSI allows FreeNAS™ to act like a storage area network (SAN) over an existing Ethernet network. Specifically, it exports disk devices over an Ethernet network that iSCSI clients (called initiators) can attach to and mount. Traditional SANs operate over fibre channel networks which require a fibre channel infrastructure such as fibre channel HBAs, fibre channel switches, and discrete cabling. iSCSI can be used over an existing Ethernet network, although dedicated networks can be built for iSCSI traffic in an effort to boost performance. iSCSI also provides an advantage in an environment that uses Windows shell programs; these programs tend to filter “Network Location” but iSCSI mounts are not filtered. FreeNAS™ uses istgt to provide iSCSI.

Before configuring the iSCSI service, you should be familiar with the following iSCSI terminology:

CHAP: an authentication method which uses a shared secret and three-way authentication to determine if a system is authorized to access the storage device and to periodically confirm that the session has not been hijacked by another system. In iSCSI, the initiator (client) performs the CHAP authentication.

Mutual CHAP: a superset of CHAP in that both ends of the communication authenticate to each other.

Initiator: a client which has authorized access to the storage data on the FreeNAS™ system. The client requires initiator software to connect to the iSCSI share.

Target: a storage resource on the FreeNAS™ system.

Extent: the storage unit to be shared. It can either be a file or a device.

LUN: stands for logical unit number and represents a logical SCSI device. An initiator negotiates with a target to establish connectivity to a LUN; the result is an iSCSI connection that emulates a connection to a SCSI hard disk. Initiators treat iSCSI LUNs the same way as they would a raw SCSI or IDE hard drive; rather than mounting remote directories, initiators format and directly manage filesystems on iSCSI LUNs
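
For example, once an ESXi initiator has logged in to a target, the LUN simply shows up as another SCSI device; a quick sketch of how to confirm this from the ESXi command line:

# List active iSCSI sessions on the host
esxcli iscsi session list
# List the SCSI devices (the FreeNAS LUN appears here like any other disk)
esxcli storage core device list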

High Level Overview of iSCSI in FreeNAS

  • Decide if you will use authentication, and if so, whether it will be CHAP or mutual CHAP. If using authentication, create an authorized access.
  • Create either a device extent or a file extent to be used as storage.
  • Determine which hosts are allowed to connect using iSCSI and create an initiator.
  • Create at least one portal.
  • Review the target global configuration parameters.
  • Create a target.
  • Associate a target with an extent.
  • Start the iSCSI service in Services -> Control Services
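
As a quick sanity check once the service is running (the IP address below is hypothetical), you can confirm that istgt is listening on the default iSCSI port 3260:

sockstat -4 -l | grep 3260      # run on the FreeNAS console
nc -z 192.168.0.10 3260         # or test the port from another host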

Instructions

  • Log into your FreeNAS box – instructions are detailed in the previous post
  • Make sure you have added disks to the FreeNAS machine and created a volume, as covered in the previous post, or follow the quick steps below
  • Navigate to Storage > Volume Manager

  • Enter a Volume Name, select your disk(s), select Filesystem Type ZFS, then click Add Volume. It defaults to Vol1 but I named mine VMware as I want to label it clearly for use as a VMware iSCSI volume

  • Click Storage, Volume Manager and click on the existing volume you have below

  • Select the fifth button from the left, which is used to create a ZFS volume (zvol)

  • Click Add ZFS Volume
  • Once created, it will then be listed beneath the parent volume

  • Click the Services box at the top of FreeNAS and turn on the iSCSI service

  • If you will be using CHAP or mutual CHAP to provide authentication, you must create an authorized access in Services – iSCSI – Authorized Accesses – Add Authorized Access

  • Go to iSCSI > Initiators – Add Initiator – Keep ALL in the boxes, or type in the servers (separated by commas) which you want to be able to connect

  • Next go to iSCSI > Portal and click Add Portal. You can keep this on 0.0.0.0, which will cause it to listen on all IP addresses for the initiator, or select the IP address of the FreeNAS unit. I have selected my FreeNAS IP address

  •  Go to Target Global Configuration
  • In Discovery Auth Method, choose CHAP
  • In Discovery Auth Group, choose 1
  • Leave the other settings at their defaults unless you know what you're doing

  • Next go to iSCSI > Target – Add Target
  • Enter a Target Name and Alias. Select the Portal and Initiator Group IDs and the Authentication Group number, then click Save at the bottom

  • Next go to iSCSI > Device Extents. Device extents allow an unformatted disk, a zvol, or an existing HAST device to be exported via iSCSI

There are two iSCSI extent types: file extents and device extents. A file extent allows you to export a portion of a volume by creating a file and using it as a virtual disk; it can take advantage of snapshots and other volume features. A device extent allows an entire disk or zvol to be exported over iSCSI; this can perform better than a file extent in certain situations, but the whole device is exported rather than just a piece of it as with a file extent.
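
For reference, this is roughly what the two types of backing look like from the FreeBSD command line (the pool, zvol and file names are hypothetical; in practice the FreeNAS GUI creates these for you):

zfs create -V 100G VMware/iscsivol0           # a zvol, used as a device extent
truncate -s 100G /mnt/VMware/extent0.img      # a sparse file, used as a file extent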

  • Specify your extent name; the Disk Device should show up as the ZFS volume which you created earlier in these steps

  • Click Associated Targets
  • Click Add Extent to Target
  • Select your previously created Target and Extent

  • It is best practice to associate extents with targets in a one-to-one manner, although the software will allow multiple extents to be associated with a target
  • Once iSCSI has been configured, clients will need to use iSCSI initiator software in order to access the data on the iSCSI share. Initiators are available for Windows 7/2003/2008 and VMware
  • Log into VMware using the vSphere Client
  • Click on the VMware Host
  • Select Configuration
  • Select Storage Adapters
  • Click on iSCSI Software Adapter
  • Click Properties on the iSCSI Software Adapter, then click General > Configure

  • Tick Status > Enable
  • Click on Network Configuration

  • Click Add and choose the Management Network

  • Click Dynamic Discovery and click Add
  • Add Send Target Server > the iSCSI Server address will be the FreeNAS server IP

  • Click CHAP
  • Enter the user ID for the iSCSI user which you set up on your FreeNAS box in the steps above
  • Note: I had to set this to No CHAP to get it to work with the vSphere 5 iSCSI adapter settings, but try it anyway; you can always change this later without changing any of the FreeNAS settings (a command-line sketch of these client-side steps appears after this procedure)

  • A rescan of the HBA will be carried out
  • Go to Storage
  • Click Add Storage

  • Choose Disk/LUN

  • Choose VMFS Version

  • Review the Current Disk Layout

  • Click Next and Enter a Datastore name

  • Click Next

  • Choose Disk/LUN formatting
  • Click Next

  • Review and Finish
  • Test Storage vMotion
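
For those who prefer the command line, here is a rough esxcli equivalent of the client-side steps above (the adapter name vmhba33 and the FreeNAS address are hypothetical and will differ on your host):

# Enable the software iSCSI adapter and confirm its name
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
# Point dynamic discovery at the FreeNAS portal
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.10:3260
# Rescan the adapter and list the devices that appear
esxcli storage core adapter rescan --adapter=vmhba33
esxcli storage core device list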

If you want to add your iSCSI target as a disk to a Windows Server, see:

http://www.virtuallyimpossible.co.uk/connect-an-iscsi-san-to-server-2008-r2-using-the-microsoft-iscsi-initiator/

 

Zombie VMDKs

A zombie VMDK is usually a VMDK which is no longer used by any VM. You can double-check this by verifying whether the disk is still attached to the VM it should be a part of. If it isn't, you can delete it from the datastore via the Datastore Browser. I would suggest moving it first before you delete it, just in case.
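
A simple way to double-check from the ESXi console is to search the .vmx files on your datastores for any reference to the suspect disk (the file name below is hypothetical); if nothing matches, no registered VM is using it:

grep -l "SERVER01_old.vmdk" /vmfs/volumes/*/*/*.vmx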

Storing a Virtual Machine Swapfile in a different location

By default, swapfiles for a virtual machine are located on a VMFS3 datastore in the folder that contains the other virtual machine files. However, you can configure your host to place virtual machine swapfiles on an alternative datastore.

Why move the Swapfiles?

  • Place virtual machine swapfiles on lower-cost storage.
  • Place virtual machine swapfiles on higher-performance storage.
  • Place virtual machine swapfiles on non-replicated storage.
  • Moving the swap file to an alternate datastore is a useful troubleshooting step if the virtual machine or guest operating system is experiencing failures, including STOP errors, read only filesystems, and severe performance degradation issues during periods of high I/O.

vMotion Considerations

Note: Setting an alternative swapfile location might cause migrations with vMotion to complete more slowly. For best vMotion performance, store virtual machine swapfiles in the same directory as the virtual machine. If the swapfile location specified on the destination host differs from the swapfile location specified on the source host, the swapfile is copied to the new location, which causes the slower migration. Copying host-local swap pages between the source and destination hosts is a disk-to-disk copy process, which is one of the reasons why vMotion takes longer when host-local swap is used.

Swapfile Moving Caveats

  • If vCenter Server manages your host, you cannot change the swapfile location if you connect directly to the host by using the vSphere Client. You must connect to the vCenter Server system.
  • Migrations with vMotion are not allowed unless the destination swapfile location is the same as the source swapfile location. In practice, this means that virtual machine swapfiles must be located with the virtual machine configuration file.
  • Using host-local swap can affect DRS load balancing and HA failover in certain situations. So when designing an environment using host-local swap, some areas must be focused on to guarantee HA and DRS functionality.

DRS

If DRS decides to rebalance the cluster, it will migrate virtual machines to lightly utilized hosts. The VMkernel tries to create a new swap file on the destination host during the vMotion process. In some scenarios, the destination host might not have any free space on its VMFS datastore and DRS will not be able to vMotion any virtual machine to that host because of the lack of free space. However, the host CPU active and host memory active metrics are still monitored by DRS to calculate the load standard deviation used for its recommendations to balance the cluster. The lack of disk space on the local VMFS datastores therefore influences the effectiveness of DRS and limits the options DRS has to balance the cluster.

High availability failover

The same applies when an HA isolation response occurs: when not enough space is available to create the virtual machine swap files, no virtual machines are started on the host. If a host fails, the virtual machines will only power up on hosts containing enough free space on their local VMFS datastores. It is possible that virtual machines will not power up at all if not enough free disk space is available.

Procedure (Cluster Modification)

  • Right click the cluster
  • Edit Settings
  • Click Swap File Location
  • Select Store the Swapfile in the Datastore specified by the Host

Procedure (Host Modification)

If the host is part of a cluster, and the cluster settings specify that swapfiles are to be stored in the same directory as the virtual machine, you cannot edit the swapfile location from the host configuration tab. To change the swapfile location for such a host, use the Cluster Settings dialog box.

  • Click the Inventory button in the navigation bar, expand the inventory as needed, and click the appropriate managed host.
  • Click the Configuration tab to display configuration information for the host.
  • Click the Virtual Machine Swapfile Location link.
  • The Configuration tab displays the selected swapfile location. If configuration of the swapfile location is not supported on the selected host, the tab indicates that the feature is not supported.
  • Click Edit.
  • Select either Store the swapfile in the same directory as the virtual machine or Store the swapfile in a swapfile datastore selected below.
  • If you select Store the swapfile in a swapfile datastore selected below, select a datastore from the list.
  • Click OK.
  • The virtual machine swapfile is stored in the location you selected.
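
For reference, the per-VM version of this setting is just an advanced configuration entry in the virtual machine's .vmx file. A hedged example, assuming a hypothetical datastore named swap_datastore, looks like this:

sched.swap.dir = "/vmfs/volumes/swap_datastore/"

The host and cluster dialogs above are the supported way to change this; editing the .vmx directly is generally only done for one-off troubleshooting, with the VM powered off.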

Storage vMotion fails with the error: Storage vMotion failed to copy one or more of the VM’s disks

The Error

A general system error occurred: Storage vMotion failed to copy one or more of the VM’s disks. Please consult the VM’s log for more details, looking for lines starting with “CBTMotion”.

Resolution

To resolve this issue, create a snapshot of the affected virtual machine and then commit (delete) the snapshot.
  • In the vSphere Client, right-click the virtual machine and click Snapshot > Take Snapshot.
  • In the vSphere Client, right-click the virtual machine and click Snapshot > Snapshot Manager.
  • Select the snapshot you created in the first step and click Delete.
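
The same create-then-commit cycle can also be driven from the ESXi console with vim-cmd (the VM ID of 42 below is hypothetical; look yours up first):

vim-cmd vmsvc/getallvms                                   # find the VM ID
vim-cmd vmsvc/snapshot.create 42 cleanup "temporary snapshot" 0 0
vim-cmd vmsvc/snapshot.removeall 42                       # commits and removes all snapshots

Note that snapshot.removeall commits every snapshot on the VM, so only use it if the temporary snapshot is the only one present.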

VMware VMDK Files

VMDK Files

These are the disk files that are created for each virtual hard drive in your VM. There are three different types of files that use the vmdk extension:

  • *-flat.vmdk file – This is the actual raw disk file that is created for each virtual hard drive. Almost all of a .vmdk file's content is the virtual machine's data, with a small portion allotted to virtual machine overhead. This file will be roughly the same size as your virtual hard drive.
  • *.vmdk file – This is no longer the file containing the raw data; instead it is the disk descriptor file, which describes the size and geometry of the virtual disk file. This file is in text format and contains the name of the -flat.vmdk file with which it is associated, as well as the hard drive adapter type, drive sectors, heads, cylinders, etc. One of these files will exist for each virtual hard drive that is assigned to your virtual machine. You can tell which -flat.vmdk file it is associated with by opening the file and looking at the Extent description field (an illustrative descriptor is shown after this list).
  • *-delta.vmdk file – This is the differential file created when you take a snapshot of a VM (also known as a REDO log). When you snapshot a VM it stops writing to the base vmdk and starts writing changes to the snapshot delta file. The snapshot delta will initially be small and then grow as changes are made to the base vmdk file. The delta file is a bitmap of the changes to the base vmdk, so it can never grow larger than the base vmdk. A delta file will be created for each snapshot that you create for a VM. These files are automatically deleted when the snapshot is deleted or reverted in Snapshot Manager.
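
As an illustration (all values below are made up for a hypothetical 20 GB disk), a descriptor *.vmdk looks something like this, and the Extent description line is what ties it to its -flat.vmdk:

# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 41943040 VMFS "SERVER01-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "2610"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"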

Storage/Datastore Reclamation in VMware

Sometimes it is worth doing a storage reclamation exercise through all your VMware datastores in order to remove old folders and files and to check that nothing unexpected is going on.

What can you find?

In vCenter > Datastores > Performance tab, you can find a graph showing all the files it can detect; the selection "Other VM Files" or "Other" is what we're interested in.

When we checked this on the host back end, logged in via PuTTY, we saw the output below. The ./ files are not usual to find on LUNs/datastores and indicate that SAN snapshots exist on here.

/vmfs/volumes/4e0da454-902c23bf-cb36-e61f13f7c69b # ls -l

SERVER01
SERVER02
SERVER03

/vmfs/volumes/4e0da454-902c23bf-cb36-e61f13f7c69b # find . -exec ls -lh {} \; | grep flat

SERVER01-flat.vmdk
SERVER01_1-flat.vmdk
SERVER01_2-flat.vmdk
SERVER01_3-flat.vmdk

./SERVER01/SERVER01_3-flat.vmdk
./SERVER01/SERVER01_2-flat.vmdk
./SERVER01/SERVER01_1-flat.vmdk
./SERVER01/SERVER01-flat.vmdk

Conclusion

You will need to ask your Storage Admin to check out your LUNs and make sure that any old snapshots are either required or can be deleted.

It is worth keeping an eye on all of this as we found we had nearly 2TB of LUN Snapshots lurking around taking up valuable and expensive storage space.

Storage I/O Control

What is Storage I/O Control?

*VMware Enterprise Plus License Feature

Set an equal baseline and then define priority access to storage resources according to established business rules. Storage I/O Control enables a pre-programmed response to occur when access to a storage resource becomes contended.

With VMware Storage I/O Control, you can configure rules and policies to specify the business priority of each VM. When I/O congestion is detected, Storage I/O Control dynamically allocates the available I/O resources to VMs according to your rules, enabling you to:

  • Improve service levels for critical applications
  • Virtualize more types of workloads, including I/O-intensive business-critical applications
  • Ensure that each cloud tenant gets their fair share of I/O resources
  • Increase administrator productivity by reducing the amount of active performance management required
  • Increase the flexibility and agility of your infrastructure by reducing the need for storage volumes dedicated to a single application

How is it configured?

It's quite straightforward to do. First you have to enable it on the datastores. Only if you want to prioritise a certain VM's I/O do you need to do additional configuration steps, such as setting shares on a per-VM basis. Yes, this can be a bit tedious if you have a great many VMs whose default shares value you want to change, but it only needs to be done once, and after that SIOC is up and running without any additional tweaking needed.

The shares mechanism is triggered when the latency to a particular datastore rises above the pre-defined latency threshold seen earlier. Note that the latency is calculated cluster-wide. Storage I/O Control also allows one to place a maximum on the number of IOPS that a particular VM can generate to a shared datastore. The Shares and IOPS values are configured on a per-VM basis: edit the Settings of the VM, select the Resources tab, and the Disk setting will allow you to set the Shares value for when contention arises (set to Normal/1000 by default) and limit the IOPS that the VM can generate on the datastore (set to Unlimited by default).

Why enable it?

The thing is, without SIOC you could definitely hit the noisy neighbour problem, where one VM uses more than its fair share of resources and impacts other VMs residing on the same datastore. So by simply enabling SIOC on that datastore, the algorithms will ensure fairness across all VMs sharing the same datastore, as they will all have the same number of shares by default. This is a great reason for admins to use this feature when it is available to them. Another cool feature is that once SIOC is enabled, there are additional performance counters available to you which you typically don't otherwise have.

What threshold should you set?

30ms is an appropriate threshold for most applications; however, you may want to have a discussion with your storage array vendor, as they often make recommendations around latency threshold values for SIOC.

Problems

One reason that this can occur is when the back-end disks/spindles have other LUNs built on them and these LUNs are presented to non-ESXi hosts. Check out KB 1020651 for details on how to address this, plus previous posts

and

http://www.electricmonk.org.uk/2012/04/20/external-io-workload-detected-on-shared-datastore-running-storage-io-control-sioc/

Deleting a VM with Raw Disk Mappings

How do you delete a VM with Raw Disk Mappings?

To the best of my knowledge, if you delete the VM it will only delete the pointer file and you would have to delete the RDM on the SAN.

Before deleting the VM, you could always click on Edit Settings and select the RDM and Remove Disk – Delete from disk. This should delete both the pointer file and the RDM.

VMware vSphere Storage Appliance

A VMware vSphere Storage Appliance (VSA) is a virtual appliance that provides small and medium businesses with the benefit of VMware vSphere vMotion and High Availability without requiring shared storage

VSA runs on an ESXi host. A VSA cluster is a group of ESXi hosts each running its own VSA instance

A VSA cluster enables the following features

  • Shared Datastores for all hosts in the cluster
  • vMotion and HA
  • Datastore Replication
  • Hardware and Software Datastore failover capabilities

VSA is an alternative to SAN storage

  • A SAN system provides a centralised array of storage
  • A VSA cluster provides a distributed array of storage
  • Eliminates the need to purchase expensive SAN storage

VSA Cluster Architecture

The architecture of a VSA cluster includes the physical servers that have local hard disks, ESXi as the operating system of the physical servers, and the vSphere Storage Appliance virtual machines that run clustering services to create volumes that are exported as the VSA datastores.

vSphere Storage Appliance supports the creation of a VSA cluster with two or three members. A vSphere Storage Appliance uses the hard disks of an ESXi host to create two volumes of the same size. It exports one of the volumes as a datastore. The other volume is a replica of the volume that is exported by another vSphere Storage Appliance from another host in the VSA cluster

VSA Cluster with 2 hosts

In a VSA cluster with two VSA cluster members, an additional service called VSA cluster service runs on the vCenter Server machine. The service participates as a member in the VSA cluster, but it does not provide storage. To remain online, a VSA cluster requires that more than half of the members are also online. If one instance of a vSphere Storage Appliance fails, the cluster can remain online only if the remaining VSA cluster member and the VSA cluster service are online.

A VSA cluster with 2 members has 2 VSA datastores and maintains a replica of each datastore

VSA Cluster with 3 hosts

A VSA cluster with 3 members has 3 VSA datastores and maintains a replica of each datastore. This configuration does not require the VSA cluster service running on the vCenter Server system

How does it work?

VSA uses the hard disks of the ESXi 5 hosts to maintain a datastore and its replica. VSA creates two volumes of the same size. It exports one of the volumes as a datastore. The other volume is a replica of the volume that is exported by another VSA from another host in the VSA cluster. A VSA cluster with 2 members has 2 VSA datastores and maintains a replica of each datastore

A VSA cluster with 3 members has 3 datastores and maintains a replica of each datastore

How is Data accessed?

Data is accessed in the VSA cluster using the NFS Version 3 protocol. The NFS exports are used as ESXi datastores which provide shared storage to members in the VSA cluster. The default RAID configuration for the VSA cluster is RAID 10

vCenter Server continues to manage the ESXi hosts and the VMs. vCenter can only manage one VSA cluster at a time

VSA Manager

A VSA cluster is created and managed by the VSA Manager. This is a vCenter Server 5.0 extension that you install on a vCenter Server system. After you install VSA Manager, the VSA Manager tab is displayed in the vSphere client. The VSA Manager is used to do the following

  • Deploying a VSA Cluster
  • Mounting as datastores the volumes that each VSA instance exports
  • Monitoring, maintaining and troubleshooting a VSA cluster

More Information

VMware VSA Documentation

http://www.vmware.com/support/pubs

Unable to add new LUNS on VMware 4.1 U2

Problem

This week we upgraded our hosts to VMware ESXi 4.1.0, build 582267. Our storage guy gave us 2 x 2TB LUNs but I was unable to add them, receiving the error below. Previously he had created 2TB LUNs and these have been fine.

Unable to read partition information from disk

Solution

It seems Update 2 enforces the maximum LUN size, which is 2TB minus 512 bytes with vSphere 4.x. Depending on the storage system, 2 TB could be either 2,000 GB (marketing size) or 2,048 GB (technical size). The above-mentioned maximum relates to the technical size, so with the storage system you have, you may need to configure 2,047 GB max.
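
To put numbers on that (a quick sketch of the arithmetic, using the technical/binary definition of a terabyte):

2 TB (technical)        = 2 x 1024^4 bytes = 2,199,023,255,552 bytes
vSphere 4.x maximum LUN = 2,199,023,255,552 - 512 = 2,199,023,255,040 bytes

So asking the storage team for a LUN of 2,047 GB keeps you safely under the limit.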

See Also

http://virtualgeek.typepad.com/virtual_geek/2009/06/vsphere-and-2tb-luns-changes-from-vi3x.html