VMware vSphere updates

 VMware released the following new product updates:

  • VMware ESX(i) 4.0 Update 2

  • VMware vCenter Server 4.0 Update 2

  • VMware vCenter Update Manager 4.0 Update 2

  • VMware Data Recovery 1.2

Always test these updates before deploying them in production and make sure that other products (for example VMware View and VMware SRM) are supported with the new updates! Here are the highlights per product update:

VMware ESX(i) 4.0 Update 2

ESX 4.0 Update 2 | 10 Jun 2010 | Build 261974
VMware Tools | 10 Jun 2010 | Build 261974

What’s New

The following information provides highlights of some of the enhancements available in this release of VMware ESX:

  • Enablement of Fault Tolerance Functionality for Intel Xeon 56xx Series processors— vSphere 4.0 Update 1 supports the Intel Xeon 56xx Series processors without Fault Tolerance. vSphere 4.0 Update 2 enables Fault Tolerance functionality for the Intel Xeon 56xx Series processors.

  • Enablement of Fault Tolerance Functionality for Intel i3/i5 Clarkdale Series and Intel Xeon 34xx Clarkdale Series processors— vSphere 4.0 Update 1 supports the Intel i3/i5 Clarkdale Series and Intel Xeon 34xx Clarkdale Series processors without Fault Tolerance. vSphere 4.0 Update 2 enables Fault Tolerance functionality for the Intel i3/i5 Clarkdale Series and Intel Xeon 34xx Clarkdale Series processors.

  • Enablement of IOMMU Functionality for AMD Opteron 61xx and 41xx Series processors— vSphere 4.0 Update 1 supports the AMD Opteron 61xx and 41xx Series processors without input/output memory management unit (IOMMU). vSphere 4.0 Update 2 enables IOMMU functionality for the AMD Opteron 61xx and 41xx Series processors.

  • Enhancement of the esxtop/resxtop utility— vSphere 4.0 Update 2 includes an enhancement of the performance monitoring utilities, esxtop and resxtop. The esxtop/resxtop utilities now provide visibility into the performance of NFS datastores by displaying the following statistics for NFS datastores: Reads/s, writes/s, MBreads/s, MBwrtn/s, cmds/s, GAVG/s (guest latency). A short resxtop usage sketch follows this list.

  • Additional Guest Operating System Support— ESX/ESXi 4.0 Update 2 adds support for Ubuntu 10.04. For a complete list of supported guest operating systems with this release, see the VMware Compatibility Guide.
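
As a usage sketch for the new NFS statistics mentioned above, you can run resxtop remotely against a host; the hostname and user below are placeholders, and which disk screen lists the NFS datastores can vary, so check the d, u and v views:

resxtop --server esx01.example.com --username root
# after logging in, press d, u or v to cycle through the disk screens and
# look for the NFS datastore counters (Reads/s, writes/s, MBreads/s, MBwrtn/s, GAVG/s)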

Resolved Issues – In addition, this release delivers a number of bug fixes that have been documented in the Resolved Issues section.

The following bug is fixed in this release:

For devices using the Round Robin PSP, the value configured for the --iops option changes after an ESX host reboot. If a device controlled by the Round Robin PSP is configured to use the --iops option, the value set for --iops is not retained when the ESX server is rebooted.

VMware vCenter Server 4.0 Update 2

VMware vCenter Server 4.0 Update 2 | 10 Jun 2010

What’s New

  Guest Operating System Customization Improvements: vCenter Server now supports customization of the following guest operating systems:

  • Windows XP Professional SP2 (x64) serviced by Windows Server 2003 SP2

  • SLES 11 (x32 and x64)

  • SLES 10 SP3 (x32 and x64)

  • RHEL 5.5 Server Platform (x32 and x64)

  • RHEL 5.4 Server Platform (x32 and x64)

  • RHEL 4.8 Server Platform (x32 and x64)

  • Debian 5.0 (x32 and x64)

  • Debian 5.0 R1 (x32 and x64)

  • Debian 5.0 R2 (x32 and x64)

  Resolved Issues: In addition, this release delivers a number of bug fixes that are documented in the Resolved Issues section.

 

VMware vCenter Update Manager 4.0 Update 2

VMware vCenter Update Manager 4.0 Update 2 | 10 Jun 2010 | Build 264019

What’s New

  • Improved reliability of operations on hosts in low bandwidth, high latency, or lossy networks – Update Manager 4.0 Update 2 performs operations on hosts reliably when working in slow networks, networks where packet loss occurs, or WAN environments. In earlier Update Manager releases, if host operations took more than two hours to complete, the tasks might time out and fail. See Extend the default timeout periods for vCenter Server, ESX/ESXi hosts, and vCenter Update Manager (KB 1017253) for more information about the problem. In Update Manager 4.0 Update 2 such tasks complete successfully.

VMware vCenter Update Manager 4.0 Update 2 adds enhancements and bug fixes, which are described in the Resolved Issues section. This release contains known issues described in Known Issues.

 

VMware Data Recovery 1.2

Data Recovery | 18 MAY 2010 | Build 260251

What’s New

The following enhancements have been made for this release of Data Recovery.

  • File Level Restore (FLR) is now available for use with Linux.

  • Each vCenter Server instance supports up to ten Data Recovery backup appliances.

  • The vSphere Client plug-in supports fast switching among Data Recovery backup appliances.

  • Miscellaneous vSphere Client Plug-In user interface enhancements including:

    • The means to name backup jobs during their creation.

    • Additional information about the current status of destination disks including the disk’s health and the degree of space savings provided by the deduplication store’s optimizations.

    • Information about the datastore from which virtual disks are backed up.

Here’s a handy comparison table between VDR 1.1 and VDR 1.2:

[Table image: feature comparison between VDR 1.1 and VDR 1.2]

All the product updates can be found on the VMware download page.

 

Best practices for HP EVA, vSphere 4 and Round Robin multi-pathing

 

Balancing the storage load between your HP EVA storage system and your VMware ESX hosts gives you better performance. Here are some best practices.

VMware vSphere and the HP EVA 4x00, 6x00 and 8x00 series are ALUA compliant. In simple terms, ALUA compliance means that you do not need to manually identify preferred I/O paths between the VMware ESX hosts and the storage controllers.
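
As a quick check, assuming your EVA LUNs show up as naa.600 devices, you can confirm from the Service Console that they are claimed by the ALUA plug-in:

esxcli nmp device list
# for every EVA LUN, the output should show Storage Array Type: VMW_SATP_ALUA
# together with the Path Selection Policy that is currently in use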

When you create a new Vdisk on the HP EVA, the LUN is set to No Preference by default. The No Preference policy means the following:

  • Controller ownership is non-deterministic. The unit ownership is alternated between controllers during initial presentation or when controllers are restarted

  • On controller failover (owning controller fails), the units are owned by the surviving controller

  • On controller failback (previous owning controller returns), the units remain on the surviving controller. No failback occurs unless explicitly triggered.

To get a good distribution between the controllers, the following Vdisk policies can be used:

 Path A-Failover/failback

– At presentation, the units are brought online to controller A

– On controller failover, the units are owned by the surviving controller (B)

– On controller failback, the units are brought online on controller A implicitly.

 Path B-Failover/failback

– At presentation, the units are brought online to controller B

– On controller failover, the units are owned by surviving controller (A)

– On controller failback, the units are brought online on controller B implicitly.

On the HP EVA, half of the Vdisks are set to Path A-Failover/failback and the other half to Path B-Failover/failback, so that they alternate between controller A and controller B. This can be done from HP EVA Command View. Now that the Vdisks are distributed between the two controllers, we can move on to the vSphere configuration. On every vSphere host, perform a rescan or a reboot.
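
The rescan can be triggered from the vSphere Client (Storage Adapters > Rescan) or from the Service Console; the adapter name below is only an example and depends on your hardware:

esxcfg-rescan vmhba2
# rescans the given adapter for new LUNs and updated path information; repeat for every relevant adapter on every host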

In VMware vSphere the Most Recently Used (MRU) and Round Robin (RR) multi-pathing policies are ALUA compliant. Round Robin load balancing is now officially supported.  These multi-path policies have the following characteristics:

MRU:

– Will give preference to an optimal path to the LUN

– When all optimal paths are unavailable, it will use a non-optimal path

– When an optimal path becomes available, it will fail over to the optimal path

– Although each ESX server may use a different port through the optimal controller to the LUN, only a single controller port is used for LUN access per ESX server

 Round Robin:

– Will queue I/O to LUNs on all ports of the owning controller in a round robin fashion, providing an instant bandwidth improvement

– Will continue queuing I/O in a round robin fashion to optimal controller ports until none are available and will failover to the non-optimal paths

– Once an optimal path returns it will failback to it

– Can be configured to round robin I/O to all controller ports for a LUN by ignoring the optimal path preference, as shown in the sketch below (this may be suitable for a write-intensive environment due to the increased controller port bandwidth)
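
A minimal sketch of that last option, assuming the --useANO flag of esxcli nmp roundrobin setconfig is available in your build (it instructs Round Robin to also use the active non-optimized paths):

for i in `ls /vmfs/devices/disks/ | grep naa.600` ;
do esxcli nmp roundrobin setconfig --useANO=1 --device $i ; done
# useANO=1 sends I/O over the non-owning controller's ports as well; set it back to 0 to honor the optimal paths again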

The Fixed multi-path policy is not ALUA compliant and is therefore not recommended.

In vSphere 4 there is a new multi-pathing framework. It has three core components (a quick way to list them on a host is shown after the list):

– Native Multipathing Plugin (NMP): handles the multi-pathing configuration and communicates with the SATP and PSP to identify path failure conditions.

– Storage Array Type Plugin (SATP): handles array-specific operations such as device discovery, error codes and failover.

– Path Selection Plugin (PSP): selects the best available path. There are three policies: Fixed, MRU and Round Robin.
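
A quick way to see these components on a host is to list the registered SATPs and their default PSPs from the Service Console:

esxcli nmp satp list
# lists every Storage Array Type Plugin together with the default Path Selection Policy it assigns to devices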

PSPs are set per LUN, meaning that it is possible to have some LUNs use the MRU policy and others the Round Robin policy. The best practice from HP is to change the default PSP from MRU to Round Robin, using the following command in the Service Console:

esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR
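
This changes the default PSP that the ALUA SATP assigns to devices; LUNs that are already claimed typically keep their current policy until the host is rebooted. As a sketch, they can also be switched immediately with the per-device command, reusing the same naa.600 loop style as the commands below:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ;
do esxcli nmp device setpolicy --device $i --psp VMW_PSP_RR ; done
# applies the Round Robin PSP to each already-claimed EVA LUN without waiting for a reboot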

Another best practice is to set the IOPS value (the default is 1000) to 1 for every LUN. The IOPS value controls how many I/Os are sent down a given path before vSphere starts to use the next path. Use the following command:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; 
do esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i ;done

But there is a bug: when the VMware ESX server is rebooted, the IOPS value reverts to a seemingly random value. More information can be found on the Virtual Geek blog of Chad Sakac. To check the IOPS values on all LUNs, use the following command:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; 
do esxcli nmp roundrobin getconfig --device $i ;done


To work around this IOPS bug, edit the /etc/rc.local file on every VMware ESX host and add the IOPS=1 command. The rc.local file is executed after all init scripts have run.
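
As a sketch of what that addition can look like, the same loop used above is simply appended to /etc/rc.local so that it runs at every boot:

# appended to /etc/rc.local: re-apply IOPS=1 to all EVA LUNs after the host boots
for i in `ls /vmfs/devices/disks/ | grep naa.600` ;
do esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i ; done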


After adding the IOPS=1 command, restart the VMware ESX host and check the IOPS values when it's back online.


Now you can check whether the Round Robin policy is active and the LUNs are spread over the two controllers.


Here are some great PowerCLI one-liners created by Luc Dekens. Thanks for creating these one-liners so quickly for me!

Set the multi-path policy to Round Robin for all hosts:

Get-VMHost | Get-ScsiLun -LunType "disk" | where {$_.MultipathPolicy -ne "RoundRobin"} | Set-ScsiLun -MultipathPolicy "RoundRobin"

Get the multi-path policy for one host:

Get-VMHost <ESXname> | Get-ScsiLun | Select CanonicalName, MultiPathPolicy

 Get the multi-path policy for all the hosts:

Get-VMHost | %{$_.Name; $_ | Get-ScsiLun | Select CanonicalName, MultiPathPolicy}


 

source: Configuration best practices for HP StorageWorks Enterprise Virtual Array (EVA) family and VMware vSphere 4