VMware patch alert

VMware released the following six new patches:

Patch name – Description – Type

  • ESX350-200811401-SG – Updates VMkernel, hostd, and Other RPMs – Security
  • ESX350-200811402-SG – Updates ESX Scripts – General
  • ESX350-200811405-SG – Security Update to libxml2 – Security
  • ESX350-200811406-SG – Security Update to bzip2 – Security
  • ESX350-200811408-BG – Updates QLogic Software Driver – Critical
  • ESX350-200811409-BG – Updates Kernel Source and VMNIX – Critical


The ESX350-200811401-SG patch fixes spontaneous reboots that occur when the setting “Check and upgrade Tools before each power-on” is enabled. See the summaries and symptoms below:

This patch fixes the following issues:

  • A memory corruption condition may occur in the virtual machine hardware. A malicious request sent from the guest operating system to the virtual hardware may cause the virtual hardware to write to uncontrolled physical memory.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2008-4917 to this issue.

  • VMotion might trigger VMware Tools to upgrade automatically. This issue occurs on virtual machines that have the setting Check and upgrade Tools before each power-on enabled, when the affected virtual machines are moved, using VMotion, to a host with a newer version of VMware-esx-tools.

    Symptoms seen without this patch:

    • Virtual machines unexpectedly restart during a VMotion migration.
    • The guest operating systems might stall (reported on forums).

    Note: After patching the ESX host, you need to upgrade VMware Tools in the affected guests that reside on the host.

  • Swapping active and standby NICs results in a loss of connectivity to the virtual machine.
  • A race condition caused an ASSERT_BUG to run unnecessarily, crashing the ESX host. This change removes the invalid ASSERT_BUG.

    Symptoms seen without this patch: The ESX host crashes with an ASSERT message that includes fs3DiskLock.c:1423. Example:
    ASSERT /build/mts/release/bora-77234/bora/modules/vmkernel/vmfs3/fs3DiskLock.c:1423 bugNr=147983


  • A virtual machine can become registered on multiple hosts due to a .vmdk file locking issue. This issue occurs when network errors cause HA to power on the same virtual machine on multiple hosts, and when SAN errors cause the host on which the virtual machine was originally running to lose its heartbeat. The original virtual machine becomes unresponsive.

    With this patch, the VI Client displays a dialog box warning you that a .vmdk lock is lost. The virtual machine is powered off after you click OK.

  • This change fixes confusing VMkernel log messages in cases where one of the storage processors (SPs) of an EMC CLARiiON CX storage array is hung. The messages now correctly identify which SP is hung.

    Example of a confusing message:
    vmkernel: 1:23:09:57.886 cpu3:1056)WARNING: SCSI: 2667: CX SP B is hung.
    vmkernel: 1:23:09:57.886 cpu3:1056)SCSI: 2715: CX SP A for path vmhba1:2:2 is hung.

    vmkernel: 1:23:09:57.886 cpu3:1056)WARNING: SCSI: 4282: SP of path vmhba1:2:2 is
    hung. Mark all paths using this SP as dead. Causing full path failover.

    In this case, research revealed that SP A was hung, but SP B was not.

  • This patch allows VMkernel to successfully boot on unbalanced NUMA configurations—that is, those with some nodes having no CPU or memory. When such an unbalanced configuration is detected, VMkernel shows an alert and continues booting. Previously, VMkernel failed to load on such NUMA configurations.

    Sample alert message when memory is missing from one of the nodes (here, node 2):

    No memory detected in SRAT node 2. This can cause very bad performance.

  • When the zpool create command is run from a Solaris 10 virtual machine on a LUN that is exported as a raw device mapping (RDM) to that virtual machine, the command creates a GPT (GUID partition table) partition table on that LUN as part of creating the ZFS filesystem. When a LUN rescan is later run on the ESX server, through VirtualCenter or the command line, the rescan takes significantly longer to complete because the VMkernel fails to read the GUID partition table. This patch fixes the problem.
    Symptoms seen without this patch: Rescanning HBAs takes a long time and an error message similar to the following is logged in /var/log/vmkernel:

    Oct 31 18:10:38 vmkernel: 0:00:45:17.728 cpu0:8293)WARNING: SCSI: 255: status Timeout for vml.02006500006006016033d119005c8ef7b7f6a0dd11524149442030. residual R 800, CR 80, ER 3

  • A race in LVM resignaturing code can cause volumes to disappear on a host when a snapshot is presented to multiple ESX hosts, such as in SRM environments.
    Symptoms: After rescanning, VMFS volumes are not visible.
  • This change resolves a rare VMotion instability.

    Symptoms: During a VMotion migration, certain 32-bit applications running in 64-bit guests might crash due to access violations.

  • Solaris 10 Update 4, 64-bit graphical installation fails with the default virtual machine RAM size of 512MB.
  • DRS enhancements and performance improvements. This change prevents unexpected migration behavior.
  • In a DRS cluster environment, the hostd service reaches a hard limit for memory usage, which causes hostd to restart itself.

    Symptoms: The hostd service restarts and temporarily disconnects from VirtualCenter. The ESX host stops responding before hostd reconnects.

  • Fixes for supporting Site Recovery Manager (upcoming December 2008 release) on ESX 3.5 Update 2 and Update 3.
After installing this patch, you must reboot your ESX server(s) and update the VMware Tools on the VMs, which requires a reboot too.
The Virtual Machine Monitoring reboot problem (see post https://www.ivobeerens.nl/?p=180) is NOT fixed :-(. I hope VMware will fix this soon.

VMware View released

VMware has released VMware View. The solution consists of:

  • VMware Virtual Desktop Infrastructure (VMware Infrastructure 3 + VMware View Manager 3)
  • Storage Optimization with VMware View Composer
  • Application Virtualization with VMware ThinApp
  • Client Virtualization with Offline Desktop – Experimental Use

VMware Infrastructure 3:

VMware View 3 is built on VMware Infrastructure 3 and as a result, IT organizations can extend the benefits of industry-leading virtualization to the desktop. Integrating desktop infrastructure with VMware Infrastructure 3 provides unified management and a host of features that improve performance, reliability and business continuity, including:

  • Ability to group servers that host virtual desktops together for redundancy, eliminating single points of failure.
  • Consolidated Backup centralizes backup for desktop virtual machines.
  • Automated failover and recovery to keep desktops running nonstop.
  • Dynamic load balancing for desktop computing resources.

View Manager:

View Manager 3, a key component of VMware View, is an enterprise-class desktop management solution that streamlines the management, provisioning and deployment of virtual desktops. Users securely and easily access virtual desktops hosted on VMware Infrastructure 3, Terminal Servers, Blade PCs or even remote physical PCs through View Manager. Virtual desktop upgrading and patching are done centrally from a single console so you can efficiently manage hundreds or even thousands of desktops—saving time and resources.

VMware View Composer:

A new component of the VMware View solution, View Composer uses VMware Linked Clone technology to rapidly create desktop images that share virtual disks with a master image to conserve disk space and streamline management. User data and settings are separated from the desktop image, so they can be administered independently. All desktops that are linked to a master image can be patched or updated simply by updating the master image, without affecting users’ settings, data or applications. This reduces storage needs and costs by up to 70% while simplifying desktop management.

VMware ThinApp:

VMware ThinApp application virtualization software decouples applications from operating systems and packages them into an isolated and encapsulated file. This allows you to run multiple versions of applications on a single desktop without conflict, or the same version of an application on multiple operating systems without modification. Reduce storage needs for virtual desktops and simplify application management by streaming applications packaged with ThinApp from a centralized server or a shared network drive.

Offline Desktop – Experimental Use:

Offline Desktop for experimental use allows complete virtual desktops to be moved between the datacenter and physical desktop devices with security policies intact. Changes to the virtual desktop are intelligently synchronized between the datacenter and the physical desktop device. Offline Desktop allows end users to access their virtual desktop while not connected to the network, or simply to take advantage of a physical desktop device’s local resources for an enhanced end-user experience.


You can download VMware View here.

Move the VirtualCenter SQL database or migrate it to a different server

VMware released two handy new KB articles about how to move or migrate the VirtualCenter SQL database to a different server.

To move the database

When relocating your SQL database:

  1. To move SQL Server databases to a new location by using Detach and Attach functions in SQL Server, see Microsoft KB article http://support.microsoft.com/kb/224071/en-us
  2. If you are using the Copy Database Wizard in SQL Server 2000, see Microsoft KB article http://support.microsoft.com/kb/274463
  3. Update the Datasource (DSN) in the ODBC Administrator to reflect any changes made.
  4. If either the database login or password changed during the relocation, VirtualCenter must be updated with the new credentials. For more information, see the Modifying the username and password VirtualCenter uses to connect to the database server section of Troubleshooting the database data source used by VirtualCenter Server (1003928).

Read the KB article here.

Migrate the database to a new server:

  1. Shut down the VirtualCenter Server service. For more information, see Stopping, starting, or restarting the VirtualCenter Server service (1003895).
  2. Take a backup of the SQL database.
  3. If the SQL database is also being moved, create a second instance of your database and use the vendor’s tools to migrate the data.
     Note: For SQL Server, use Microsoft’s Copy Database Wizard. For more information, see http://support.microsoft.com/?kbid=274463





  4. Create the appropriate System DSN connections on the new VirtualCenter Server host.
  5. Begin the installation of the VirtualCenter software on the new server. If you are installing VirtualCenter in a virtual machine, guidelines for deploying VirtualCenter in a virtual machine, including sizing, installation, functionality, and configuration of VMware HA can be found at http://www.vmware.com/vmtn/resources/798.
  6. When prompted, select Use existing database, and provide the correct credentials to that database.
  7. When prompted, choose not to re-initialize the data, as re-initializing erases all your inventory data.
  8. Reboot the virtual machine when the installation is finished.
  9. When you first start the VirtualCenter Client, it may ask for licenses. Configure the licenses as you had them previously in your environment. For more information about licensing for ESX Server 3 hosts, see the Installation Guide. For more information about licensing for ESX Server 3i hosts, see the Setup Guide. You are now able to see the same settings and configuration details.
     Note: If the IP address of the new VirtualCenter Server has changed, your ESX Servers must be made aware of that change; otherwise, the ESX Servers continue to send their heartbeats to the original IP address of VirtualCenter and appear as Not Responding. To correct this situation:



    1. Log in as root with an SSH client to each ESX host.
    2. Use a text editor to change the IP address inside the <serverIp>xxx.xxx.xxx.xxx</serverIp> tags in the following file:
       /etc/opt/vmware/vpxa/vpxa.cfg (VirtualCenter 2.5.x)
       /etc/vmware/vpxa.cfg (VirtualCenter 2.0.x)


    3. Save your changes and exit.
    4. Restart the management agents. For more information, see Restarting the Management agents on an ESX Server (1003490).
    5. Repeat for all ESX hosts connected to the VirtualCenter Server.
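The vpxa.cfg edit in step 2 can also be scripted from the SSH session instead of using a text editor. A minimal sketch with sed; the new IP address and the sample file content below are illustrative (on a real ESX host you would point CFG at /etc/opt/vmware/vpxa/vpxa.cfg for VirtualCenter 2.5.x, or /etc/vmware/vpxa.cfg for 2.0.x), and you should keep a backup and restart the management agents afterwards:

```shell
# Demonstration on a local sample file; on a real host, CFG would be the
# vpxa.cfg path for your VirtualCenter version.
CFG=vpxa.cfg.demo
NEW_IP=192.168.1.50   # illustrative address of the new VirtualCenter Server

# Sample content standing in for the real vpxa.cfg
printf '<vpxa><serverIp>10.0.0.10</serverIp></vpxa>\n' > "$CFG"

cp "$CFG" "$CFG.bak"  # keep a backup before editing

# Replace whatever is between the <serverIp> tags with the new address
sed -i "s|<serverIp>[^<]*</serverIp>|<serverIp>$NEW_IP</serverIp>|" "$CFG"

grep serverIp "$CFG"  # verify the change before restarting the agents
```

Run it once on a single host and confirm the host reconnects in VirtualCenter before repeating the change elsewhere.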


Read the KB article here.