SCVMM 2012 R2 agent update error

After updating System Center Virtual Machine Manager to Update Rollup 4 (UR4), one of the post-installation tasks is updating the VMM agents on the Hyper-V hosts. Updating the VMM agent can be performed from the VMM console or with PowerShell (a sketch of the PowerShell route follows below). On one of the Hyper-V hosts I got the following error when trying to update the VMM agent from the console:

Error (10429) An older version of the Virtual Machine Manager server is installed on <server>. The VMM agent on the VMM management server cannot be upgraded independently


I couldn't find a reason why the VMM console was unable to update the agent on this Hyper-V host; the other hosts in the cluster were upgraded without a problem.
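For completeness, the PowerShell route for updating an agent looks something like the following minimal sketch; the VMM server and host names are example placeholders:

    # Minimal sketch: update the VMM agent on a managed Hyper-V host.
    Import-Module virtualmachinemanager
    Get-SCVMMServer -ComputerName "vmm01.domain.local" | Out-Null
    # Credentials with administrative rights on the Hyper-V host
    $creds = Get-Credential
    $managed = Get-SCVMMManagedComputer -ComputerName "hyperv01.domain.local"
    Update-SCVMMManagedComputer -VMMManagedComputer $managed -Credential $creds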

After some troubleshooting, the following steps resolved the VMM Agent update error:

  • Manually uninstall the VMM agent from the Hyper-V host
  • Copy the VMM agent from the VMM server to the Hyper-V host. By default, the agent is located at the following location on the VMM server: “Systemdrive\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\agents\amd64\3.2.7768.0\vmmAgent.msi”
  • Install the VMM agent. When trying to install it, the following error occurred:

Microsoft System Center Virtual Machine Manager Agent (x64) Setup Wizard ended prematurely


  • Open a Command Prompt (Run as Administrator)
  • Browse to the location where the VMM agent is stored
  • Execute the following command: msiexec /I vmmAgent.msi (a scripted version of these steps is sketched after this list)
  • In the SCVMM console, reassociate the Hyper-V host and perform a cluster refresh
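For reference, the copy and install steps can also be scripted from an elevated PowerShell prompt. This is a minimal sketch; the UNC path and the C:\Temp staging folder are example placeholders:

    # Copy the agent MSI from the VMM server to the Hyper-V host.
    Copy-Item "\\vmm01.domain.local\c$\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\agents\amd64\3.2.7768.0\vmmAgent.msi" "C:\Temp"
    # Install the agent; /L*v writes a verbose log, useful if setup ends prematurely again.
    Set-Location "C:\Temp"
    msiexec /I vmmAgent.msi /L*v "C:\Temp\vmmAgent-install.log"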

After these steps the updated VMM agent is installed on the Hyper-V server.

New whitebox for extending my home lab

For a couple of months I have been searching for an extra whitebox host to extend my home lab environment. My current lab whitebox is Haswell based (see: link). Here is an overview of the new lab environment:


For the new whitebox I had the following requirements:

  • Hardware such as NICs must be recognized by VMware ESXi
  • Use 32 GB memory or more
  • Low power consumption
  • Expandable
  • Small form factor
  • Quiet
  • Possibility to run nested hypervisors such as VMware ESXi and Hyper-V
  • Remote Management
  • Possibility to create a VMware Cluster and use vMotion, HA, DRS and DPM with the existing Haswell host

I reviewed the following popular home lab systems:

  • Intel NUC
  • Apple Mac mini
  • Gigabyte BRIX

The main reason not to choose one of the above systems is that they support only 16 GB of memory. In November 2014 I found a motherboard that meets all the requirements, after reading a review on a Dutch hardware website. The review was about the ASRock C2750 motherboard. After some additional research I ordered the following parts to build this whitebox:

  • ASRock C2750 motherboard
  • Kingston 4 x 8 GB DDR3, PC3-12800, CL11 (32 GB in total)
  • be quiet! System Power 7 300 W power supply
  • Cooler Master Midi Tower N300 ATX

VMware ESXi boots from a USB stick and the VMs are placed on an iSCSI target, so no extra storage is needed. The above parts cost me around €735.

The ASRock C2750D4I motherboard has the following specifications:

  • Mini ITX motherboard
  • CPU: Intel Avoton C2750, a 64-bit 8-core processor (passively cooled)
  • Graphics: ASPEED AST2300 16 MB
  • Memory: 4 x DDR3 DIMM slots, max: 64 GB memory
  • Controller: Intel C2750 (2 x SATA3, 4 x SATA2), Marvell SE9172 (2 x SATA3), Marvell SE9230 (4 x SATA3). Total of 12 SATA ports.
  • NIC: Dual Intel i210 Gigabit LAN adapter
  • 1 x PCIe 2.0 x8 slot
  • Remote Management: BMC Controller with IPMI dedicated LAN adapter
  • TDP of 20 W

CPU

The Intel Avoton C2750 is an Atom-based processor with 8 cores. It is passively cooled and quiet. The processor is 64-bit and supports Intel VT-x with Extended Page Tables (EPT), so it is possible to nest hypervisors such as ESXi and Hyper-V. The 8-core Atom gives enough CPU performance for my lab environment.
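To actually run a nested hypervisor on ESXi, hardware-assisted virtualization must be exposed to the guest VM. A minimal PowerCLI sketch; the vCenter and VM names are placeholders, and the VM must be powered off and use virtual hardware version 9 or later:

    # Expose Intel VT-x/EPT to the guest so it can run its own hypervisor.
    Connect-VIServer "vcenter.domain.local"
    New-AdvancedSetting -Entity (Get-VM "nested-esxi01") -Name "vhv.enable" -Value "TRUE" -Confirm:$false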

Memory

The motherboard has 4 memory banks with a maximum of 64 GB of DDR3 memory (4 x 16 GB). I chose 4 x 8 GB Kingston DDR3, PC3-12800, CL11 DIMMs because of the price; 16 GB modules are too expensive at the moment. This gives the whitebox 32 GB of memory.

NICs

The ASRock C2750D4I board contains a dual Intel i210 Gigabit LAN adapter. The Intel i210 adapters are recognized out of the box by ESXi 5.5 and Windows Server 2012 R2; no additional modifications or drivers are needed.
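A quick way to confirm which NICs ESXi detected is a PowerCLI check like this sketch; the vCenter and host names are placeholders:

    # List the physical network adapters the ESXi host detected.
    Connect-VIServer "vcenter.domain.local"
    Get-VMHost "esxi02.domain.local" | Get-VMHostNetworkAdapter -Physical |
        Select-Object Name, Mac, BitRatePerSec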

Power consumption

The 300 W power supply is more than enough. The processor has a TDP of 20 W. This whitebox consumes around 35 W with a couple of VMware VMs running on it.


The ASRock C2750D4I system is part of a VMware cluster with Distributed Power Management (DPM) enabled. When DPM kicks in, only 4 W is used.

Remote Management

Management and remote control are possible thanks to the BMC (Baseboard Management Controller) and IPMI (Intelligent Platform Management Interface).


VMware ESXi support

On the ASRock C2750D4I system, VMware ESXi 5.5 Update 2 with the latest updates is installed.


The Intel i210 Gigabit NICs and the Avoton AHCI controllers are recognized out of the box, so VMware VSAN (unsupported on this hardware) could be an option.


Windows Hyper-V support

As a test I installed the Windows Server vNext Technical Preview on the ASRock C2750D4I system (with an SSD as local storage) and enabled the Hyper-V role. The two Intel i210 Gigabit NICs are recognized out of the box, and performance is great.


Conclusion

The ASRock C2750D4I motherboard is a great base for building or extending a home lab environment. The board gives enough performance for a home lab and meets all the requirements I had for an additional whitebox host. I use it mainly for nesting VMware ESXi and Hyper-V hypervisors.

VIBSearch: finding VIB versions

VIBSearch is a simple PowerShell script with a GUI that searches for a specified VIB, or lists all the VIBs installed on the ESXi hosts. VIB stands for vSphere Installation Bundle. VIBs are used to package and distribute ESXi software such as drivers. The GUI is designed with SAPIEN PowerShell Studio 2014.

With VIBSearch it is easy to verify that all the ESXi hosts in a cluster have the same VIB versions installed. For example, VIBSearch can quickly identify the HP-AMS driver version on all the ESXi hosts.
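The core of the script boils down to querying esxcli on every host through PowerCLI. A minimal sketch; the vCenter name and the “HP-AMS” filter are example placeholders:

    # List matching VIBs on every ESXi host and show them in a grid view.
    Connect-VIServer "vcenter.domain.local"
    $report = foreach ($vmhost in Get-VMHost) {
        $esxcli = Get-EsxCli -VMHost $vmhost
        $esxcli.software.vib.list() |
            Where-Object { $_.Name -like "*HP-AMS*" } |
            Select-Object @{N='Host';E={$vmhost.Name}}, Name, Version, Vendor
    }
    $report | Out-GridView -Title "VIB versions"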

Requirements

VIBSearch is tested with:

  • PowerShell 4.0
  • PowerCLI 5.8 Release 1
  • For the Out-GridView cmdlet, PowerShell ISE is needed. Install ISE by using the following PowerShell commands:
    • Import-Module ServerManager
    • Add-WindowsFeature PowerShell-ISE

Installing and executing VIBSearch

  • Download VIBSearch.txt, link
  • Rename the *.txt file to *.ps1
  • Open PowerShell and execute:
    • Set-ExecutionPolicy Unrestricted
    • ./vibsearch.ps1

After executing the script the following GUI appears:


  • To connect, enter the FQDN or IP address of the vCenter Server (1) and click the Connect button (2)


  • A credential window appears; enter the credentials for authenticating (administrator) to the vCenter Server. For a domain login, use user@domainname or domainname\username


  • After successfully authenticating to the vCenter Server there are two options:
    • List All the VIBs: lists the VIBs installed on all the ESXi hosts in the vCenter Server
    • Search VIB: specify a VIB name, for example “HP-AMS”


If authentication to the vCenter Server fails the following error is displayed in the PowerShell window:


Example output:

HP-AMS VIB versions


Intel NIC VIB “net-igb” versions


NVIDIA VIB versions:


Thanks to Francois-Xavier Cat (@LazyWinAdm) for helping me with the VIBSearch tool.

 

Identify the Single Sign-On (SSO) deployment method for the vCenter Server

With vSphere 5.5 you have the following deployment methods for Single Sign-On (SSO):

  • vCenter Single Sign-On for your first vCenter Server
  • vCenter Single Sign-On for an additional vCenter Server in an existing site (formerly HA Cluster)
  • vCenter Single Sign-On for an additional vCenter Server with a new site (formerly Multisite)

Once SSO is installed, it can be useful to identify which deployment option was used, for example in a Site Recovery Manager (SRM) deployment. The following steps can be used to identify which deployment option is used for SSO on a vCenter Server 5.5:

  • Browse to the following directory on the vCenter Server: C:\ProgramData\VMware\VMware VirtualCenter
  • Use the type command to display the “LS_ServiceID.prop” file. The file contains the site name and identifier, for example: SiteName1:10b042be-9b7a-467c-aa05-047a895c60fb
  • Repeat the above steps on the other vCenter Server(s)

If the string is the same in both sites, SSO is deployed as “vCenter Single Sign-On for an additional vCenter Server in an existing site”. If the string is different, the vCenter Single Sign-On instance is deployed as “vCenter Single Sign-On for an additional vCenter Server with a new site”.
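A scripted version of this comparison could look like the following sketch; the vCenter Server names are example placeholders, and the files are read over the administrative c$ share:

    # Compare the SSO site identifier of two vCenter Servers.
    $path = 'c$\ProgramData\VMware\VMware VirtualCenter\LS_ServiceID.prop'
    $id1 = Get-Content -Raw "\\vcenter01.domain.local\$path"
    $id2 = Get-Content -Raw "\\vcenter02.domain.local\$path"
    if ($id1 -eq $id2) {
        'Same site: additional vCenter Server in an existing site'
    } else {
        'New site: additional vCenter Server with a new site'
    }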

Microsoft Virtual Machine Converter 3.0 multiple disks bug

When converting a Windows Server 2008 R2 VM that contains multiple disks (VMDKs), the conversion process with Microsoft Virtual Machine Converter (MVMC) 3.0 fails with the following error:

Microsoft Virtual Machine Converter encountered an error while attempting to convert the virtual machine.

Details: A task may only be disposed if it is in a completion state (RanToCompletion, Faulted or Canceled).

This is a bug that already existed in MVMC 2.0 and was fixed in MVMC 2.1. MVMC 2.1 is hard to find, so here is a link to my OneDrive.