What to know about Horizon Instant Clones

Horizon 7 introduced the Instant Clone feature, which leverages the vmFork technology introduced in vSphere 6.0 U1. With Instant Clone technology it is possible to clone and deploy VDI desktop VMs in seconds. This is called the Just-In-Time Desktop.

Below is an overview of how an Instant Clone VM is created:

Instant Clone

  1. Master Image. The Master Image or Golden Image is an optimized Windows 7 or Windows 10 image that contains the installed software, such as the Horizon Agent.
  2. Snapshot. A snapshot is taken of the Master Image.
  3. Template. From the snapshot, a linked-clone template of the Master Image is created.
  4. Replica. The replica is a thin-provisioned full clone of the template. The replica uses Content Based Read Cache (CBRC) and can be placed on a specific datastore. This is the shared read disk for the desktop VMs.
  5. Parent. Per ESXi host a parent VM is created and powered on. From this parent, Instant Clones are created using vmFork.
  6. Instant Clone. The Instant Clone is created in a couple of seconds. The Instant Clone will grow depending on the change rate, but at logoff the Instant Clone is deleted and a new Instant Clone is created.

In the vCenter Server the following VM naming conventions are used:

  • Template: cp-template-xxx
  • Replica: cp-replica-xxx
  • Parent: cp-parent-xxxx

What to know about Instant Clones

  • vSphere 6.0 U1 or higher is needed for Instant Clones.
  • The Instant Clone feature requires Horizon Enterprise or Horizon Air in Hybrid-mode.
  • The Horizon View Storage Accelerator must be enabled.
  • Instant Clone VMs are always powered on.
  • Each time a user logs out of an Instant Clone, the desktop is deleted and a new desktop is provisioned and powered on.
  • The Master Image must be installed with Hardware Version 11.
  • Instant Clones use ClonePrep for guest customization. All Instant Clones share a single SID.
  • Parent VMs are tied to the VMware ESXi host they run on and cannot be migrated or powered off through the vSphere (Web) Client. This prevents an ESXi host from entering maintenance mode. Follow KB2144808 to put an ESXi host in maintenance mode.


  • In the Horizon Agent, the Instant Clone feature must be enabled and the Composer must be disabled. It is not possible to enable both View Composer and Instant Clone in the same Horizon Agent.
  • Instant Clone functionality is part of the Horizon Connection Server, so no additional infrastructure component is needed. If a Horizon Connection Server fails, another Horizon Connection Server takes over.
  • An Instant Clone Domain Admin account is needed to add the Instant Clones to Active Directory.
  • Deploy applications and system updates by updating the parent image and creating a new snapshot. With the new push-image feature you can point to the new snapshot.


What is supported:

  • Only Single-user desktops
  • Only floating pools
  • 1 vCenter Server maximum
  • 1 VLAN only
  • Windows 10 (32- and 64-bit) and Windows 7 SP1 (32- and 64-bit) as desktop operating systems
  • Maximum number of 2 monitors with a resolution up to 2560×1600
  • VMFS and Virtual SAN (VSAN) storage
  • Scales up to 2000 Instant Clones per pool
  • vMotion, DRS and HA

What is not supported:

  • Persona Management
  • RDSH
  • 3D Graphics (NVIDIA GRID), only limited SVGA support
  • Virtual Volumes, VAAI, NFS or local datastores
  • Disposable disk
  • Dedicated pools
  • Sysprep
  • PowerCLI
  • Persistent disks. If you need persistence, use for example App Volumes with a writable volume and User Environment Manager (UEM).
  • Storage vMotion of the Instant Clone


Horizon 7 includes a great new enhancement called “Instant Clones”. Within seconds, VDI desktop VMs can be deployed. No extra infrastructure components are needed, such as a Composer service. With this version of Instant Clones there are some caveats you need to be aware of before implementing it in production. Future versions of VMware Horizon will improve Instant Clones and add more support.

Horizon View Administrator displays a blank error window

After upgrading to VMware Horizon View 7, the administrator webpage displays a blank error window when you try to connect using the IP address of the Connection Server.


Horizon View 7 adds a new security feature that checks the origin URL of the web request. If the origin does not match, the request is rejected and the blank error window is displayed.

Steps to resolve this:

  • Use https://FQDN/admin instead of the IP address, or:
  • On every Connection Server, create a locked.properties text file in c:\Program Files\VMware\VMware View\Server\sslgateway\conf
  • Add the following line:
    • checkOrigin=false
  • Save the file
  • Restart the “VMware Horizon View Connection Server” service

After the modification you’re able to connect to the View Administrator URL using the IP address of the Connection Server.


vRealize Log Insight 3.3 available for free

VMware released vRealize Log Insight 3.3. vRealize Log Insight is a log monitoring and analytics tool, comparable to Splunk.


In version 3.3 a new product license is added. For each vCenter Server Standard license you get a free 25-OSI pack license for vRealize Log Insight, so every customer with a vCenter Standard license can use vRealize Log Insight at no additional cost. OSI stands for Operating System Instance. For example, 1 vCenter Server and 10 ESXi hosts count as 11 OSIs. With a 25-OSI pack, 14 licenses remain for monitoring other devices or operating systems (Windows and Linux).
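The OSI counting above can be sketched as a small calculation; the function name is mine, not a VMware tool:

```python
def osi_remaining(vcenters, esxi_hosts, pack_size=25):
    """Return (used, remaining) OSI licenses for a Log Insight OSI pack.

    Every vCenter Server and every ESXi host counts as one
    Operating System Instance (OSI).
    """
    used = vcenters + esxi_hosts
    return used, pack_size - used

# The example from the text: 1 vCenter Server and 10 ESXi hosts
used, remaining = osi_remaining(vcenters=1, esxi_hosts=10)
print(used, remaining)  # 11 OSIs used, 14 left for other devices/OSes
```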


vRealize Log Insight 3.3 is available as an appliance and can be downloaded here, link. The appliance is installed and configured within 15 minutes.


In the appliance, enter the license key of the vCenter Server Standard license. No additional license is needed.


One vCenter Standard license is limited to 25 OSIs, and only Content Packs published by VMware can be installed.


In order to enable other non VMware Content Packs, you’ll need to purchase a full-feature license for Log Insight.

vRealize Log Insight is a log analysis and troubleshooting tool that is available to every vCenter Standard customer. A great way to get a better view of the VMware vSphere environment you’re hosting.

Home lab extension with an Intel NUC 6th generation

For my home lab I bought a 6th generation Intel NUC. The Intel NUC has the following specifications:

  • Intel NUC6i3SYH
  • Intel i3-6100u (Skylake) 2.3 GHz Dual Core, 3 MB cache, 15W TDP
  • 2 memory slots for DDR4-2133 SODIMM memory, maximum is 32 GB memory
  • Intel HD Graphics 520 GPU
  • Intel I219-V Gigabit network adapter and Intel Wireless-AC 8260 WIFI adapter
  • Option to install a 2.5″ HDD/SDD and a M.2 SSD card (2242 or 2280)
  • 4 USB 3.0 ports (2 in the front and 2 on the rear)
  • SD card reader (SDXC cards)
  • Case and a 19V AC-DC adapter


The Intel NUC will be used as the management server for my Software Defined DataCenter (SDDC) home lab environment. The Intel NUC will host VMs such as:

  • Domain Controller + DNS
  • vCenter Server Appliance
  • Virtual SAN witness appliance
  • Veeam backup
  • Etc.

The VMs are stored on a Synology NAS. The Intel NUC will use an NFS connection to the Synology NAS. The NUC will not have any disks; it will boot ESXi from a USB stick.


The 6th generation Intel NUC leaves two choices for choosing a CPU:

  • Intel I3 Skylake available on the NUC6i3SYH model
  • Intel I5 Skylake available on the NUC6i5SYH model

Both CPUs have 2 cores and support hyperthreading. The table below gives a quick comparison between both processors:


For this configuration the Intel NUC with the i3-6100U processor is sufficient and saves 100 euro. The i3 has 2 cores and hyperthreading, so 4 logical processors are displayed in the hypervisor.


Other advanced technologies such as VT-x, VT-d, EPT are fully supported.


The Intel NUC has 2 memory slots and supports up to 32 GB DDR4-2133 MHz SODIMM memory. I added 2 Crucial 16 GB DDR4-2133 (CT16G4SFD8213) modules, which makes a total of 32 GB memory.


I use the same memory as suggested by the blog “virten.net” link.

Network card

The Intel NUC has an Intel I219-V Gigabit network adapter and a wireless network card. Only the Intel I219-V can be used with VMware ESXi.


The NUC has an M.2 (PCIe Gen 3 x4) slot and an Intel AHCI SATA-600 controller. It is possible to install a 2.5″ SSD or hard disk in the drive cage.


The VMs are on a Synology NAS. So the NUC will not have any disks other than a USB drive for booting VMware ESXi.

VMware ESXi

A USB 3 stick is used to boot VMware ESXi. VMware ESXi 6.0 U1b (VMware-VMvisor-Installer-201601001 3380124.x86_64) is installed on the USB stick. For creating a USB stick with ESXi 6 you can use the blogpost here. Only steps 1 to 3 are needed.

There is no need to add extra drivers to the ESXi image because the network and storage adapters are recognized by default.

LAN Storage

Passthrough is supported by the CPU and motherboard.


Nesting, such as VMware in VMware and Hyper-V in VMware, is possible. Below is a screenshot of a Hyper-V server with a VM, hosted on ESXi.


Power consumption

The average power consumption of the NUC is between 20 and 30 watts with a couple of VMs active.


  Component                 Amount  Total
  Intel NUC NUC6i3SYH       1       € 299,00
  Crucial 16 GB DDR4-2133   2       € 235,80
  USB3 Stick 16 GB          1       € 10,00
  Total                             € 544,80
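The total in the table works out as follows (prices in euro as listed above; the per-module memory price is derived from the line total):

```python
# Price list from the table above: (line total in euro, quantity)
parts = {
    "Intel NUC NUC6i3SYH":     (299.00, 1),
    "Crucial 16 GB DDR4-2133": (235.80, 2),  # line total for 2 modules
    "USB3 Stick 16 GB":        (10.00, 1),
}
total = sum(line_total for line_total, _qty in parts.values())
print(f"€ {total:.2f}")  # € 544.80
```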


The 6th generation Intel NUC is a great and easy option for creating a small ESXi home lab. I use the Intel NUC as a management server with a couple of VMs. Another use case is creating a 2/3-node hybrid Virtual SAN (VSAN) cluster: put a Samsung 950 PRO in the M.2 slot for caching and a 2.5″ HDD as the capacity tier. Easy.

Pros and cons

Pros:

  • All-in-one package including a motherboard, processor, enclosure and power adapter
  • Supports up to 32 GB of memory
  • Easy to install
  • Small form factor
  • Low noise & power consumption

Cons:

  • The hardware is not on the VMware HCL
  • Need a converter to connect to a DVI or VGA monitor
  • Only 2 cores available
  • No expansion possibilities such as adding an extra network card
  • No remote management

Virtual SAN (VSAN) ROBO and SMB environment considerations

Virtual SAN requires a minimum of 3 ESXi hosts. With version 6.1 of Virtual SAN, Remote Office/Branch Office (ROBO) and small SMB customer environments are supported with Virtual SAN on 2 ESXi nodes. With a 2-node Virtual SAN cluster, options such as HA, DRS and vMotion are fully supported.

In a ROBO configuration you have two Virtual SAN data nodes and one witness node. The Virtual SAN data nodes can be in one location. The witness node can reside in the same or another location (but not on the Virtual SAN itself).

A virtual witness appliance is needed when a split brain occurs or when performing maintenance, to determine which VMs have quorum (more than 50% of a VM's objects needs to be available). The quorum side can be 1 ESXi host with Virtual SAN plus the witness, or 2 ESXi hosts with Virtual SAN.
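The quorum rule can be sketched as follows, assuming an FTT=1 object with three components (two data replicas plus one witness component); the function is an illustration, not VMware code:

```python
def has_quorum(components_available, components_total=3):
    """An object is accessible when more than 50% of its components
    are available. With FTT=1 there are 3 components:
    2 data replicas + 1 witness component.
    """
    return components_available > components_total / 2

# One data node fails: the other replica + the witness survive -> quorum
print(has_quorum(2))  # True
# Only a single component remains reachable -> no quorum, object inaccessible
print(has_quorum(1))  # False
```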

A Virtual SAN ROBO environment example looks like this:

vsan robo

  • 2 VMware ESXi hosts with Virtual SAN enabled
  • A witness appliance running on an ESXi server in the same or another site

Here are some considerations for using Virtual SAN ROBO:


  • With Virtual SAN ROBO, a witness appliance is needed. The witness appliance is placed on a third ESXi server. This host does not need a Virtual SAN license or an SSD disk.
  • The witness appliance is a nested ESXi host (ESXi running in a VM).
  • It is not supported to run the witness on Virtual SAN.
  • The witness host stores only VM witness components (metadata).
  • The VMs are only protected against a single failure (FTT=1).


  • The virtual witness appliance can be configured in the following sizes (depending on the number of supported VMs):

                             <=10 VMs    <=500 VMs               >500 VMs
    vCPUs                    2           2                       2
    RAM (GB)                 8           16                      32
    Virtual disks (*1)       8 GB boot   8 GB boot, 350 GB HDD   8 GB boot, 350 GB HDD
    Max witness components   750         22000                   45000

  (*1) The SSD and HDD are virtual disks. There is no need for a physical SSD disk in the ESXi host where the witness appliance resides.



  • Deploy Virtual SAN on certified hardware. Check the Virtual SAN HCL!
  • A Virtual SAN disk configuration requires a minimum of 1 SSD and 1 magnetic disk. These disks cannot be used for booting ESXi.
  • For booting ESXi use a USB, SD or SATADOM device.
  • A small ESXi host can be used for the witness appliance. The witness appliance holds no data, only metadata.


  • Cross-connecting 2 Virtual SAN ESXi nodes is NOT supported.
  • For 10 or fewer VMs a 1 Gbps network connection can be used. For more than 10 VMs, use 10 Gbps.
  • Network bandwidth to the witness: 1.5 Mbps.
  • Latency to the witness: up to 500 milliseconds RTT.
  • Latency between the data nodes: up to 5 milliseconds RTT.


  • Virtual SAN is licensed separately.
  • Virtual SAN for ROBO is a license that includes a 25-VM pack. This license does not include the stretched cluster and all-flash options.
  • A maximum of 1 Virtual SAN for ROBO license may be used per site.
  • When running fewer than 25 VMs, consider a VSAN Standard or Advanced license. The Standard and Advanced licenses are licensed per CPU socket.
  • Consider single-socket CPU servers to decrease licensing costs.
  • Consider vSphere Essentials (Plus) for licensing the vSphere environment to reduce licensing costs.
  • Consider ESXi Hypervisor (free) for placing the witness appliance. ESXi Hypervisor cannot be managed by a vCenter Server!
  • For each ROBO Virtual SAN you need a dedicated witness appliance.
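The ROBO-pack versus per-socket trade-off above can be sketched as a comparison; all prices here are caller-supplied placeholders, not VMware list prices, and the function is purely illustrative:

```python
def cheapest_vsan_license(num_vms, num_sockets,
                          robo_pack_price, per_socket_price):
    """Compare a Virtual SAN for ROBO 25-VM pack with per-socket
    (Standard/Advanced) licensing. Prices are supplied by the caller.

    Returns the cheaper option as (name, cost).
    """
    options = []
    if num_vms <= 25:  # one ROBO pack per site covers up to 25 VMs
        options.append(("ROBO 25-VM pack", robo_pack_price))
    options.append(("per-socket", num_sockets * per_socket_price))
    return min(options, key=lambda option: option[1])

# Hypothetical example: 10 VMs on 2 single-socket hosts
print(cheapest_vsan_license(10, 2,
                            robo_pack_price=3000,
                            per_socket_price=2500))
# -> ('ROBO 25-VM pack', 3000)
```

With fewer VMs and few sockets, per-socket licensing can win instead, which is why the text suggests single-socket servers to reduce costs.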

vCenter Server

  • When running the vCenter Server on top of Virtual SAN, powering down the Virtual SAN cluster involves a special procedure (link). Consider placing the vCenter Server on the witness host for simplicity.