Install Home Assistant as Virtual Machine (VM) on VMware ESXi

I started exploring Home Assistant on a Raspberry Pi. After several SD card crashes I decided to install it as a Virtual Machine (VM) on VMware ESXi. There is a VMDK version available (link) that can be attached (this involves manual steps), but I prefer a clean installation. VMware ESXi is installed on my Shuttle SH370R6 plus home lab server (link).

Other advantages of running Home Assistant as a VM on VMware ESXi include:

  • The Raspberry Pi has limited hardware resources and can become a performance bottleneck as you add more sensors and add-ons. A home lab server offers more CPU power, memory and storage performance.
  • Snapshot functionality. Quickly make a Virtual Machine snapshot before upgrading Home Assistant or its add-ons. When something goes wrong during the upgrade, simply revert the snapshot and everything works again within seconds.
  • The installation of Home Assistant in an Ubuntu VM on ESXi is simple.
  • USB sticks such as Z-Wave or Zigbee controllers (for example for Zigbee2MQTT) can be attached to the VM using ESXi USB passthrough.

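The snapshot workflow mentioned above can also be scripted from the ESXi shell instead of the web UI. A minimal sketch, assuming SSH access to the ESXi host; the VM id (12) is a hypothetical example, look yours up with `vim-cmd vmsvc/getallvms`:

```shell
# Run on the ESXi host (guarded so it is a no-op elsewhere).
# VMID is a hypothetical example; find yours with: vim-cmd vmsvc/getallvms
VMID=12
# Date-stamped snapshot name, so pre-upgrade snapshots are easy to find
SNAP_NAME="pre-upgrade-$(date +%Y%m%d)"
if command -v vim-cmd >/dev/null 2>&1; then
  # Arguments: vmid, name, description, includeMemory (0/1), quiesced (0/1)
  vim-cmd vmsvc/snapshot.create "$VMID" "$SNAP_NAME" "before add-on upgrade" 0 0
fi
```

Reverting after a failed upgrade is the same idea with `vim-cmd vmsvc/snapshot.revert`, or simply use the snapshot manager in the ESXi web UI.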
Here are the steps outlined on how to install an Ubuntu VM and install Home Assistant.

Configure the Virtual Machine hardware specifications

  • Download Ubuntu 18.04.2 LTS (Long-Term Support), link.
  • Make a connection to the ESXi host: https://<ip-address>/ui
  • Upload the Ubuntu ISO to a datastore
  • Create a new virtual machine with the following specifications:
    • Name: HA-01
    • Compatibility: ESXi 6.7 virtual machine
    • Guest OS family: Linux
    • Guest OS version: Ubuntu Linux (64-bit)
    • Storage: datastore with 30 GB free space
    • CPUs: 2
    • Memory: 2048 MB
    • Hard disk 1: 30 GB
      • Disk Provisioning: Thin provisioned
    • SCSI Controller 0: VMware Paravirtual
    • USB controller 1: USB 2.0 or 3.0 depending on the ESXi hardware
    • Network adapter 1: Select the portgroup
      • Adapter type: VMXNET 3
    • CD/DVD Drive 1: Datastore ISO file
      • Browse to the Ubuntu ISO
      • Connect: checked
    • Video Card: Default settings

  • Next
  • Finish
  • Power on the VM
  • Open a console session

The VM has a paravirtualized SCSI controller (PVSCSI) and a paravirtualized NIC (VMXNET3).
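Once the guest OS is running you can verify from inside the VM that these paravirtual devices are actually in use; a quick check (the grep only matches when run inside an ESXi VM):

```shell
# Lists the PVSCSI storage adapter and VMXNET3 NIC when run inside the guest;
# guarded so it exits cleanly on machines without lspci or without these devices.
if command -v lspci >/dev/null 2>&1; then
  lspci | grep -Ei 'pvscsi|vmxnet' || echo "no VMware paravirtual devices found"
fi
```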

Install Ubuntu on ESXi

  • Language: English
  • Select: Install Ubuntu Server
  • Choose your preferred language: English
  • Keyboard configuration: Select the layout and variant: English (US)
  • Installation: Install Ubuntu
  • Networking connections: The VMXNET3 NIC of the VM is displayed. For the IPv4 method, select DHCP or configure a manual fixed IP address
  • Configure proxy: leave this blank
  • Ubuntu mirror: Use the mirror address suggested
  • Filesystem setup: Use an Entire Disk
    • Choose the disk to install to: /dev/sda 30.00G
    • Filesystem summary: Done
    • Confirm destructive action. Are you sure you want to continue: Continue

  • Profile setup: Fill in the following fields (remember the username and password)
    • Your name:
    • Your server’s name:
    • Pick a username:
    • Choose a password:
    • Confirm a password:
  • SSH Setup: Install OpenSSH server
    • Import SSH identity: No
  • Featured Server Snaps: Select none
  • The installation of Ubuntu begins
  • The installation is complete! Reboot the system

  • Remove the attached Ubuntu ISO from the VM and press enter
  • After the reboot it’s time to install Home Assistant in the VM

Open VM Tools is installed by default, so there is no need to install this package.
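If you selected DHCP during setup but later want a fixed IP address for the Home Assistant VM, Ubuntu 18.04 uses netplan. A sketch with assumed example values (the file name, interface name and addresses are hypothetical and will differ per environment):

```yaml
# /etc/netplan/01-netcfg.yaml -- example values, adjust to your network
network:
  version: 2
  ethernets:
    ens160:            # typical interface name for a VMXNET3 NIC
      dhcp4: no
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Apply the configuration with `sudo netplan apply`.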


  • Because we installed OpenSSH, we use an SSH session for the configuration.
  • Connect to the IP address of the Ubuntu VM using SSH (I’m using PuTTY for the connection).
  • Package requirements (link) for Home Assistant:
    • apparmor-utils
    • apt-transport-https
    • avahi-daemon
    • ca-certificates
    • curl
    • dbus
    • jq
    • network-manager
    • socat
    • software-properties-common (already installed in Ubuntu 18.04)
    • As the Docker package, Docker CE must be installed.
  • Use the following commands to install all the required packages and install Home Assistant:
sudo -i
add-apt-repository universe
apt-get update
apt-get install -y apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat
curl -fsSL | sh
curl -sL "" | bash -s
  • After the installation check if there are two containers running using the following command:
root@ha-01:~# docker ps
8def326c0ce7 homeassistant/qemux86-64-homeassistant "/bin/ pytho…" About a minute ago Up About a minute homeassistant
47945d4fe0f4 homeassistant/amd64-hassio-supervisor "python3 -m hassio" 2 minutes ago Up 2 minutes hassio_supervisor
  • Connect to http://<IP address>:8123
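After the containers start it can take a few minutes before the frontend answers. A small check from the SSH session (192.168.1.50 is a placeholder for your VM’s IP address):

```shell
# Poll the Home Assistant frontend; replace the placeholder IP with your own.
HA_IP=192.168.1.50
if curl -fsS --max-time 5 -o /dev/null "http://${HA_IP}:8123"; then
  echo "Home Assistant is up"
else
  echo "not reachable yet (still starting, or wrong IP)"
fi
```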

Home Assistant is now running as VM on VMware ESXi.

Monitor vSAN with ControlUp

One of the new enhancements in ControlUp 7.3 is vSAN monitoring support. ControlUp detects the vSAN cluster(s) and objects, and displays real-time vSAN-specific metrics and metadata. In this blog post I highlight the features of the new vSAN integration in ControlUp 7.3.


The vSAN cluster is automatically recognized by ControlUp when the following requirements are met:

  • PowerShell minimum Version 5.0
  • VMware PowerCLI 10.1.1.x
  • .NET framework version 4.5
  • The vSAN Performance Service must be enabled on the cluster
  • The user account configured for the hypervisor connection requires the “storage.View” permission.

Running ControlUp is easy: no installation is needed, you simply execute a single executable (ControlUpConsole.exe). After starting ControlUp, add the vCenter Server and the vSAN cluster(s) are automatically recognized. When clicking on the vSAN cluster you see real-time metadata and performance metrics.


There are several preset views available with vSAN metrics such as:

  • vSAN Performance. Includes vSAN performance metrics such as IOPS, latency, cache and buffers.
  • vSAN Health. Includes the vSAN health checks
  • vSAN Host Network. Includes vSAN network I/O and packet loss metrics.

You can easily switch between predefined views in the “Column Preset”. Here is an overview of the vSAN metrics used by ControlUp:

Datastores: Name, Type, Capacity, Read/Write IOPS, Read/Write Rate, Read/Write Latency, Compression, Capacity Deduplication, Congestion, Outstanding IO, Disk Configuration, Total Used Capacity, Total Used – Physically Written, Total Used – VM Overreserved, Total Used – System Overhead, vSAN Free Capacity, vSAN Health, vSAN Cluster Health, vSAN Network Health, vSAN Physical Disk Health, vSAN Data Health, vSAN Limits Health, vSAN Hardware Compatibility Health, vSAN Performance Service Health, vSAN Build Recommendation, vSAN Online Health.
Datastores on Hosts: Name, Type, Capacity, Read/Write IOPS, Read/Write Rate, Read/Write Latency, Compression, Capacity Deduplication, Congestion, Outstanding IO, Local Client Cache Hit IOPS, Local Client Cache Hit Rate, vSAN Max Read Cache Read Latency, vSAN Max Write Buffer Write Latency, vSAN Max Read Cache Write Latency, vSAN Max Write Buffer Read Latency, vSAN Min Read Cache Hit Rate, vSAN Write Buffer Min Free Percentage, vSAN Host Network Inbound/Outbound I/O Throughput, vSAN Host Network Inbound/Outbound Packets Per Second, vSAN Host Network Inbound/Outbound Packet Loss Rate

When navigating you see all these metrics in the vSAN cluster, vSAN datastores on hosts, virtual disks and vSAN host network utilization views. You can easily drill down by double clicking from the vSAN datastore to the diskgroup(s) on each ESXi host and then to the virtual disk(s). From the virtual disk(s) you can drill down to the Windows process.

Example: Find the root cause of high IOPS load on the vSAN cluster.

In the following example we will identify a Windows process that is causing high IOPS stress on the vSAN cluster. We drill down from the vSAN cluster to the vSAN diskgroup of the ESXi host to the virtual disk to the process level in the VM to find the root cause of the high IOPS.

  • In the vSAN Performance view we see the stress level has changed and a high IOPS load.

  • In the IOPS column we see that the threshold of 2000 is crossed. This threshold is a default and can be adjusted. The Virtual Expert suggests navigating to the “Datastores on Hosts (IOPS detailed)” view.

  • When double clicking on the “Datastore on Host” we see that “esxin04.lab.local” is generating the IOPS load.

  • The vSAN diskgroup of the “esxin04.lab.local” host has a virtual disk that belongs to the “ControlUp-vSAN-Test” VM that is causing the high IOPS load.

  • When double clicking on the virtual disk we go to the “Processes” view and see that the “diskspd.exe” process is causing the high IOPS load.

  • Optional: Right click on the process and select kill to end the “diskspd.exe” process. This stops the IOPS load on the vSAN cluster.

This example shows how easy it is to identify what process is causing stress on the vSAN cluster.

Alerting and reporting

For alerting you can add triggers in ControlUp to notify you when something happens on the vSAN cluster such as a change in the stress level for a period of time.

When using triggers you’re able to start investigating right away when something is happening on the vSAN cluster. All the vSAN data is transferred to ControlUp Insight for historical reporting and analytics. This is great for analyzing data and trends over time and can be very useful when investigating issues and understanding what is going on in your environment.


ControlUp is easy to set up and great for fast troubleshooting. vSAN support was added in version 7.3. As shown in this blog post, with a couple of double clicks you’re able to perform a root cause analysis and find which process is causing the high IOPS on the vSAN cluster.

There is a free trial available. Give it a try here: link

VMware Unified Access Gateway (UAG) 3.4 RADIUS license change

The VMware Unified Access Gateway (UAG) acts as reverse proxy and tunnels sessions (PCoIP and Blast) to desktops and remote apps. Besides Horizon support, new features are added for AirWatch and Identity Manager. With version 3.4, the VMware Unified Access Gateway is offered in three editions based on the Horizon or Workspace ONE licenses.

  • Standard
  • Advanced
  • Enterprise

Per edition the following features are supported:

One of the new features is high availability support for the Unified Access Gateway. A UAG high availability environment can be created without the use of load balancers. This makes the environment less complex; it is available as an Enterprise feature.

Another feature is RADIUS support. RADIUS is not a new feature and has been available for a very long time. RADIUS offers two-factor authentication and is always a requirement for production environments. When looking at the editions table you see that this is now an Advanced feature. Before version 3.4 of the UAG, with VMware Access Point and VMware Security Server, RADIUS was supported in all editions!

In my opinion RADIUS is not an advanced feature and belongs in all editions of Horizon. This was always the case!

I have a lot of customers who are using Horizon Standard with RADIUS support for two-factor authentication. Now they are stuck on the UAG 3.3.1 appliance or must invest heavily ($$$$$) in the Advanced (or higher) edition of Horizon.

I hope VMware will reconsider and make RADIUS support available in all editions of Horizon.

Update, March 13, 2019: VMware Unified Access Gateway 3.5 has been released. In this version there is no license requirement based on the edition anymore; all features have been made available for all Workspace ONE and Horizon editions. This is great news! RADIUS support is available in all editions of UAG 3.5. More information: link.