Using the new Shuttle SH370R8 as home lab server with VMware ESXi

In January 2019 I did a review of the Shuttle SH370R6 (link) using VMware ESXi. A couple of weeks ago the new Shuttle SH370R8 was released. The main differences between the Shuttle SH370R6 and the SH370R8 are:

  • Ready for the 8th/9th Gen Intel Coffee Lake processors
  • Dual Intel Gigabit Ethernet
  • An extra fan in front of the chassis for a better airflow
  • Front panel: microphone input (3.5 mm), headphones output (3.5 mm), 2x USB 3.0 (Type A, USB 3.1 Gen 1), power button, power indicator (blue LED) and hard disk drive indicator (yellow LED)
  • Supports four 3.5″ hard drives (with an optional 2.5″ kit available)

The recommended retail price from Shuttle for the SH370R8 is € 317.00 (ex VAT).

Installation

The Shuttle SH370R8 comes with a black aluminium chassis, a motherboard and a 500 W Power Supply Unit (PSU); the CPU cooling system is also included. The only hardware you need to add is a compatible CPU, memory and disk(s).

I’m using the following hardware (the same as in the Shuttle SH370R6 review) for testing the Shuttle SH370R8:

  • Intel Core i7-8700 (6 cores / 12 threads, 65 W)
  • 4x 16 GB Kingston ValueRAM KVR26N19D8/16
  • Samsung 970 EVO 1 TB M.2 NVMe SSD
  • Samsung 250 GB and 500 GB SATA SSDs
  • Kingston DataTraveler 100 G3 32 GB USB stick for booting VMware ESXi

The 16 GB memory modules are now much cheaper than in January 2019 when I did the SH370R6 review; with four 16 GB modules you save around € 160.00. The documentation describes the installation steps very clearly, which makes the hardware installation easy.

VMware ESXi

After installing the hardware and swapping the USB stick from the SH370R6 to the SH370R8, it’s time to press the power button. The first step is to enter the BIOS and change the boot order so that the VMware ESXi hypervisor can boot. After a short time VMware ESXi 6.7 Update 2 is up and running.

The two onboard Intel Corporation I211 Gigabit NICs are recognized by default in ESXi 6.7. In my configuration one NIC is used for VM and management traffic and the other for NFS traffic to my QNAP storage. The optional wireless LAN adapter is not recognized in ESXi.
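
To verify the NIC layout and dedicate the second NIC to the NFS traffic, the ESXi Shell can be used. A minimal sketch, assuming vmnic1 is the uplink for storage traffic (the vSwitch, portgroup and IP settings are examples to adjust):

esxcli network nic list                                    # both Intel I211 NICs should show up as vmnic0/vmnic1
esxcli network vswitch standard add -v vSwitchNFS          # separate vSwitch for storage traffic
esxcli network vswitch standard uplink add -v vSwitchNFS -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitchNFS -p NFS
esxcli network ip interface add -i vmk1 -p NFS             # VMkernel interface for NFS
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.2.10 -N 255.255.255.0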

The USB, NVMe and 4x SATA 3.0 (6G) controllers are recognized by default in ESXi.
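
A quick way to confirm this from the ESXi Shell (a sketch; the exact adapter names differ per system):

esxcli storage core adapter list      # should list the AHCI (SATA) and NVMe adapters
lspci | grep -i -E 'sata|nvme|usb'    # the same controllers as seen on the PCI bus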

Most of my VMs run from the NVMe SSD storage, which makes them fast to power up. The power consumption is the same as the SH370R6 Plus: 20-24 W with ESXi booted (no VMs active) and between 35-70 W with 10 VMs running.

Conclusion

The differences between the Shuttle SH370R6 and SH370R8 are minimal, but I really like the dual Intel Gigabit NICs and the extra space for 4x 3.5″ hard drives. For 2.5″ drives an optional adapter is available. With the support for 4x 3.5″ hard drives you can host a lot of storage.

With the Shuttle SH370R6 I used one PCIe slot for a NIC. With the onboard dual Intel Gigabit NICs I now have an extra PCIe slot available, for example for an extra NVMe controller or a 10 Gigabit NIC. The PCIe x16 slot can hold a large dual-slot graphics card (up to 280 mm). The Shuttle has great expansion possibilities with the two PCIe slots and support for 4x 3.5″ hard drives.

The Shuttle SH370R8 with VMware ESXi has been running 24/7 for a couple of weeks now without any problems, and the performance is great with the Intel i7 CPU, 64 GB memory and NVMe storage. I’d like to welcome the Shuttle SH370R8 to the VMware ESXi home lab club :-).

Specifications

The Shuttle SH370R8 specifications:

  • Chassis: Black aluminium chassis (33.2 x 21.5 x 19.0 cm)
  • CPU: Based on the Intel H370 chipset, the XPC Barebone SH370R8 supports all the latest Intel Core processors of the “Coffee Lake” series for socket LGA1151v2 with up to 95 W TDP, including the top model Core i9-9900K with eight cores and 16 threads
  • Cooling: A special passive heatpipe I.C.E. (Integrated Cooling Engine) cooling system ensures cooling of the barebone.
  • Memory: Four memory slots, up to 64 GB of DDR4-2400/2666 memory. Intel Optane ready, which boosts the speed of one hard disk through data caching.
  • LAN: Dual Intel Gigabit Ethernet.
  • Slots:
    • M.2 2280M slot for a NVMe SSD.
    • M.2 2230E slot for WLAN cards.
    • 1x PCIe x16 slot for powerful dual-slot graphics cards (max. size: 280 x 120 x 40 mm)
    • 1x PCIe x4 slot for an expansion card
    • Optional accessories include a WLAN/Bluetooth module (WLN-M), an RS-232 port (H-RS232), and a bracket for two 2.5-inch drives (PHD3).
  • PSU: A 500 Watt, 80-PLUS-Silver-certified power supply unit with the following connectors:
    • ATX main power 2×10 and 2×2 pins
    • Graphics power connector: 6 pins and 8 pins
    • 4 x SATA, 2x Molex and 1x floppy

  • Ports:
    • 4x Serial ATA 6G connectors onboard (rev. 3.0, max. 6 Gbit/s)
    • 4x USB 3.1 Gen 2, 4x USB 3.1 Gen 1, 4x USB 2.0
  • Supports four 3.5″ hard drives
  • Officially supported operating systems: Windows 10 and Linux (64-bit)
  • The recommended retail price from Shuttle for the SH370R8 is EUR 317.00 (ex VAT)

More information about the Shuttle SH370R8 can be found here (link).

Blue circle in the vSphere client after upgrading to vCenter Server 6.7 Update 2

After upgrading the vCenter Server Appliance (VCSA) to version 6.7 Update 2, I tried to log in using the vSphere Client. After entering the credentials, an endless spinning blue circle appeared.

In the VAMI interface (https://vcsa-fqdn:5480) of the VCSA, the health status of all components was green (okay), so I decided to reboot the VCSA.
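
The health of the individual services can also be checked from an SSH session to the VCSA with the service-control command, assuming SSH access to the appliance is enabled:

Command> shell                                    # switch from the appliance shell to bash
root@vcsa [ ~ ]# service-control --status --all   # shows which VCSA services are running or stopped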

After the VCSA reboot I encountered the same spinning blue circle when trying to log in with the vSphere Client. I tried Firefox, Google Chrome and Internet Explorer; the only browser that worked was Internet Explorer. I had never used Internet Explorer before, so I tried clearing the cache of Google Chrome and Firefox using the following methods:

Clear cache, cookies and history of Google Chrome:

  • Open Chrome.
  • At the top right, click More (the three dots)
  • Click More tools and then Clear browsing data
  • Time range: All time
  • Select Browsing history, Cookies and other site data, and Cached images and files
  • Click Clear data

Clear cache and cookies of Firefox browser:

  • Open Firefox
  • In the address bar enter: about:preferences
  • Click Privacy & Security
  • Under Cookies and Site Data select Clear Data
  • Check Cookies and Site Data and Cached Web Content
  • Click Clear and confirm with Clear Now

After clearing the cache I was able to log in using the vSphere Client without the endless blue circle. So make sure to clear the browser cache when you experience this problem.

Install Home Assistant Hass.io as Virtual Machine (VM) on VMware ESXi

I started exploring Home Assistant Hass.io on a Raspberry Pi. After several SD card crashes I decided to install Hass.io as a Virtual Machine (VM) on VMware ESXi. There is a VMDK version available (link) that can be attached (this involves manual steps), but I prefer a clean installation. VMware ESXi is installed on my Shuttle SH370R6 Plus home lab server (link).

Other advantages of running Hass.io as a VM on VMware ESXi include:

  • The Raspberry Pi has limited hardware resources and can become a performance bottleneck when using more and more sensors and installing Hass.io add-ons. A home lab server offers more CPU power, memory and storage performance.
  • Snapshot functionality: quickly make a Virtual Machine snapshot before upgrading the add-ons or Hass.io itself. When something goes wrong during the upgrade, simply revert to the snapshot and everything works again within seconds (see the sketch after this list).
  • The installation of Hass.io in an Ubuntu VM on ESXi is simple.
  • USB devices such as Z-Wave or Zigbee (Zigbee2MQTT) sticks can be attached to the Hass.io VM using ESXi USB pass-through.
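
As an illustration of the snapshot workflow mentioned above, snapshots can also be scripted from the ESXi Shell with vim-cmd. A minimal sketch (the VM id 12 and snapshot id 1 are examples; look them up with the list commands first):

vim-cmd vmsvc/getallvms                                          # find the id of the Hass.io VM
vim-cmd vmsvc/snapshot.create 12 pre-upgrade "Before Hass.io upgrade" 0 0
vim-cmd vmsvc/snapshot.get 12                                    # list the snapshots and their ids
vim-cmd vmsvc/snapshot.revert 12 1 0                             # revert when an upgrade misbehaves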

Below are the steps to install an Ubuntu VM and install Hass.io in it.

Configure the Virtual Machine hardware specifications

  • Download Ubuntu 18.04.2 LTS (Long-Term Support), link.
  • Make a connection to the ESXi host: https://<ip-address>/ui
  • Upload the Ubuntu ISO to a datastore
  • Create a new virtual machine with the following specifications:
    • Name: HA-01
    • Compatibility: ESXi 6.7 virtual machine
    • Guest OS family: Linux
    • Guest OS version: Ubuntu Linux (64-bit)
    • Storage: datastore with 30 GB free space
    • CPUs: 2
    • Memory: 2048 MB
    • Hard disk 1: 30 GB
      • Disk Provisioning: Thin provisioned
    • SCSI Controller 0: VMware Paravirtual
    • USB controller 1: USB 2.0 or 3.0 depending on the ESXi hardware
    • Network adapter 1: Select the portgroup
      • Adapter type: VMXNET 3
    • CD/DVD Drive 1: Datastore ISO file
      • Browse to the Ubuntu ISO
      • Connect: checked
    • Video Card: Default settings

  • Next
  • Finish
  • Power on the VM
  • Open a console session

The VM has a paravirtual SCSI controller (PVSCSI) and a virtual NIC (VMXNET3).
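
For those who prefer the command line, roughly the same VM can be created with VMware’s govc CLI. This is a sketch under assumptions: govc is installed, the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables point at the ESXi host, and the datastore, network and ISO path are examples to adjust:

govc vm.create -c=2 -m=2048 -g=ubuntu64Guest \
  -ds=datastore1 -disk=30GB -disk.controller=pvscsi \
  -net="VM Network" -net.adapter=vmxnet3 \
  -iso="[datastore1] iso/ubuntu-18.04.2-live-server-amd64.iso" \
  -on=true HA-01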

Install Ubuntu on ESXi

  • Language: English
  • Select: Install Ubuntu Server
  • Choose your preferred language: English
  • Keyboard configuration: Select the layout and variant: English (US)
  • Installation: Install Ubuntu
  • Network connections: The VMXNET3 NIC of the VM is displayed. Select DHCP or a manual fixed IP address as the IPv4 method
  • Configure proxy: leave this blank
  • Ubuntu mirror: Use the mirror address suggested
  • Filesystem setup: Use an Entire Disk
    • Choose the disk to install to: /dev/sda 30.00G
    • Filesystem summary: Done
    • Confirm destructive action. Are you sure you want to continue: Continue

  • Profile setup: Fill in the following fields (remember the username and password)
    • Your name:
    • Your server’s name:
    • Pick a username:
    • Choose a password:
    • Confirm a password:
  • SSH Setup: Install OpenSSH server
    • Import SSH identity: No
  • Featured Server Snaps: Select none
  • The installation of Ubuntu begins
  • The installation is complete! Reboot the system

  • Remove the attached Ubuntu ISO from the VM and press enter
  • After the reboot it’s time to install Hass.io in the VM

Open VM Tools is already installed by default, so there is no need to install this package.
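
This can be quickly verified from within the VM:

vmware-toolbox-cmd -v                 # prints the installed Open VM Tools version
systemctl status open-vm-tools        # the service should be active (running)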

Install Hass.io

  • Because we installed the OpenSSH server, we use an SSH session for the Hass.io configuration.
  • Connect to the IP address of the Ubuntu VM using SSH (I’m using PuTTY for the connection).
  • Package requirements (link) for Hass.io:
    • apparmor-utils
    • apt-transport-https
    • avahi-daemon
    • ca-certificates
    • curl
    • dbus
    • jq
    • network-manager
    • socat
    • software-properties-common (already installed in Ubuntu 18.04)
    • As the Docker package, Docker CE must be installed.
  • Use the following commands to install all the required packages and install Hass.io:
sudo -i
# Enable the universe repository and refresh the package lists
add-apt-repository universe
apt-get update
# Install the required packages listed above
apt-get install -y apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat
# Install Docker CE with the Docker convenience script
curl -fsSL get.docker.com | sh
# Download and run the Hass.io installer script
curl -sL https://raw.githubusercontent.com/home-assistant/hassio-installer/master/hassio_install.sh | bash -s
  • After the installation, check that two containers are running using the following command:
root@ha-01:~# docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED              STATUS              PORTS   NAMES
8def326c0ce7   homeassistant/qemux86-64-homeassistant   "/bin/entry.sh pytho…"   About a minute ago   Up About a minute           homeassistant
47945d4fe0f4   homeassistant/amd64-hassio-supervisor    "python3 -m hassio"      2 minutes ago        Up 2 minutes                hassio_supervisor
root@ha-01:~#
  • Connect to Hass.io: http://<IP address>:8123
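
If the Hass.io page is not reachable right away, Home Assistant may still be starting. The startup can be followed with the container name from the docker ps output above:

docker logs -f homeassistant          # follow the Home Assistant startup log (Ctrl+C to stop)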

Home Assistant Hass.io is now running as a VM on VMware ESXi.