How to build an ESXi on ARM Pi cluster?

Shortly after VMworld 2020, VMware released (after years of announcing and demoing) the ESXi on ARM fling (*1). On social media and in the community, ESXi on ARM is a very hot topic. The ESXi on ARM fling makes it possible to run the VMware ESXi hypervisor on ARM platforms such as:

    • Avantek Workstation and server (Ampere eMAG)
    • Lenovo ThinkSystem HR330A and HR350A (Ampere eMAG)
    • SolidRun Honeycomb LX2
    • Raspberry Pi (rPi) 4b model (4GB and 8GB only).

Because it supports the Raspberry Pi 4b model, the fling is very interesting for home labbers.

(*1) A fling is an early-stage piece of software that VMware shares with the community. There is no official (only community) support available. The ESXi on ARM fling can be downloaded from the following location: link.

Use cases

Some use cases for ESXi On ARM are:

  • vSAN Witness node, link
  • Automation environment for PowerCLI, Terraform, and Packer, link.
  • Security at the edge
  • Other home lab projects such as running Home Assistant (blog post is coming).

For my home lab environment, I wanted to build an ESXi ARM cluster for my IoT stuff (such as Home Assistant) with two Pi nodes attached to my existing QNAP NAS. With the two ESXi ARM nodes, a vCenter Server, and shared storage, cluster functions such as vMotion, High Availability, DRS, and even FT are available. How cool is that!
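
As an aside, once the vCenter Server manages both Pi nodes, enabling those cluster features can also be scripted. Below is a minimal Python/pyVmomi sketch that switches on HA and DRS for such a two-node cluster; the vCenter address, credentials, and the cluster name "ARM-Cluster" are placeholders from my lab, not values from the fling documentation.

```python
# Minimal pyVmomi sketch: enable vSphere HA and DRS on a (hypothetical) ESXi-ARM cluster.
# The vCenter hostname, credentials and cluster name below are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "ARM-Cluster")
view.Destroy()

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(enabled=True),                        # vSphere HA
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                        defaultVmBehavior="fullyAutomated"))  # DRS
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
print("Cluster reconfiguration task started:", task.info.key)

Disconnect(si)
```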

Every day there are new use cases created in the community. That’s one reason why ESXi on ARM is such a cool technology!

My Environment build

Here is a simple diagram of what my setup looks like:

 

Bill of materials (BOM)

In this blog article, I will list the bill of materials (BOM). I use the following components:

Number, component, approximate cost in €, and link (the links point to the cheapest Pi shop in the Netherlands):

1. Raspberry Pi 4 Model B with 8 GB memory: 87,50 per Pi, Link
2. Raspberry Pi 15W USB-C Power Supply: 9,95 per Pi, Link
3. Argon One Pi 4 case: 28,95 per Pi, Link
4. Official Raspberry Pi USB keyboard: 17,95, Link
5. Micro SD card, 32 GB: 13,95 per Pi, Link
6. Delock USB 3.2 16 GB flash drive: 8,99 per Pi (I reused the USB drives)
7. Micro-HDMI to HDMI cable, 1.5 m: 7,95, Link

1. Raspberry Pi 4 Model B with 8GB memory.

This Pi model has the following specifications:

    • 1.5GHz quad-core ARM Cortex-A72 CPU
    • VideoCore VI graphics
    • 4kp60 HEVC decode
    • True Gigabit Ethernet
    • 2 × USB 3.0
    • 2 × USB 2.0 ports
    • 2 × micro-HDMI ports (1 × 4kp60 or 2 × 4kp30)
    • USB-C for input power, supporting 5V 3A operation
    • 8 GB LPDDR4 memory

2. Raspberry Pi 15W USB-C Power Supply.

The power supply uses USB-C to power the Pi. Make sure to use a decent power supply such as this one.

3. Argon One Pi 4 case

This case (which looks like the Tesla Cybertruck) has an aluminum enclosure for passive cooling and a fan inside for active cooling. Proper cooling is very important because the Pi can get hot when running VMware ESXi. You can control the fan by software or enable the always-on mode. In software mode, when the CPU temperature reaches 55 degrees the fan will run at 10%, at 60 degrees it will run at 55%, and at 65 degrees it will run at 100%. The driver does not work on VMware ESXi; it is designed for Pi OS. Hopefully, there will be a VIB available in the future that makes software control of the fan possible. For VMware ESXi, you need to enable the always-on mode by switching the jumper pin next to the fan.
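
To give an idea of what the software fan control does on Pi OS, here is a simplified Python sketch of the threshold logic described above. It is not the actual Argon One daemon, just an illustration: the temperature is read the standard Linux way from /sys/class/thermal/thermal_zone0/temp, and where the real driver would push the duty cycle to the case over I2C, this sketch only prints it.

```python
# Simplified sketch of the Argon One fan-threshold logic (not the real driver).
import time

# (degrees Celsius, fan duty cycle %) as described above, highest threshold first
THRESHOLDS = [(65, 100), (60, 55), (55, 10)]

def cpu_temp_celsius() -> float:
    """Read the SoC temperature the standard Linux way (reported in millidegrees)."""
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def fan_percent(temp: float) -> int:
    """Map a temperature to a fan duty cycle using the thresholds."""
    for limit, percent in THRESHOLDS:
        if temp >= limit:
            return percent
    return 0  # below 55 degrees the fan stays off

while True:
    temp = cpu_temp_celsius()
    # The real Pi OS driver would now send fan_percent(temp) to the case over I2C.
    print(f"CPU {temp:.1f} C -> fan {fan_percent(temp)}%")
    time.sleep(30)
```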

The assembly of the Pi and case is very easy:

  • Next to the fan, you see two cooling blocks (grey ones), one for the CPU and the other for the RAM chip
  • Add some thermal paste to the cooling blocks
  • Plug the PCB adapter board into the Pi and the case. With this board, all the ports and buttons are accessible from the back!
  • Tighten the screws.

The GPIO pins are still available when removing the magnetic cap from the top of the case.

4. Official Raspberry Pi USB keyboard.

This is a 78-key QWERTY keyboard with a built-in 3-port USB hub on the back. It has a small form factor.

5 & 6. Micro SD card and USB disk.

The SD card stores the UEFI firmware that is required to boot the VMware ESXi-ARM installer. I used a 32 GB SD card. The USB drive is used for the VMware ESXi ARM installation.

7. Micro-HDMI to HDMI cable 1,5m.

The following components were already present in my home lab environment and will be reused:

  • Netgear switch
  • 2 x Delock USB 3.2 16 GB flash drives
  • 2 x UTP CAT 5e cables
  • QNAP NAS

After assembling the case, connect the USB drive, SD card, power supply, monitor, keyboard, and UTP cable, and you're ready to install the VMware ESXi on ARM fling.

In the next ESXi on ARM blog, I will highlight the ESXi on ARM installation process and how to install and configure Home Assistant.

Here are some great links to follow:

Thanks to the Raspberry Store for the quick delivery.

Using the new Shuttle SH370R8 as home lab server with VMware ESXi

In January 2019 I did a review of the Shuttle SH370R6 (link) using VMware ESXi. A couple of weeks ago the new Shuttle SH370R8 was released. The main differences between the Shuttle SH370R6 and SH370R8 are:

  • Update October 19, 2020: Supports up to 128 GB of RAM!
    • Memory compatibility list: link
  • Ready for the 8th/9th Gen Intel Coffee Lake processors
  • Dual Intel Gigabit Ethernet
  • An extra fan in front of the chassis for a better airflow
  • Front panel: microphone input (3.5 mm), headphone output (3.5 mm), 2x USB 3.0 (Type A, USB 3.1 Gen 1), power button, power indicator (blue LED), and hard disk drive indicator (yellow LED)
  • Supports four 3.5″ hard drives (with an optional 2.5″ kit available)

The recommended retail price from Shuttle for the SH370R8 is € 317.00 (ex VAT).

Installation

The Shuttle SH370R8 comes with a black aluminium chassis, a motherboard, and a 500 W Power Supply Unit (PSU); the CPU cooling is also included. The only hardware you need to add is a matching CPU, memory, and disk(s).

I’m  using the following hardware (same as in the Shuttle SH370R6 review) for testing the Shuttle SH370R8:

  • Intel Core i7 8700 with 6 cores and 12 threads 65W
  • 4 x 16 GB Kingston ValueRAM KVR26N19D8/16
  • Samsung 970 EVO 1 TB M.2 SSD
  • Samsung 250 and 500 GB SATA SSD
  • Kingston Datatraveler 100 G3 32 GB USB stick for booting VMware ESXi

The 16 GB memory modules are now much cheaper than in January 2019 when I did the SH370R6 review; with four 16 GB modules you save around € 160,00. The documentation describes the installation steps very clearly, which makes the hardware installation easy.

VMware ESXi

After installing the hardware and moving the USB stick from the SH370R6 to the SH370R8, it's time to press the power button. The first thing to do is enter the BIOS and change the boot order so that the VMware ESXi hypervisor can boot. After a short time VMware ESXi 6.7 Update 2 is up and running.

The two onboard Intel Corporation I211 Gigabit NICs are recognized by default in ESXi 6.7. In my configuration one NIC is used for VM and management traffic and the other for NFS traffic to my QNAP storage. The optional wireless LAN adapter is not recognized in ESXi.
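
Mounting the QNAP NFS export on the host can also be scripted instead of clicking through the host client. The sketch below uses Python/pyVmomi; the ESXi host address, credentials, NFS server IP, export path, and datastore name are placeholders from my lab, not fixed values.

```python
# Minimal pyVmomi sketch: mount an NFS export from the QNAP NAS as a datastore.
# Host address, credentials, NFS server/path and datastore name are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab only
si = SmartConnect(host="esxi-sh370r8.lab.local", user="root",
                  pwd="VMware1!", sslContext=ctx)

# When connected directly to an ESXi host there is exactly one host system.
content = si.RetrieveContent()
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.1.50",          # QNAP NAS
    remotePath="/VMware-NFS",           # exported share
    localPath="qnap-nfs",               # datastore name as shown in ESXi
    accessMode="readWrite",
    type="NFS")
datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted datastore:", datastore.name)

Disconnect(si)
```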

The USB, NVMe and 4x SATA 3.0 (6G) controllers are recognized by default in ESXi.

Most of my VMs run from the NVMe SSD storage, which makes them fast to power up. The power consumption is the same as the SH370R6 Plus: 20-24 W with ESXi booted (no VMs active) and between 35 and 70 W when 10 VMs are running.

Conclusion

The differences between the Shuttle SH370R6 and SH370R8 are minimal, but I really like the dual Intel Gigabit NICs and the extra space for 4x 3.5″ hard drives. For 2.5″ drives there is an optional adapter available. With the support for 4x 3.5″ hard drives you can host a lot of storage.

With the Shuttle SH370R6 I used one PCIe slot for a NIC. With the onboard dual Intel Gigabit NICs I now have an extra PCIe slot available for an extra NVMe controller or a 10 Gigabit NIC, for example. The PCIe x16 slot can be used for a large dual-slot graphics card (up to 280 mm). The Shuttle has great expansion possibilities with the two PCIe slots and support for 4x 3.5″ hard drives.

The Shuttle SH370R8 with VMware ESXi has been running 24/7 for a couple of weeks now without any problems, and the performance is great with the Intel i7 CPU, 64 GB memory, and NVMe storage. I'd like to welcome the Shuttle SH370R8 to the VMware ESXi home lab club :-).

Specifications

The Shuttle SH370R8 specifications:

  • Chassis: Black aluminium chassis (33.2 x 21,5 x 19.0 cm)
  • CPU: Based on the Intel H370 chipset, the XPC Barebone SH370R8 supports all the latest Intel Core processors of the “Coffee Lake” series for socket LGA1151v2 with up to 95 W TDP, including the top model Core i9-9900K with eight cores and 16 threads
  • Cooling: A special passive heatpipe I.C.E. (Integrated Cooling Engine) cooling system ensures cooling of the barebone.
  • Memory: Four memory slots, up to 128 GB of DDR4-2400/2666 memory. Intel Optane Ready which boosts speed of one hard disk through data caching.
  • LAN: Dual Intel Gigabit Ethernet.
  • Slots:
    • M.2 2280M slot for a NVMe SSD.
    • M.2 2230E slot for WLAN cards.
    • 1x PCIe x16 slot for powerful dual-slot graphics cards (max. size: 280 x 120 x 40 mm)
    • 1x PCIe-x4 slot for an expansion card
    • Optional accessories include a WLAN/Bluetooth module (WLN-M), an RS-232 port (H-RS232), and a bracket for two 2.5-inch drives (PHD3).
  • PSU: A 500 Watt, 80-PLUS-Silver-certified power supply unit with the following connectors:
    • ATX main power 2×10 and 2×2 pins
    • Graphics power connector: 6 pins and 8 pins
    • 4 x SATA, 2x Molex and 1x floppy

  • Ports:
    • 4x Serial ATA 6G connector onboard (rev. 3.0, max. 6 Gbit/s)
    • 4x USB 3.1 Gen 2, 4x USB 3.1 Gen 1, 4x USB 2.0
  • Supports four 3.5″ hard drives
  • Officially compatible operating systems: Windows 10 and Linux 64-bit
  • The recommended retail price from Shuttle for the SH370R8 is EUR 317.00 (ex VAT)

More information about the Shuttle SH370R8 can be found here, link.

Using the Shuttle SH370R6 plus as home lab server

For my home lab I needed a new host to replace my Intel NUC (link), which acts as management host. The hardware resources (CPU and memory) of the NUC were limiting my lab activities.

I had the following requirements for the new home lab host:

  • Ability to run the latest VMware ESXi 6.7 U1 version
  • Support for 64 GB memory
  • Fast storage support (M.2 NVMe SSD)
  • Room for PCI-Express add-on cards
  • Run nested environments
  • Low power consumption
  • Limited budget (around 1300 euro).

I did some research for a new home lab host and investigated options such as the Intel NUC, Apple Mac Mini 2018, Supermicro, and Shuttle. My search ended with the following Bill of Materials (BOM) shopping list:

Parts, approximate price (€), and link:

  • Barebone system: Shuttle XPC cube barebone SH370R6 Plus with 500 Watt (80 PLUS Silver) PSU: 291,00, Link
  • CPU: Intel Core i7 8700 with 6 cores and 12 threads, 65 W: 333,45, Link
  • Memory: 4 x 16 GB Kingston ValueRAM KVR26N19D8/16: 488,00, Link
  • Disk: Samsung 970 EVO 1 TB M.2: 217,99, Link
  • USB stick: Kingston Datatraveler 100 G3 32 GB: 7,30, Link
  • Total: 1337,74

Update May 13, 2019: A new Shuttle SH370R8 has been released. The Shuttle SH370R8 has the following new enhancements:

  • 9th generation Intel Core i9 processor support (The latest BIOS of the SH370R6 Shuttle also supports the 9th generation of Intel Core CPUs).
  • Dual Intel Gigabit Ethernet onboard NICs
  • Support for four 3.5″ hard drives

A review can be found here, link.

Barebone System

A barebone is a pre-assembled system with a mainboard, GPU, Power Supply Unit (PSU), CPU cooler, and cables in a small form factor. You'll need to pick certain parts such as the CPU, memory, and disk(s) to match your needs. The Shuttle XPC cube barebone SH370R6 Plus has the following specifications:

  • Chassis: black aluminium (33.2 x 21.5 x 19 cm)
  • Bays: 1 x 5.25 and 2 x 3.5″
  • CPU: Socket LGA 1151 v2. Supports 8th and 9th generation Intel Core “Coffee Lake” processors such as Core i9 / i7 / i5 / i3, Pentium, or Celeron. Shuttle has an I.C.E. heatpipe cooling system. A CPU with a maximum of 95 Watt Thermal Design Power (TDP) is supported.
  • Integrated Graphics: Intel UHD graphics 610/630 (in the processor). Supports three digital UHD displays at once
  • Chipset: Intel H370 PCH
  • Memory: up to 4 x 16 GB DDR4-2400/2666 DIMM modules. Max 64 GB
  • Slots: 1 x PCIe X16 (v3.0) supports dual-slot graphics cards up to 273 mm length, 1 x PCIe X4 (v3.0), 1 x M.2-2280 (SATA / PCIe X4) supports M.2 SSDs, 1 x M.2-2230 supports WLAN cards
  • SATA: 4 x SATA 3.0 (6 Gb/s) supports RAID and RST
  • Video: HDMI 2.0a and 2 x DisplayPort 1.2
  • Connections: 4 x USB 3.1 Gen 2, 4 x USB 3.1 Gen 1, 4 x USB 2.0
  • Audio: 5 x Audio (2 x front, 3 x rear)
  • Network: Intel Gigabit I211 adapter
  • PSU: Integrated 300 Watt (80 PLUS Bronze) in the SH370R6 or 500 Watt (80 PLUS Silver) in the SH370R6 Plus version.

The Shuttle SH370R6 Plus has a 500 W power supply, so a GPU can be added if needed. For now I will use the onboard graphics. There is one Intel Gigabit I211 NIC onboard.

Slots

The Shuttle has two slots available for add-on cards:

  • 1 x PCIe X16 (v3.0) supports dual-slot graphics cards up to 273 mm length,
  • 1 x PCIe X4 (v3.0)

I will use both slots with spare parts: one slot for an Intel Gigabit NIC and the other slot for an extra M.2 adapter. In the future I can swap the 1 Gigabit NIC for a 10 Gigabit NIC, for example.

CPU

As CPU I decided to use an Intel Core i7 8700, an 8th generation Intel Core “Coffee Lake” processor with 6 cores and 12 threads. It has a Thermal Design Power (TDP) of 65 W. The decision was based on price, horsepower, and power consumption.

Memory

The maximum amount of memory the Shuttle can handle is 64 GB. This is quite unique for a barebone system at the moment. There are 4 DIMM slots for 4 x 16 GB. I used Kingston ValueRAM KVR26N19D8/16, which is not on the Shuttle hardware compatibility list (*1) but can be found on the Kingston compatibility list.

Storage

The Shuttle has one M.2 2280 slot. The Samsung 970 EVO 1 TB NVMe SSD will be used as the local datastore for storing VMs. Besides the SSD I use an existing QNAP NAS with an NFS connection to this host. ESXi will be installed on the USB stick.

(*1) Shuttle has a hardware compatibility list available (link).

Hardware installation

Once all the hardware parts were delivered it was time to build the Shuttle XPC SH370R6 Plus. Shuttle provides step-by-step documentation that describes every step.

  • Open the barebone Shuttle by removing the 3 screws on the back and sliding the aluminium case off.
  • Install the Intel CPU and add some thermal paste.

 

  • Shuttle has its own Integrated Cooling Engine (I.C.E.) heatpipe technology that cools the CPU with a 92 mm fan.

  • Mount the I.C.E. heatpipe cooler on the CPU and attach the fan.

  • The Shuttle supports up to 64 GB DDR4 memory. Insert the 4 memory modules in the DIMM slots.

  • Attach the Samsung EVO 970 1 TB NVMe SSD in the M.2 2280 slot.

  • The mainboard has two slots for PCIe cards: 1 x PCIe X16 (v3.0), which supports dual-slot graphics cards up to 273 mm length, and 1 x PCIe X4 (v3.0). I added the following spare parts that were lying around:
    • Intel Gigabit NIC (dedicated for my storage connection to my QNAP NAS).
    • PCIe to M.2 adapter (Lycom DT-120 M.2 PCIe to PCIe 3.0 x4 Adapter) in which I installed a Samsung 950 Pro 512 GB M.2 SSD.

The documentation that Shuttle provides is very clear, so the hardware installation went without any problems. Once the hardware parts were installed it was time for the first power-on. After the power-on I entered the BIOS and modified some settings such as:

  • Boot order
  • Disabled some hardware devices I don't use, such as audio
  • Use Smart FAN mode for controlling the fan speed
  • Modify the power management by enabling “Power-On after Power-Fail”

Operating Systems

The Shuttle has no remote management functionality, so you'll need a monitor, mouse, and keyboard physically connected for the software installation. Once the installation is done, remote management can be enabled in the software. Windows Server 2019 with the Hyper-V role and VMware ESXi 6.7 U1 were installed on the Shuttle SH370R6 Plus. Neither OS is officially supported on this hardware. Here are my installation experiences:

Windows Server 2019

As a test I installed Windows Server 2019 on the Samsung 970 EVO 1 TB. The installation of Windows Server 2019 finished within a couple of minutes. The onboard Intel Gigabit I211 NIC is not recognized by Windows. The Intel driver is only for Windows 10 and won't install on a server OS by default. This issue has existed for a long time (see link): Intel does not want desktop NICs to be used on Windows Server OSes. With some hacking (link) it is possible to enable the NIC. The add-on Intel Gigabit NIC is recognized by default in Windows Server 2019. I enabled the Hyper-V role and installed some VMs without problems. The user experience is fast, very fast!

 

VMware ESXi

After the Windows Server 2019 test, it was time to create a USB stick (link) and install VMware ESXi 6.7 U1. The installation is a piece of cake. The onboard Intel Gigabit I211 NIC is recognized by ESXi and is used for management and VM traffic in my config. The add-on Intel 1 GbE NIC is configured for the NFS connection to my existing QNAP NAS. After some initial configuration (network and storage connection, NTP, and licensing) I migrated all the VMs from my old NUC to the new Shuttle. Most VMs are placed on the NVMe SSDs. I really like the performance boost of the VMs now that they are running on the new hardware. When the load on the Shuttle increases, the fan makes more noise (Smart FAN mode). This can be annoying when you're in the same room. In the BIOS you can tweak the fan speed. For me this is not a problem because I have a separate room where my home lab servers reside.
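
The initial configuration steps mentioned above (NTP in this case) can be scripted as well. Here is a minimal Python/pyVmomi sketch that sets the NTP servers on the host and starts the ntpd service; the host address, credentials, and NTP pool servers are placeholders from my lab.

```python
# Minimal pyVmomi sketch: configure NTP on the ESXi host and start the ntpd service.
# Host address, credentials and NTP server names are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab only
si = SmartConnect(host="esxi-sh370r6.lab.local", user="root",
                  pwd="VMware1!", sslContext=ctx)

host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# Point the host at a pair of NTP servers.
ntp = vim.HostNtpConfig(server=["0.nl.pool.ntp.org", "1.nl.pool.ntp.org"])
host.configManager.dateTimeSystem.UpdateDateTimeConfig(vim.HostDateTimeConfig(ntpConfig=ntp))

# Start ntpd and make it start automatically with the host.
host.configManager.serviceSystem.UpdateServicePolicy(id="ntpd", policy="on")
host.configManager.serviceSystem.StartService(id="ntpd")

Disconnect(si)
```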

The onboard SATA controller is recognized by ESXi. I attached two single SSDs to the controller and created a VMFS-6 partition.

vSAN

By adding a PCIe to M.2 adapter and a 10 Gigabit NIC it's possible (not tested yet) to create an All-Flash NVMe vSAN node. I added a PCIe to M.2 adapter with an NVMe SSD, so I now have two NVMe SSDs that are recognized by ESXi (see the storage screenshot above).

Power Consumption

When the Shuttle is booted with ESXi and the two add-on cards installed, and no VMs are powered on, the power consumption is around 20-24 W. The power consumption depends on the amount of resources being used. In my configuration, running 10 VMs uses between 35 and 70 W.
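
To put those numbers into perspective, here is a rough yearly energy estimate; the electricity price of € 0.22 per kWh is an assumption for illustration, not a figure from this post.

```python
# Rough yearly energy estimate for the Shuttle running 24/7.
IDLE_W, LOADED_W = 24, 70      # measured range from this post (idle vs. 10 VMs)
PRICE_PER_KWH = 0.22           # assumed electricity price in EUR

for label, watts in (("idle", IDLE_W), ("10 VMs, worst case", LOADED_W)):
    kwh_per_year = watts / 1000 * 24 * 365
    print(f"{label}: ~{kwh_per_year:.0f} kWh/year, ~EUR {kwh_per_year * PRICE_PER_KWH:.0f}/year")
```

At idle this works out to roughly 210 kWh (about € 46) per year.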

Conclusion

The Shuttle SH370R6 Plus home lab host is now running for a couple of weeks. Here are my findings:

  • Depending on your requirements (such as budget) you can customize the SH370R6 Plus for your needs.
  • The hardware installation was easy and without problems. The documentation that Shuttle provides is very clear.
  • There are two versions of the Shuttle: the SH370R6 and the SH370R6 Plus. The difference is the PSU; the Plus version has a 500 W PSU (for adding a GPU).
  • The Shuttle barebone has a lightweight aluminium case (33.2 x 21.5 x 19 cm). This is a lot bigger than an Intel NUC, but it is more extensible because you can add hardware.
  • Using an 8th generation Intel i7 8700 with 6 cores and 12 threads and a TDP of 65 W is cost efficient and consumes little power while still providing a powerful CPU (VMs with 12 vCPUs can be created).
  • The Shuttle barebone system supports up to 64 GB memory. This is quite unique for a barebone at the moment; most barebone systems support a maximum of 32 GB!
  • There is one M.2-2280M slot for an NVMe SSD on the mainboard. By adding a PCIe to M.2 adapter you can add an extra NVMe SSD. Great for vSAN All Flash (AF) use cases.
  • There is room for two 3.5″ HDDs. I didn't test whether the SATA controller is recognized by ESXi because I use M.2 SSDs and NFS storage.
  • Two PCI-Express slots are available: 1 x PCIe X16 (v3.0) and 1 x PCIe X4 (v3.0).  This makes it possible to add a GPU and 10 Gigabit adapter for example.
  • When the load on the Shuttle increases, the fan makes more noise (Smart fan mode). This can be annoying when you're in the same room. In the BIOS you can tweak the fan speed. For me this is not a problem because I have a separate room where the home lab servers reside.
  • The Shuttle is not officially certified and supported by VMware and Microsoft
  • The performance and user experience are fast, very fast!
  • More information about the Shuttle SH370R6 Plus can be found here, link.

I’m happy with my new home lab member, the Shuttle SH370R6 Plus because all my requirements are met and I really like to performance boost of the VMs.