Virtual SAN requires a minimum of 3 ESXi hosts. As of Virtual SAN 6.1, Remote Office/Branch Office (ROBO) and small SMB customer environments are supported with Virtual SAN on 2 ESXi nodes. In a 2-node Virtual SAN cluster, features such as HA, DRS and vMotion are fully supported.
In a ROBO configuration you have two Virtual SAN data nodes and one witness node. The two data nodes reside in one location; the witness node can reside in the same or another location (but not on the Virtual SAN datastore).
A witness appliance is needed to determine which VMs have quorum (more than 50% of a VM's objects must be available) when a split brain occurs or during maintenance. Quorum can be formed by 1 ESXi host with Virtual SAN plus the witness, or by the 2 ESXi hosts with Virtual SAN together.
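The quorum rule can be illustrated with a small sketch. The vote counts are an illustration: each VM object has one component per data node plus a witness component, so 3 votes in total.

```shell
# Illustration of the >50% quorum rule for a 2-node cluster plus witness.
# Each VM object has a component on data node A, on data node B, and a
# witness (metadata) component on the witness host: 3 votes in total.
total_votes=3
reachable_votes=2   # example: one data node failed, node B + witness remain
if [ $(( reachable_votes * 2 )) -gt "$total_votes" ]; then
  echo "quorum: yes"   # more than 50% of the votes are available
else
  echo "quorum: no"
fi
```

With one data node down, the remaining node plus the witness still hold 2 of 3 votes, so the VMs keep quorum; lose the witness link as well and quorum is lost.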
A Virtual SAN ROBO environment example looks like this:
- 2 VMware ESXi hosts with Virtual SAN enabled
- A witness appliance running on an ESXi server in the same or another site.
Here are some considerations for using Virtual SAN ROBO:
- With Virtual SAN ROBO, a witness appliance is needed. The witness appliance is placed on a third ESXi host. This host does not need a Virtual SAN license or an SSD disk.
- The witness appliance is a nested ESXi host (ESXi running in a VM).
- It is not supported to run the witness on Virtual SAN.
- The witness hosts stores only VM witness components (metadata).
- The VMs are protected against a single failure only (FTT=1).
- The virtual witness appliance can be configured in the following flavors, depending on the number of supported VMs:

| | Tiny (up to 10 VMs) | Medium (up to 500 VMs) | Large (> 500 VMs) |
|---|---|---|---|
| Virtual disks (*1) | 8 GB boot, 10 GB SSD, 15 GB HDD | 8 GB boot, 10 GB SSD, 350 GB HDD | 8 GB boot, 10 GB SSD, 350 GB HDD |
| Max witness components | 750 | 22000 | 45000 |
(*1) The SSD and HDD are virtual disks. There is no need for a physical SSD disk in the ESXi host where the witness appliance resides.
- Deploy Virtual SAN on certified hardware. Check the Virtual SAN HCL!
- For a Virtual SAN disk configuration a minimum of 1 SSD and 1 magnetic disk is needed. These disks cannot be used for booting ESXi.
- For booting ESXi use a USB, SD or SATADOM device.
- A small ESXi host can be used for the witness appliance. The witness appliance has no data, only metadata.
- Cross connecting 2 Virtual SAN ESXi nodes is NOT supported
- For 10 or fewer VMs a 1 Gbps network connection can be used. For more than 10 VMs use 10 Gbps.
- Network bandwidth to the witness: 1.5 Mbps
- Latency to the witness: up to 500 milliseconds RTT
- Latency between the data nodes: up to 5 milliseconds RTT
- Virtual SAN is licensed separately.
- Virtual SAN for ROBO is a license that includes a 25 VM pack license. This license does not include the stretched cluster and All-flash options.
- A maximum of 1 Virtual SAN for ROBO license may be used per site.
- When running fewer than 25 VMs, consider a Virtual SAN standard or advanced license. The standard and advanced licenses are licensed per CPU socket.
- Consider single socket CPU servers to decrease the licensing costs.
- Consider vSphere Essentials (plus) for licensing the vSphere environment to reduce licensing costs.
- Consider ESXi Hypervisor (free) for hosting the witness appliance. Note that the free ESXi Hypervisor cannot be managed by a vCenter Server!
- For each ROBO Virtual SAN you need a dedicated witness appliance.
- When running the vCenter Server on top of Virtual SAN, powering down the Virtual SAN cluster involves a special procedure (link). Consider placing the vCenter Server on the witness host for simplicity.
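The witness sizing table earlier can be turned into a small selection helper. This is a minimal sketch, assuming the three appliance flavors map to the up-to-10, up-to-500, and more-than-500 VM tiers:

```shell
# Pick a witness appliance flavor from the number of VMs at the site,
# following the sizing table (tier boundaries: 10 and 500 VMs).
vm_count=120
if   [ "$vm_count" -le 10 ];  then flavor="tiny"
elif [ "$vm_count" -le 500 ]; then flavor="medium"
else                               flavor="large"
fi
echo "witness appliance flavor: $flavor"   # prints "medium" for 120 VMs
```

Remember that the Virtual SAN for ROBO license caps the site at 25 VMs, so with that license the smallest or middle flavor will normally apply.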
When trying to pair the Broker agent 6.2 with the Horizon adapter in vRealize Operations Manager (vROps), it fails with the following error:
Could not Pair with Adapter Address …. An Error has Occurred. Failed to pair the Adapter. Operation Adapter Pairing Failed.
This issue occurs when the firewall rules on the vRealize Operations Manager appliance are incorrect. To resolve the issue, update the firewall rules using the following steps:
- Open the VM console of the vRealize Operations Appliance or enable SSH (link)
- Open the “/opt/vmware/etc/vmware-vcops-firewall.conf” file in vi
- Make sure the following entries are added to the V4V Adapter specific ports section
# v4V Adapter specific ports
- Save the file
- Restart the firewall by using the following command:
Try to pair the adapter again. The pairing should now succeed.
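The edit-and-restart steps above can be sketched as follows, here against a scratch copy of the config file. Note the port range `3091:3095` and the `vmware-vcops-firewall` service name are assumptions from memory, not taken from this article; verify both against VMware's KB for your vROps version.

```shell
# Append a V4V adapter ports entry to a scratch copy of the firewall config.
# Real file on the appliance: /opt/vmware/etc/vmware-vcops-firewall.conf
CONF=/tmp/vmware-vcops-firewall.conf
touch "$CONF"
grep -q 'V4V Adapter' "$CONF" || cat >> "$CONF" <<'EOF'
# V4V Adapter specific ports
TCPPORTS="$TCPPORTS 3091:3095"
EOF
grep 'TCPPORTS' "$CONF"
# On the appliance itself, restart the firewall afterwards:
# service vmware-vcops-firewall restart
```

The `grep -q … ||` guard keeps the edit idempotent, so running it twice does not duplicate the entry.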
Last month I extended my VMware ESXi and Hyper-V home lab with a Samsung 950 Pro SSD. The Samsung 950 Pro SSD is the next-gen SSD that has the following characteristics:
- Uses V-NAND memory and the Non-Volatile Memory Express (NVMe) protocol. This removes the 600 MB/s bandwidth limit of the SATA protocol.
- Takes advantage of the PCIe Gen 3 x4 (up to 32 Gb/s) interface
- Available in 256 and 512 GB at the moment; larger sizes will become available in 2016.
- The Samsung 950 PRO has an M.2 (2280) form factor.
These improvements make it one of the fastest consumer SSDs on the market today. My current home lab doesn't have a PCIe Gen3 x4 slot or an M.2 interface, so I looked for an adapter and found an "interface converter PCI-Express, M.2 NGFF" (link). It's the same adapter as the Lycom DT-120 (link) that another great blog, TinkerTry, refers to (link). The adapter converts the M.2 interface to a PCIe slot.
The Lycom DT-120 adapter has the following specifications:
- Single M.2 slot (There are Dual M.2 controllers available on the market)
- The adapter cost around € 17,00
- Does not require any driver
- Supports M.2 module sizes 2280, 2260 and 2242
- Supports PCIe 1.0, 2.0 and 3.0 slots on the motherboard
My first step was checking whether the firmware was up to date with the Samsung Magician software.
In Windows Server 2012 R2 the Samsung 950 Pro SSD is recognized out of the box. The latest drivers and software can be downloaded here (link).
The Samsung 950 PRO is recognized by default in VMware ESXi 6 Update 1.
With the ATTO benchmark software I performed a simple benchmark on a Samsung 840 SSD (based on the SATA protocol) and the Samsung 950 Pro. The ESXi host is a whitebox with the following hardware specifications:
- Gigabyte GA-Z87-D3HP motherboard
- Intel i5 4570S
- 32 GB memory
- Lycom DT-120 adapter is placed in a PCIe x16 slot
- Samsung 840 connected via SATA
- Samsung 950 Pro SSD placed on the Lycom DT-120 adapter
The VM has a 10 GB Thick Provisioned Eager Zeroed VMDK attached. The disk is formatted as NTFS with the default (4 KB) block size.
The left picture shows the Samsung 840 EVO and the right picture the Samsung 950 Pro.
As you can see, read and write performance roughly triples on the Samsung 950 Pro with the M.2 interface converter. These are pretty impressive numbers for a consumer SSD.
After upgrading vRealize Operations Manager to version 6.1 the Horizon View metrics are not collected anymore. On the Horizon View Connection Server where the vRealize broker agent is installed the following error is displayed in the logs:
vRealize Operations Manager broker error "javax.naming.NameNotFoundException" ERROR BrokerPoll message sending error: javax.naming.NameNotFoundException: V4V-BrokerMessageServer
The logs can be found in the following location on the Horizon View Connection Server:
c:\ProgramData\VMware\vCenter Operations for View\logs\v4v_broker_agent_cfg.... log
To resolve this error you need to restart the collector service on the vRealize Operations Manager Appliance. Run the following steps:
- Open a console session on the vRealize Operations Manager collector node (SSH is disabled by default)
- Press ALT-F1 in the console session
- Log in as the root user with no password
- You are required to change the password at first login
- Restart the collector service by using the following command:
service vmware-vcops restart collector
It can take several minutes to pair the Broker Agent with the Horizon View adapter.
The SSH Service on the vRealize Operations Manager Appliance is disabled by default. To enable and start the SSH service use the following commands:
- Start the SSH service
service sshd start
- To configure SSH to start automatically
chkconfig sshd on
- Check the status of the SSH service
service sshd status
When configuring a Windows desktop or RDSH session with Horizon View, different software components must be installed, such as VMware Tools and the VMware Horizon View Agent. When using User Environment Manager and App Volumes, each of them requires an agent as well. All these software components must be installed in the correct order to prevent problems such as a black screen when connecting to a Windows VDI desktop using the PCoIP protocol.
The following order can be used with a clean installation:
- VMware Tools (*1).
- VMware Horizon View Agent.
- View Agent Direct-Connection.
- VMware User Environment (UEM) agent.
- VMware App Volumes (Agent) (*2).
(*1) The NSX File and Network Introspection drivers are not installed by default.
(*2) In App Volumes 2.9 and later you can install the agent in any order.
When upgrading VMware Tools I always uninstall and reinstall the agents in the order mentioned above.
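To keep upgrades consistent, the install order above can be captured in a script so automation always (re)installs in the same sequence. This is a minimal sketch with `echo` standing in for the real silent-install command lines:

```shell
# Install order for a Horizon View desktop/RDSH image. Replace the echo
# in the loop with the silent-install command for each component.
components="VMware Tools
VMware Horizon View Agent
View Agent Direct-Connection
VMware User Environment Manager agent
VMware App Volumes agent"

echo "$components" | while IFS= read -r c; do
  echo "installing: $c"
done
```

Encoding the sequence once in a variable avoids the order drifting between the clean-install and the uninstall/reinstall upgrade path.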