Slow network throughput between Ubiquiti EdgeRouter VLANs

After putting my NAS in a separate storage VLAN, I noticed that network throughput to and from the NAS was slow. A simple file copy from a Windows laptop (wired at 1 GbE) to an SMB share on the NAS reached only 17.7 MB/s. The Windows laptop was on a different VLAN than the NAS.
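
To rule out the NAS and the laptop themselves, it helps to compare raw TCP throughput within the VLAN and across VLANs, for example with iperf3 (the IP address below is a placeholder for the NAS):

# on the NAS, or on any host in the storage VLAN
iperf3 -s

# from a client in the same VLAN, and again from a client in another VLAN
iperf3 -c 192.168.10.20 -t 30

If the result within the VLAN is close to line rate but the inter-VLAN result is much lower, the bottleneck is most likely the device routing between the VLANs.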

After some troubleshooting I noticed that throughput within the VLAN was fine (around 120 MB/s on a 1 GbE network). Routing between the VLANs is handled by a Ubiquiti EdgeRouter Lite. I investigated the EdgeRouter settings over SSH and, using the following command, noticed that hardware offloading of VLAN traffic for IPv4 was disabled:

ubnt@ubnt:~$ show ubnt offload

IP offload module   : loaded
IPv4
  forwarding: enabled
  vlan      : disabled
  pppoe     : disabled
  gre       : disabled
IPv6
  forwarding: disabled
  vlan      : disabled
  pppoe     : disabled

IPSec offload module: loaded

Traffic Analysis    :
  export    : disabled
  dpi       : disabled
  version   : 1.354

The following commands enable VLAN offloading for IPv4 and save the setting:

ubnt@ubnt:~$ configure
[edit]
ubnt@ubnt# set system offload ipv4 vlan enable
[edit]
ubnt@ubnt# commit
[edit]
ubnt@ubnt# save
Saving configuration to '/config/config.boot' ...
Done
[edit]
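
To verify that the setting is active, you can leave configuration mode and run the same show command as before; the vlan entry under IPv4 should now read enabled:

ubnt@ubnt# exit
ubnt@ubnt:~$ show ubnt offload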

This configuration change can be made without rebooting the router. After making the change, network throughput was as expected again. So if routing between VLANs seems slow, run some tests and check the VLAN offloading settings on the Ubiquiti EdgeRouter (it is disabled by default).

DHCP problems after Ubiquiti EdgeRouter firmware upgrade

I have been using a Ubiquiti EdgeRouter Lite 3-port and UniFi AC access points in my home lab for some time now. After upgrading the EdgeRouter to the latest firmware (EdgeOS 1.10.1), my Wi-Fi devices were unable to get an IP address. I have different VLANs defined on the EdgeRouter for the Wi-Fi networks, and each VLAN has its own DHCP scope configured.

In the EdgeRouter GUI I didn't find any clue as to why the Wi-Fi devices no longer got an IP address, so I opened an SSH session to the EdgeRouter and started troubleshooting. First I tried to start the DHCP service with the following command:

sudo service dhcpd start

The following error was displayed:

[….] Cannot start the DHCP server because configuration file /opt/vyatta/etc/d [FAILconf is absent. … failed!

The DHCP service could not be started, which explained why the Wi-Fi devices no longer got an IP address. Next, I looked in the following log files:

  • cat /var/log/messages
  • cat /var/log/vyatta/vyatta-commit.log

In the vyatta-commit.log the following error was displayed under the [ service dhcp-server ] section:

[ service dhcp-server ]
Static DHCP lease IP '192.168.249.11' under static mapping 'Chromecast'
under shared network name 'WIFI' is already in use by static-mapping ''.
DHCP server configuration commit aborted due to error(s).

In the DHCP scope for the WIFI VLAN there was a static IP mapping called “Chromecast”. I removed the “Chromecast” static IP mapping in the EdgeRouter GUI and then, in the SSH session, tried to start the DHCP service again using the following command:

sudo service dhcpd start

Starting DHCP server daemon…

The DHCP service started successfully. No new errors appeared in the vyatta-commit.log and the Wi-Fi devices were able to get an IP address again. Removing the “Chromecast” static mapping cleared the duplicate static IP error.
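
If you still want a DHCP reservation for the device, the static mapping can be re-created from the CLI. This is a minimal sketch: the subnet (192.168.249.0/24) and the MAC address are placeholders for illustration, while the shared-network name 'WIFI', the mapping name 'Chromecast' and the IP address come from the error above:

configure
set service dhcp-server shared-network-name WIFI subnet 192.168.249.0/24 static-mapping Chromecast ip-address 192.168.249.11
set service dhcp-server shared-network-name WIFI subnet 192.168.249.0/24 static-mapping Chromecast mac-address aa:bb:cc:dd:ee:ff
commit
save
exit

Afterwards, the operational command show dhcp leases lists the active leases, which is a quick way to confirm that the Wi-Fi clients are getting addresses again.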

The mobile SDDC and EUC lab environment

At the company I work for (Ictivity), we decided to develop a mobile Software Defined Data Center (SDDC) and End User Computing (EUC) lab environment. This mobile lab environment will be used to demo the VMware SDDC and EUC stack with integration of third-party solutions. One of the reasons to choose a physical lab environment over cloud services was flexibility and having no external dependencies.

Over the past months I have been asked which components we used to build this lab environment, so here is a quick overview. Logically, the environment looks like the picture below:

Demo Environment

This environment contains three physical hosts with VMware ESXi installed and one switch. One ESXi host functions as the management host. The following software bits are installed on this management host:

  • vSphere 6
  • VSAN Witness
  • NSX Manager
  • Fortigate VMX
  • vRealize components
  • The End User Computing stack such as Horizon View, App Volumes, User Environment Manager and Identity Manager
  • Veeam

The other two ESXi hosts form the demo cluster. On this 2-node cluster the following software bits are installed:

  • vSphere 6
  • Virtual SAN (VSAN) All Flash (AF) configuration
  • NSX integration
  • Windows 10
  • Windows Server 2012 R2

A laptop is used to connect to the lab environment.

What components are used?

Some highlights of this lab are:

  • 4U rackmount flightcase
  • Mini-ITX motherboard
  • Intel Xeon D-1541 single socket System-on-Chip 8 core processor
  • 2 x 10 GbE Ethernet adapters
  • SSD-only storage
  • IPMI port

Case

The case is a robust, custom-made 19″ 4U rackmount flightcase with a removable front and back. It has two wheels so it can easily be moved around. The case contains three servers and one switch. Here is a picture of the case including all the hosts and the switch.

Flightcase layout

Hosts

The flightcase contains three SuperMicro SYS-5018D-FN4T 1U Rackmount hosts with the following hardware specifications:

  • Chassis: SuperMicro 19″ 1U with a 200W Gold level power supply. Optimized for Mini-ITX (SuperChassis SC505-203B)
  • Motherboard: SuperMicro X10SDV-8C-TLN4F Mini-ITX board
  • Processor: 1 x Intel Xeon D-1541 single-socket System-on-Chip. This processor has 8 cores and 16 threads (Hyper-Threading)
  • Memory: 4 x DDR4 DIMM sockets (maximum 128 GB, 4 x 32 GB DDR4 ECC memory)
  • LAN: 2 x 10GbE and 2 x 1 GbE and 1 x IPMI LAN port
  • Expansion slots: 1 x PCIe 3.0 x16 slot and a M.2 PCIe 3.0 x4
  • Video: Aspeed AST2400
  • USB: 2x USB 3.0 and 4x USB 2.0

Management host

  • Memory: 4 x 32GB = 128 GB
  • SSD: 2 x Samsung PM863 MZ-7LM1T9E enterprise SSD, 1.92 TB, internal, 2.5″, SATA 6 Gb/s
  • Disk: Seagate Enterprise 6 TB disk (for backup)
  • USB Stick: Sandisk Ultra Fit USB3 16 GB (for booting ESXi)

Demo hosts 

Each host contains the following hardware:

  • Memory: 2 x 16GB = 32 GB per server
  • SSD: 1 x Intel P3500 1.2 TB PCIe 3.0 x4 (NVMe) SSD and 1 x Samsung 950 Pro V-NAND M.2 PCIe SSD 512 GB
  • USB Stick: Sandisk Ultra Fit USB3 16 GB (for booting ESXi)

Switch

  • Switch: Netgear ProSafe Plus XS708E, 8 x 10 Gbps ports (one shared with an SFP+ slot)

Cables

  • 6 x UTP CAT6 0.50 m cables
  • 1 x UTP CAT6 5m
  • 1 x UTP CAT6 10m


10 GbE network adapters

The two onboard Intel X552/X557-AT 10 GbE NICs are not recognized by ESXi 6.5 and lower versions by default. To enable them, download the Intel ixgbe driver from the VMware website (link), extract the ZIP file, and install the offline bundle with the following command:

esxcli software vib install -d /vmfs/volumes/datastore/driver/ixgbe-4.4.1-2159203-offline_bundle-3848596.zip
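
After the installation the host typically needs a reboot before the driver is loaded. You can then check that the driver package is present and that the extra 10 GbE ports show up; the datastore path in the command above and the vmnic names in the output depend on your environment:

# list the installed ixgbe driver package
esxcli software vib list | grep ixgbe

# the X552/X557 ports should now appear as additional vmnics
esxcli network nic list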

With this mobile SDDC lab environment we achieved the following benefits:

  • Mobile and easy to carry around
  • Flexibility to install the latest VMware SDDC and third-party software
  • No external dependencies
  • Enough horsepower
  • Low noise and power consumption
  • Remote accessible from our datacenter
  • IPMI and KVM support