
The idea makes sense, even if I didn’t find a way to do the same under Linux. The FreeBSD wiki suggests disabling entropy harvesting when benchmarking, so I did so by adding harvest_mask="351" to /etc/rc.conf. Each VM was given 2 CPUs and 4 GB of RAM.
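For reference, the change amounts to a single line in /etc/rc.conf (the meaning of each bit in the mask is described in the random(4) man page; the value shown is the one from the text, not a general recommendation):

```
# /etc/rc.conf -- disable selected entropy-harvesting sources for benchmarking
harvest_mask="351"
```

The setting takes effect at boot; the running value can be inspected via the kern.random.harvest.mask sysctl.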
Guest OS setupĮach VCPU was pinned to an isolated physical core, RAM was backed by 1 GB hugepages and VirtIO drivers and peripherals were used when possible. While storage was not relevant for our purposes, an LVM logical group was created for every VM, instead of using files or even worse, sparse qcow2 images. Guests were interconnected via a DPDK switch, which can handle milions of packets per second with a single core. Spectre and Meltdown mitigations were disabled via kernel command line. Intel Turbo Boost and P-states are known to skew benchmarks, so they were disabled by the kernel command line and writing with wrmsr-pX 0x1a0 0x4000850089 in the appropriate register on each CPU. Both the 10 Gbit cards were on the first NUMA node, so I decided to completely disable the second NUMA node by putting all the CPUs and memory of the second node off with the mem=64G kernel command line and a script run at boot.

The server was partitioned so that any task of the host OS couldn’t interfere with the guests VCPU: 8 physical CPUs (12,14,16,18,20,22,24,26) were removed from the scheduler with RCU callback and timer off, and system was booted with nosmt=force to avoid using their HT siblings. The hypervysor is KVM on Fedora 29 with latest 4.19 kernel.
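The partitioning described above is typically expressed as kernel boot parameters. This is a sketch of what the host command line might look like; the post names the techniques (scheduler isolation, RCU callback offload, tick off, SMT off) rather than the exact flags, so the parameter names below are an assumption based on the standard Linux options for each:

```
# Hypothetical host kernel command line (flag names assumed, CPU list from the text):
isolcpus=12,14,16,18,20,22,24,26 nohz_full=12,14,16,18,20,22,24,26 \
rcu_nocbs=12,14,16,18,20,22,24,26 nosmt=force mem=64G
```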

