So today we faced a situation where one of our level-3 techs claimed that virtualisation only costs about 5% in performance compared to a standard OS on a dedicated physical server, so we decided to put that claim through some real-world tests. The results are shocking, to say the least.
Hardware Specifications:
Dell PowerEdge R410 Enterprise Server
Dual Intel Xeon E5520 @ 2.26GHz
12GB DDR3 1066MHz
4 x 300GB 15K SAS drives in a RAID 10 array
PERC 6/i hardware RAID card with BBU
Operating System Setup
Setup One: ESXi 5.5 with a single instance of Ubuntu Server 13.04 with VMware Tools installed (all resources assigned: 2 processors with 8 cores each for 16 cores total, and 12GB of memory)
Setup Two: Bare-metal install of Ubuntu Server 13.04
We wanted tests that are universally available, so any reader can try this themselves: we used the benchmark suite from www.serverbear.com.
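If you want to run the same suite yourself, ServerBear provides a shell script (sb.sh, the same one we modify later on). Roughly, on a stock Ubuntu box (the download URL here is a placeholder; use the one the site gives you):

# fetch and run the ServerBear benchmark script (URL is a placeholder)
wget -O sb.sh http://serverbear.com/sb.sh
chmod +x sb.sh
./sb.sh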
So let's get straight into the results.
UnixBench Processor Results:
ESXi 5.5 with Ubuntu Server 13.04
UnixBench (w/ all processors) 3620.3
Ubuntu Server 13.04
UnixBench (w/ all processors) 5316.9
That is almost a 32% drop in processor performance ((5316.9 - 3620.3) / 5316.9 ≈ 31.9%), which left a few jaws on the floor!
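sb.sh runs UnixBench for you, but you can also run it standalone to cross-check the score (a sketch building from source; -c 16 matches the 16 cores assigned above):

# build and run UnixBench with 16 parallel copies
git clone https://github.com/kdlucas/byte-unixbench
cd byte-unixbench/UnixBench
./Run -c 16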
I/O Results using IOPING:
ESXi 5.5 with Ubuntu Server 13.04
I/O Seek Test (No Cache) - 7420 IOPS, 29.0 MB/s
I/O Reads - Sequential - 1825 IOPS, 456.3 MB/s
I/O Reads - Cached - 370452 IOPS, 1447.1 MB/s
Ubuntu Server 13.04
I/O Seek Test (No Cache) - 9520 IOPS, 37.2 MB/s
I/O Reads - Sequential - 2286 IOPS, 571.6 MB/s
I/O Reads - Cached - 736473 IOPS, 2876.8 MB/s
As you can see above, sequential reads dropped by about 20% (2286 down to 1825 IOPS), the seek test by about 22%, and cached reads took the biggest hit, roughly halving (736473 down to 370452 IOPS).
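For anyone reproducing these numbers, the three tests map roughly onto ioping invocations like these (a sketch; the exact flags sb.sh uses may differ slightly):

# per-request seek latency with direct (uncached) I/O
ioping -c 10 -D .
# sequential read rate test (-R) using 256KB sequential requests (-L)
ioping -RL .
# read rate test against the page cache (-C)
ioping -RC .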
DD Command:
ESXi 5.5 with Ubuntu Server 13.04
dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync - 6.03376 s, 178 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync - 6.10183 s, 176 MB/s
dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync - 6.02596 s, 178 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync - 13.6239 s, 78.8 MB/s
Ubuntu Server 13.04
dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync - 4.03745 s, 266 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync - 4.10287 s, 262 MB/s
dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync - 3.81293 s, 282 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync - 11.9385 s, 89.9 MB/s
Across the three flush-at-the-end style tests we again saw a 33-37% drop in performance at the disk level, while the 64k oflag=dsync test dropped a more modest 12% (78.8 vs 89.9 MB/s). The pattern makes sense: conv=fdatasync flushes to disk once at the end of the run, while oflag=dsync forces a synchronous flush after every single write, so the 64k dsync runs are dominated by flush latency on both machines.
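Here are the same two commands again, annotated, to make that flush behaviour obvious:

# 1024 x 1MB writes, then a single fdatasync() once everything is written
dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
# 16384 x 64KB writes, each one synchronously flushed via O_DSYNC
dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync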
FIO Input/Output Operations per Second (IOPS) results:
ESXi 5.5 with Ubuntu Server 13.04
Read IOPS 3313.0 @ 13.2 MB/second total throughput
Write IOPS 1471.0 @ 5.8 MB/second total throughput
Ubuntu Server 13.04
Read IOPS 4006.0 @ 16.0 MB/second total throughput
Write IOPS 1590.0 @ 6.3 MB/second total throughput
We can see a drop in read IOPS of about 17% (4006 down to 3313) and in write IOPS of about 7% (1590 down to 1471).
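sb.sh generates these numbers with fio, and the throughput-to-IOPS ratio above works out to roughly 4KB per operation, which points at 4KB random I/O. A comparable pair of jobs, as a sketch (the exact parameters sb.sh uses may differ):

# 4KB random reads with direct I/O against a 1GB test file
fio --name=randread --ioengine=libaio --rw=randread --bs=4k --direct=1 --size=1G --iodepth=32
# and the same again for random writes
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --iodepth=32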
So now we were left scratching our heads as to what was going on here, so we decided to put two VMs on the ESXi 5.5 server, start the tests simultaneously, and give you guys the results, as many have said ESXi only shines when multiple instances are running. We had to kick off both benchmarks at exactly the same moment to distribute the load evenly (see the sketch below). We also made a sneaky modification to the sb.sh script, removing the extra network throughput tests as these made the runs lose sync, though we left one in as it was a local CDN.
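Kicking both runs off at the same moment is easy from a third control machine (the hostnames here are placeholders):

# launch sb.sh on both VMs in parallel and wait for both to finish
for host in vm1.example vm2.example; do
    ssh root@"$host" 'bash sb.sh' &
done
wait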
UnixBench VM 1
UnixBench (w/ all processors) 2846.5
UnixBench VM 2
UnixBench (w/ all processors) 2720.3
Total Combined UnixBench Score: 5566.8 - interestingly, that combined score actually edges past the bare-metal 5316.9, so there is some truth to the claim that ESXi's CPU scheduling shines once multiple instances are competing for the hardware.
DD VM 1
dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync - 10.5966 s, 101 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync - 10.296 s, 104 MB/s
dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync - 10.2442 s, 105 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync - 15.4381 s, 69.6 MB/s
DD VM 2
dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync - 10.8446 s, 99.0 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync - 10.2212 s, 105 MB/s
dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync - 9.9889 s, 107 MB/s
dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync - 14.4733 s, 74.2 MB/s
Total Combined: 209 MB/s on the 64k conv=fdatasync test (104 + 105 MB/s) - roughly 20% slower than the physical server's 262 MB/s, with the two 1MB tests showing a similar 25% deficit.
FIO Input/Output Operations per Second (IOPS) results:
FIO VM 1:
Read IOPS 1403.0 @ 5.6 MB/second total throughput
Write IOPS 680.0 @ 2.7 MB/second total throughput
FIO VM 2:
Read IOPS 1402.0 @ 5.6 MB/second total throughput
Write IOPS 681.0 @ 2.7 MB/second total throughput
Total Combined Read IOPS: 2805 IOPS @ 11.2 MB/sec total throughput
Read IOPS were severely affected by running multiple instances: the combined 2805 IOPS comes in roughly 30% below the physical server's 4006, and about 15% below even the single-VM figure of 3313.
So where do we go from here? Virtualisation has huge benefits, the positives far outweigh the negatives, and ESXi certainly has a vast number of features that physical boxes don't. But if you are looking for serious performance, nothing beats physical dedicated servers, and setting up high availability on physical hardware is quite simple these days, especially for the common case of active/active database servers behind a few front-end web servers - run-of-the-mill stuff.
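As one example of how simple that HA piece can be, floating a shared virtual IP between two front-end web servers is a few lines of keepalived config (a minimal sketch; the interface name and addresses are placeholders):

# on the primary web server; mirror on the second box with state BACKUP and a lower priority
apt-get install -y keepalived
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.100
    }
}
EOF
service keepalived restart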