In a recent blog post, I offered tuning tips for running low-latency applications in virtual machines. The blog post and the Best Practices white paper highlighted that, by following certain tuning tips, a broad array of I/O latency-sensitive applications, which were historically difficult to virtualize due to overheads and added latency, can now be successfully deployed on VMware vSphere 5.0.
I’m excited to announce that my colleague Shilpi Agarwal, who manages the vSphere Network Performance Engineering team, just published a companion white paper that examines Network I/O Latency on VMware vSphere 5.
The paper is an excellent and highly recommended read for understanding the exact sources of latency overhead in the vSphere network virtualization stack. It also includes specific ping and netperf (common networking benchmarks) numbers comparing the latencies achievable in physical, non-virtualized environments versus vSphere VMs on the same hardware.
The paper compares latencies across three configurations: a VM on an ESXi host communicating with a non-virtualized physical host, two VMs on the same ESXi host communicating across a vSwitch, and two VMs on separate ESXi hosts using DirectPath I/O. It also measures the impact of CPU and network resource contention on latency in vSphere 5.0.
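To give a rough sense of the kind of round-trip measurement these benchmarks perform, here is a minimal sketch of a TCP request/response latency probe in Python, conceptually similar to netperf's TCP_RR test. This is not the methodology used in the white paper; the port number, iteration count, and script structure are assumptions made purely for illustration.

```python
# Minimal TCP request/response round-trip latency probe, similar in spirit
# to netperf's TCP_RR test. Run "server" on one host and "client <server-ip>"
# on the other. Port and iteration count are illustrative choices only.
import socket
import sys
import time

PORT = 12865          # arbitrary port for this sketch
ITERATIONS = 10000    # number of 1-byte request/response round trips


def server() -> None:
    """Echo a single byte back to the client for each request."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            # Disable Nagle's algorithm so small messages are sent immediately.
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            while True:
                data = conn.recv(1)
                if not data:
                    break
                conn.sendall(data)


def client(host: str) -> None:
    """Send 1-byte requests and report average and minimum round-trip time."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.connect((host, PORT))
        samples = []
        for _ in range(ITERATIONS):
            start = time.perf_counter()
            sock.sendall(b"x")
            sock.recv(1)
            samples.append(time.perf_counter() - start)
        avg_us = sum(samples) / len(samples) * 1e6
        min_us = min(samples) * 1e6
        print(f"round trips: {len(samples)}  avg: {avg_us:.1f} us  min: {min_us:.1f} us")


if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: latency_probe.py server | client <server-ip>")
```

Running the client against a server on a physical host and then against the same server in a VM on identical hardware gives a crude version of the comparison the paper makes with netperf.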
This white paper, the previously published Best Practices for Performance Tuning white paper, and the soon-to-be-published research note summarizing our QDR InfiniBand performance study using DirectPath I/O on vSphere 5 collectively demonstrate that VMware vSphere 5.0 is a viable hypervisor platform for a significantly broader range of latency-sensitive applications.