Network I/O Latency on vSphere 5.0

In a recent blog post, I offered tuning tips for running low-latency applications in virtual machines. That post and the accompanying Best Practices white paper highlighted that, by following certain tuning tips, a broad array of I/O latency-sensitive applications that were historically difficult to virtualize due to overheads and added latency can now be successfully deployed on VMware vSphere 5.0.

I’m excited to announce that my colleague Shilpi Agarwal, who manages the vSphere Network Performance Engineering team, has just published a companion white paper that examines Network I/O Latency on VMware vSphere 5.

The paper is an excellent and highly recommended read for learning about the exact sources of latency overhead in the vSphere network virtualization stack. It also includes specific numbers from ping and netperf, two common networking benchmarks, comparing the latencies achievable in a physical, non-virtualized environment with those of vSphere VMs on the same hardware.
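
For readers unfamiliar with how such latency benchmarks work, here is a minimal sketch of a request-response round-trip measurement in the spirit of netperf’s TCP_RR test. The peer address, port, and iteration count are hypothetical placeholders, and a real study would of course use ping and netperf themselves rather than this toy probe.

```python
# Minimal TCP request-response latency probe, in the spirit of netperf's
# TCP_RR test. Host, port, and iteration count are hypothetical placeholders.
import socket
import sys
import time

HOST = "10.0.0.2"   # hypothetical peer address
PORT = 5001         # hypothetical port for the echo service
ITERATIONS = 10000

def run_client():
    """Send a 1-byte request and wait for a 1-byte reply, repeatedly,
    then report the average round-trip time in microseconds."""
    with socket.create_connection((HOST, PORT)) as sock:
        # Disable Nagle's algorithm so each tiny message goes out immediately.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        payload = b"x"
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            sock.sendall(payload)
            sock.recv(1)  # block until the echo comes back
        elapsed = time.perf_counter() - start
        print(f"avg round-trip: {elapsed / ITERATIONS * 1e6:.1f} us")

def run_server():
    """Echo each received byte straight back to the client."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            while data := conn.recv(1):
                conn.sendall(data)

if __name__ == "__main__":
    run_server() if "--server" in sys.argv else run_client()
```

Run the script with --server on one machine and without arguments on the other. Disabling Nagle’s algorithm (TCP_NODELAY) matters here because batching small messages would otherwise hide the per-message round-trip latency being measured.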

The paper compares latencies in three configurations: a VM on an ESXi host communicating with a non-virtualized physical host, two VMs on the same ESXi host communicating across a vSwitch, and two VMs on separate ESXi hosts using DirectPath I/O. It also measures the impact of CPU and network resource contention on latency in vSphere 5.0.

This white paper, together with the previously published Best Practices for Performance Tuning white paper and the soon-to-be-published research note summarizing our QDR InfiniBand performance study using DirectPath I/O on vSphere 5, demonstrates that VMware vSphere 5.0 is a viable hypervisor platform for a significantly broader range of latency-sensitive applications.
