Tuning ESXi for NFV Workloads

Network Functions Virtualization (NFV) delivers the efficiency and agility benefits that come from untethering network functions from proprietary appliances and moving them onto virtualized infrastructure running over hypervisors and COTS hardware. Enterprise IT departments have been realizing such benefits for years, but the telecommunications environment brings new workloads with different requirements.

The vSphere ESXi hypervisor provides a high-performance, competitive platform that effectively runs many Tier 1 application workloads in virtual machines. By default, ESXi is tuned to drive high I/O throughput efficiently while using fewer CPU cycles and conserving power, as a wide range of workloads requires.

Telco and NFV application workloads differ from typical Tier 1 enterprise application workloads in that they tend to be some combination of latency sensitive, jitter sensitive, high packet rate, and high bandwidth. vSphere ESXi can be tuned specifically for such workloads to achieve the best performance in the NFV use case.
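As an illustration of the kind of per-VM tuning involved (not a substitute for the white paper's guidance), the minimal sketch below uses the pyVmomi Python SDK to switch a VM's Latency Sensitivity setting to High and lock its memory reservation, two settings commonly applied to latency-sensitive VMs. The vCenter address, credentials, and VM name are placeholders.

```python
# Minimal sketch: set a VM's Latency Sensitivity to "high" with pyVmomi.
# The vCenter host, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_latency_sensitivity_high(si, vm_name):
    """Find a VM by name and reconfigure it for high latency sensitivity."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.Destroy()

    spec = vim.vm.ConfigSpec()
    spec.latencySensitivity = vim.LatencySensitivity(level='high')
    # High latency sensitivity expects full CPU and memory reservations;
    # locking the memory reservation to the configured size covers the latter.
    spec.memoryReservationLockedToMax = True
    return vm.ReconfigVM_Task(spec)

if __name__ == '__main__':
    ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
    si = SmartConnect(host='vcenter.example.com',
                      user='administrator@vsphere.local',
                      pwd='changeme', sslContext=ctx)
    try:
        task = set_latency_sensitivity_high(si, 'nfv-vm-01')
        print('Reconfigure task started:', task.info.key)
    finally:
        Disconnect(si)
```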

I recently published a white paper with many useful tips for tuning the ESXi hypervisor, the hardware infrastructure, and NFV applications themselves to achieve good performance on ESXi.

You can learn more about NFV at http://www.vmware.com/go/nfv. More Telco-related blogs can be found at http://blogs.vmware.com/telco.
