Performance of RDMA and HPC Applications in VMs using FDR InfiniBand on VMware vSphere

Figure: Bare-metal and virtual MPI latencies for a range of ESX versions

Customers often ask whether InfiniBand (IB) can be used with vSphere. The short answer is yes. VM Direct Path I/O (passthrough) makes an InfiniBand card directly visible within a virtual machine so that the guest operating system can access the device. With this approach, no ESX-specific driver is required, just the hardware vendor's standard native device driver for Linux or Windows. VMware supports the VM Direct Path I/O mechanism as a basic platform feature, and our hardware partners support their hardware and device drivers.

Those interested in InfiniBand are, of course, primarily concerned with performance: can we achieve performance competitive with bare metal? We've recently published a technical white paper that answers this question for two constituencies: those who want to consume the low-level RDMA verbs interface directly (typically distributed databases, file systems, and similar software) and those who want to run HPC applications, which typically sit on top of an MPI library that in turn uses RDMA to control the InfiniBand hardware.
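For the first group, "consuming the RDMA verbs interface directly" means programming against the standard libibverbs API inside the guest, just as on bare metal. The sketch below is illustrative only (it is not code from the paper) and assumes the vendor's OFED/verbs stack is installed in the VM; it simply enumerates the RDMA devices the guest can see once the HCA has been passed through. Build it with the usual verbs link flag, for example: gcc check_hca.c -o check_hca -libverbs.

    /* Illustrative sketch: list the RDMA devices visible inside the guest
     * using the standard libibverbs API (assumes an OFED/verbs stack is
     * installed in the VM). Not taken from the white paper. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "No RDMA devices found in this VM\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            struct ibv_device_attr attr;
            if (ctx && ibv_query_device(ctx, &attr) == 0)
                printf("%s: %d port(s), max MR size %llu bytes\n",
                       ibv_get_device_name(devs[i]),
                       attr.phys_port_cnt,
                       (unsigned long long)attr.max_mr_size);
            if (ctx)
                ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }

A correctly passed-through adapter appears in this listing just as it would on a physical host.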

In addition to providing detailed RDMA and MPI micro-benchmark results across several ESX versions, the paper includes several examples of real HPC application performance to demonstrate what can be achieved when running MPI applications on vSphere.
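To give a concrete sense of what an MPI latency micro-benchmark measures, here is a minimal ping-pong sketch (illustrative only; not the benchmark used in the paper). Two ranks bounce a small message back and forth, and half the average round-trip time is reported as the one-way latency, the kind of number shown in the figure above.

    /* Illustrative MPI ping-pong latency sketch (not from the white paper).
     * Build: mpicc pingpong.c -o pingpong    Run: mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERATIONS 10000
    #define MSG_BYTES  8            /* small message, latency-dominated */

    int main(int argc, char **argv)
    {
        int rank;
        char buf[MSG_BYTES] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();

        for (int i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                /* rank 1 echoes each message back */
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double elapsed = MPI_Wtime() - start;
        if (rank == 0)
            printf("one-way latency: %.2f us\n",
                   elapsed / (2.0 * ITERATIONS) * 1e6);

        MPI_Finalize();
        return 0;
    }

Run on two hosts connected by the passed-through InfiniBand fabric, the same binary can be used to compare virtual and bare-metal latencies.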

The paper, titled "Performance of RDMA and HPC Applications in Virtual Machines using FDR InfiniBand on VMware vSphere," is available here.
