Research Note: RDMA Performance in Virtual Machines using QDR InfiniBand on VMware vSphere 5

Our research note exploring the use of InfiniBand in Red Hat guests with VM DirectPath I/O is now available for your reading pleasure here. In it, we present the results of bandwidth and latency tests run on two hosts connected back-to-back with Mellanox quad data rate (QDR) InfiniBand. We show that bandwidths over a wide range of message sizes are comparable to those achievable in the non-virtualized case, and that low latencies (under 2 µs) are achievable as well.


This paper represents our first performance results using RDMA. We expect to publish further papers examining the performance of full MPI applications as well as results related to Bhavesh’s work on a virtualized RDMA device that would support RDMA while also maintaining the ability to perform vMotion and Snapshot operations.

Other posts

High Performance Computing with Altitude: SC’16 Begins Tomorrow!

As readers may know, VMware has had a presence in the EMC booth for the last several years at Supercomputing, the HPC community’s largest annual ACM/IEEE conference and exhibition. With the fusion of Dell and EMC into DellEMC and with VMware now under the Dell Technologies umbrella, I am very pleased that we will have two […]

Performance of RDMA and HPC Applications in VMs using FDR InfiniBand on VMware vSphere

Customers often ask whether InfiniBand (IB) can be used with vSphere. The short answer is, yes. VM Direct Path I/O (passthrough) allows InfiniBand cards to be made directly visible within a virtual machine so the guest operating system can directly access the device. With this approach, no ESX-specific driver is required — just the hardware […]
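The post above doesn't include setup commands, but as a rough sketch of what the passthrough approach looks like in practice, verifying a passed-through HCA from inside the guest and running the back-to-back benchmarks typically involves the standard InfiniBand tooling (the device names shown are illustrative; `ibstat` comes from the infiniband-diags package and `ib_send_bw`/`ib_send_lat` from the OFED perftest suite):

```shell
# Inside the guest: confirm the passed-through HCA appears on the virtual PCI bus.
lspci | grep -i mellanox

# With the guest InfiniBand stack installed, check that the port is Active
# and running at the expected QDR rate.
ibstat

# Bandwidth and latency sanity checks between the two back-to-back hosts:
# start the server side with no argument, then point the client at it.
ib_send_bw            # on host A (server)
ib_send_bw <host-A>   # on host B (client); <host-A> is a placeholder address
ib_send_lat           # on host A (server)
ib_send_lat <host-A>  # on host B (client)
```

These are the stock tools one would use whether or not the endpoints are virtualized, which is what makes them a natural basis for the native-versus-VM comparison described above.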

Virtualized HPC at Johns Hopkins Applied Physics Laboratory

Johns Hopkins University Applied Physics Laboratory (JHUAPL) has deployed a virtualized HPC solution on vSphere to run Monte Carlo simulations for the US Air Force. The idea was conceived and implemented by Edmond DeMattia at JHUAPL, and has been presented by Edmond and his colleague Michael Chinn at two VMworld conferences. We now have a white […]