RDMA on vSphere: Status and Future Directions
Bhavesh and I gave our talk [PDF] this morning at the OpenFabrics Alliance Developer and User Workshop here in Monterey. Actually, to be fair, I did a short introduction and then Bhavesh spent the bulk of our thirty-minute slot covering two topics. First, he presented several possible approaches to enabling RDMA for latency-sensitive applications on vSphere — including what he called “the holy grail” — a virtualized RDMA device that provides low-latency, high-bandwidth RDMA while maintaining the ability to use vMotion and Snapshots. In his view, this vRDMA option (Option F in the slide deck) is the most compelling and interesting for VMware, and it is the one he will be prototyping within the Office of the CTO over the coming months.
The second half of Bhavesh’s talk focused on the potential benefits of using RDMA within VMware’s virtualization platform to accelerate some of our important hypervisor services — for example, vMotion and Fault Tolerance (FT). The slide deck includes experimental results demonstrating that vMotion operations can be completed much more quickly, and with far lower CPU utilization, by using RDMA.
In my introduction, I showed a few graphs from our upcoming paper, “RDMA Performance in Virtual Machines using QDR InfiniBand on VMware vSphere 5,” which show that we can use passthrough mode (VM DirectPath I/O) to deliver half ping-pong latencies of 1.75us for Send and 3us for RDMA Read using polling completions, and 7.6us for RDMA Read using interrupt completions.
insideHPC taped our session, so I will post a link once it is available.