High Performance Computing with Altitude: SC’16 Begins Tomorrow!

As readers may know, VMware has had a presence in the EMC booth at Supercomputing, the HPC community's largest annual ACM/IEEE conference and exhibition, for the last several years. With the merger of Dell and EMC into DellEMC, and with VMware now under the Dell Technologies umbrella, I am very pleased that we will have two demo stations in the DellEMC booth here in Salt Lake City at SC'16. As a long-time HPC vendor and active participant in the HPC community, DellEMC has one of the largest booths on the exhibit floor, and it is looking very good as we prepare for the gala opening event at 7pm on Monday.

PVRDMA

Our first demo station, located in the Advanced Technologies section of the DellEMC booth, showcases our new PVRDMA technology — Para-virtualized RDMA — running over 40 Gb/s RoCE. In particular, we show that we can live-migrate individual virtual machines of a running MPI application from one host to another while the distributed parallel application (NAMD, a popular molecular dynamics code) continues to run. Further, as NAMD VMs transition from running on separate physical hosts to running on the same host, the MPI messaging traffic is seamlessly switched from physical Mellanox RoCE cards to purely virtual PVRDMA devices. While there has been earlier work [PDF] demonstrating live migration of MPI applications based on TCP connectivity, I believe this is the first time such capabilities have been shown using an unmodified MPI library and an RDMA-connected application. It's very cool stuff.

Cloud

Our second demo station focuses on cloud technologies for HPC. Use of vRealize Automation as a private cloud solution is a central part of our cloud story, since the combination of challenges represented by data volume, data sensitivity, and resource intensity dictates that the majority of HPC workloads will remain on-prem for the foreseeable future. We, of course, recognize there are excellent reasons for using off-prem (both public and hybrid) cloud in some cases, and we are prepared to discuss our increasingly interesting options in this area as well.

Our primary demo at this station shows the creation and use of a vRealize Automation multi-machine blueprint that automatically instantiates a virtual HPC cluster with the Torque job scheduler pre-installed and running within the cluster. In addition, because OpenStack is becoming popular with those in the HPC community who are experimenting with private clouds, we will also show VMware Integrated OpenStack (VIO), which greatly reduces the complexity of OpenStack deployment and lifecycle management while also providing the stability, performance, and functionality of the vSphere platform. Since some in the HPC community are beginning to explore the use of containers, we will also discuss both VMware Integrated Containers (VIC) and Photon Platform, two of VMware's relevant open-source projects.
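To make the scheduler side of this concrete, here is a minimal sketch of what a user of such a virtual cluster might submit to Torque. This is illustrative only — the job name, resource values, and application command below are hypothetical examples, not part of the demo blueprint itself.

```python
# Illustrative sketch: building a Torque (PBS) job script for an MPI run
# on a virtual HPC cluster like the one the blueprint provisions.
# All names and resource values here are hypothetical examples.

def make_pbs_script(job_name, nodes, ppn, walltime, command):
    """Return the text of a simple Torque job script."""
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -N {job_name}",                    # job name shown by qstat
        f"#PBS -l nodes={nodes}:ppn={ppn}",       # nodes and processes per node
        f"#PBS -l walltime={walltime}",           # wall-clock limit (HH:MM:SS)
        "cd $PBS_O_WORKDIR",                      # run from the submission directory
        f"mpirun -np {nodes * ppn} {command}",    # launch the MPI application
        "",
    ])

if __name__ == "__main__":
    script = make_pbs_script("namd-test", nodes=4, ppn=20,
                             walltime="01:00:00", command="namd2 apoa1.namd")
    print(script)
    # On a cluster with Torque installed, the script would be submitted
    # with:  qsub job.pbs
```

The same pattern would apply to any of the MPI applications mentioned below; only the `command` line changes.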

Performance

Finally, over the past several months we’ve been collaborating with our colleagues at DellEMC who have graciously lent us a 16-node EDR-based cluster within their Austin HPC Innovation Lab. Consequently, we will be sharing our latest MPI strong-scaling performance numbers running NAMD, LAMMPS, WRF, and OpenFOAM at various scales up to 16 nodes and 320 MPI processes. We are very excited to show these results.
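For readers less familiar with the terminology: strong scaling keeps the total problem size fixed while increasing the process count, so the interesting metrics are speedup S(n) = T(1)/T(n) and parallel efficiency E(n) = S(n)/n. The sketch below computes these from wall-clock timings; the baseline and timings in it are hypothetical placeholders, not our measured results.

```python
# Illustrative sketch of strong-scaling metrics: fixed problem size,
# increasing MPI process counts. The timings below are hypothetical
# placeholders, not measured numbers from the DellEMC cluster.

def strong_scaling(t1, timings):
    """Given a single-process baseline time t1 and a {procs: time} dict,
    return a list of (procs, speedup, efficiency) tuples."""
    results = []
    for procs, tn in sorted(timings.items()):
        speedup = t1 / tn            # S(n) = T(1) / T(n)
        efficiency = speedup / procs  # E(n) = S(n) / n
        results.append((procs, speedup, efficiency))
    return results

if __name__ == "__main__":
    # Hypothetical baseline and multi-process wall-clock times (seconds),
    # at process counts matching 20 MPI processes per node up to 16 nodes.
    t1 = 3200.0
    timings = {20: 170.0, 80: 45.0, 160: 24.0, 320: 13.0}
    for procs, s, e in strong_scaling(t1, timings):
        print(f"{procs:4d} procs: speedup {s:6.1f}, efficiency {e:5.2f}")
```

Efficiency below 1.0 at higher process counts is expected for strong scaling, since the per-process work shrinks while communication overhead does not.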

Note that, due to the Dell-EMC merger, there are two DellEMC booths at SC this year. The smaller booth is configured mostly as a lounge area for casual discussions, while the larger (50' x 50') main booth is where the demo stations are located.

If you will be in Salt Lake City this week, please do stop by the DellEMC booth to learn more.
