Today I’m proud to announce the availability of VMware Virtual SAN 5.5. This milestone represents a disruptive event for the storage industry and a major achievement for VMware’s strategy. We are bringing together the third key building block in the trio of virtualized compute, network, and storage. With the fusion of these three resources on industry-standard server components, we can now fully realize a true software-defined data center.
This fusion represents a significant milestone in simplification. To date, we have thought of these resources, including storage, as a manifestation of the physical resources that we configure and expose to the virtualization platform. Think of this as bottom-up management of these resources. With Virtual SAN, we invert this model. Just as we did for CPU, we now think first of the application and its needs; then, through simple application policy management, the platform automates the provisioning and allocation of the physical resources to meet those requirements.
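To make the policy-first model concrete, here is a minimal sketch in Python. The capability names mirror the kinds of per-VM storage policy settings Virtual SAN exposes (such as the number of host failures to tolerate and the stripe width), but the provisioning function itself is a hypothetical illustration, not product code:

```python
# Illustrative only: models the idea of top-down, policy-driven provisioning.
# Capability names echo Virtual SAN's per-VM storage policy settings; the
# placement logic below is a toy stand-in for what the platform automates.

# A policy states what the application needs, not where the data lives.
gold_policy = {
    "hostFailuresToTolerate": 1,   # keep a replica so one host can fail
    "stripeWidth": 2,              # stripe each replica across 2 disks
}

def provision(vm_name, policy, hosts):
    """Pick enough hosts to satisfy the policy; the platform, not the
    administrator, decides the physical placement."""
    replicas = policy["hostFailuresToTolerate"] + 1
    if len(hosts) < replicas:
        raise ValueError("cluster too small for requested availability")
    placement = hosts[:replicas]  # a real system balances load; we just slice
    print(f"{vm_name}: {replicas} replicas on {placement}, "
          f"stripe width {policy['stripeWidth']}")
    return placement

provision("app-vm-01", gold_policy, ["esx-01", "esx-02", "esx-03"])
```

The point is the inversion: the administrator declares requirements once, and physical placement becomes the platform’s job.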
A Converged Model
Virtual SAN implements a converged compute and storage model, which is a subtle change to the way we traditionally consider storage topology. For the last two decades, centralizing the management of storage has meant creating a large, shared external storage device, which, by virtue of being centralized, could be managed as a single unit. With Virtual SAN, unified management comes through our top-down storage model, which means we can provide all the benefits of centralized management without having to externalize the storage into another tier.
With converged storage, the compute and storage technologies are provided by the same physical servers, which allows compute and storage capacity to be housed in the same system. It’s through Virtual SAN’s distributed system that all of these resources come together into a unified model, allowing users to take advantage of it as a centrally managed system. A Virtual SAN cluster comprises three to 32 hosts, each with local disks, clustered together over commodity networking, typically standard 10 Gigabit Ethernet.
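For a sense of what assembling such a cluster looks like programmatically, below is a minimal sketch that enables Virtual SAN on an existing vSphere cluster through the vSphere API, using the open-source pyVmomi bindings. The vCenter address, credentials, and inventory path are placeholders, and the snippet is an approximation to verify against the API reference rather than a supported recipe:

```python
# Hedged sketch: enable Virtual SAN on an existing cluster via the
# vSphere API (pyVmomi). Host, credentials, and inventory path are
# placeholders; verify property names against your API reference.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret")  # placeholder credentials
try:
    content = si.RetrieveContent()
    # Inventory path format: <datacenter>/host/<cluster-name>
    cluster = content.searchIndex.FindByInventoryPath("DC1/host/VSAN-Cluster")

    spec = vim.cluster.ConfigSpecEx()
    spec.vsanConfig = vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True))  # let hosts claim their local disks

    task = cluster.ReconfigureComputeResource_Task(spec, True)
    print("Reconfigure task started:", task.info.key)
finally:
    Disconnect(si)
```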
A Virtual SAN system is a true distributed system, which in practical terms means it’s resilient against single points of failure and its resources can be scaled simply by adding servers. Virtual SAN isn’t the first converged storage system. The most notable example today is Hadoop’s storage model, which co-locates compute and storage capacity on the same systems. The difference with Virtual SAN is that it provides generic block storage, which means we can host almost any application type on the storage platform.
A Different Way to Think About Scale
Virtual SAN builds on vSphere’s cluster model, which traditionally we use as the fundamental unit for automatic resource management and high availability. Hosts in the cluster are merged into a pooled model, which vSphere can use to automatically place, rebalance, and fail over applications. With Virtual SAN, we simply extend this cluster unit with integrated, reliable storage.
Since the vSphere cluster is the traditional unit of management in our environments, scale comes almost naturally from this existing management model. Each cluster can scale to 32 hosts, providing storage for all the VMs running in the cluster. As with existing deployment models, we deploy multiple clusters for different groups of workloads, and naturally the storage extends with these clusters. So we can think of Virtual SAN scale in two ways: each cluster scales to 32 hosts, and the data center scales further through the multiple clusters we deploy.
Workloads and Virtual SAN
Virtual SAN is designed to accommodate many different types of workloads with varying capacity and IOPS requirements. One of the key design decisions was to transparently integrate flash technology into the storage hierarchy so that we can take advantage of the significant performance advantages flash provides. We require that each host has at least one flash device, and we use it as both a read cache and a write-back buffer. During development, we also learned that it’s not necessary to keep storage access local, since a network hop across 10 Gigabit Ethernet is trivial compared to the actual storage device access. This means we can easily aggregate the flash and magnetic disk resources across the cluster, so that VMs can take advantage of storage devices across multiple hosts in the cluster. This has a leveling effect, where an application’s capacity and performance requirements may be met by the resources of several hosts in the cluster, removing the hot-spots and bottlenecks that would otherwise arise on individual hosts.
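To illustrate the role flash plays in this hierarchy, here is a deliberately simplified Python model of a hybrid tier: writes are acknowledged once they land in a flash buffer and are destaged to magnetic disk later, while reads are served from flash whenever the block is cached. The latencies are assumed round numbers, and the model is conceptual only; it is not Virtual SAN’s actual data path:

```python
# Conceptual model of a hybrid flash + magnetic tier. Latencies are
# illustrative round numbers, not Virtual SAN measurements.
FLASH_LATENCY_US = 100      # assumed flash access time
DISK_LATENCY_US = 10_000    # assumed magnetic disk access time

class HybridTier:
    def __init__(self):
        self.read_cache = {}    # blocks cached on flash
        self.write_buffer = {}  # dirty blocks buffered on flash
        self.disk = {}          # backing magnetic storage

    def write(self, block, data):
        # Writes are acknowledged at flash speed, then destaged later.
        self.write_buffer[block] = data
        return FLASH_LATENCY_US

    def destage(self):
        # Background step: flush buffered writes to magnetic disk.
        self.disk.update(self.write_buffer)
        self.write_buffer.clear()

    def read(self, block):
        # A cache (or buffer) hit is served from flash; a miss pays the
        # disk penalty and then populates the read cache.
        if block in self.write_buffer or block in self.read_cache:
            return FLASH_LATENCY_US
        self.read_cache[block] = self.disk.get(block)
        return DISK_LATENCY_US

tier = HybridTier()
print(tier.write(7, b"hello"))  # 100 us: absorbed by the flash buffer
tier.destage()
print(tier.read(7))             # 10000 us: first read misses the cache
print(tier.read(7))             # 100 us: now served from flash
```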
As a result, Virtual SAN can scale up and out. An individual VM can scale up and achieve more IOPS than a single host is capable of, by virtue of this aggregation of devices across the cluster. More importantly, the product can scale out: the aggregate performance of the cluster grows as more VMs and hosts are added.
Since each VM’s IOPS consumption is typically quite moderate, scale-out is the more common scenario. Some examples of the IOPS requirements of customers’ VMs are shown below:
| Workload | IOPS Required |
| --- | --- |
| Virtual Desktops (VDI, View Planner workload) | 10-15 IOPS per VM |
| Typical Oracle databases | 100-1500 IOPS |
| Microsoft Exchange mail server VM (5,000 emails/day per user) | 3,000 IOPS |
We will blog more about Virtual SAN performance in the future; today I’ll touch on some of the high-level capabilities. We have benchmarked Virtual SAN a number of different ways, and one of the more significant results is the scaling of the technology from 3 to 32 hosts, reaching an aggregate throughput of 2 million IOPS with a 32-node cluster, which works out to roughly 62,500 IOPS per host. This result demonstrates Virtual SAN’s capability of aggregating multiple hosts together to provide linearly scalable performance.
Our focus for Virtual SAN 5.5 has been primarily on virtual desktop and aggregate mixed-workload configurations comprising many moderately sized VMs. To help put Virtual SAN performance in perspective, the graph below shows the scaling of the product compared to the IOPS requirements of different types of workloads. In each case, Virtual SAN provides significantly more throughput than individual, and even multiple, instances of the workload require, showing that the converged model successfully provides for the compute and storage needs of these workloads.
During our Virtual SAN beta, over 12,000 customers put the product through many different tests and trials, including its use for VMware View, web analytics, test/dev, ROBO environments, storage for DMZ networks, management clusters, and more. Many beta participants ran benchmarks using tools like IOMeter and I/O Analyzer, with excellent results.
Next-gen Workloads
Our team is working to address the needs of big-data and other more advanced workloads, and we will have more to share as we progress with this analysis.
In closing, I’d like to thank the many customers who participated in our Virtual SAN beta program for all the feedback and guidance along the way. I’d also like to congratulate the Virtual SAN engineering and product teams on their heroic effort over the last four years; they are all responsible for bringing us Virtual SAN today. I look forward to sharing much more about this technology in the future, and to hearing the great stories as Virtual SAN is used by our customers.
Here are some technical resources:
VMware Virtual SAN Compatibility Guide Page