The demand for services in datacenters is steadily growing as companies recognize the value of being able to set up or grow a service in seconds or minutes and thereby increase business agility. Traditional datacenters have made use of a great deal of specialized hardware, requiring a large amount of manual configuration and management, as well as complex resource management.
Two growing trends promise to make datacenters less expensive and less complex to build and operate, as well as easier to use:
- The increasing power of low-cost general purpose servers is making it possible to replace specialized hardware with general purpose servers plus software, enabling the creation of a software-defined datacenter.
- Automation of data center management, based on software which translates application requirements, in the form of policy statements, into operational plans, is enabling self-service use of the data center.
Automation of storage management via the interpretation of application requirements, which VMware calls Storage Policy-Based Management (SPBM), attempts to address an important source of complexity and delay in today’s datacenters. Today, when a user requests that a new application be set up, the first step after requirements analysis is to ask the storage management team to provision the storage for the application and make it visible to the servers to be used for the application. Low-priority, low-performance applications may simply use a share of an existing file system. On the other hand, important applications typically need assured performance and availability, which in turn commonly translates into assigning dedicated devices to a SAN LUN or NFS file system.
The storage administrator manually estimates how to meet the requirements with one RAID configuration or another, selects resources from among the available storage arrays (typically by manually accounting for previous space and throughput allocations), sets up the LUN or file system, and performs the necessary access configuration, such as LUN masking or NFS exports. Only then can the application administrator start the application. If the application's requirements change over time, the storage administrator must provision storage (typically new storage) again, and the application data must be migrated. All of this is manual work, with opportunities for inadvertent mistakes such as getting the LUN masking subtly wrong. It is also time-consuming, which reduces the ability of the business to respond in real time.
Even if applications are not provisioned with dedicated storage, and only a few classes of storage (slow, medium, and fast) are provisioned, meeting an application’s requirements, even without QoS assurances, will typically require “rounding up” the application’s requirements to the capabilities of the next higher storage tier. This effectively over-provisions the storage for the application, which is less cost-effective.
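The round-up effect is easy to see with a small sketch. The tier names and IOPS figures below are illustrative assumptions, not actual product numbers:

```python
# Hypothetical IOPS capability per storage tier.
TIERS = {"slow": 250, "medium": 500, "fast": 2000}

def pick_tier(required_iops):
    """Choose the least capable tier that still meets the requirement."""
    for name, iops in sorted(TIERS.items(), key=lambda t: t[1]):
        if iops >= required_iops:
            return name, iops
    raise ValueError("no tier satisfies the requirement")

tier, iops = pick_tier(700)
# An application needing 700 IOPS lands on the 2000-IOPS "fast" tier,
# roughly 2.9x more capability than it asked for.
```

Because the application's need falls between tiers, it pays for the full capability of the next tier up, which is exactly the over-provisioning the text describes.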
The essential goal of SPBM and related technologies is to eliminate these issues by automating the translation from a policy to a resource plan. For example, a mail server application might need 1000 IOPS at 15 ms latency for 2 TB of storage, with 99.99% availability and a probability of data loss of less than one byte per 10^20 bytes read, based on a 95% cache hit rate for a 200 GB cache. The provisioning engine would translate this into an appropriate RAID configuration for the virtual disk, based on the performance and reliability of the underlying hardware resources.
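A minimal sketch of such a translation might look like the following. The function name, policy keys, device model, and RAID-selection thresholds are all illustrative assumptions; a real provisioning engine would also model cache behavior, device failure rates, and array-specific capabilities:

```python
import math

def plan_storage(policy, device):
    """Translate an application policy into a rough RAID plan.

    `policy` describes the application's needs; `device` describes one
    backing disk. Both use hypothetical keys for illustration.
    """
    # Redundancy level driven by the availability target (in "nines").
    raid = "RAID-6" if policy["availability_nines"] >= 4 else "RAID-5"
    parity = 2 if raid == "RAID-6" else 1

    # Size the stripe for the larger of the IOPS and capacity needs.
    disks_for_iops = math.ceil(policy["iops"] / device["iops"])
    disks_for_space = math.ceil(policy["capacity_gb"] / device["capacity_gb"])
    data_disks = max(disks_for_iops, disks_for_space)

    return {"raid": raid, "disks": data_disks + parity}

mail_policy = {"iops": 1000, "capacity_gb": 2048, "availability_nines": 4}
disk = {"iops": 150, "capacity_gb": 900}
plan = plan_storage(mail_policy, disk)
# For these assumed device figures, the IOPS requirement dominates:
# 7 data disks plus 2 parity disks in a RAID-6 layout.
```

The point of the sketch is the shape of the computation: the policy drives both the redundancy scheme and the stripe width, so the application administrator never reasons about RAID levels directly.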
VMware’s vision of a software-defined datacenter is based on using software to provide the full range of services, including processing, storage, and networking, in a single pool of resources. These resources may be allocated on demand to any application or group of applications that needs to run. Resource limits, not physical boundaries, define what a group of applications may use. Quality of service assurances, expressed as policies, allow an application to operate as predictably as it could on dedicated hardware, but without the inflexibility, availability limitations, and provisioning delays of dedicated hardware.
VMware delivers on that vision via features such as vMotion and DRS for processor power and memory, VDS and vShield for networking, and Storage vMotion, Storage DRS (SDRS), Storage I/O Control (SIOC), and SPBM for storage, albeit with some limitations on the scale of a single pool of resources.
On the storage front, we want to build on that base with technologies recently discussed at VMworld, including Virtual Volumes (vVols), Distributed Storage, Virtual Flash (vFlash), and enhancements to SPBM:
- Virtual Volumes allow a SAN or NAS server to provide differential quality of services for VM-level objects, such as virtual disks.
- Distributed Storage aggregates local SSD and magnetic disk drives in a cluster to provide reliable, fault tolerant object storage, with policy-defined quality of service.
- Virtual Flash enables use of local SSD as a new tier in the memory hierarchy, as well as for local caching of objects on shared SAN or NAS storage.
As the software evolves, VMware plans to provide uniform access to SAN, NAS, vVol, and Distributed Storage resources across the data center, and use SIOC to enforce performance policies for SAN and NAS storage, as is done natively for vVol and Distributed Storage. This will enable policy-based management of storage wherever it is located, avoiding the need to tie processing resources to storage resources in silos and thereby allowing for more flexible and efficient resource usage. The storage management layer can set up replication as needed for disaster protection. It can also migrate objects as needed to ensure policy compliance, both in response to requirements policy changes and to support changes to the underlying infrastructure, such as hardware replacement.
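The compliance-driven migration described above can be sketched as a simple control loop. The object model, datastore attributes, and latency-only policy below are illustrative assumptions, not VMware's actual implementation:

```python
def remediate(objects, datastores):
    """Return (object, target) migration plans for every object whose
    current datastore no longer satisfies its latency policy."""
    moves = []
    for obj in objects:
        policy = obj["policy"]
        current = datastores[obj["datastore"]]
        if current["latency_ms"] > policy["max_latency_ms"]:
            # Object is out of compliance: pick the first datastore
            # that satisfies the policy and plan a migration to it.
            for name, ds in datastores.items():
                if ds["latency_ms"] <= policy["max_latency_ms"]:
                    moves.append((obj["name"], name))
                    break
    return moves

objects = [{"name": "mail.vmdk", "datastore": "ds1",
            "policy": {"max_latency_ms": 10}}]
datastores = {"ds1": {"latency_ms": 25}, "ds2": {"latency_ms": 5}}
# "mail.vmdk" is out of compliance on ds1 and is planned for ds2.
```

Running such a loop continuously is what lets the system absorb both policy changes and infrastructure changes without manual re-provisioning.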
Uniform access to storage, with policy enforcement, is only the first step toward policy enforcement for application quality of service (QoS). We expect application QoS policies will allow one to specify the business requirements for an application, with system software undertaking to adjust CPU, memory, storage, and networking resources as needed to maintain the specified QoS. The ultimate goal for storage in the software-defined datacenter is thus to maximize business agility by enabling “cruise control” for applications.