
2013 predictions: The year of software-defined storage?

2012 has been the year of the “software-defined datacenter”. Another buzzword? Perhaps, but it nevertheless captures a tectonic shift happening in the IT industry. As a colleague, William Earl, noted in a blog post earlier this year, the increasing power of low-cost servers is making it possible to replace specialized hardware with general-purpose servers plus software: software that abstracts, pools together, and manages the different resources in datacenters. Virtualization has been doing this for CPU and memory for ages. 2012 has also been the year of “software-defined networking” (SDN); the concept is gaining momentum beyond the few big players (such as Google and Facebook) that have been using SDN (OpenFlow, specifically) in their datacenters for years now. The popularization of SDN has been driven, in no small part, by VMware’s acquisition of Nicira.

I expect 2013 to be the year of “software-defined storage” (SDS). At VMworld 2012, VMware and our partners previewed various aspects of this trend, including technical previews of Distributed Storage, Virtual Volumes and Storage Policy-Based Management. Storage vendors, both established and newcomers, are riding the SDS wave. I anticipate a number of new products, or at least prototypes, seeing the light of day in the coming months. They will not move the storage market by themselves; rather, they will test the water with customers and set the stage for a radically new generation of enterprise storage. As a result, big vendors that are falling behind will start buying their way into SDS. It will surely be a fun year for storage!

Hardware evolution will support the economics behind SDS. Larger and cheaper flash-based devices will become standard in server platforms. Capacity cost will fall below $5/GB and $0.01/IOPS for enterprise-class flash devices at the low end. The economics will favor hybrid architectures, with flash storage holding all active data and fronting even larger magnetic disks. The latter, with capacity costs down to a few cents per GB but an even lower ratio of IOPS to capacity, will essentially become a data-reliability tier. On the networking side, the per-port cost of 10GbE switches will finally drop low enough, thanks to competition in the market, that 10GbE will become a reasonable option even for the most cost-conscious users. In the enterprise, we will see a gradual shift towards new-generation 2.5-inch disks, mostly driven by power-consumption benefits. So, we will see a lot of heterogeneity in terms of direct-attached storage in the data center.
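To make the hybrid economics concrete, here is a back-of-the-envelope sketch of the blended cost per GB of a flash-plus-disk tier, using the ballpark figures above ($5/GB for low-end enterprise flash, a few cents per GB for magnetic disk). The function name and the 10% flash fraction are illustrative assumptions, not figures from the post.

```python
# Hypothetical cost model for a hybrid flash + magnetic-disk tier.
# Prices are the ballpark 2013 figures cited in the text; the flash
# fraction is an assumed working-set size.

def blended_cost_per_gb(flash_fraction, flash_cost_gb=5.0, disk_cost_gb=0.04):
    """Cost per usable GB when `flash_fraction` of capacity sits on flash."""
    return flash_fraction * flash_cost_gb + (1 - flash_fraction) * disk_cost_gb

# A hybrid design keeping ~10% of capacity on flash for the active data:
print(round(blended_cost_per_gb(0.10), 3))  # 0.536 -> roughly $0.54/GB
```

The point of the arithmetic: a thin flash layer captures the hot data while magnetic disks keep the blended capacity cost an order of magnitude below an all-flash design.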

Hardware changes may fuel the emergence of SDS. However, the main ingredient behind SDS is, well, software. I see four main architectural aspects that will define every SDS product. I anticipate a number of products in 2013 with innovative solutions in these areas.

Distributed architectures. Distributed software for clustering and data redundancy over the network provides the foundation of any SDS system. It is a prerequisite for virtually aggregating storage resources that are dispersed across the data center. It also makes the notion of planned storage downtime as obsolete as waiting until Saturday morning to go shopping for music.
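The core idea of network-level redundancy can be sketched in a few lines: keep N copies of each object on distinct servers, so any one host can be serviced without an outage. This is a toy illustration, not any vendor's actual placement algorithm; the host names and the hash are invented for the example.

```python
# Toy replica placement: pick `copies` distinct hosts per object,
# spreading placements with a simple deterministic hash. Purely
# illustrative -- real SDS systems also weigh capacity, load, and
# fault domains.
import itertools

def place_replicas(obj_id, hosts, copies=3):
    """Choose `copies` distinct hosts for an object's replicas."""
    if copies > len(hosts):
        raise ValueError("not enough hosts for the requested redundancy")
    start = sum(obj_id.encode()) % len(hosts)  # deterministic stand-in for a hash
    ring = itertools.islice(itertools.cycle(hosts), start, start + copies)
    return list(ring)

print(place_replicas("vmdk-42", ["esx1", "esx2", "esx3", "esx4"]))
# ['esx2', 'esx3', 'esx4'] -- losing any one host still leaves two live copies
```

With placements like this, "planned downtime" reduces to draining one host while its replicas keep serving.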

Hardware heterogeneity. Every existing enterprise-class storage system includes multiple tiers of storage. In the big scheme of things, even all-flash disk arrays (the big craze of 2012) are yet another tier of storage in a data center. I am not talking about that sort of tiering here. By definition, an SDS platform has to incorporate a variety of storage devices of different generations, with widely varying performance and reliability characteristics, as they are found in a typical data center. The value of an SDS platform is in delivering predictable quality of service for the workloads running on such widely heterogeneous hardware. Which brings me to my next point…

Predictable quality of service. Today, storage quality of service is defined in the context of specific deployments (vendor and array model) based on detailed knowledge of the hardware characteristics and the vendors’ recommendations. This approach is useless in the brave new world of SDS platforms with heterogeneous hardware. Creating a RAID-5 set over five spindles says nothing about availability or performance if one has no idea of the characteristics of those spindles, or whether they are spindles at all! The industry needs storage quality-of-service abstractions that apply across vendors and platform implementations. I realize this may sound too good to be true. However, any SDS product that supports heterogeneity will have to find some solution for defining and enforcing quality of service across different hardware. Perhaps some newcomers will surprise us with their innovation in this space.
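To illustrate what a hardware-independent QoS abstraction might look like, here is a minimal sketch: a policy states requirements (failures to tolerate, minimum IOPS, capacity) without naming any array or RAID level, and a placement check matches it against whatever devices the platform discovered. The policy vocabulary, device list, and function names are all hypothetical, invented for this example.

```python
# A sketch of hardware-independent storage QoS: the policy speaks in
# requirements, not in vendor or RAID terms. All names and numbers
# here are illustrative assumptions.

POLICY = {"failures_to_tolerate": 1, "min_iops": 2000, "capacity_gb": 100}

DEVICES = [
    {"name": "ssd-a", "iops": 40000, "free_gb": 200},
    {"name": "hdd-b", "iops": 150,   "free_gb": 2000},
    {"name": "ssd-c", "iops": 35000, "free_gb": 150},
]

def can_place(policy, devices):
    """Return devices that individually meet the per-replica demands,
    or None if there are too few for the requested redundancy.
    A real placement engine would also reason about fault domains."""
    copies = policy["failures_to_tolerate"] + 1
    fits = [d["name"] for d in devices
            if d["iops"] >= policy["min_iops"]
            and d["free_gb"] >= policy["capacity_gb"]]
    return fits if len(fits) >= copies else None

print(can_place(POLICY, DEVICES))  # ['ssd-a', 'ssd-c']
```

The same policy stays meaningful whether the devices behind it are spindles, flash, or something not yet invented; only the discovered capabilities change.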

Management at scale. Over the last couple of years, a number of new storage products (mainly by startups) have focused on managing storage at the granularity of the virtual machine. This trend will be boosted further, in the future, by VMware’s Virtual Volumes initiative. So, how does one provision and manage 10,000 virtual machines, with possibly many different combinations of quality of service? This is another area ripe for disruption. Products that come up with an intuitive way to provision, monitor, anticipate and adjust the properties of thousands of storage objects will have a competitive advantage going forward. Existing products already show promising innovation in terms of user experience. As these products mature and new ones emerge in 2013, managing the storage of virtual machines, as opposed to LUNs and file systems, will become even more pervasive. Feel free to share your views on the software-defined storage space in 2013.
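One way to make 10,000 per-VM storage objects manageable is to stop tracking them individually and instead group them by their assigned policy, reporting only the outliers. The sketch below illustrates that idea with a compliance report; the policy names, the IOPS figures, and the compliance rule are hypothetical.

```python
# Sketch of policy-grouped management at scale: summarize thousands of
# VM storage objects per policy and surface only the non-compliant ones.
# Policy names and thresholds are invented for illustration.
from collections import defaultdict

def compliance_report(vms):
    """vms: dicts with 'name', 'policy', and observed 'iops'.
    Returns per-policy counts of compliant VMs plus the violators."""
    report = defaultdict(lambda: {"ok": 0, "violating": []})
    for vm in vms:
        entry = report[vm["policy"]["name"]]
        if vm["iops"] >= vm["policy"]["min_iops"]:
            entry["ok"] += 1
        else:
            entry["violating"].append(vm["name"])
    return dict(report)

gold = {"name": "gold", "min_iops": 1000}
vms = [{"name": f"vm-{i}", "policy": gold, "iops": 500 + i * 300}
       for i in range(5)]
print(compliance_report(vms))
# {'gold': {'ok': 3, 'violating': ['vm-0', 'vm-1']}}
```

An administrator then acts on a handful of policies and a short violation list, not on ten thousand LUNs.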

