
Taming CAP with Policy-based Management

Prof. Eric Brewer at UC Berkeley proposed the CAP theorem in the late 1990s, presenting it formally in his PODC 2000 keynote. It has since become the poster child for design trade-off discussions among architects of next-generation scale-out storage and data management systems. Crudely stated, the theorem says that in the event of a network partition (P), a system can preserve either Consistency (C) or Availability (A), but not both. The theorem is often mistakenly generalized into a black-and-white "2 of 3" rule, a misreading that Prof. Brewer addressed in a well-written 2012 article, "CAP Twelve Years Later: How the 'Rules' Have Changed".
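To make the partition-time choice concrete, here is a minimal sketch (in Python, with invented names; no real system is this simple) of a replica configured for consistency versus availability:

```python
class Replica:
    def __init__(self, mode: str):
        self.mode = mode          # "CP": prefer consistency; "AP": prefer availability
        self.partitioned = False  # True when peer replicas are unreachable
        self.data = {}

    def write(self, key, value):
        if self.partitioned and self.mode == "CP":
            # Consistency over availability: refuse the write rather than
            # let replicas diverge while the partition lasts.
            raise RuntimeError("unavailable: cannot reach a quorum of peers")
        # Availability over consistency: accept the write locally and
        # reconcile divergent replicas after the partition heals.
        self.data[key] = value

replica = Replica(mode="AP")
replica.partitioned = True
replica.write("k", "v")  # succeeds, but other replicas may now serve stale data
```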

While CAP popularized the concept of trade-offs in distributed data management, C, A, and P are not the only dimensions architects must weigh when choosing the building blocks of a scale-out data management system. Prof. Daniel Abadi proposed a CAP refinement called PACELC, which highlights the latency-consistency trade-off that remains even in the absence of partitions (if Partitioned, trade Availability against Consistency; Else, trade Latency against Consistency). Holistically, there are many more dimensions, including latency, throughput, scaling, locality, durability, coherence, consistency, and availability. I attempted to create a comprehensive list for my tutorial session at the USENIX FAST '12 conference, and have kept adding to it ever since (a good topic for a future blog post!).
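The "else" branch of PACELC is easy to illustrate. The following sketch (illustrative data and names, not any particular system) contrasts a latency-optimized read from the nearest replica with a consistency-optimized quorum read:

```python
# Illustrative replicas: values and latencies are made up. Version 2 is
# newer than version 1; only some replicas have seen it yet.
replicas = [
    {"version": 2, "latency_ms": 1},   # nearest replica, happens to be fresh
    {"version": 1, "latency_ms": 40},  # remote replica, stale
    {"version": 2, "latency_ms": 55},  # remote replica
]

def read(prefer: str) -> int:
    by_latency = sorted(replicas, key=lambda r: r["latency_ms"])
    if prefer == "latency":
        # Fast path: answer from the nearest replica; it may be stale.
        return by_latency[0]["version"]
    # Consistent path: wait for a majority (2 of 3) and return the newest
    # version seen, paying the slowest quorum member's latency (40 ms here).
    quorum = by_latency[:2]
    return max(r["version"] for r in quorum)

print(read("latency"))      # low latency, freshness not guaranteed
print(read("consistency"))  # higher latency, reflects a majority
```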

Web 2.0 companies were the early adopters of distributed scale-out architectures, and several of their seminal architecture papers made Big Data accessible to the broader industry via the open-source Hadoop ecosystem. Their architects were addressing a well-defined combination of four inputs (captured as a code sketch after the list):

1) Application data model

2) Workload access characteristics

3) Deployment infrastructure

4) QoS goals
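One way to think about this combination is as a design profile that gets hard-coded into the system. The sketch below is hypothetical (the field names are mine, not from any paper), instantiated with the GFS characteristics discussed in the next paragraph:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignProfile:
    """Hypothetical profile of the four design inputs listed above."""
    data_model: str       # 1) application data model
    workload: str         # 2) workload access characteristics
    infrastructure: str   # 3) deployment infrastructure
    qos_goals: str        # 4) QoS goals

# Filled in with the GFS characteristics discussed next:
gfs_profile = DesignProfile(
    data_model="non-POSIX, at-least-once append semantics",
    workload="non-blocking appends, large sequential reads",
    infrastructure="disk-based servers, conventional network topology",
    qos_goals="throughput over latency; consistency over availability",
)
print(gfs_profile)
```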

For instance, the seminal Google File System (GFS) was developed for a non-POSIX data model with at-least-once update semantics (instead of exactly-once semantics or byte-ordered replica fidelity). The key workload characteristics were non-blocking appends and large sequential reads. The deployment infrastructure was primarily disk-based with conventional network topologies. The primary QoS criteria were throughput over latency, high durability, and consistency over availability. Given that these systems were co-developed with the application designers, there was never a need to design "knobs" to customize the behavior of the storage layer. For example, what if your application, in addition to throughput, also wanted to optimize latency through aggressive read caching and write buffering? How do you express this to the filesystem?
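There was simply no interface for it. A hypothetical "knobs" API might have looked something like the sketch below; every name here (HintedFS, IOHint) is invented for illustration and corresponds to no real filesystem:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IOHint:
    """Hypothetical per-open hints an application could pass down."""
    aggressive_read_cache: bool = False  # prefetch and cache reads near the app
    write_buffering: bool = False        # absorb writes in memory, flush lazily
    optimize_for: str = "throughput"     # or "latency"

class HintedFS:
    """Hypothetical filesystem facade that accepts application hints."""
    def open(self, path: str, mode: str, hint: Optional[IOHint] = None):
        # A real implementation would translate the hint into cache sizing,
        # prefetch depth, and write-back policy; this sketch just records it.
        print(f"open {path} ({mode}) with hint: {hint}")

fs = HintedFS()
fs.open("/logs/events", "a",
        hint=IOHint(write_buffering=True, optimize_for="latency"))
```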

Fast-forward to Software-Defined Storage (SDS), which is transforming the storage architecture and management paradigm deployed within enterprises and private data centers. SDS leverages distributed scale-out principles, but is designed to support a wide spectrum of application data models, workload access mixes, hardware heterogeneity, and QoS goals. The centerpiece of the SDS architecture is Policy-based Management. Broadly speaking, policy-based management defines an out-of-band channel through which applications dynamically express their requirements to the infrastructure layer. For example, a business-critical application may define policies that configure a higher number of redundant copies, or reserve storage space to ensure there are no out-of-space exceptions for provisioned volumes. In contrast, another application may trade a lower number of redundant copies for higher performance, and use thin provisioning to optimize overall space utilization. Support for policy-based management cannot be an afterthought retrofitted onto a vanilla scale-out storage design; it needs to be baked into the design from inception. VMware's vSAN realizes the SDS vision, with foundations for a rich policy-based management model that supports both existing and emerging enterprise applications.
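As a rough illustration of the two policies described above (the attribute names and the provisioning call are hypothetical, not vSAN's actual policy syntax):

```python
# Illustrative per-application storage policies; the attribute names are
# hypothetical, loosely modeled on common SDS policy capabilities.
business_critical_policy = {
    "redundant_copies": 3,         # more replicas for durability and availability
    "space_reservation_pct": 100,  # fully reserve capacity: no out-of-space surprises
}

space_efficient_policy = {
    "redundant_copies": 1,         # trade redundancy for write performance
    "space_reservation_pct": 0,    # thin provisioning for space efficiency
}

def provision_volume(name: str, size_gb: int, policy: dict) -> None:
    """Hypothetical control-plane call: the SDS layer translates the policy
    into placement, replication, and reservation decisions out-of-band."""
    print(f"provisioning {name} ({size_gb} GB) with policy {policy}")

provision_volume("oltp-db", 500, business_critical_policy)
provision_volume("analytics-scratch", 2000, space_efficient_policy)
```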

To summarize, enterprise deployment models are application-centric and require the IT infrastructure (including storage) to align its QoS trade-offs on a per-application basis. Policy-based management is the critical component in enterprise-class SDS solutions that maps observable storage behavior to application requirements. This centerpiece is rarely highlighted in Web 2.0 architectures, where the application designers were co-designing the storage layer. With policy-based management, customizing application trade-offs is no longer a one-time, hardened decision, but a dynamic, per-application one!
