Software-Defined Datacenters, Clouds & PuppetConf 2012

Just over a week ago I had the pleasure of speaking at PuppetConf 2012. PuppetConf is a great conference focused, somewhat unsurprisingly, on Puppet, a declarative, model-driven framework for automating systems and IT management. The conference brings together a community of pretty hard-core systems admins from the DevOps world and beyond. Automated management is clearly a key ingredient in building software-defined datacenters and Clouds, and something that we, at VMware, have more than a passing interest in.
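For anyone who hasn't played with Puppet, the declarative, model-driven part is easy to illustrate. Rather than scripting the steps to install and start something, you describe the state you want and let Puppet converge the system to it. The sketch below is purely illustrative rather than taken from any real manifest; the package and service names (NTP on a Red Hat-style box) are my assumptions.

    # Declare the desired state; Puppet works out the steps and keeps
    # the system converged on every run.
    package { 'ntp':
      ensure => installed,
    }

    service { 'ntpd':
      ensure  => running,
      enable  => true,
      require => Package['ntp'],   # only manage the service once the package is there
    }

Run repeatedly, the same manifest is a no-op when the system already matches the model, which is exactly the property that makes automation sustainable at scale.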

Preparing for it made me think. A lot! I was after a prism of simplicity, if you will: a way of capturing the Cloud and software-defined datacenter concepts simply enough that you can understand how we got to where we are, and then easily extrapolate. I like simple ideas. For me the search for simplicity is at the heart of everything. You can build highly functional and, almost inevitably, complicated systems using simple components. And by understanding simple components and how they are [simply] combined, you can understand complicated systems. If you can’t understand or express something in simple terms, then it is probably too complicated, and you are doing something wrong!

As I mentioned in an earlier post, you can view the emergence of Cloud systems through this prism, splitting the overarching trend into two simple, interacting threads: the evolution of once-monolithic applications into ever more disaggregated and distributed services, and the evolution of infrastructure toward an ever more distributed fabric of resources connected by high-speed, low-latency networking. These threads are long-lived, but what has happened on the infrastructure side of the house in the last 5 to 10 years is quite profound.

Firstly, the wholesale adoption of Intel x86-based servers is making the datacenter much more homogeneous from an infrastructure perspective. Whilst this adoption was driven by cost, the net effect of greater homogeneity is that management, and especially automation, becomes much easier. The smaller the number of types of things, and the slower their rate of change, the easier they are to manage, and the more effective and sustainable any automation of that management is.

Secondly, the pervasive adoption of server virtualization is separating the application from the underlying infrastructure. This separation yields simpler resource allocation/isolation and application mobility. Whilst the adoption of server virtualization was primarily driven by the desire for greater efficiency, its most profound impacts are now being felt as a consequence of this mobility. Once you have mobility, you can move workloads to the best place for them to run at any given time. You can instantly provision them, move them from low capacity to high capacity as needed, move them off hardware that needs maintenance with no interruptions, move them from one location to another for business continuity, and move them from within your business to the outside (hybrid Clouds) for burst capacity, or vice versa when bringing new applications in house.

But there is a fly in the ointment! Not everything has been effectively virtualized yet. Whilst the application – encapsulated within a virtual machine, together with its operating system – has been liberated from its bonds to the physical server, constraints around storage and, in particular, networking remain. Software-defined networking (SDN) is effectively addressing this, further decoupling the application from the physical infrastructure and increasing mobility. It is for this reason that we purchased Nicira.

But what server virtualization and SDN combined show us is another thread: that of separating the infrastructure operating software from the physical infrastructure. The hypervisor does this for servers, and SDN control software does this for the network. This, together with the ongoing evolution of storage virtualization, is what we call the software-defined datacenter. We (as an industry, and VMware specifically) are effectively creating a layer of software, a meta-operating system, between the applications and the physical infrastructure. This meta-OS will ultimately hide the complexity of the underlying infrastructure from the applications and those who manage them, automatically placing applications on the right pieces of infrastructure, at the right locations, to deliver the right quality of service, at the right price, in a way that is flexible, safe, secure and compliant.

The software-defined datacenter provides the technology foundations for the Cloud, and provides the capabilities that enable the Cloud business model of self-service, instantly provisioned, pay-as-you-go, elastic IT. We are still in the early days of the meta-OS for the software-defined datacenter, but it will change the management of infrastructure, making it more or less fully automated and something that the average business, and the average IT admin, will no longer care about. Fifteen to twenty years ago, people cared intimately about the operating system scheduler and how it mapped application processes and threads onto processors. Today, in the vast majority of cases, no one cares. The operating system simply does it for you. Similarly, we are rapidly heading towards a point in time where no one will care which server any given application is running on, or how those servers are connected. The meta-operating system for the software-defined datacenter will take care of this for you.

So what about applications and PuppetConf?  How do they fit into this?

Well, if it is simplicity that enables better management and scalable, sustainable automation, then it is the virtual machine that provides that simplicity for managing applications from a day-to-day perspective. Once an application [instance] is within a virtual machine (VM) it looks much like any other application in terms of most daily operations – provisioning, resource and performance management, availability management and so forth. For most operations, managing the VM is a good enough proxy for managing the application instance itself. Thus massive automation of the management of essentially homogeneous VMs, something that is relatively simple, results in the massively automated management of heterogeneous applications – something that is hard.

The next step in this journey is to be able to move from individual script-based configuration of [relatively] simple OS and application stacks, to managing the configuration and topologies of complex, distributed applications as a whole, improving delivery and update times. This is why we introduced vFabric Application Director, partnered with Puppet Labs and invested in them to complete the picture. And it is why I was so excited to get to spend some time with the Puppeteers. That, and the fact that such things are just plain cool. The level of engagement of the Puppet community is energizing and infectious, and participating was a lot of fun. Thanks to Puppet Labs for inviting me, and to the Puppet community for listening and for the feedback!
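As a postscript for those still living in per-box scripts, here is a rough sketch of the kind of shift I mean: a hypothetical two-tier application described as a model, with each VM simply declaring its role. All of the class, package and node names below are made up for illustration, not taken from any product or real deployment.

    # Hypothetical two-tier application, described as a model rather than a script.
    class myapp::web {
      package { 'httpd': ensure => installed }
      service { 'httpd':
        ensure  => running,
        enable  => true,
        require => Package['httpd'],
      }
    }

    class myapp::db {
      package { 'postgresql-server': ensure => installed }
      service { 'postgresql':
        ensure  => running,
        enable  => true,
        require => Package['postgresql-server'],
      }
    }

    # Each VM the platform provisions simply declares its role; Puppet converges
    # it to the described state, and an update becomes a model change rather
    # than an edit to a pile of scripts.
    node 'web01.example.com' { include myapp::web }
    node 'db01.example.com'  { include myapp::db }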
