
Better Together – Containers are a Natural Part of the Software-Defined Data Center

Today is the first day of VMworld 2014! There’s a lot to be excited about this year, but for me, it is our work around containers that particularly engages me, for two very specific reasons.

Firstly, containers are very dear to my heart. I have worked on Solaris Containers since their inception in 2000, during my tenure at Sun Microsystems. I’ve been working on these abstractions and constructs, and the technology and business problems they allow us to solve, for a very long time!

Secondly, and much more importantly, this work reinforces, and enables us to execute on, VMware’s primary purpose: to enable our customers to deliver and consume their enterprise applications.

Our collaborations with Docker, Google and Pivotal are all about enabling our customers to get the benefits of containers, whilst taking advantage of the unique capabilities of VMware’s Software-Defined Data Center approach. All without having to change the way they do things – Containers without Compromise. The benefits are ultimately about minimizing time-to-value for new applications and about deployment at scale, with minimal disruption, but with all of the important capabilities our customers associate with VMware – isolation, trust, efficiency, agility, simplified management, on- and off-premises, etc.

Read on to understand how it all comes together…

What Customers Tell Us

In my experience, our customers care about minimizing the amount of time it takes to get value from their applications. They care about simplifying application management, and they would prefer not to have to care about infrastructure at all.

A CIO’s priorities in mature businesses, in order, are:

  • Don’t let the important stuff break!
  • Manage costs
  • Enable the business to compete:
    • Turn data into actionable information. Provide business insight through big data, real-time streaming analytics and simulation.
    • Enable developers to deliver new differentiating applications to market as quickly as possible.

This is what I tell our engineers when they ask me what problems our customers have to solve and how they think:

  • All customers, and especially large enterprises, will carry a legacy of applications with them (thousands in the case of large enterprises) so they have to manage these as well as their shiny new ones.
  • Managing cost will always be an issue, whether it is CAPEX or OPEX. We now have choices about how to spread these, thanks to Cloud. Different enterprises with different models will prefer different spreads.
  • Isolating certain sets of applications, particularly legacy and/or bought-in applications, will always be important.
  • They’d like one operational model for both development and for production, i.e. as few different ways of doing things as possible.
  • In general, they will not be too bothered about small differences in comparable product capabilities if they can get the simplicity that results from a single model for managing their applications.
  • Although they will tie themselves to a vendor when a technology delivers true differentiating value and business advantage, they prefer enough vendors in a space to avoid lock-in, drive competition, and consequently drive costs down.
  • But not too many, either: each additional vendor means another relationship to manage, and more people with more diverse admin skills to employ.
  • Cloud has reset expectations such that they want technology to just work, install itself, update itself, and be perceived as commodity.

VMware’s Strategy

Figure 1 – VMware’s Strategy

VMware’s strategy is simple. We break it down into three areas of focus around the delivery and consumption of applications (or services, if you prefer that term; I will use the two synonymously):

  • Software-Defined Data Center (SDDC) – Our SDDC strategy is about creating a layer of software that enables our customers to automatically deploy and manage their applications, delivering the right SLA, at the right price, flexibly, safely and securely. We’ll explore this a little more below.
  • Hybrid Cloud – Our Hybrid Cloud strategy, through VMware vCloud Air, is about enabling our customers to choose where they run their applications – their on-premises SDDCs, VMware’s SDDCs via vCloud Air, or the SDDCs of one of our more than 3,800 vCloud Air Network service provider partners. All with the simplicity of a single way of doing things, or more formally with one operational model.
  • End-User Computing – Our End-User Computing strategy is about enabling customers to consume their applications, any time, any place and on the devices of their choice, safely, and securely, either as a service with VMware Horizon DaaS or on-premises from their own SDDCs via VMware Horizon 6.

Containerization In The Software-Defined Data Center

Figure 2 – Containerization in the compute domain

Before looking at how the Software-Defined Data Center is made real, I’d like to note that the SDDC is a generalized platform. Our goal is for all classes of applications to run on it and benefit from a single management paradigm and the capabilities of the platform (i.e. efficiency, flexibility, security, availability, etc.). SDDCs run everything from SAP, HANA and Oracle RAC, to scale-out Third Platform applications, to Hadoop and more.

The Software-Defined Data Center is realized through separating the application from infrastructure, “containerizing” it in some sense (Virtual Machines, Virtual Networks, LXC), and then automating the management of those containers. By automating the management of the containers, we effectively automate, by proxy, the management of the applications inside them. This is how you scale management. It’s a lot easier to automate the lifecycle of a small number of types of container than it is to automate the lifecycle of hundreds or thousands of individual applications or services. Indeed, “containerization” in the form of virtualization has already allowed us, and our hundreds of thousands of customers, to do this without having to rewrite the applications. And now, additional layered, container-focused technologies, including Linux Containers (LXC, et al), Docker, Google’s Kubernetes and Pivotal Cloud Foundry, are providing new opportunities around rapid, developer-driven application deployment at scale.
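As a concrete sketch of this layered, application-level containerization, a minimal Dockerfile packages an application together with its dependencies so that it can run on any host with a container runtime. The base image, file names and port below are illustrative assumptions, not taken from any particular product:

```dockerfile
# Illustrative sketch only: base image, paths and port are assumptions.
FROM ubuntu:14.04

# Install the application's runtime dependencies inside the image,
# so the host needs nothing beyond a container runtime.
RUN apt-get update && apt-get install -y python

# Copy the application itself into the image.
COPY app.py /opt/app/app.py

# The image, not the host, defines how the application starts.
EXPOSE 8080
CMD ["python", "/opt/app/app.py"]
```

Once built with `docker build`, the resulting image can be run unchanged on a laptop, inside a VM, or in the cloud – which is exactly the separation of application from infrastructure described above.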

Containers without Compromise

So why should the capabilities of containers and technologies such as Docker, Google’s Kubernetes, and Pivotal Cloud Foundry be best leveraged within a Software-Defined Data Center? And how?

Enterprises will always develop new applications that deliver new value to their customers or stakeholders. Many of these new applications are what we call cloud-native applications – they scale out, they make different (usually fewer) assumptions about the infrastructure they run on, and they are trending towards microservices-based architectures. These applications are particularly well suited to containerization at the operating system level, i.e. using technologies such as LXC. They can be developed, deployed and updated quickly and frequently. That’s why Docker is gaining popularity in this space as an application delivery mechanism for these containers, separating applications from infrastructure and enabling application portability. Google’s Kubernetes is effective at managing containers at scale. It does so not by managing individual containers, but by managing collections of containers and/or VMs that, together, deliver a service.
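Kubernetes expresses this idea declaratively: rather than addressing containers one at a time, you describe a collection and let the system maintain it. A minimal sketch of a replication controller follows; the names, image and replica count here are illustrative assumptions, not taken from the text:

```yaml
# Illustrative sketch only: names, image and replica count are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-frontend
spec:
  replicas: 3                 # manage the collection, not each container
  selector:
    app: web-frontend
  template:                   # the pod template Kubernetes schedules
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: myorg/web:1.0
        ports:
        - containerPort: 8080
```

Given this manifest, Kubernetes continuously schedules, restarts and replaces pods as needed to keep three copies of the service running – collection-level management rather than per-container management.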

As I mentioned before, many enterprises both curate a very large number of existing applications, and develop new applications. Historically, this has resulted in an irreconcilable tension between the needs of the application developer and those who operate the applications and the infrastructure they run on.

The developer needs to minimize the length of time it takes to build, test, deploy and iterate new applications.  And they want to take advantage of the capabilities offered by Docker, Kubernetes and Cloud Foundry in doing so.

The operators of applications, or at least of the infrastructure they run on, demand performance, throughput, availability, stability, consistency, accountability, and security, and all with no risk, and at the right price. They would like to take advantage of the capabilities of their existing virtualized infrastructure. They’d like to run their applications inside VMs, due to the inherently greater security and isolation properties, as well as supported configuration diversity. They’d like as small a number of pools of infrastructure as possible, to maximize efficiency. And they would like continuity. Why change a successful model? They’d rather extend it, as they operate new applications.

We believe that enterprises would prefer to have one means of deploying and managing their applications, irrespective of the type of application.  Bringing container technologies – Docker and Kubernetes – into the VMware ecosystem as first class citizens is the way to do this. They are better together.

And the how?

We’re committing to working with Docker and Google in two dimensions:

  • The first is in making our technologies work seamlessly together.
  • The second is in contributing VMware and Pivotal experience and code to the open source projects that Docker and Kubernetes are built on. For example, the Pivotal Cloud Foundry team is working with VMware and Docker to enhance the Docker libcontainer project with capabilities from Warden, a Linux Container technology originally developed at VMware for Cloud Foundry.


The separation of applications from underlying infrastructure, “containerization” if you will, is the great technology enabler for application management, ranging from virtual machines to operating-system-level containers and beyond. By decoupling applications from the underlying layers of infrastructure, we can deploy them incredibly quickly, start them up, move from low capacity to higher capacity seamlessly, replicate them for business continuity, and move them backwards and forwards, into and out of the Cloud. By combining the benefits of the various layered containerization technologies with automation and management tools, we now have the opportunity to enable our customers to get what they want – zero friction in the creation and deployment of new applications, and simple, automated delivery of new and existing applications, delivering the right service levels, at the right price.

From my personal perspective, containers, Docker, Kubernetes and Cloud Foundry are very cool. From a technical perspective, they bring new opportunities for managing applications. And managing applications is VMware’s business!


