Today at VMworld 2014 I have the pleasure of co-presenting with Ben Golub, Docker CEO, on our joint container strategy. Our session, “SDDC3350 – VMware and Docker – Better Together,” will run on Monday from 5:30-6:30 and on Tuesday from 12:30-1:30.
For more insight into the work that we are doing with containers, I encourage you to take a look at the following posts:
- VMware + Containers = Containers without Compromise
- Docker Service Broker for Cloud Foundry
- Gartner Panel Reveals Stark Differences in Container Based PaaS Options
Combined architectures that leverage both containers and VMs are nothing new. Cloud Foundry Warden first supported this approach in 2011. In addition, Amazon EC2 first supported containers via LXC in 2010. The bottom line – this combined architecture isn’t some crazy new VMware approach – it’s an industry norm.
Docker CEO Ben Golub agrees:
“Organizations are rapidly adopting the Docker platform because it allows them to ship apps faster, whether to bare metal, virtualized data center, or public cloud infrastructures. For enterprises seeking to efficiently build, ship, and run enterprise apps, Docker and VMware can deliver the best of both worlds for developers and IT/operations teams. Joint customers will benefit from enabling IT to run and operationalize their Docker environments on their current virtual infrastructure and take advantage of existing management, compliance, networking and security processes and tools.”
Ben’s assessment is spot-on. Today, containers do a great job easing application and data portability, but there are many additional operational requirements that are eased by our software-defined data center (SDDC).
For example, the VMware vCloud Suite can extend the value of containers by providing the following:
- Multitenant security as well as separation of zones of trust in single-tenant environments
- Fault domain isolation (application and/or OS failures will only impact the single tenant running in the VM)
- Continuous application and data availability – applications and application platforms with no native high availability can leverage the native HA capabilities of the VM. Furthermore, our storage technologies such as vSAN ensure redundancy and data availability for any persistent data stored by a container
- Automated infrastructure operations (compute, network, storage, security) – the entire infrastructure service stack can be provisioned in seconds to minutes, depending on requirements
- Seamless integration with massive third party ecosystem – containers can leverage the hundreds of turnkey third party integrations offered through the VMware Solutions Exchange
- Lower TCO through a common management layer for both second- and third-platform applications
Finally, running containers on our SDDC architecture provides management and operational service stack portability. If an application is redeployed to a new environment, the entire operational service stack goes with it, along with the same tooling and third party integrations. Choices include vCloud Air or one of over 3,800 partner providers, a private data center, or even an outsourcer.
The VMworld session will also include a demonstration of Docker integration with vCloud Automation Center (vCAC) and vCenter Orchestrator (vCO) using the sample architecture shown below. Beyond provisioning and scaling applications, we will also show how easy it is to support application updates and continuous integration in this model using the VMware stack and Docker Hub.
Note that in addition to the physical deployment model, two container:VM virtual deployment models will be demonstrated – 1:1 and many:1. Ultimately the provisioning model is an operational detail that should be abstracted from developers via policy. That said, here are some of the benefits of each model. The 1:1 model provides:
- Stronger isolation and security (zoning, access control, logging, identity federation)
- Aligns containers with existing technical compliance mandates
- Easy integration with existing VM-based management and monitoring tools
- Easy integration with third party solutions that are VM-aware but not container-aware
- Combined with “Project Fargo” (detailed in the next section), the easy developer tool integration of containers in a very lightweight VM package, with all of the security, management, and portability benefits of the VM
Many:1 provides the same benefits as 1:1, and also includes:
- Greater application density
- A simple way to isolate applications within the same security zone or from a single tenant
- The potential to reduce development and test costs (developers could get a few VMs and then partition them as needed using containers)
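The notion that the provisioning model is a policy decision, abstracted away from developers, can be sketched as follows. This is purely illustrative; the workload attributes, thresholds, and function name are hypothetical assumptions, not part of any VMware or Docker API:

```python
# Hypothetical sketch: choosing a container:VM deployment model via policy.
# All attribute names and rules below are illustrative assumptions only.

def choose_deployment_model(workload: dict) -> str:
    """Return '1:1' or 'many:1' for a workload based on simple policy rules."""
    # Workloads under compliance mandates, or serving multiple tenants,
    # get the stronger isolation of a dedicated VM per container (1:1).
    if workload.get("compliance_mandated") or workload.get("multi_tenant"):
        return "1:1"
    # Workloads within a single security zone (e.g., dev/test) can share
    # a VM for greater density and lower cost (many:1).
    return "many:1"

print(choose_deployment_model({"compliance_mandated": True}))  # 1:1
print(choose_deployment_model({"env": "dev"}))                 # many:1
```

In practice such a policy would live in the provisioning layer (e.g., behind the automation tooling), so developer tools see only "give me a container," never the underlying packaging decision.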
Note that our “Project Fargo” technology, previewed at VMworld, largely negates the density rationale for layering multiple containers inside a single VM, since child VMs become nearly as lightweight as containers.
Introducing the Tech Preview of Project Fargo
In short, the Project Fargo technology provides a fast, scalable differential clone of a running VM.
This approach yields several benefits:
- Significantly reduces the startup time for child VMs (VMs available < 1s)
- Reduces the VM storage and memory footprint
Project Fargo uses a copy-on-write architecture similar to that of containers, meaning that if an application running in a child VM tries to change a shared OS file, a copy of the shared file is created and stored in the child VM. This way, all modifications made by the VM are isolated and unique only to it. Any newly created files that are saved by the VM would also be stored in the child VM and not in the parent.
Going forward, there are several potential use cases for this technology. For enterprise applications, imagine getting the speed of containers-based provisioning but maintaining all of the VM-level goodness that you count on (such as security isolation, infrastructure portability, and centralized management).
The technology is also a logical fit for virtual desktops, providing instant clones of running non-persistent desktops and instant application availability for application publishing (i.e., server-based computing) deployments.
As you can see, we are building a highly differentiated architecture to support the workloads that you have trusted to VMs for more than a decade, as well as emerging third platform applications.
In the case of Docker, developers can leverage their tools of choice, the Docker Engine, Docker Hub, and any of the more than 30,000 “Dockerized” applications, while the operations team can provide the VMware infrastructure to support Docker containers on the most efficient and flexible SDDC platform. This combined approach amounts to nirvana for both developers and operations teams, as both get to use their platforms of choice. We are committed to simplifying the delivery and operations of all applications, while providing the choice that enterprises demand. In the end, the package used by a Docker container (such as a VMware VM, LXC, or a combination of both) will likely be an operational detail that doesn’t concern developers. It is simply what DevOps teams select, based on a myriad of requirements as discussed earlier, and expose to developer tools through an API.
For those who could not attend the session, hopefully this provided a good overview of how VMware and Docker will continue to collaborate. What do you think about our efforts? We welcome your thoughts.