These are exciting times for those of us with a passion for infrastructure. The unstoppable tide of virtualization, having covered servers, is now crashing over networking and storage as we continue towards a fully automated data center. With all of these developments in infrastructure, it’s very easy to lose sight of the purpose of that infrastructure – to run applications!
But running applications can be challenging. Why? Each application has to be purchased or written, architected, integrated, and tested. And if it’s business critical, it will be tested again and again. Then it needs to be installed, monitored, managed, and regularly backed up. You need to test those backups regularly to make sure they are restorable. Disaster recovery plans are also needed. And if you are really dedicated, you will practice actual failover between sites rather than settling for table-top exercises. As if this isn’t enough, each application will need people with the skills to manage it and processes for managing its lifecycle, including patching and upgrading.
Phew! While managing one application is complicated enough, think about how many applications a typical enterprise runs. Some have hundreds, some thousands! Each of them needs to be individually purchased, architected, integrated, and so on. It is this large number of applications, and their diversity, that drives complexity in IT, and thus cost. And it is diversity that has made automated management so hard: diversity of applications, diversity of infrastructure, and all of it changing frequently.
Wouldn’t it be great to have just one application or pattern? Minimizing the number of patterns is what allows cloud providers to achieve genuine economies of scale, or what we could think of as economies of simplicity. Amazon, Google, eBay and Facebook all minimize the number of patterns they deploy, and in so doing are able to massively automate the management of their applications. This automation improves reliability, reduces cost, and enables agility: agility that can support rapid provisioning of potentially tens of thousands of new application instances a week.
Enterprises are unable to reduce the number of applications they manage down to single digits, but they can certainly source some of their applications as software-as-a-service from the cloud. But what about the rest? Well, it turns out that if we make all of the applications look the same from a management perspective, then we can get most, if not all, of the benefits of simplicity and automation. And it is this principle that lies at the heart of the software-defined data center.
From my perspective, the software-defined data center is about two verbs: virtualize and automate. Virtualization separates applications from the physical infrastructure, placing them in simple containers (virtual machines or virtual data centers). With the applications in containers, you can isolate them from each other. You can move them from low-capacity hardware to high-capacity hardware, and back. You can move them from a failing machine to a healthy one. You can move them from a machine that needs maintenance to one that does not. You can move or replicate them across data centers for business continuity. You can move them out to the cloud to burst or to test. Separating applications from the infrastructure is what allows all of this, and virtualization is the means of achieving it. Placing your applications in these containers is the key to simplification.
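To make this concrete, here is a minimal sketch of what “move them from a failing machine to a healthy one” can look like when driven through an API rather than by hand. It uses pyVmomi, VMware’s open-source Python SDK for the vSphere API; the vCenter address, credentials, VM name, and host name are all placeholders, and error handling is omitted for brevity.

```python
# Sketch only: live-migrate a VM to another host via the vSphere API.
# Endpoint, credentials, and object names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Look up a managed object (VM, host, ...) by inventory name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
    target = find_by_name(content, vim.HostSystem, "esxi-02.example.com")

    # Relocate the container (the VM), not the application inside it.
    WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(host=target)))
finally:
    Disconnect(si)
```

The application never knows it moved. Drive the same call from policy instead of a keyboard, and “evacuate the failing machine” becomes an automated response rather than a ticket.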
VMware has spent the last 15 years separating the application from the server. Now we are separating it from the network and from storage, allowing us to put entire distributed applications into simple containers that can be automatically managed. Once applications are inside these containers, they all look basically the same from the perspective of day-to-day management operations: provisioning, increasing or decreasing capacity, and maintaining availability. So by automating the lifecycle of the containers once, we effectively automate the majority of the lifecycle of all virtualized applications. This is the really big win! In fact, we can use statistical machine learning and big data to learn the good and bad behavior of the containers, and thus, by inference, of the applications, without truly having to understand the applications themselves. This is the reasoning behind products like vCenter Operations and Log Insight. All of this in aggregate allows us to automatically manage the lifecycle and quality of service of containerized applications.
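As a toy illustration of that idea (and only an illustration; products like vCenter Operations use far more sophisticated models), consider flagging a container whose latest metric reading falls well outside its own history. The CPU numbers and the three-sigma threshold below are invented for the example:

```python
# Toy sketch: learn a container's "normal" from its own metrics,
# then flag outliers, without understanding the application inside.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    away from the container's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Hourly CPU utilization (%) for one VM over the past day (made up).
cpu_history = [22, 25, 24, 23, 26, 27, 25, 24, 23, 26, 25, 24,
               23, 25, 26, 24, 27, 25, 23, 24, 26, 25, 24, 23]

print(is_anomalous(cpu_history, 25))   # False: behaving normally
print(is_anomalous(cpu_history, 91))   # True: investigate this container
```

Because the signal comes from the container rather than from inside the application, the same detector applies to any virtualized workload, which is exactly the point.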
VMware has led the charge in server virtualization, and with the technology we’re introducing at VMworld today, our customers can unshackle their applications from networking and storage and automate their management. This is the purpose of the software-defined data center. In fact, when I think about VMware’s broader mission, it is all about applications. Our goal is to enable our customers to deliver and consume their applications without regard to the underlying infrastructure.
- A software-defined data center enables our customers to deliver the right applications, with the right service levels, at the right price, flexibly, safely, securely, and in compliance.
- Our hybrid cloud strategy gives customers a choice of where they run their applications: within their software-defined data center, extending out to the public cloud (with vCloud Hybrid Service), or in a cloud run by one of our service provider partners.
- Our End-User Computing strategy is about enabling our customers to consume applications in a safe, secure and compliant way. And users can consume these applications at any time, any place and on the device of their choice.
It’s all about the delivery and consumption of applications.