Interop and the Software-Defined Datacenter
I’m in Las Vegas this week attending Interop, one of the largest IT conferences of the year. There are a lot of exciting things to talk about in IT right now, but one of the things I find most interesting is the opportunity for IT to move away from data centers and infrastructure defined by the devices and hardware that we buy, and towards what we are calling the “Software-Defined Datacenter.”
The Evolution of the Datacenter
Prior to the last couple of decades, the computing resources that supported how we do business were tightly tied to huge, specialized mainframes and the dedicated hardware built around them. Over the last two decades, we’ve moved to an environment where industry-standard hardware can run almost anything…and at a great price. Virtualization has augmented this capability, enabling efficient and flexible use of these powerful resources.
Virtualization starts by abstracting the logical view of servers from the physical hardware that implements them. After abstracting those compute resources, the next benefit comes from pooling the resources of many different servers, allowing automatic load balancing and higher levels of availability. We’re now in the phase where automation takes over, speeding up computing-related operations and improving their reliability. With our partners, we are able to do much the same thing with storage, aggregating different storage devices into pools that can be assigned to any VM. And now we’re seeing the same abstraction in the networking and security of our data centers. Virtual switches have been around for a few years, and now VXLAN and OpenFlow are abstracting the network and enabling more pooling and automation than ever before. With these last pieces of the datacenter moving toward a software-defined model, it is becoming entirely possible to have a fully “Software-Defined Datacenter.”
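To make the abstract-then-pool-then-automate progression concrete, here is a minimal sketch in Python. The names (`Host`, `ResourcePool`, `place_vm`) are invented for illustration and are not a real VMware API; the point is simply that once capacity is abstracted from individual boxes and pooled, placement decisions can be automated in software.

```python
# Illustrative sketch only -- Host and ResourcePool are hypothetical names,
# not a real product API.
from dataclasses import dataclass

@dataclass
class Host:
    """Abstracted view of one physical server's spare capacity."""
    name: str
    free_cpu_ghz: float
    free_mem_gb: float

class ResourcePool:
    """Pools the capacity of many hosts into one logical resource."""
    def __init__(self, hosts):
        self.hosts = hosts

    def total_free_mem_gb(self):
        # The pool presents aggregate capacity, not per-box capacity.
        return sum(h.free_mem_gb for h in self.hosts)

    def place_vm(self, cpu_ghz, mem_gb):
        """Automation step: pick the host with the most headroom that fits."""
        candidates = [h for h in self.hosts
                      if h.free_cpu_ghz >= cpu_ghz and h.free_mem_gb >= mem_gb]
        if not candidates:
            raise RuntimeError("no host in the pool can satisfy the request")
        host = max(candidates, key=lambda h: h.free_mem_gb)
        host.free_cpu_ghz -= cpu_ghz
        host.free_mem_gb -= mem_gb
        return host.name

pool = ResourcePool([Host("esx-01", 8.0, 32.0), Host("esx-02", 8.0, 64.0)])
print(pool.place_vm(2.0, 16.0))  # lands on the least-loaded host
```

A real scheduler weighs far more than free memory (CPU contention, affinity rules, availability domains), but the shape is the same: the caller asks the pool, not a specific machine.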
More than just a VM, a Virtual Datacenter
The Software-Defined Datacenter enables us to think even more broadly about the process of provisioning workloads. The initial phases of virtualization have made it very easy (and affordable!) to spin up virtual machines quite quickly. But deploying a workload into a production environment still involves many additional steps: its network identity must be created, monitoring probes installed, and security policies enforced.
In an ideal world, we no longer need to order specialized hardware, then hire a consultant to install it and program the device in its specialized language. Instead, we’ll simply define an application and all of the resources it needs, including all of its compute, storage, networking and security requirements, then group those together to create a logical application. There’s work ahead, but I see the Software-Defined Datacenter as enabling this dramatic simplification.
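What might such a logical application definition look like? The sketch below is purely hypothetical (the schema, field names, and the `provision` helper are invented for this example, not any shipping format), but it shows the idea: compute, storage, networking, and security are declared together in one definition, and software automation walks it to produce the provisioning steps.

```python
# Hypothetical declarative definition of a "logical application".
# Every field name here is invented for illustration.
app_blueprint = {
    "name": "web-tier",
    "compute":  {"vcpus": 4, "memory_gb": 16, "replicas": 3},
    "storage":  {"size_gb": 200, "tier": "ssd"},
    "network":  {"segment": "vxlan-5001", "load_balanced": True},
    "security": {"allow_inbound": [443], "isolation_group": "dmz"},
}

def provision(blueprint):
    """Walk one blueprint and return, in order, the steps that
    automation would carry out instead of a human operator."""
    steps = []
    for n in range(blueprint["compute"]["replicas"]):
        steps.append(f"create VM {blueprint['name']}-{n}")
    steps.append(f"attach {blueprint['storage']['size_gb']} GB "
                 f"{blueprint['storage']['tier']} volume per VM")
    steps.append(f"connect to segment {blueprint['network']['segment']}")
    for port in blueprint["security"]["allow_inbound"]:
        steps.append(f"open inbound port {port}")
    return steps

for step in provision(app_blueprint):
    print(step)
```

The important property is that the definition is data, not a sequence of manual device configurations: change `replicas` from 3 to 5 and the automation, not a consultant, absorbs the difference.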
When our infrastructure is not constrained by highly specialized hardware but is instead agile, flexible, and driven by software instructions, operations become simpler. I am also excited about the ability to bring this simplicity to more applications than ever before. VMware’s heritage is making existing applications work even better, but we’re also proving that new application types such as HPC, Hadoop, and latency-sensitive apps can run in this environment. One platform for all apps.
Over the coming months, we’ll hear a lot more about how VMware and many of its partners are working to help this transition along. We’re doing a lot of work to enable the Software-Defined Datacenter, where infrastructure makes businesses more flexible, agile and responsive to customers. After all, that’s the most important mission of IT.
Click here to watch Steve launch VMware’s vision for the Software-Defined Datacenter at Interop.