The VMware Office of the CTO (OCTO) is relentlessly focused on co-innovation, helping customers and partners succeed with their IT initiatives by designing purpose-built solutions that best fit their desired outcomes. I recently joined the OCTO Advanced Technology Group (ATG) as part of the Accelerated Co-Innovation Engineering (ACE) team, on the front lines of these efforts. While the team has been doing similar work for the last couple of years with a focus on professional services, our new charter is aimed at accelerating delivery of on-roadmap features in partnership with our customers, R&D business units, and the professional-services organization. The goal is to help organizations get fully supported capabilities in VMware products on the timelines their businesses require.
Why and how ACE fills the solution gaps
Have you ever failed to finish a puzzle because of a missing piece? Frustrating! Incomplete technology solutions can give you a similar feeling: a missing piece — perhaps a capability, product, or service — prevents you from realizing the full benefit and negates any feeling of accomplishment.
This is where ACE comes in. We work with customers to create a new piece to fit just right into their existing suite of VMware tools, eliminating the need to develop a solution in house or find a third-party solution (which usually provides only a near fit). These projects are one-time expenses: after development, there are no ongoing fees. Best of all, customers have direct input into the capabilities and architecture of the solutions that are delivered on an accelerated timeline.
The ACE team is unique. We work hand in hand with the customer, inviting feedback along the way to ensure that everything is going according to plan and that we are meeting their expectations. Within VMware, we can innovate at the speed today’s market requires, creating and releasing fully supported product enhancements and advanced technologies, without being constrained by product-release cycles.
The following are a few examples of our work with existing customers.
Use case #1: stranded resource management
Many companies are engaged in environmental, social, and governance (ESG) initiatives, with a focus on sustainability. They usually start by looking for CPUs, memory, and storage that see little use yet still incur unnecessary carbon emissions. These are called stranded resources, and organizations need a way to identify and release them.
Stranded resources may show up as things like “zombie workloads” — workloads that are still running but no longer serve a valuable purpose. An example might be a team that deploys a recent version of CentOS in a virtual machine (VM) but never gets around to deleting the VM and releasing its allocated resources. Zombies can make up 15–50% of workloads in a typical enterprise environment unless there’s a rigorous process in place to manage them.
Another common example is oversized VMs and containers with excessive CPU, RAM, or disk space. Perhaps these resources were allocated out of an abundance of caution but, in the end, weren’t all needed. Or maybe a team created a lightly used web server for marketing and assigned it 8 CPUs and 16 GB of RAM, when a 1-CPU, 2-GB VM could meet the requirement.
Resources can also be stranded by siloed capacity, such as when related workloads are scattered across separate clusters; often they could be consolidated into a single cluster. For example, maybe a marketing team needed extra capacity during a campaign but could safely loan out its spare capacity until the next one. Another scenario may involve a team that needs large overnight processing capacity but could share that capacity with another team during the day, when it would otherwise sit unused.
Yet another example of stranded resources is workloads that are idle at known times and could be scaled down or put to sleep entirely. With the nightly processing mentioned above, for example, if the capacity isn’t needed by another team during the day, those resources could be powered off after the job completes and restarted each evening before processing begins, saving power and reducing CO2 emissions.
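To make the detection heuristics above concrete, here is a minimal sketch of how zombie and oversized workloads might be flagged from utilization metrics. The data model and thresholds are purely illustrative assumptions; a real implementation would pull metrics from vCenter (for example, via an API client such as pyVmomi) and consider many more signals.

```python
from dataclasses import dataclass

@dataclass
class WorkloadStats:
    """Illustrative per-workload metrics over some sampling window."""
    name: str
    vcpus: int
    peak_cpu_pct: float   # peak CPU utilization in the window
    net_io_kbps: float    # average network throughput

def classify(w: WorkloadStats) -> str:
    """Rough classification of a workload's resource posture.

    Thresholds here are hypothetical examples; real tooling would
    tune them per environment and also examine disk I/O, logins,
    and application-level activity.
    """
    if w.peak_cpu_pct < 1.0 and w.net_io_kbps < 1.0:
        return "zombie"       # running, but apparently serving no purpose
    if w.vcpus > 1 and w.peak_cpu_pct < 20.0:
        return "oversized"    # a candidate for right-sizing
    return "ok"

workloads = [
    WorkloadStats("old-centos-test", vcpus=2, peak_cpu_pct=0.4, net_io_kbps=0.2),
    WorkloadStats("marketing-web", vcpus=8, peak_cpu_pct=12.0, net_io_kbps=150.0),
    WorkloadStats("batch-runner", vcpus=4, peak_cpu_pct=95.0, net_io_kbps=900.0),
]

for w in workloads:
    print(f"{w.name}: {classify(w)}")
# prints: old-centos-test: zombie / marketing-web: oversized / batch-runner: ok
```

Even this toy version shows why the problem is hard: a workload that looks idle from the hypervisor may still matter to the business, so any automated reclamation needs human review and policy guardrails.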
Make no mistake, these are difficult problems to detect and fix. ACE is collaborating internally with our sustainability team, our vSphere product teams, Customer Success, and customers to develop comprehensive solutions that reduce stranded resources. Check out my presentation in the VMworld 2020 session Setting and Achieving Your Sustainability Initiatives to learn more about stranded resource management.
Use case #2: easy consumption of AWS native services with VMware Cloud
Some customers want to use Amazon’s native services to build out the capabilities of resources they have on VMware Cloud on AWS, managing them automatically from a lifecycle perspective, as opposed to configuring everything manually. There’s a lot to think about on both the AWS and the VMware sides in this case, such as the creation of VPCs, gateways, interface endpoints, NSX firewall rules, identity and access management, security groups, and more. After creation, it’s important to think through day-two operations for managing these and other resources. What happens when disaster hits and everything needs to be re-created? Further, best practices dictate keeping everything consistent for subsequent upgrades to VMware Cloud on AWS or other AWS services. It is also prudent to create a duplicate environment for testing and clean it up afterward to save budget.
Today, there is no consistent means to address these problems. But ACE is researching and building a solution that enables developers to easily integrate native public-cloud services with their existing vSphere and VMware Cloud on AWS applications, while allowing operators to specify how those services are implemented and application owners to run them. Such a solution would let customers request a capability, such as a MySQL-compatible database, and have that request fulfilled by any public-cloud provider or by an on-premises provider, as dictated by operations policy. This gives enterprises a new and pragmatic way to accelerate the modernization of traditional applications migrated to VMware Cloud on AWS — essentially building hybrid applications that combine a traditional VM with native AWS services.
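The separation of concerns described above — developers name a capability, operators set the fulfillment policy — can be sketched as a simple catalog-plus-policy lookup. Everything here is a hypothetical illustration (the catalog entries, policy mapping, and function names are not a real VMware or AWS API):

```python
# Hypothetical capability catalog: which concrete offering each
# provider has for a given abstract capability.
CAPABILITY_CATALOG = {
    "mysql-compatible-database": {
        "aws": "Amazon RDS for MySQL",
        "on-prem": "MySQL VM on vSphere",
    },
}

# Hypothetical operations policy: operators choose the preferred
# provider per capability; developers never see this mapping.
OPERATIONS_POLICY = {
    "mysql-compatible-database": "aws",
}

def fulfill(capability: str) -> str:
    """Resolve an abstract capability request to a concrete service,
    as dictated by operations policy (defaulting to on-premises)."""
    provider = OPERATIONS_POLICY.get(capability, "on-prem")
    try:
        return CAPABILITY_CATALOG[capability][provider]
    except KeyError:
        raise ValueError(f"no offering for {capability!r} via {provider!r}")

print(fulfill("mysql-compatible-database"))  # prints "Amazon RDS for MySQL"
```

The design point is that the application owner’s request stays the same while operators can repoint the policy — say, from AWS to on-premises — without the application changing at all.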
Engage with ACE
To engage with the ACE team, customers can reach out to their VMware account team, technical account manager, or customer-success manager and describe their missing puzzle piece(s). That contact will connect them with ACE, and we will work together to understand the technology gap and discuss a viable, repeatable solution.
Next, we will co-innovate to develop a solution, involving the customer in validating it and providing proof points throughout the process, while enabling them to leverage VMware’s large-scale research, development, and engineering teams. The results will be fully supported by VMware.
If you’re a customer with these kinds of requirements (e.g., missing capabilities or integrations in VMware products), we look forward to working with you to accelerate the delivery of the solutions you need on your timeline, not ours.