Co-innovation VMworld 2021

xLabs: Co-innovating across Products, Partners, and Customers

By Tasha Drew, Director of Product Incubation, and Daniel Beveridge, Director of Co-innovation and Intellectual Property

xLabs is a program within the Advanced Technology Group in VMware’s Office of the Chief Technology Officer. We focus on delivering ahead-of-roadmap innovation (often one to three years ahead of roadmap) and cultivating cutting-edge technologies in collaboration with our employees, partners, and customers.

xLabs operates like a venture-capital portfolio: we have strategic technology “pillars” that are part of our funding rounds. (Our current foci include edge, modern applications, edge, machine learning, edge, intrinsic security, edge, and sustainable computing. Also edge — a lot of edge.) When we choose to move forward with a compelling proposal (some proposals are collaboratively built within the program itself; others come to us completely “baked” from people or teams outside of our program), we build a team, milestones, and a minimum viable product (MVP) plan. The end goal is to deliver the technology and the team that built it to our business units, who get it into the hands of our customers. We’re not only building prototypes or proofs of concept: we’re building real, sustainable, integrated software that we can ship and sell.

Because some ideas need maturing and validating before they are ready for “prime time,” xLabs also fosters early-stage incubation activities aligned to strategic investment areas. The goal is both augmenting existing projects and building a pipeline of mature proposals for future xLabs projects, some of which we covered in our xLabs session.

This year, our presentation at VMworld 2021 had an edge theme (surprise!). We see the edge as dramatically reshaping the technology landscape. Much as centralization and decentralization trends have dominated entire decades of infrastructure technology — from mainframes to PCs to public clouds to PoPs — here we are again, with a tidal wave heading from the consolidated cloud out to the far edge.

We have a number of exciting projects we’ve been building in xLabs that we think will provide compelling platforms enabling our customers to deploy, manage, and scale workloads to meet their rapidly growing edge needs. Here’s the roundup:

Project Santa Cruz!

You may have seen our awesome demo in the Vision & Innovation Keynote already, but if not, here’s a quick overview: Project Santa Cruz gives you form-factor flexibility at the edge by combining the VMware SD-WAN and VMware SASE products with Tanzu. You can deploy Kubernetes on your VMware SD-WAN with a simple software update — orchestrated by VMware SD-WAN Orchestrator — and then connect that platform to Tanzu Mission Control for your Kubernetes platform team, who can provide your developers with namespaces and workspaces for deploying and managing their containerized applications.

Bring Your Own Host with Tanzu Kubernetes Grid

Some of our customers want to manage their own machines and operating systems and have asked us to add a more flexible delivery model to allow them to continue to leverage Tanzu Kubernetes Grid’s consistent enterprise-grade Kubernetes platform. To meet this need, we were excited to announce and give a technology preview of the “Bring Your Own Host” project. You can now opt nodes in for Tanzu Kubernetes Grid management by adding an agent to them. We have created a new ClusterAPI infrastructure provider that can then add these hosts to a Kubernetes cluster under Tanzu management — and, of course, you can continue to consolidate management of all of your Tanzu Kubernetes clusters in Tanzu Mission Control.
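In spirit, the flow is simple: an agent opts a host in, and the provider claims registered hosts to fill out a cluster. The toy sketch below illustrates that registration-and-claim pattern only; all names are invented, and the real implementation is a Kubernetes controller built on Cluster API, not a Python class.

```python
# Toy model of the Bring Your Own Host flow: an agent on each
# customer-managed machine registers the host, and the infrastructure
# provider claims registered hosts to satisfy a cluster's node request.
# All names here are hypothetical illustrations.

class HostRegistry:
    def __init__(self):
        self.hosts = {}  # hostname -> owning cluster (None = unclaimed)

    def register(self, hostname):
        """Called by the agent installed on a customer-managed machine."""
        self.hosts.setdefault(hostname, None)

    def claim(self, cluster, count):
        """Called by the infrastructure provider to attach free hosts."""
        free = [h for h, owner in self.hosts.items() if owner is None]
        claimed = free[:count]
        for h in claimed:
            self.hosts[h] = cluster
        return claimed

registry = HostRegistry()
for name in ["edge-01", "edge-02", "edge-03"]:
    registry.register(name)          # agent opts the node in

nodes = registry.claim("tkg-cluster-a", 2)
print(nodes)                          # two hosts join the cluster
print(registry.hosts["edge-03"])      # None: still unclaimed
```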

Project Arrakis

We are incredibly excited to see the evolution of Federated Machine Learning technology, especially around the Linux Foundation’s FATE project. Project Arrakis leverages FATE’s technology stack and operationalizes it on the VMware Cloud Foundation and Tanzu Kubernetes Grid platforms — lowering the barrier to adoption and simplifying lifecycle management.

Why are we so excited about FML? Because it provides a new way to learn from your edge data without needing to transfer that data to a central location to run your machine-learning (ML) models against it. You can have edge clusters gathering huge amounts of data (some customer use cases include mining sites, airports, and autonomous vehicles) and transfer your ML models to those individual sites. The models learn at the sites, adapting based on the information there. Next, you transfer the new model and associated metadata back to the centralized FATE platform, which learns from all of your models and metadata to produce a new, more complete model. That model can again be federated out to all of your edge locations.
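The aggregation step at the heart of this loop can be sketched as simple federated averaging: each site trains locally and ships only its weights, never its raw data. This is a framework-free, hypothetical illustration of the idea — FATE’s actual pipeline is far richer (secure aggregation, heterogeneous features, and more).

```python
# Minimal federated-averaging sketch: each edge site trains locally,
# ships only its model weights (never raw data), and the central
# coordinator averages them, weighted by each site's sample count.

def local_update(weights, site_data, lr=0.1):
    """One gradient-descent step on a 1-D linear model y = w * x."""
    w = weights
    grad = sum(2 * x * (w * x - y) for x, y in site_data) / len(site_data)
    return w - lr * grad

def federated_average(site_weights, site_sizes):
    """Weighted average of per-site weights (FedAvg-style)."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two edge sites with local data approximating y = 3x; the data stays put.
sites = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
global_w = 0.0
for _ in range(200):  # each round: local training, then aggregation
    updates = [local_update(global_w, d) for d in sites]
    global_w = federated_average(updates, [len(d) for d in sites])

print(round(global_w, 2))  # converges toward 3.0
```

The only traffic between sites and coordinator is one floating-point weight per round, which is exactly the privacy and bandwidth win described above.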

The end result is that your machine-learning infrastructure can evolve as if it has access to all of your data — but you have stronger privacy guarantees and large cost savings because you don’t need to move and store your data anywhere.

The Project Arrakis team has published two posts about Federated Machine Learning and its ecosystem on our OCTO blog.

Project Radium

With these new opportunities to build models at the edge, how do we get the performance we need to accelerate model training? Introducing Project Radium — a fundamentally new way of leveraging advanced accelerators to boost your ML performance in both inference and training. Project Radium moves beyond the CUDA virtualization found in our current Bitfusion product, offering ML framework remoting that splits Python applications into a client side and a server side, intercepting key system calls and replaying them on a remote machine attached to a range of accelerators.
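The remoting idea — intercept calls on the client and replay them on an accelerator-equipped server — can be sketched with a simple proxy object. This is purely illustrative: Radium’s real interception happens at a much lower level than a Python proxy, and all class names below are invented.

```python
import pickle

class FakeNetwork:
    """Stand-in for the wire between client and remote server."""
    def __init__(self, server):
        self.server = server
    def send(self, payload):
        # Deserialize the intercepted call, "replay" it remotely,
        # and return the serialized result.
        method, args = pickle.loads(payload)
        return pickle.dumps(self.server.dispatch(method, args))

class RemoteAccelerator:
    """Server side: executes replayed calls next to the hardware."""
    def dispatch(self, method, args):
        return getattr(self, method)(*args)
    def matmul(self, a, b):  # would run on a GPU/FPGA/ASIC in practice
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

class AcceleratorProxy:
    """Client side: looks like a local library, but every method
    call is intercepted and shipped across the network."""
    def __init__(self, net):
        self.net = net
    def __getattr__(self, method):
        def call(*args):
            return pickle.loads(self.net.send(pickle.dumps((method, args))))
        return call

accel = AcceleratorProxy(FakeNetwork(RemoteAccelerator()))
result = accel.matmul([[1, 2]], [[3], [4]])
print(result)  # [[11]]
```

From the developer’s point of view, `accel.matmul(...)` looks like a local call — which is why the Jupyter experience stays seamless while the heavy lifting happens elsewhere.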

This approach offers a seamless developer experience with slick Jupyter Notebook integrations while enabling acceleration from a broader range of technologies, including GPU, FPGA, ASIC, and even new CPU-only acceleration technology from companies like ThirdAI, NeuralMagic, and others. VMware customers will be able to take advantage of new technologies very quickly after they appear on the market, thanks to the reduced integration effort. Even customers without access to acceleration hardware can use spare cluster CPU capacity as a shared resource. Another major advantage of Project Radium is the reduced need for high-speed networking. Even GigE networking offers good performance from the remote accelerators, making this a great technology for edge and FML-driven use cases that lack datacenter-grade networking.

Sustainable computing — grid transformation

As we all pull together to think about making a positive impact on our planet, VMware is partnering with Intel, Dell, and other innovators in the grid ecosystem to build solutions that introduce virtualization and digital transformation into the power substation and other control systems. Today’s grid needs to double or triple in size to handle the coming move toward renewables. Having the right technology platform embedded in our substation control systems is key to reaching our carbon-reduction goals. We showcased some exciting work demonstrating ESXi’s ability to handle a new class of real-time-oriented industrial workloads, such as virtual protection relays — the circuit breakers of the substation. This technology can consolidate more than five physical appliances into a single 2U server capable of more advanced functionality than was previously possible.
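To make “protection relay” concrete: at its core it is a hard-real-time loop that trips a breaker when current stays above a pickup threshold for a set time. The sketch below shows simplified definite-time overcurrent logic as an illustration only; real relays implement standardized inverse-time curves, directional elements, and much more.

```python
def overcurrent_relay(samples, pickup_amps, trip_after_n):
    """Definite-time overcurrent logic: trip once current has stayed
    at or above the pickup threshold for trip_after_n consecutive
    samples. Returns the sample index of the trip, or None."""
    over = 0
    for i, amps in enumerate(samples):
        over = over + 1 if amps >= pickup_amps else 0
        if over >= trip_after_n:
            return i
    return None

# Nominal load around 100 A; a fault drives current to roughly 900 A.
current = [98, 102, 99, 900, 920, 910, 905]
print(overcurrent_relay(current, pickup_amps=400, trip_after_n=3))  # 5
```

The catch is that this loop must run with bounded, millisecond-scale latency — which is exactly why demonstrating ESXi handling real-time workloads matters for the substation.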

With virtualization platforms entering the power substation, there’s also a new “intelligent edge” opportunity to bring rich analytics to aid understanding of the complex new supply-and-demand patterns associated with renewable energy technologies, such as solar and wind generation. Being able to capture granular data, predict patterns, and pre-empt outages is key to evolving toward a truly smart grid. In our VMworld presentation, we discussed how a new storage technology called “computational storage” can combine intensive analytics workloads with real-time control-path workloads in the same server! xLabs is incubating powerful parallel-database technology that can perform ML-based analytics right on the storage layer, processing massive amounts of information without any impact on the main CPU, which stays focused on real-time protection-relay workloads. We’ve seen SmartNICs emerge in the datacenter; now, at the edge, moving compute toward the “edge” of the system enables convergence between real-time and intelligent-edge workloads. Exciting stuff: a fractal version of the macro trends we’re seeing in cloud and edge.
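The pushdown idea behind computational storage is easy to illustrate: instead of hauling every row up to the host CPU, the filter and aggregate run where the data lives, and only a tiny result crosses over. The sketch below is hypothetical and simulates both paths in plain Python; the point is the difference in how much data moves.

```python
# Contrast: host-side processing moves all rows to the CPU; a
# computational-storage device runs the query near the data and
# returns only the aggregate, leaving the host CPU free for
# real-time work. (Illustrative only; names are invented.)

ROWS = [{"feeder": i % 4, "load_kw": 50 + i % 7} for i in range(10_000)]

def host_side_query(fetch_all):
    rows = fetch_all()                      # all 10,000 rows cross the bus
    total = sum(r["load_kw"] for r in rows if r["feeder"] == 2)
    return total, len(rows)                 # result, rows transferred

def on_drive_query():
    """Runs inside the storage device: only one number leaves."""
    return sum(r["load_kw"] for r in ROWS if r["feeder"] == 2)

total, transferred = host_side_query(lambda: list(ROWS))
print(total, transferred)   # same answer, 10,000 rows moved
print(on_drive_query())     # same answer, one value moved
```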

Project Crest

We also introduced Project Crest, an open-source automated accessibility-testing tool (https://github.com/vmware/crest)! Crest makes it easy to test your web applications for compliance with current WCAG standards and guidelines — a significant step towards making content accessible for those with vision and hearing challenges. You can read more about the origin and significance of this project in this blog. Pull requests welcome.
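To give a flavor of what automated accessibility testing checks, here is a stdlib-only sketch of one WCAG rule — that images carry text alternatives. This is not Crest’s implementation: Crest covers far more rules and drives real pages, and the checker class below is invented for illustration.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags with no alt attribute at all
    (WCAG Success Criterion 1.1.1, "Non-text Content")."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        # alt="" is permitted for purely decorative images, so only
        # a completely missing attribute counts as a violation here.
        if "alt" not in attrs:
            self.violations.append(attrs.get("src", "<no src>"))

page = """
<html><body>
  <img src="logo.png" alt="VMware logo">
  <img src="chart.png">
  <img src="spacer.gif" alt="">
</body></html>
"""

checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # ['chart.png']
```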

Edge computing is reshaping the way VMware builds solutions and brings new technology to market — placing the focus on deeper integration, a solutions mindset, and a great user experience. xLabs is committed to driving these changes and helping define the next phase in enterprise computing. Check out our VMworld session VI2105 for more detail and reach out to us for partnership and co-innovation ideas!
