VMworld 2021

A Closer Look at the VMworld 2021 Solution Keynote Demos

Welcome to VMworld 2021! While we all hoped to be together in person for 2021’s flagship event, the last year and a half has taught us to be ready for anything, adapt to changing circumstances as they arise, and do our best to stay on task. With that as our mantra, I’m proud of all we’ve accomplished over the last year in the Office of the CTO (OCTO).

CTO Kit Colbert and I just delivered our Solution Keynote, where we shared demos of some of the most innovative work happening in OCTO, in partnership with our R&D business units. In this blog post, I’ll give you a short recap of what we discussed and include links to the deep dives for the solution demos we presented during the keynote. These projects showcase emerging technologies that align to major initiatives we’ve been focusing on over the last year, including security, multi-cloud computing, edge computing, machine learning, and modern apps.

Project Santa Cruz: Bringing modern apps and innovation to the edge

First up, Director of Product Incubation Tasha Drew walked us through Project Santa Cruz, a perfect example of the impact modern applications can make in edge computing. Project Santa Cruz offers a single device that can run SD-WAN and modern applications at the edge. Our goal is straightforward: if an application or service can run in an OCI-compliant container, it can run on Santa Cruz. Public cloud providers are packaging their edge-targeted services in containers, which means a single device can run network services, containerized applications, and cloud services. As business needs change, new apps and services can be deployed to your edge sites as software updates.
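To make the "OCI-compliant container" bar concrete, here is a minimal, hypothetical Dockerfile for a small edge service; the file and service names are placeholders of ours, not anything shipped with Project Santa Cruz.

```dockerfile
# Hypothetical example: a small Python service packaged as an OCI image.
# Anything that builds into an image like this could, per the stated goal,
# be pushed to an edge site as a software update.
FROM python:3.9-slim
WORKDIR /app
COPY sensor_gateway.py .
EXPOSE 8080
CMD ["python", "sensor_gateway.py"]
```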

Tasha demonstrated just how easy it is to deploy this platform: you literally open the box containing the VMware SD-WAN edge device and plug in power and Ethernet. The platform then connects to the VMware SD-WAN Orchestrator for initial provisioning and downloads policies, applications, and settings. It is truly zero-touch provisioning, and exactly the simple, low-touch experience our customers have been asking for in support of their edge initiatives. The next step is to connect it to Tanzu Mission Control, so your Kubernetes platform team can begin to interact with it, providing namespaces to your development teams as part of their global infrastructure platform.

Tasha also demonstrated the power of combining Project Santa Cruz’s edge platform with OpenVINO to take advantage of inference at the edge. In the demo, we watched as the platform combined streaming camera data and local inference to identify whether everyone entering a corporate office was wearing a mask to comply with local COVID-19 restrictions. It also showed Project Santa Cruz identifying vegetables coming down a conveyor belt at a supermarket, helping store clerks or customers at self-checkout avoid having to look up codes for different produce items.
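The mask-check demo boils down to a simple frame-classification loop. The sketch below stubs out the model with a trivial rule; a real deployment would load a compiled OpenVINO network here instead, and every name in it is our invention, not a Santa Cruz API.

```python
# Illustrative sketch of an edge inference loop. The model is stubbed out;
# in practice this would be a compiled OpenVINO network running locally.

def load_mask_model():
    """Stand-in for loading a compiled OpenVINO network."""
    def infer(frame):
        # A real model returns a mask/no-mask probability per detected face;
        # here we fake it with a trivial rule on the frame payload.
        return 0.95 if frame.get("mask_pixels", 0) > 100 else 0.05
    return infer

def check_frames(frames, threshold=0.5):
    """Classify each camera frame and flag non-compliant entries."""
    model = load_mask_model()
    alerts = []
    for frame in frames:
        if model(frame) < threshold:
            alerts.append(frame["person_id"])
    return alerts

frames = [
    {"person_id": "a1", "mask_pixels": 250},  # masked
    {"person_id": "b2", "mask_pixels": 10},   # unmasked
]
print(check_frames(frames))  # -> ['b2']
```

The point of the structure is that the inference runs entirely on the local device; only the alerts would ever need to leave the edge site.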

Bottom line: Project Santa Cruz gives you a cost-positive path forward to edge modernization, where you can reduce your infrastructure and carbon footprint at edge sites, simplify operations, and reduce costs. Best of all, this will be made available as a future update to VMware SD-WAN, so you can deploy SD-WAN edge devices today and get full access to Project Santa Cruz once it is generally available.

We also got a sneak peek at Tanzu Kubernetes Grid’s (TKG) new “Bring Your Own Host” capability, which lets VMware customers and partners run Tanzu Kubernetes Grid on the infrastructure platform (OS and hardware) of their choice, providing practically unlimited flexibility in how TKG can be deployed and operated.

Scaling applications and services, independent of existing infrastructure

Next up, Director of R&D Mazhar Memon presented Project Radium. Last year, we introduced vSphere Bitfusion, which allows for GPU pooling and sharing over a network. Radium builds upon Bitfusion and expands its feature set to other architectures. It gives you support for AMD, Graphcore, Intel, NVIDIA, and other vendor hardware for AI/ML workloads. You’ll be able to dynamically attach applications to accelerators over fabrics (such as Ethernet), with no code changes to applications. We think this is a really big deal for both customers and hardware vendors. Project Radium makes it easy for applications to take advantage of a variety of accelerators, while not imposing any restrictions or taking on additional software dependencies from the accelerator vendors. For customers, this provides significant choice and flexibility in the continually expanding hardware-accelerator ecosystem. For our hardware partners, Project Radium accelerates the ability to build an application ecosystem around their products.

Radium works through an application-level monitor that introduces virtualization services in much the same way as a virtual machine monitor does. Because this software works at the application level, we’re able to dynamically split the application and run the components that benefit from a hardware accelerator directly on the accelerator itself. Once the application has been split, each segment can execute separately while the application monitor continuously keeps data, code, and the execution environment coherent. Like Bitfusion, Radium will have integrations for Jupyter Notebook and Kubernetes, and will be runnable from a console.

Mazhar demonstrated using Project Radium to remote TensorFlow training from a small CPU-only virtual machine to three separate servers with different hardware: CPU remoting, remoting to NVIDIA GPUs, and remoting to AMD GPUs. The TensorFlow script is the same in all cases, but the accelerated portion is dynamically bridged to a CPU or GPU.
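The key property in that demo is that the script never changes; only the execution target does. The pure-Python sketch below mimics that dispatch with stub backends so the idea is runnable; the backend names and functions are invented for illustration and are not Radium APIs.

```python
# Toy illustration of accelerator-agnostic dispatch, loosely in the spirit
# of Project Radium. Backends here are stubs standing in for CPU remoting
# and for remoting to NVIDIA or AMD GPUs.

BACKENDS = {
    "cpu":    lambda xs: [x * x for x in xs],  # local CPU execution
    "nvidia": lambda xs: [x * x for x in xs],  # stand-in for NVIDIA remoting
    "amd":    lambda xs: [x * x for x in xs],  # stand-in for AMD remoting
}

def run_accelerated(data, backend="cpu"):
    """The 'training script' calling this never changes; only the backend
    selection does, the way Radium bridges the accelerated portion of an
    application to whatever hardware is available."""
    return BACKENDS[backend](data)

data = [1.0, 2.0, 3.0]
# Same call, three different targets, identical results.
results = {name: run_accelerated(data, name) for name in BACKENDS}
```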

We’re working to ensure that Radium runs well on a variety of hardware and that it supports powerful new backends, such as Apache TVM and ThirdAI. We want to make sure that we have native support and common execution environments. Lastly, we’re focused on the edge, where diversity of hardware can really show its benefit. This new technology will allow you to scale capacity at the edge — independent of your core system investments — and has seen broad enthusiasm in industry verticals, such as manufacturing, financial services, and healthcare, among others.  

Ensuring security through crypto agility

Senior Staff Researcher David Ott and Security Engineer Sean Huntley demonstrated new crypto-agility capabilities that we have been developing. While there is genuine concern over the impact that quantum computing will have on how we secure apps and data, many organizations have an urgent need for more crypto-agile applications today. For example, we have been collaborating with financial-services organizations that operate in over 200 different countries while navigating more than 70 different crypto standards.  

The National Institute of Standards and Technology (NIST) has been working on standardizing new crypto algorithms that will be safe against quantum-computing attacks. These new algorithms are called post-quantum cryptography (PQC).

The industry practice of hardcoding cryptography implementations into systems and applications is deeply entrenched, David explained. Crypto agility must give security teams more control over crypto configuration and the ability to switch between standards and libraries, not just in anticipation of quantum computing, but also to meet specific compliance requirements.

Sean demonstrated a new architectural framework that VMware is pioneering for our products — as well as for all of the developers using VMware platforms. In the demo, Sean showed how our new crypto-agility solution could be used to enable PQC on our Unified Access Gateway (UAG). The UAG is a specialized authenticating reverse proxy that supports VMware Horizon and Workspace ONE. Like most enterprise software, UAG was built to support a specific set of crypto libraries. This approach is sufficient for most use cases, but creates challenges when an organization requires the use of other crypto libraries for internal or local compliance mandates.

In the demo, Sean added an option to UAG’s configuration file to enable PQC. The connection then failed with an SSL error, because the Firefox client he used in the demo wasn’t enabled for post-quantum ciphers. When he switched to a Chromium browser that had been enabled for PQC, it was able to make a quantum-safe connection.
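The essence of crypto agility is that the cipher policy lives in configuration rather than in code. The sketch below shows that pattern with Python's standard ssl module; since stock OpenSSL has no post-quantum cipher suites yet, the "pqc" policy is a deliberate placeholder, and the policy names are ours, not UAG's.

```python
# Sketch of configuration-driven cipher selection, in the spirit of the
# crypto-agility demo. Policy names and the PQC placeholder are invented.
import ssl

CIPHER_POLICIES = {
    # Classical policy, expressed in real OpenSSL cipher-string syntax.
    "classical": "ECDHE+AESGCM:!aNULL",
    # Placeholder: stock OpenSSL has no PQC suites, so a real deployment
    # would swap in a PQC-enabled crypto library here instead.
    "pqc": None,
}

def make_context(policy: str) -> ssl.SSLContext:
    """Build a TLS server context from a named policy instead of
    hardcoding ciphers into the application."""
    cipher_string = CIPHER_POLICIES[policy]
    if cipher_string is None:
        raise NotImplementedError(f"policy {policy!r} needs a PQC-enabled library")
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.set_ciphers(cipher_string)
    return ctx

ctx = make_context("classical")  # switching policy = editing config, not code
```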

Beyond PQC, David Ott discussed customers whose compliance needs stem from specific security standards for various business contexts (such as healthcare, government, and finance) or from regional or national requirements. Crypto agility will help address the needs in all of these contexts. Our next goal will be to make it easy for developers to build their own cryptographically agile applications. Stay tuned for more on that in the future!

Scaling management in the multi-cloud era

Tom Hite, Senior Director of our Accelerated Co-innovation Engineering (ACE) group, walked us through a very compelling innovation in approaching and scaling management in multi-cloud environments: the open-source project Idem.

Running, managing, and securing applications across multiple clouds is a difficult proposition, which is compounded by the velocity of innovation that is inherent to cloud computing. Tom’s group has been hard at work on this in collaboration with our Cloud Management Business Unit’s Staff Engineer/Director Thomas Hatch, author of the Salt open-source software project and founder of the automation company SaltStack, which was acquired by VMware in 2020.

Idem is fast — built from the ground up to be asynchronous and parallel, so it can run any number of management tasks on any cloud at any time. It is idempotent, meaning that it can run repeatedly to converge on the target cloud’s desired state. It is completely stateless, making it the perfect target for running on modern platforms, such as cloud-native runtimes from VMware Tanzu.

Idem exposes the full complement of private- and public-cloud APIs to the automation engineer. This is far more powerful than the abstractions you commonly see today. With Idem, engineers have access to the full power of those APIs, and as new capabilities are added, Idem can self-discover them and make them available as well. In his demo, Tom showed Idem’s state-declaration file, which is similar to what you would expect to see in a Kubernetes state-declaration manifest or SaltStack state files. Idem uses the state files to reconcile the requested multi-cloud infrastructure. The discovery capabilities inherent to Idem allow your management and automation systems to dynamically adapt and scale as cloud-provider APIs expose new capabilities. This is how cloud management should be, and I encourage you to explore Idem and contribute to the project.
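For a flavor of what such a state file looks like, here is a small hypothetical example in Idem's YAML-based SLS style, declaring a cloud network that Idem would converge on idempotently. The exact resource paths and parameters depend on the Idem plugin and version in use, so treat the details as illustrative rather than copy-paste ready.

```yaml
# Hypothetical Idem state file: declare an AWS VPC and let Idem reconcile
# reality against it. Resource paths and parameters are illustrative only.
demo-vpc:
  aws.ec2.vpc.present:
    - cidr_block_association_set:
        - CidrBlock: 10.0.0.0/16
    - tags:
        - Key: Name
          Value: demo-vpc
```

Because the state is declarative and Idem is idempotent, running it once creates the VPC and running it again is a no-op, which is what makes it safe to execute repeatedly from a stateless runtime.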

Just getting started

Our collaborative and unique innovation approach is enabling us to bring new technologies to market faster than ever. While we are running hard at solving new challenges, we are doing so while ensuring that we continuously calibrate our work against the needs of our customers and partners. If you have use cases for the technologies shown in this keynote, please reach out to us directly or engage with your local VMware team for a faster response.
