
Let’s Go Over the Edge: What’s Next for VMware Edge Compute

VMware continues to focus on and invest in Edge computing, a rapidly growing market centered on processing and managing data and applications close to where they are generated and used.

The number of devices at the Edge is large and growing. Business drivers such as data-transfer costs, low-latency response requirements, privacy laws and data governance, and 5G/6G rollouts continue to accelerate demand for holistic, consistent Edge compute management outside of centralized data centers and clouds.

xLabs, a specialized team within VMware’s Office of the CTO, has been investing in and accelerating Edge technology for some time (you may recall VMworld 2021’s Vision and Innovation Keynote and Edge breakouts). At VMware Explore 2022, we are excited to share more remarkable technology that enables our customers as they continue to grow and manage their Edge workloads:

  • Helping customers overcome the constraints of Edge computing and meet the need for autonomy with the proven workhorse of ESXi (Project Keswick)
  • Modernizing global power grids with virtualization (Power Substation Workload Virtualization)
  • Enabling machine learning at the Edge on Kubernetes (Project Yellowstone)
  • Workload balancing at the Edge (Work Balancer for Tanzu Kubernetes Grid Edge Workloads)

See something you’re excited about? We love to work with early beta users and design partners! Please reach out to us at xlabs@vmware.com to work together more closely.

Project Keswick

Even simple things tend to breed complexity at scale, and deployments are no different. As Edge deployments grow to hundreds, thousands, or millions of devices, so do problems with governance and visibility. For example, which application versions is each of these Edge devices running? How do we update these applications when new versions are available? Furthermore, these devices are often difficult to reach due to remote locations, and updating each one individually to ensure uniform definitions of how infrastructure and workloads run is time-consuming.

Project Keswick pairs our trusted hypervisor, ESXi, with GitOps. Adding GitOps to the workflow gives us version control and CI/CD, so we can document, manage, and provision infrastructure declaratively. As part of this flow, we use a YAML file to declare how the infrastructure should be set up and which workloads run on it.

Here is an example YAML file with a simple deployment defined. Using the latest container image (hello-world:latest), we would like to deploy three pods (replicas) to run the nginx app.
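A minimal manifest matching that description might look like the following sketch (the Deployment name and labels are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment   # illustrative name
  labels:
    app: nginx
spec:
  replicas: 3                    # run three pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx               # the "nginx app" label referenced above
    spec:
      containers:
        - name: hello-world
          image: hello-world:latest   # the container image named above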

When this file is committed to source control (git), Project Keswick will make this configuration update available to each ESXi device connected to the git repository. When endpoints are ready to be updated, they pull their changes down and update.

This architecture diagram outlines the layers of Project Keswick. Kubernetes manifests, which are YAML files stored in a git repository, are used to define the Infrastructure and Workloads layers. When the manifests are changed, Project Keswick detects these changes using a listener on the git repository and executes the updated instructions when the node is ready to update.

Project Keswick was developed with intermittent-connectivity use cases in mind. Nodes will not always be ready or in a good state to update, but when they are, they can pull down the latest versions and update themselves.

Power Substation Workload Virtualization

Recent utility grid issues and power outages highlight the energy industry’s need for innovation. Specifically, power substations, the intermediate points between power plants and homes or businesses, are often remote, disconnected, and unprepared to accommodate the increasing need for a bidirectional flow of power. When power substations were first built, energy flowed only one way: from the power plant, through the substation, to homes and businesses. In this model, power substations decrease voltage to levels appropriate for their destinations. Today, as more renewable energy is generated at homes and businesses through options like solar panels, there is a growing need for substations to handle electricity flowing back to the grid.

VMware is investing in the virtualization of power grids, with Intel and Dell as co-innovation partners, to modernize this critical infrastructure.

VMware’s Power Substation Workload Virtualization project accommodates the bidirectional flow of energy. In the process, it gets the most out of the physical hardware assets that run the mission-critical software governing the flow of energy. This Virtual Protection Relay (VPR) software has very specific networking constraints because it monitors electricity, which moves fast! Furthermore, testing the VPR software itself requires special devices to send simulation data.

The architecture diagram below captures the testing setup we built that allows power substations to carry out resiliency checks on their VPR software. At its core, the Power Substation Workload Virtualization project updated ESXi 7.0.3 to support real-time operating systems, the Precision Time Protocol (PTP), and the Parallel Redundancy Protocol (PRP) for networking resilience. This update allowed us to support the testing devices and the VPR software:

In this diagram, the workflow starts at the Doble Power System Simulator at the bottom. This device sends mock amperage readings through the physical and virtual network switches to the VPR Services and Simulation Management & Troubleshooting host, which runs on servers with ESXi 7.0.3. Variations in amperage can significantly damage power substation equipment, so it is critical that VPR software be tested rigorously to make sure it can detect such variations and prevent damage.

This project proved that ESXi can support real-time operating systems, the Precision Time Protocol, and the Parallel Redundancy Protocol. As a result, we are excited to continue delivering our solution for running and testing VPR software with customers and partners so that power substations are better equipped for further transformation.

Read more about how power grids are transforming in this whitepaper.

Project Yellowstone

Let’s move to machine learning (ML) at the Edge, which is driving massive change in industry and infrastructure around the globe. For example, ML at the Edge is significantly impacting the automotive sector. Autonomous cars, electric vehicles, and other new models are equipped with thousands of sensors that capture large amounts of data for ML algorithms. The sensors collect and build a “picture” of the vehicle’s environment, including pedestrians, road conditions, traffic lights, and even the driver’s eye movements. The result is a large amount of data that must be processed quickly at the Edge because of data privacy laws, data volume, and latency requirements.

As businesses increase the automation of their workloads using Artificial Intelligence (AI) and Machine Learning (ML), IT administrators face a steep learning curve during the transition. Because of the vast variety of accelerators and infrastructure that workloads are deployed on, there is no guarantee that an ML inference workload will be matched with the right node, causing failures or inefficient workloads that require time-consuming debugging.

Recognizing and understanding workloads at the Edge provides speed, agility, and security to power applications built with AI and ML. With Project Yellowstone, we created heterogeneous Edge AI acceleration for Kubernetes-based deployments.

Project Yellowstone leverages cloud-native principles to boost AI and ML tasks. Customers can optimize and accelerate ML processes without code changes using multiple mainstream graph compilers, such as Apache TVM and Intel OpenVINO. Project Yellowstone will enable users to:

  • Deploy workload clusters on the correct node with the proper accelerators
  • Auto-compile and optimize ML inference models dynamically
  • Use the available accelerators that best suit the workload
This means customers can easily manage and take advantage of the accelerators available on each Edge worker node. The result is improved efficiency and an end-to-end machine learning framework that dramatically reduces the complexity of configuring user environments with heterogeneous hardware accelerators.
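For context on how a workload lands on the right hardware, the sketch below uses Kubernetes’ standard scheduling mechanisms, a nodeSelector plus an extended-resource request, to pin an inference pod to an accelerator-equipped node. The label, resource, and image names are hypothetical placeholders; this is the kind of hand-written matching that a system like Project Yellowstone aims to automate:

apiVersion: v1
kind: Pod
metadata:
  name: ml-inference                  # illustrative name
spec:
  nodeSelector:
    accelerator: example-vpu          # hypothetical node label identifying the accelerator type
  containers:
    - name: inference
      image: registry.example.com/inference:latest   # placeholder image
      resources:
        limits:
          example.com/vpu: 1          # hypothetical extended resource advertised by a device plugin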

Work Balancer for Tanzu Kubernetes Grid Edge Workloads

We are also investing in building solutions for workload balancing at the Edge. Properly orchestrated load balancing at the Edge ensures the response times needed to realize the advantages of ML and other time-sensitive operations.

Due to resource limitations and non-isolation requirements, customers often deploy Kubernetes on bare-metal devices rather than in virtual machines at the Edge. However, there are a few challenges with this approach:

  • Kubernetes does not offer an implementation of network load balancers for bare-metal clusters
  • The only built-in Kubernetes options for exposing services are NodePort and externalIPs

Neither of these options is ideal. NodePort requires users to have direct access to the node, which is not secure, and its port range is limited. externalIPs requires users to assign an IP directly to a node, which is not reliable and requires manual remediation should the node die. Manual remediation is often a non-starter due to a lack of on-site IT staff or simply intermittent network connectivity.

This project provides a software-defined cloud management and Edge workload balancer for Kubernetes clusters deployed on bare metal, enabling users to create a load balancer service the same way they would in the cloud.
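Concretely, this means a user on bare metal can write the same standard Kubernetes Service manifest that a cloud provider’s load-balancer controller would fulfill, with the work balancer provisioning the external endpoint instead. A minimal sketch (names and ports illustrative) looks like:

apiVersion: v1
kind: Service
metadata:
  name: web-frontend          # illustrative name
spec:
  type: LoadBalancer          # on bare metal, the work balancer fulfills this Service type
  selector:
    app: web                  # route traffic to pods labeled app=web
  ports:
    - port: 80                # externally exposed port
      targetPort: 8080        # illustrative container port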

Building a workload management capability at the Edge improves performance and execution. As we apply ML and other techniques to problems in compute-constrained Edge environments, capabilities such as load balancing that make the most of our resources become ever more crucial.

To learn more about our offerings and see demos of all these projects, please see our VMware Explore On-Demand Breakout Session: Edge Computing: What’s Next?

We hope you enjoyed reading about our xLabs Edge-computing-focused innovation projects. We are excited to innovate at the Edge with our customers and partners. If you want to connect with the team about early beta feedback or partnerships, please email xlabs@vmware.com.
