We Built an Automated Full-Stack, On-Demand SDDC on Equinix Metal. Here’s What Happened.

In a recent blog, we talked about expanding an on-premises compute environment to a near-cloud metal provider. As a refresher, I'm defining a "near-cloud provider" as a globally interconnected, multi-tenant networking/compute/storage bare-metal environment that is well connected to any of the hyperscalers. Consumers of the metal environment can use automation interfacing with APIs to install, configure, and maintain the full environment that runs their enterprise's applications. These environments offer either a reserved set of servers, a set of servers available on demand, or server spot pricing. The underlying hardware is maintained and serviced by the metal provider.

Recently, a team made up of solution architects from the metal provider Equinix, together with solution architects and engineers from VMware, conducted an experiment to see if it was possible to run off-the-shelf software from VMware (in this case VMware Cloud Foundation 4.1) on a highly automated Equinix Metal environment. Spoiler alert: yes! The work product from this effort, including a video, can be found in the Equinix GitHub repo. We also used these environments to host other applications running on top of WCP and TKG-m.

Read on for the details of how we conducted this experiment, as well as our results.

The environment

The environment used 12 on-demand m3.large.x86 metal instances from Equinix, split up as:

  • Four nodes in the Silicon Valley Metro to be used for the VMware Cloud Foundation (VCF) management domain
  • Four nodes in the Silicon Valley Metro to be used for a VCF workload domain
  • Four nodes in the Dallas Metro to be used for a VCF workload domain

We also used four additional c3.small.x86 instances: one as a kernel-based virtual machine (KVM) destination, two as gateways, and one as a staging host.
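The bill of materials above can be sketched as a small inventory table. This is a minimal illustration, not part of the repo's automation; the metro codes ("sv" for Silicon Valley, "da" for Dallas) follow Equinix Metal's naming, and placing the four supporting c3.small.x86 instances in Silicon Valley is an assumption, since the post does not specify their metro.

```python
# Hypothetical inventory of the experiment's instances.
# Metro placement of the c3.small.x86 hosts is assumed ("sv").
INVENTORY = [
    {"plan": "m3.large.x86", "metro": "sv", "count": 4, "role": "VCF management domain"},
    {"plan": "m3.large.x86", "metro": "sv", "count": 4, "role": "VCF workload domain"},
    {"plan": "m3.large.x86", "metro": "da", "count": 4, "role": "VCF workload domain"},
    {"plan": "c3.small.x86", "metro": "sv", "count": 1, "role": "KVM destination"},
    {"plan": "c3.small.x86", "metro": "sv", "count": 2, "role": "gateway"},
    {"plan": "c3.small.x86", "metro": "sv", "count": 1, "role": "staging host"},
]

def total(plan):
    """Sum instance counts for a given Equinix Metal plan slug."""
    return sum(entry["count"] for entry in INVENTORY if entry["plan"] == plan)

print(total("m3.large.x86"), total("c3.small.x86"))  # 12 m3.large, 4 c3.small
```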

What we deployed

With this hardware, we built the automation in the repository to:

  • Deploy the core infrastructure for a VCF management domain on Equinix Metal
  • Automatically configure the four ESXi metal servers using the VCF Cloud Builder default values (as found in the Cloud Builder spreadsheet)
  • Configure a metal instance to run KVM, which will act as an edge host for routing and DNS virtual machines (VMs)
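Under the hood, automation like this drives the Equinix Metal API, which provisions devices via a POST to the project's devices endpoint. Below is a minimal, hedged sketch of what a device-create request body for one ESXi node could look like; the hostname scheme and the `vmware_esxi_7_0` operating-system slug are illustrative assumptions, not values taken from the repo.

```python
import json

def esxi_device_request(hostname, metro="sv"):
    """Build an illustrative Equinix Metal device-create request body.

    Field names follow the Metal API's device-create endpoint; the
    OS slug and hostname convention here are assumptions.
    """
    return {
        "hostname": hostname,
        "plan": "m3.large.x86",
        "metro": metro,
        "operating_system": "vmware_esxi_7_0",  # assumed ESXi OS slug
        "billing_cycle": "hourly",  # on-demand pricing, as in the experiment
    }

# One request body per node in the four-node management domain.
bodies = [esxi_device_request(f"mgmt-esxi-{i}") for i in range(1, 5)]
print(json.dumps(bodies[0], indent=2))
```

In the actual project these calls are handled by the automation tooling rather than hand-written requests, but the payload shape is the same idea: pick a plan, a metro, and an operating system, and the provider stitches the rest together.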

Once you have carefully followed the steps outlined in the repo and deployed the project, you will have an environment resembling the one in the diagram below. Note that once the management cluster has been configured, you can add one or more workload domains using the software-defined data center (SDDC) Manager, exactly as you would for an on-premises environment.

Figure 1: Basic VCF Layout (Source)

From an Equinix perspective, the automation sets up an environment resembling the one in the diagram below. Note that all of the Equinix services are available and can provide connectivity to any of the cloud hyperscalers with relative ease.

Figure 2: VCF on Equinix Metal (Source)

The off-the-shelf VMware software works because Equinix Metal can serve up a virtual rack of equipment that not only looks, but also acts, like a physical rack of equipment on premises. VCF Cloud Builder does not detect a difference between a physical rack and a virtually stitched one.

A word of caution

The GitHub project merely outlines what is possible within a metal environment. It is not an official product offering. If you are contemplating doing something similar, please take extra care in how the environment is set up and configured, and make sure you review all of its security constraints. This particular project is for reference or proof of concept and is not intended for production.

What’s next?

We will continue to explore interesting metal use cases to try to understand how we can solve customer challenges. We believe that being able to create dynamic environments on demand creates valuable flexibility for an enterprise.
