Open Source, Open Interfaces, and Open Networking

[This post is a joint effort by Brad Hedlund, Scott Lowe, T. Sridhar, Martin Casado, and Bruce Davie]

Without a doubt, these are transformative times in networking. Everything we’ve known about how to build and operate networks over the last quarter century is changing: networks are evolving from interconnections of individually configured devices to software-defined, programmable networks.

Some will characterize these times as “disruptive”, referring to the potential for leadership change in the networking supply chain. While that may or may not be an eventual consequence, at VMware we see the challenge and opportunity ahead as something much larger. More importantly, it’s time for the industry to come together again—both customers and suppliers—to rethink and redefine how open networking will persist through this transition to software-defined networks. With the right efforts, the result may be a networking architecture and ecosystem that’s more open than ever before.

It should come as no surprise that VMware intends to be a key player in the future of open networking. A healthy ecosystem rich with customer choice is central to our vision of the software-defined data center. Open, industry-standard APIs and protocols can enable that. To that end, VMware and Nicira have been leaders in many of the most significant SDN efforts, such as Open vSwitch, OpenFlow, OpenStack Quantum, and VXLAN.

But how does one define “open networking”? The foundation for open networking, including SDN, should start with a high-level architecture. For example, let’s consider a framework of autonomous functional blocks coupled together by four key interfaces. Work is already underway to define and standardize these interfaces:

  1. A northbound API, like OpenStack Quantum;
  2. Multiple southbound APIs, including both a forwarding control API (like OpenFlow) and a configuration/state federation API (such as OVSDB_config); 
  3. The wire format for virtual network isolation (e.g. VXLAN and STT); 
  4. A framework for collecting and storing real-time and historical performance data, operational state, and network statistics, made available for query.

Each functional block in this framework is an area where an ecosystem of vendors can insert themselves, interfacing with the architecture at the control/management plane with standard APIs (both northbound and southbound, as listed earlier), and data plane protocols such as VXLAN and STT.

Examples of functional blocks include:

  • Edge – network attachment point, L2-L4 services, and forwarding control for virtual machines
  • Virtual/Physical Gateways – network attachment points for non-virtual hosts
  • Service Appliances – high-touch L4-L7 network service insertion (e.g. L7 load balancers, WAF, IDS)
  • External Gateways – attachment point to external IP/VPN networks (e.g. BGP/MPLS, IPSec/SSL VPN) 
  • Fabric – data plane transport, OAM, instrumentation and telemetry
  • Controller – accepts northbound API requests, disseminates the appropriate control and/or configuration state to each functional block
  • Data collection – extract, store, query interface for logs, historical and real-time performance data & stats
  • Tools – e.g. Monitoring, troubleshooting, analysis, reporting
  • Apps – Cloud management systems, e.g. OpenStack, CloudStack, vCD

Here’s a graphical representation of the framework we’re describing for open networking:

Components of this framework already exist. For example, consider Open vSwitch (OVS), a production-quality virtual switch that is entirely open source software. OVS supports multiple open standards, including the most widely used implementation of OpenFlow. The OVS data path module has been added to the standard Linux 3.3 kernel, with an interface to any third-party user space control program through the Linux-maintained Netlink API. Further, OVS has been implemented on both virtual and physical switching platforms and is already in use by a large and diverse user community. Originally developed by members of the Nicira team, OVS has now received contributions from approximately 90 individual authors from more than a dozen organizations.

Northbound APIs

To date, the best example of an open and standardized northbound API is the Quantum API in OpenStack, another effort in which Nicira and VMware have been very much involved. Quantum provides a vendor-agnostic API for defining logical networks and related network-based services; this is combined with a plug-in framework for translating API requests into southbound, device-specific configuration actions. A large ecosystem of networking vendors has already begun developing plug-ins for Quantum, which has helped it become the default networking API in the largest open source cloud management platform, OpenStack.
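To make the shape of the Quantum API concrete, here is a minimal sketch of the JSON bodies a client would send to the v2 endpoints (`POST /v2.0/networks` and `POST /v2.0/subnets`). The helper function names are ours, not part of Quantum; only the field names follow the API.

```python
import json

def build_network_request(name, admin_state_up=True):
    """Body for POST /v2.0/networks: create a logical network.
    Field names follow the Quantum v2 API."""
    return json.dumps({"network": {"name": name,
                                   "admin_state_up": admin_state_up}})

def build_subnet_request(network_id, cidr, ip_version=4):
    """Body for POST /v2.0/subnets: attach an IP subnet to a network."""
    return json.dumps({"subnet": {"network_id": network_id,
                                  "cidr": cidr,
                                  "ip_version": ip_version}})

# A cloud management system would POST these bodies to the Quantum
# service; a vendor plug-in then translates them into device actions.
body = build_network_request("web-tier")
```

The point of the plug-in framework is that this request body is identical regardless of which vendor’s plug-in ultimately realizes the network.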

Southbound APIs

While much of the industry seems to be focused on an architecture involving one monolithic SDN controller with OpenFlow as the omnipotent one-size-fits-all southbound control API, top-down control of all devices may not be the best approach to foster an open ecosystem. In fact, it’s likely that more than one SDN controller will be part of an overall solution. For example, the fabric may have its own SDN controller for autonomous fabric-level control. Another controller, such as NVP, might be present to provide network virtualization and a northbound API for the logical network. As such, it’s important to address the need for configuration and network state synchronization between autonomous devices and multiple SDN controllers – something OpenFlow alone is not equipped to handle. In other situations, not every device or vendor will be able to support OpenFlow, or top-down control of the device’s forwarding plane may not be completely necessary or the best approach.

For these reasons, in addition to using protocols like OpenFlow where appropriate, a standard configuration & state federation API serves an important role in gluing together a truly open architecture. The most widely deployed example of configuration/state federation is the OVSDB_config API implemented by OVS. Federation is one of several areas in open source and SDN standardization efforts where we intend to contribute.
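As a rough illustration of what configuration/state federation looks like on the wire, OVSDB exchanges JSON-RPC messages; a controller can issue a `monitor` request to receive a snapshot of selected tables plus a stream of subsequent changes. The sketch below builds such a request body; the database and table names are the ones OVS exposes, but treat this as a sketch rather than a complete client.

```python
import itertools
import json

_ids = itertools.count()  # JSON-RPC request ids must be unique per session

def ovsdb_monitor(database, tables):
    """Build an OVSDB 'monitor' JSON-RPC request: ask the switch to send
    the initial contents of the given tables, then push every insert,
    delete, and modify as it happens."""
    select = {"initial": True, "insert": True,
              "delete": True, "modify": True}
    return json.dumps({
        "method": "monitor",
        "params": [database, None, {t: {"select": select} for t in tables}],
        "id": next(_ids),
    })

# Watch bridge and port state in the Open_vSwitch database.
msg = ovsdb_monitor("Open_vSwitch", ["Bridge", "Port"])
```

Because the channel carries durable configuration and state rather than per-flow forwarding decisions, it complements OpenFlow rather than competing with it.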

Wire Format

Much has already been written about wire formats; for more information, consult our earlier post on VXLAN and STT. VMware has played an integral role with regard to the network virtualization data plane with both the VXLAN and STT protocols, and is committed to continuing the development of open, standardized protocols that all vendors can use. (Specifications of the two protocols available here and here.)

Statistics Collection & Telemetry

Another area of focus for an open networking ecosystem should be defining a framework for common storage and query of real-time and historical performance data and statistics gathered from all devices and functional blocks participating in the network. No such common framework exists today. Similar to Quantum, the framework should provide for vendor-specific extensions and plug-ins. For example, a fabric vendor might provide telemetry for fabric link utilization, failure events and the hosts affected, and supply a plug-in for a tool vendor to query that data and subscribe to network events.
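Since no such framework exists yet, the sketch below is purely hypothetical: a minimal collect/store/query surface of the kind described above, where vendor components push timestamped samples and tool vendors either query history or subscribe to events as they arrive. All names here are ours for illustration.

```python
import time
from collections import defaultdict

class StatsStore:
    """Hypothetical telemetry framework sketch: devices record samples,
    tools query historical data or subscribe to a real-time feed."""

    def __init__(self):
        self._samples = defaultdict(list)  # (device, metric) -> [(ts, value)]
        self._subscribers = []

    def record(self, device, metric, value, ts=None):
        """A device or plug-in pushes one timestamped sample."""
        ts = time.time() if ts is None else ts
        self._samples[(device, metric)].append((ts, value))
        for callback in self._subscribers:      # real-time fan-out
            callback(device, metric, value, ts)

    def query(self, device, metric, since=0.0):
        """Historical samples for one metric since a timestamp."""
        return [(t, v) for t, v in self._samples[(device, metric)] if t >= since]

    def subscribe(self, callback):
        """A tool registers to receive every new sample as it arrives."""
        self._subscribers.append(callback)

# A fabric vendor reports link utilization; a monitoring tool queries it.
store = StatsStore()
store.record("fabric-sw1", "link_util", 0.42, ts=100.0)
```

The interesting standardization questions are exactly the ones this toy elides: a common schema for metrics and events, and a vendor-neutral query and subscription protocol.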

Bringing It All Together

VMware’s own network virtualization platform (NVP) is a perfect example of how the aforementioned key areas—OVS, a northbound API, multiple southbound APIs, and a standardized wire format—come together to build a platform that offers customer choice and a framework for ecosystem partners to integrate with. For example, any hardware switch vendor will be able to interface to NVP as a virtual/physical gateway for attaching non-virtual hosts to virtual networks, using the OVSDB_config API and the VXLAN tunneling protocol. NVP works with OVS, implementing control and configuration using open APIs such as OpenFlow and OVSDB_config, giving customers a choice of using NVP on any hypervisor platform that supports OVS, such as ESXi, KVM, and Xen (with Hyper-V support forthcoming). Furthermore, anyone who wishes can develop their own SDN controller that works with OVS.

NVP is a platform focused on full network virtualization: providing all the properties of a physical network in a virtual network, decoupled from the network hardware, with the provisioning speed and operational model of a virtual machine, and able to run on any general-purpose network hardware. At the same time, we intend to make sure NVP can work with other SDN platforms that may provide a management/control point for other areas of the infrastructure, such as the physical network fabric.

VMware is committed to an open ecosystem in the network virtualization and SDN era. As a company we’ve made significant investments in leading open source networking projects such as Open vSwitch and OpenStack Quantum, and in SDN-enabling protocols such as VXLAN and OpenFlow. Our development team includes sizeable groups of developers dedicated full-time to the development of both Open vSwitch and OpenStack Quantum, contributing production-quality code to both projects. We also expect that other open-source SDN projects will come together in the near future, and we look forward to making similar contributions and investments in those efforts as well.

