
Let’s get logical – the case for network virtualization

In my last post, I hinted at the changes happening in the data center, especially with respect to networking and security architectures and deployments. To say that transformational changes are under way in this area is an understatement: the premier networking event, Interop, goes live this week in Las Vegas and will showcase some of these trends. One session of interest is hosted by the recently formed Open Networking Foundation (ONF), which will also hold an informational session on Wednesday, May 11th, from 11:00 to 11:45 to highlight the ONF vision and the future of Software Defined Networking (SDN).

The rampant adoption of server virtualization and consolidation, the emergence of server-hosted desktops, and growing interest in private and hybrid clouds are highlighting the shortcomings of current networking and security architectures.

The following is a simplified view of existing data center networking architectures:


       Virtualized servers are connected to virtual switches (1), which are connected to Top of Rack (ToR) physical switches (2). ToR switches are cabled into the core network (3). Traffic enters/leaves the data center via edge routers (4). Additional core network services like firewalls and IDS/IPS devices are implemented in End of Row (EoR) configurations (5). This results in efficient cabling and good network designs.

Typically, hosts are segregated into VLANs/subnets, and VMs are restricted to deployment within hosts in their respective “silos”. First-level security is achieved by hair-pinning traffic out of the VLAN to the firewall/IPS service nodes.
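This hairpin path is easy to see in a toy model (all names here are illustrative, not from any product): intra-VLAN traffic stays local, while inter-VLAN traffic must detour through the central service node.

```python
# Toy model of legacy VLAN segmentation: intra-VLAN traffic is switched
# locally, while inter-VLAN traffic is "hair-pinned" through a central
# End-of-Row firewall/IPS node before reaching its destination.

def forward_path(src_vlan: int, dst_vlan: int) -> list:
    """Return the hops a frame takes between two VMs (illustrative)."""
    if src_vlan == dst_vlan:
        # Same broadcast domain: stays on the virtual/ToR switches.
        return ["vswitch", "tor-switch", "vswitch"]
    # Different VLANs: traffic must leave the silo and transit the
    # EoR firewall/IPS service node -- the first level of security.
    return ["vswitch", "tor-switch", "core", "eor-firewall",
            "core", "tor-switch", "vswitch"]

print(forward_path(10, 10))  # short, local path
print(forward_path(10, 20))  # hair-pinned through the firewall
```

Note how every inter-VLAN flow, even between adjacent racks, pays the round trip to the service node; this is the choke point discussed below.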

This architecture worked well when servers were physical and static, with most of the traffic being “North-South”, i.e. client-server traffic. With the virtualization of servers, server consolidation is accelerating, and the amount of North-South traffic has exploded. But more challenging to the architecture is the fact that new workloads are provisioned and de-provisioned more rapidly, workloads move more freely across hosts, and there is a lot more “East-West” traffic driven by control traffic (e.g. vMotion, DRS, HBR) and access to shared services like storage and backup. When we add notions of multi-tenancy and scale requirements to this new dynamic, fluid NS+EW mesh, the architecture really begins to show its age.

Some of the issues are:

  • Host-centric, physical, and static segmentation based on VLANs/subnets, curtailing VI admins’ flexibility to consolidate VMs across hosts.
  • VLAN & switch TCAM limits, VLAN sprawl
  • Onerous lock-step requirements between VM provisioning and network re-mapping
  • Firewall rule explosion and static IP-based rules
  • Hair-pinning of traffic to firewalls and L4-7 services, resulting in choke points
  • Shared edge services
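The VLAN limit in the second bullet is easy to quantify: the 802.1Q tag carries a 12-bit VLAN ID, so a single physical fabric tops out at 4094 usable segments (IDs 0 and 4095 are reserved). A quick back-of-the-envelope check (segments-per-tenant figure is illustrative):

```python
# 802.1Q carries a 12-bit VLAN ID; IDs 0 and 4095 are reserved.
VLAN_ID_BITS = 12
usable_vlans = 2 ** VLAN_ID_BITS - 2   # 4094 usable segments

# If each tenant needs, say, 5 isolated segments
# (web/app/db/mgmt/backup), the fabric runs out of VLANs well
# before reaching 1000 tenants.
segments_per_tenant = 5
max_tenants = usable_vlans // segments_per_tenant

print(usable_vlans)   # 4094
print(max_tenants)    # 818
```

That ceiling, shared across every tenant and application on the fabric, is what drives both VLAN sprawl and the pressure on switch TCAM tables.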

In summary, the rigidity and static nature of current network architectures stand in the way of the agility, flexibility and dynamic requirements of modern workloads. Network re-mapping becomes an ongoing, onerous task.

A better approach is needed, one which separates the consumption of these network constructs from the underlying physical network. We need to un-tether VMs from the underlying physical network, much as we un-tethered OSes from the server hardware. The approach is in line with comments made in an earlier post.

From the perspective of a tenant, org, or app owner, we need to abstract and simplify the underlying network/security architecture, and present consumable constructs such as logical networks, edges and zones, as shown below.


Specifically, the requirements are simply stated as:

•     VM workloads need to be optimally placed (manually or automagically) across the host cluster, untethered from the underlying network segmentation.

•     Each vApp (logical collection of VMs) is given its own logical network(s); each logical network represents an isolated L2 broadcast domain. “A” above represents this “vApp” scenario.

•     Additionally, each org (or tenant in public clouds) can opt for a logical edge, providing edge security & networking services e.g. firewall, NAT, VPN capabilities, and the ability to route between logical networks. “B” above represents this “VDC” scenario.

•     Furthermore, each tenant can further opt to partition its workspace into Trust Zones, with associated security policies. “C” above captures this scenario. Note such Trust Zones could either mirror virtual abstractions like VDCs, vApps, and PortGroups, or be flexibly abstracted based on identities, sensitive data, or administrative span-of-control concerns, for example.
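The three scenarios above can be sketched as simple consumable objects, a hypothetical model (not any vendor’s API) to make the abstractions concrete:

```python
# Sketch of the consumable constructs: logical networks (scenario A),
# logical edges (scenario B), and trust zones (scenario C).
# All names are illustrative, not from any product.
from dataclasses import dataclass, field

@dataclass
class LogicalNetwork:
    """An isolated L2 broadcast domain handed to a vApp (scenario A)."""
    name: str
    vms: list = field(default_factory=list)

@dataclass
class LogicalEdge:
    """Per-org edge services: firewall, NAT, VPN, and routing between
    logical networks (scenario B)."""
    services: list = field(default_factory=lambda: ["firewall", "nat", "vpn"])

@dataclass
class TrustZone:
    """A tenant-defined partition with its own security policy
    (scenario C)."""
    name: str
    members: list = field(default_factory=list)
    policy: dict = field(default_factory=dict)

@dataclass
class Tenant:
    name: str
    networks: list = field(default_factory=list)
    edge: LogicalEdge = None
    zones: list = field(default_factory=list)

# A tenant consumes these constructs without knowing how the
# physical network realizes them.
acme = Tenant("acme")
acme.networks.append(LogicalNetwork("acme-web", vms=["web-01", "web-02"]))
acme.edge = LogicalEdge()
acme.zones.append(TrustZone("pci", members=["web-01"],
                            policy={"allow": ["https"]}))
```

The point of the sketch is the separation of concerns: the tenant deals only in these logical objects, while the provider side (discussed next) maps them onto the physical fabric.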

In order to realize such a logical representation of networks, edges and zones, we need to work together across the industry (network/security/NIC vendors, virtualization providers, cloud admins) on the provider side of the equation. Let’s touch on the key areas undergoing change:


  1. The Virtual Distributed Switch (VDS) needs to provide a homogenized “sea of ports” across the cluster of hosts, grouped into “port groups” that are allocated on demand to each vApp and presented as “logical networks”. Port Groups are ideally backed by isolated L2 broadcast domains that span subnets and operate in a tenant-specific namespace. There is room for innovation in delivering such “multi-tenant, L2 overlays”.
  2. The Access tier, typically represented by Top of Rack switches, maps the logical “sea of ports” into the physical network infrastructure. Top of Rack switches are fast evolving to support higher bandwidth, lower latency, greater port density, convergence (FCoE), and ingress-port-to-egress-port one-hop routing. We will continue to see tighter linkages between the virtual switches and physical NICs above, and the network fabric below. For example, support for multi-tenancy and programmable elasticity:
    • Multi-tenancy refers to the need to have separate addressing namespaces for each tenant, to avoid MAC/IP/broadcast overlapping.
    • Programmable elasticity refers to the need to control creation of logical networks, and add/delete ports on demand.
  3. To meet the demands of such dynamic, fluid virtualized environments, where logical networks are allocated on demand, the network fabric continues to become “Fast, Fat and Flat”:
    • Fast means the ongoing move from 1 Gbps to 10 Gbps and beyond, to support the explosion in both north-south and east-west traffic.
    • Fat means moving away from Spanning Tree Protocol to multi-pathing, which uses inter-device links more efficiently.
    • Flat refers to the emerging trend of moving from 3-tier networks (Core/Aggregation/Access) to 2-tier (Spine/Leaf), or even 1-tier, driven by low latency and simplified fabrics.
    • Note that the distinction between the Access tier and the Core/Aggregation tier is beginning to blur, so we can ultimately consider items 2 & 3 as a collective network fabric requirement.
  4. The WAN Edge tier itself needs to get virtualized to support logical edges, available to each tenant/org on demand. Key drivers are the scale-out (versus scale-up) architecture, the ability to have customizable (even self-service, eventually) edge services on a pay-as-you-go basis, and the ability to provision such services on demand, e.g. edge firewalls, VPNs or load balancers. Note that some capabilities, e.g. DDoS detection and protection, are better left at the physical edge as a first line of defense, where cross-tenant context is useful.
  5. Likewise, current service node architectures in the data center need to get logical. Today, firewalls, IDS/IPS, email spam filters, NAC devices, etc., are implemented as service nodes sitting in an “End of Row” configuration, with traffic steered to such a node via “hair-pinning”, i.e. traffic is forced to leave the VLAN and steered toward the service node, where several functions are chained. With increased server consolidation, increased east-west traffic, logical networks, multi-tenancy, etc., such devices become potential choke points, firewall rules explode, and VLANs are depleted. There are already examples of such purpose-built services getting virtualized and logically inserted into the virtual plane on a per-tenant basis.
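As a concrete illustration of the “multi-tenant, L2 overlays” named in item 1: tunneling schemes such as VXLAN carry a 24-bit network identifier in the encapsulation header, giving roughly 16 million logical segments on one shared fabric and letting tenants reuse overlapping MAC/IP addresses. A minimal sketch (illustrative field layout, not a wire-accurate implementation):

```python
# Sketch of a VXLAN-style overlay: a 24-bit Virtual Network Identifier
# (VNI) scopes each tenant's L2 segment, so addressing can overlap
# between tenants. Illustrative only -- the real header also carries
# outer UDP/IP fields, elided here.

VNI_BITS = 24
MAX_SEGMENTS = 2 ** VNI_BITS          # 16,777,216 logical networks

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prefix the tenant's L2 frame with its VNI."""
    assert 0 <= vni < MAX_SEGMENTS
    return vni.to_bytes(3, "big") + inner_frame

def decapsulate(packet: bytes) -> tuple:
    """Recover the VNI and the original tenant frame."""
    vni = int.from_bytes(packet[:3], "big")
    return vni, packet[3:]

# Two tenants reusing the exact same inner frame/addressing -- the VNI
# keeps their traffic apart on the shared fabric.
frame = bytes.fromhex("001122334455")
pkt_a = encapsulate(5001, frame)
pkt_b = encapsulate(5002, frame)
assert decapsulate(pkt_a) == (5001, frame)
assert decapsulate(pkt_a)[0] != decapsulate(pkt_b)[0]
```

Compare the 24-bit identifier space here with the 4094-VLAN ceiling of 802.1Q: the overlay removes the segment-count bottleneck while the physical fabric stays flat and simple.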


We are entering a new phase of data center networking, driven by the needs of modern virtualized/cloud workloads. We need to transition from an era of static, host-centric, IP-centric, pre-segmented networks to a modern, efficient, programmable network fabric that provides dynamically allocatable logical abstractions to the new workloads.


Let’s get logical!


