The Ability of the Data Center LAN to Support Virtualization


There is no doubt that over the last couple of years the topic of virtualization has received considerable attention in the trade press.  Virtualization, however, is more than a media event.  For example, our research indicates that over 90% of IT organizations have implemented at least some server virtualization, and that in the coming year most IT organizations will increase the percentage of their servers that are virtualized.  In addition, the topic of virtualization is bigger than just server virtualization: today, almost every component of IT can be, and is being, virtualized.

This month's discussion will focus on virtualization.  As in previous discussions, we will start with a fairly high-level question and then ask more granular questions over the month.  To make this discussion somewhat interactive, please feel free to send us questions or comments.


To comment on the discussion or to suggest further questions, please send email to Jim Metzler.


Virtualization broadly defined is a hot topic for virtually all IT organizations. What impact does virtualization have on data center LAN switching?

Virtualization within the Data Center is now taken for granted, with some declaring that ‘Cloud Computing’ will be the choice of most enterprises and that applications and information will become commodities. Experience has proved one thing: the Data Center of the future cannot be built on the technology of the past. General-purpose products, outmoded techniques, and legacy designs cannot be re-packaged as ‘Data Center-ready’. The industry will take the best and leave the rest. Ethernet is readily available, cost-effective, extensible, and, as the 40/100 Gigabit developments prove, scalable. However, many existing deployment methodologies are no longer an option.

The benefits implied by large-scale application and server virtualization – higher efficiency levels, faster time-to-service, reduced hardware costs, smaller footprint – set expectations that create new challenges for the underlying network. Traditional networks were conceived before the demands of a highly virtualized compute environment.

Specifically, the next-generation Data Center network needs to enable:

 •  Virtual Machine connectivity optimization and life-cycle migration – virtualized servers are brought into service very quickly and very dynamically; the network must react just as quickly and seamlessly

 •  Effective segmentation of traffic by application – a myriad of business and operational drivers mandate support for a series of full-featured virtualized networks; much of this is Layer 2-only, but Layer 3 functionality is also key

 •  Efficient service provisioning and orchestration – time-to-service demands dictate that the network respond dynamically and automatically to service changes; innovations here can reduce the change-administration burden and eliminate change-induced errors

Provisioning needs to be simpler, and availability and performance need to scale seamlessly. Empowering a truly commoditized approach to service delivery requires a solution that is characterized by simplification, and a standards-based approach will help ensure an open architecture that avoids costly or inflexible lock-in.

There are many degrees to which a switch is virtualized, defined by the level of fault containment and management separation provided. The main elements that characterize the degree to which a network switch is virtualized include:

  Control plane: The capability to create multiple independent instances of the control plane elements enables the creation of multiple logical topologies and fault domains.

  Data (or forwarding) plane: Forwarding tables and other databases can be partitioned to provide data segregation.

  Management plane: Well-delineated management environments can be provided independently for each virtual device.

  Software partitioning: Modular software processes can be grouped in partitions that are dedicated to specific virtual devices, thus creating well-defined fault domains.

  Hardware components: Hardware components can be partitioned and dedicated to specific virtual devices, allowing predictable allocation of hardware resources to different virtual devices.

Cisco switches support all these degrees of virtualization. Cisco's innovation, the virtual device context (VDC), allows a switch to be virtualized at the device level into separate logical entities, each maintaining its own unique set of running software processes, having its own configuration, and being managed by a separate administrator.

IMPACT: The capability to consolidate multiple functions onto fewer devices leads to a simplified architecture, which provides operating efficiencies by reducing both the number of tasks to be processed and the number of elements to be maintained, without sacrificing efficiency, utilization, or scalability. VDCs improve CapEx and OpEx by optimizing power consumption, space requirements, device utilization, maintenance operations, and, ultimately, speed of service delivery.
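As a rough illustration of device-level virtualization of the kind described above, the separation of configuration, administration, and fault domains can be modeled like this (a hypothetical sketch, not Cisco's implementation; all class and context names are invented):

```python
# Illustrative model of virtual device contexts (VDCs): each logical switch
# on one physical chassis gets its own configuration, its own administrator,
# and its own fault domain. Names here are hypothetical.

class VDC:
    def __init__(self, name, admin):
        self.name = name
        self.admin = admin      # each VDC is managed by a separate administrator
        self.config = {}        # independent running configuration
        self.failed = False

    def configure(self, key, value):
        self.config[key] = value

class PhysicalSwitch:
    def __init__(self):
        self.vdcs = {}

    def create_vdc(self, name, admin):
        self.vdcs[name] = VDC(name, admin)
        return self.vdcs[name]

    def fail_vdc(self, name):
        # A fault is contained within one VDC; the others keep running.
        self.vdcs[name].failed = True

chassis = PhysicalSwitch()
prod = chassis.create_vdc("prod", admin="alice")
test = chassis.create_vdc("test", admin="bob")
prod.configure("vlan", 100)
test.configure("vlan", 200)
chassis.fail_vdc("test")
print(prod.failed, prod.config)   # prod is unaffected by the fault in test
```

The point of the sketch is the fault containment: failing the "test" context leaves the "prod" context's state and configuration untouched.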

Virtualization has a significant impact on switching because it affects the network in many ways. From the technology perspective, virtualization consolidates the network and increases performance requirements. It increases the load coming from servers, so the need becomes acute for highly dense, wire-speed L2/3 switching at 10/40GbE. Data center network designs are flattening out to support east-west traffic flows and, very importantly, to reduce hops and latency.
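The hop-count argument for flatter designs can be made concrete with textbook numbers (the per-hop latency figure below is purely illustrative):

```python
# Why flatter designs cut hops and latency for east-west traffic:
# in a classic three-tier design, rack-to-rack traffic traverses
# access -> aggregation -> core -> aggregation -> access (5 switch hops);
# in a two-tier leaf/spine fabric it crosses leaf -> spine -> leaf (3 hops).

def hops_three_tier():
    return 5   # access, aggregation, core, aggregation, access

def hops_leaf_spine():
    return 3   # leaf, spine, leaf

per_hop_latency_us = 2   # illustrative per-switch latency, microseconds
for name, hops in [("three-tier", hops_three_tier()),
                   ("leaf/spine", hops_leaf_spine())]:
    print(f"{name}: {hops} hops, ~{hops * per_hop_latency_us} us switching latency")
```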

Most important is the human side: Server virtualization brings with it a set of operational challenges, from dealing with configuration challenges around Virtual Machine (VM) switching, to managing virtual machine mobility in the network, to providing virtual machine location and inventory in the network. Today, there are only a few tools available to the network administrator that provide visibility, control and insight into the virtual machine environment. This white paper outlines the problem well.

Widespread deployment of high-density, highly virtualized, federated applications will require much larger-scale, lower-latency, flatter, layer 2-oriented network architectures to support server-to-server traffic and vMotion/Live Migration-driven virtual server migration.

These high-density deployments will push much higher server hardware utilization rates and drive demand for new network architectures that provide higher speed server access connections and larger-scale core/interconnect capacity.

These network architectures will be built on higher performance platforms and employ new networking tools that displace legacy Spanning Tree and Virtual Router Redundancy protocols to deliver much higher network performance and link utilization while significantly improving network availability and simplifying network management.

Building upon and complementing existing innovative tools like HP’s Intelligent Resilient Framework, a new set of standards-based, layer 2-focused multi-pathing technologies like TRILL and SPB will further empower customers to deploy more scalable, more highly-available, flatter network architectures that propel server virtualization.

IRF is an innovative HP switch platform virtualization technology that allows customers to dramatically simplify the design and operations of their data center and campus Ethernet networks.

HP IRF overcomes the limitations of traditional Spanning Tree Protocol (STP)-based and legacy competitive designs by delivering new levels of network performance and resiliency.

What functionality needs to be in a data center LAN switch in order to support the dynamic movement of virtual machines between physical servers?

Managing an increasingly "virtual" data center has become a daunting task for data center managers. Managing the assignment and allocation of highly dynamic and mobile virtual workloads across physical and virtual networks has added tremendous complexity to overall data center network operations and administration.

For example, the configuration of servers, virtual machines, and physical and virtual networks (vSwitches) can often be complex and difficult to coordinate across IT staff. Workload adds, moves, and changes can be slow and error-prone. The lack of a single-pane view of the virtual and physical network infrastructure makes troubleshooting difficult, if not impossible.

The HP Intelligent Management Center (IMC) unifies physical and virtual network management and helps IT overcome the challenges of administering the new virtual server edge. The solution provides a unified view into the virtual and physical network infrastructure to help accelerate application and service delivery, simplify operations and boost network availability.

IMC addresses this challenge by delivering the following innovative virtualization optimized capabilities:
 •  Automatic discovery of virtual machines, virtual switches and their relationships with the physical network
 •  VM and virtual switch resource management, including the creation of virtual switches and port groups
 •  Virtual/Physical topology views and status indicators for networks, workloads and virtual switches
 •  Automatic reconfiguration of network policies that "move" with VM/workloads as they "move" within or across the data center.

The upshot for customers is that HP IMC can help eliminate service interruptions caused by virtual/physical network configuration errors, reduce administration and troubleshooting effort by providing unified management of physical and virtual network infrastructure through a single pane of glass, and ultimately accelerate the delivery of new applications and services by automating configuration of virtual and physical network infrastructure.
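To make the discovery idea above concrete, here is a hypothetical sketch of the kind of unified virtual-to-physical inventory such a tool maintains; the data model, host names, and function names are invented for illustration and are not IMC's actual API:

```python
# Illustrative unified inventory: each host carries its physical uplink
# (switch, port) and the VMs attached to each of its virtual switches,
# so a VM can be resolved all the way down to a physical switch port.

inventory = {
    "esx-host-1": {
        "physical_uplink": ("dc-switch-1", "port 1/0/24"),
        "vswitches": {
            "vSwitch0": ["vm-web-01", "vm-web-02"],
        },
    },
}

def locate_vm(vm_name):
    """Resolve a VM to its vSwitch, host, and physical switch port."""
    for host, info in inventory.items():
        for vswitch, vms in info["vswitches"].items():
            if vm_name in vms:
                switch, port = info["physical_uplink"]
                return {"vm": vm_name, "host": host, "vswitch": vswitch,
                        "switch": switch, "port": port}
    return None   # VM not found anywhere in the inventory

print(locate_vm("vm-web-02"))
```

This is exactly the lookup an administrator needs when troubleshooting: given only a VM name, find the physical port its traffic enters the network on.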

The proliferation of VMs introduces new challenges and issues for the SAN and LAN, including loss of visibility, potential security issues, difficulty with traffic isolation of applications, workload balancing, disaster recovery and management complexity. Virtual machines must also be able to move across physical servers (inter- and intra-data center) for workload balancing, failover/disaster recovery, or sharing of physical resources across different workloads/customers.

Features required to support mobility include:
 •  Security and QoS per VM
 •  Data protection per VM
 •  Scalability/performance management per VM
 •  Trending and capacity planning for each VM

To enable VM migration, the network should be able to support VM motion without impacting performance (bandwidth, CPU utilization, and memory). Thus features like 10GbE and FCoE not only help preserve performance levels, but also help contain costs.

Features such as Overlay Transport Virtualization (OTV) and Locator/ID Separation Protocol (LISP) enable VMs to move across data centers in these scenarios: OTV enables movement over L2 networks, whereas LISP does so over L3. The Cisco Nexus virtual switch helps the network administrator deploy configurations and policies to the VMs and define VM migration scenarios. A higher port count is required to support a large number of VMs on a physical server and to meet the recommendation of a dedicated port for migration traffic. Technologies such as FEX and VM-FEX, based on the emerging IEEE 802.1BR standard, deliver port scalability.
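As a conceptual illustration of LISP's locator/ID separation (the addresses and names below are invented for the sketch, not drawn from any product configuration):

```python
# LISP separates an endpoint's identifier (EID) from its routing locator
# (RLOC). When a VM migrates between data centers it keeps its EID (its IP
# address); only the mapping-system entry pointing at the site's edge
# router changes, so reachability follows the VM across L3.

mapping_system = {
    # EID (VM address)  -> RLOC (data-center edge router)
    "10.1.1.5": "192.0.2.1",      # VM currently in data center A
}

def migrate_vm(eid, new_rloc):
    # The VM keeps its IP (EID); only the locator mapping is updated.
    mapping_system[eid] = new_rloc

def resolve(eid):
    return mapping_system[eid]

migrate_vm("10.1.1.5", "198.51.100.1")   # VM moves to data center B
print(resolve("10.1.1.5"))               # traffic now steered to the new RLOC
```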

VM mobility also affects storage performance; ideally the switch should provide VM-aware features for the SAN, to give seamless connectivity to storage.

When organizations attempt to scale virtual server environments, the network often presents challenges due to Spanning Tree Protocol (STP), the growing number of GbE connections per server, low utilization, and link-failure recovery. Server clusters have traffic running between multiple racks, travelling “east-west,” so the tree topology increases latency with multiple hops and restricts bandwidth with single links between switches. STP automatically recovers when a link is lost; however, it halts all traffic through the network and must reconfigure the single path between all switches in the network before allowing traffic to flow again.

Enabling virtualization capabilities, such as Virtual Machine (VM) mobility, requires VMs to migrate within a single Layer 2 network, since non-disruptive migration of VMs across Virtual LANs (VLANs) using Layer 3 protocols is not supported by virtualization hypervisors. In traditional Layer 2 Ethernet networks, to create a highly available network, organizations designate paths through the network as active or standby using STP. While this provides an alternate path, only one path can be used at a time, which means that network bandwidth is not well utilized. Since one of the goals of server virtualization is to increase utilization of the physical server, increased utilization of network bandwidth should also be expected.

Finally, traditional DC LANs are not VM-aware, so admins must manually create VM connections.
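A back-of-the-envelope comparison illustrates the utilization point; the link counts and speeds below are illustrative, not drawn from any particular design:

```python
# With STP, redundant inter-switch links are placed in a blocking state,
# so only one active path carries traffic. With a multipath fabric
# (e.g. a TRILL/SPB-style design) all parallel links forward simultaneously.

link_speed_gbps = 10
parallel_links = 4   # links between two switch tiers

stp_usable = link_speed_gbps * 1                      # one active, rest blocked
multipath_usable = link_speed_gbps * parallel_links   # all links forwarding

print(f"STP usable bandwidth:       {stp_usable} Gbps")
print(f"Multipath usable bandwidth: {multipath_usable} Gbps")
print(f"Utilization under STP:      {stp_usable / multipath_usable:.0%}")
```

With four parallel links, STP leaves three of them idle: the fabric approach quadruples usable inter-switch bandwidth from the same hardware.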

Modern DC LAN switches must be VM-aware. This means a VM-aware switch:
 a.  places no physical barriers in the way of VM migration
 b.  is aware of VM locations and consistently applies network policies
 c.  does not require manual intervention when a VM moves to a new physical machine
 d.  removes the overhead of switching traffic from the hypervisor for maximum efficiency and functionality
 e.  supports heterogeneous server virtualization in the same network

More advanced, highly VM-aware Ethernet fabrics will allow IT organizations to broaden the sphere of application mobility, provide VM awareness, and optimize server resources for applications.

The question of what needs to be in the switch is possibly a relatively minor one, with multiple options for network signaling between the two environments, networking and compute. Perhaps the more pertinent issue is that of policy control associated with the VM, and specifically with migration. The network access control characteristics associated with a VM in one location need to follow it to the new location (or additional locations). That requires a policy-tracking function and the ability to know, proactively, whether or not the edge device in the new location is capable of applying equivalent policies. This type of orchestration requires a higher level of functionality than just the internals of an individual switch; Avaya has taken a pioneering approach through the delivery of our Virtualization Provisioning Service (VPS) unified management platform, and we also have the ability to leverage our role-based authenticated network access solution (Identity Engines).

So, how would this work in practice? With an Avaya VENA solution, the end-point provisioning feature of our open, SPB-based Network Fabric is used in conjunction with the centralized VM tracking provided by VPS. This has the following advantages: one place for network operators to track the status and connectivity of all VMs, central control of allowed versus disallowed VM moves, and central network policy management for VMs.
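A minimal sketch of the policy-tracking idea described above (the data, switch names, and functions are illustrative, not Avaya VPS code): network access policy is keyed to the VM rather than to a switch port, and a proposed move is checked proactively against the target edge's capabilities.

```python
# Policy follows the VM, not the port: each VM carries a policy profile,
# and a migration is only allowed if the destination edge device can
# apply every attribute of that profile.

vm_policies = {
    "vm-db-01": {"vlan": 30, "acl": "db-tier", "qos": "gold"},
}

edge_capabilities = {
    "edge-switch-a": {"vlan", "acl", "qos"},
    "edge-switch-b": {"vlan", "acl"},       # cannot apply per-VM QoS
}

def can_host(vm, edge):
    """Check, proactively, that the target edge supports every policy
    attribute the VM requires before allowing the move."""
    required = set(vm_policies[vm])
    return required <= edge_capabilities[edge]

print(can_host("vm-db-01", "edge-switch-a"))  # move allowed: policies follow
print(can_host("vm-db-01", "edge-switch-b"))  # move refused: QoS would be lost
```

Centralizing this check is what gives operators one place to see, and veto, moves that would silently strip a workload of its network policy.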

Of course we plan to implement any future standards-based alternative, with the under-development IEEE 802.1Qbg VEPA model being the most promising. VEPA proposes to allow the VM to explicitly signal VLAN membership requests directly to the network, ideally at the first point of interconnection, and to this end we are actively participating in the relevant working groups. By tracking the progress and capabilities of these proposals we are able to determine the right time to transition from a purely home-grown offering to an open standard.

Other than server virtualization, what other form of virtualization is likely to have a big impact on the data center LAN? What is the impact, and what has to be in the data center LAN to support that form of virtualization?

To create an end-to-end virtualized data center you need to consider virtualization well beyond basic server virtualization.

In order to abstract the offerings of a virtual environment, we need to understand the composite entities that make up a “virtual data center”: data, storage, processors, the network, and finally a super-control plane. Often when tasks are divided across a network, they are managed by different entities, and it is the job of the network to provide fault-tolerant paths between these entities or sub-modules, integrating them to produce a seamless virtualized DC environment. In particular, distributed virtual switches are virtual machine access switches: intelligent software switch implementations based on the IEEE 802.1Q standard. The Cisco Nexus 1000V Series supports server virtualization technology to provide policy-based virtual machine connectivity and mobile virtual machine security and network policy.

Other technologies to consider when building an end-to-end virtualized data center include VDCs, which allow the partitioning of a single physical device into multiple logical devices. This logical separation provides the following benefits: administrative and management separation, and failure-domain isolation from other VDCs. Virtualized services like the Cisco Virtual Security Gateway provide trusted multitenant access with granular zone-based security policies for virtual machines, and Cisco Virtual Wide Area Application Services (vWAAS), a WAN optimization solution, delivers assured application performance acceleration.

Virtualization implies the ability to make resources available across disparate physical entities with a view to enabling virtual connections to those resources; data center switches should provide seamless virtualization.

