- A TechNote on The Next Generation
- Jim Metzler
- Distinguished Research Fellow and Co-Founder
- Webtorials Analyst Division
Low Port Densities: Three Tiers
The main impetus behind the deployment of the three-tier data center LAN architecture was the low port density of earlier generations of data-center LAN switches. For example, it was common for first-generation switches to have as few as 16 or 32 ports. As a result, even a medium-sized data center required many access switches to connect its servers, and because traffic frequently had to travel between a server on one access switch and a server on another, those access switches had to be interconnected.
The most practical way to interconnect these access switches was with a second set of switches, referred to as distribution switches. In high-end data centers, the number of distribution switches was in turn large enough to require a third set of switches, known as core switches, to interconnect them.
There are many ongoing IT initiatives aimed at improving the cost efficiency of the enterprise data center. Among them are server virtualization, service-oriented architecture (SOA), shared network storage, and high-performance computing (HPC). In many cases, these initiatives are placing a premium on the ability of IT organizations to provide highly reliable, very low-latency, high-bandwidth communication among both physical and virtual servers. While the hub-and-spoke topology of the traditional three-tier data center LAN was appropriate for client-to-server communication (sometimes referred to as "north-south" traffic), it became suboptimal when applied to high volumes of server-to-server communication (or "east-west" traffic).
High Port Densities: Two Tiers
One approach to improving server-to-server communication is to flatten the network from three tiers to two: access-layer switches and aggregation/core-layer switches. A two-tier network reduces the number of hops between servers, which in turn lowers latency and potentially improves reliability. The typical two-tier network is also better aligned with server virtualization topologies, where virtual LANs (VLANs) may be extended throughout the data center to support dynamic virtual machine (VM) migration at Layer 2.
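The hop reduction is easy to quantify. Assuming the worst-case path between two servers climbs from an access switch to the top tier and back down again (a simplifying assumption; real paths can be shorter), a small sketch makes the comparison concrete:

```python
def worst_case_switch_hops(tiers):
    """Switch hops on the worst-case server-to-server path: up through
    every tier and back down, with the top tier traversed only once."""
    return 2 * tiers - 1

print(worst_case_switch_hops(3))  # three-tier design: 5 switch hops
print(worst_case_switch_hops(2))  # two-tier design:   3 switch hops
```

Each switch hop adds queuing and serialization delay and one more device that can fail, which is why trimming the path from five hops to three improves both latency and reliability.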
Reducing the architecture to two tiers is possible because modular data center switches have moved well beyond their low-port-density predecessors, providing up to 768 non-blocking 10-Gigabit Ethernet (GbE) ports or 192 40-GbE ports. Today's high-speed uplinks are often multiple 10-GbE links combined with link aggregation (LAG). Note that a single 40-GbE uplink typically outperforms a four-link 10-GbE LAG: the hashing algorithms that load-balance traffic across the LAG's member links can easily yield uneven load distribution when a majority of the traffic is concentrated in a small number of flows.
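The LAG imbalance problem can be illustrated with a toy model. The sketch below is not any vendor's actual hardware hash; it simply stands in a generic per-flow hash (here MD5 over the 5-tuple, a deliberate simplification) and a hypothetical traffic mix in which two "elephant" flows carry most of the bytes. Because a hash pins each flow to one member link, an elephant flow can never be spread across links, no matter how uniform the hash is:

```python
import hashlib

def lag_link(flow, num_links=4):
    """Pick a LAG member link by hashing the flow's 5-tuple.
    (Toy stand-in for a switch's hardware hash; real ASICs differ.)"""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return digest[0] % num_links

# Hypothetical traffic mix (Mbps): two elephant flows dominate,
# twenty small "mice" flows carry the rest.
flows = {
    ("10.0.0.1", "10.0.0.2", 6, 49152, 445):  9000,  # elephant
    ("10.0.0.3", "10.0.0.4", 6, 49153, 2049): 8000,  # elephant
}
for i in range(20):
    flows[("10.0.1.%d" % i, "10.0.2.%d" % i, 6, 50000 + i, 80)] = 100

load = [0] * 4
for flow, mbps in flows.items():
    load[lag_link(flow)] += mbps

print(load)  # per-link Mbps: whichever links the elephants hash to stay hot
```

Whichever link an elephant flow hashes to carries at least that flow's full rate, so one 10-GbE member can approach saturation while others sit nearly idle. A single 40-GbE uplink avoids the problem because any one flow can use the full pipe.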
Most high-performance modular switches already have a switch fabric that provides 100 Gbps of bandwidth to each line card, which means that as 40- and 100-GbE line cards become available, they can be installed in existing modular switches, preserving the enterprise's investment. Most vendors of modular switches are currently shipping 40-GbE line cards; 100-GbE line cards are unlikely to be widely deployed until late 2012 or 2013.
The significantly higher port densities of the current generation of data-center LAN switches are good news for IT organizations that want to implement a flatter network. More good news: it is also possible to combine multiple data-center LAN switches so that they operate as a single, very large logical switch.
However, even though IT organizations can now flatten their data-center LANs, they are not necessarily rushing to do so. The market momentum behind this approach has yet to build.
At the forthcoming Interop conference in Las Vegas, I will co-moderate, along with Mike Fratto of Network Computing, a half-day workshop on data-center LAN design, on Monday, May 7, 2012.