- The Data Center LAN Evolution Series
- A Webtorials Thought Leadership Discussion
- Dr. Jim Metzler, Moderator
- Featuring Arista, Avaya, Brocade, Cisco Systems, Extreme Networks and HP
This is the sixth and last of the monthly discussions of data center LAN switching. Each of the five previous months focused on a specific technical topic, such as the best alternative to the spanning tree protocol. This month's discussion is an interview that Jim conducted with each of the six vendors, covering a range of topics, both technical and non-technical, that relate to the evolution of the data center LAN.
- Related discussion: What Are the Best Approaches to Scale Virtual Machine (VM) Networking Beyond the Data Center?
- Related discussion: The Ability of the Data Center LAN to Support Virtualization
- Related discussion: What's the Best Alternative to Spanning Tree?
- Related discussion: Does Converging the LAN and SAN Make Sense?
- Related discussion: Does OpenFlow Make Sense in Enterprise Networks?
In order to comment on the discussion here and/or to suggest further questions, please send email to Jim Metzler or simply enter your comment into the form below.
The trade press talks a lot about the need to flatten the data center in order to reduce latency primarily for east-west applications. Other than for certain well-discussed transactions in the financial industry, can you put a monetary value on cutting data center switch latency by a few microseconds?
There are some applications for which it’s possible to put a monetary value on cutting the latency of the data center switch by a few microseconds. You mentioned one such application - financial transactions like high frequency trading (HFT) readily come to mind. Another application is high performance computing (HPC). The monetary value that is placed on reducing data center switch latency by a few microseconds is generally a function of how the company monetizes the business impact of latency.
While some IT organizations place a value on reducing switch latency by a few microseconds, most IT organizations are looking for predictable latency, not just at a box level, but end-to-end across the network and applications. Depending on the data flow, there is generally some fine-tuning that IT organizations can do to improve performance, as long as it improves the user or application experience. If, however, the fine-tuning doesn't improve application performance, IT organizations shouldn't look to optimize further.
HP has a team that is entirely focused on just the issue of very low latency switching and the applications that require this low latency. While there is interest in very low latency switching, that interest is very narrow. We see the interest currently as being largely limited to the high frequency trading markets. For most applications, cutting the data center switch latency by 700 nanoseconds or 1 microsecond doesn’t have a real impact.
It isn’t possible to put an absolute dollar value on reducing switch latency by a few microseconds. However, there are certain application suites that could benefit from lower switch latency. One such suite is data mining, which requires the sorting of huge volumes of data in a short period of time. One measure of the scale of sorting that is both necessary and possible is that in 2009 Hadoop set a record by sorting data at over half a terabyte per minute.
A possible way to quantify the value of switch latency involves the bandwidth-delay product. (The bandwidth-delay product is the product of a data link's capacity, in bits per second, and its end-to-end delay, in seconds. The result, an amount of data measured in bits or bytes, is equivalent to the maximum amount of data on the network circuit at any given time, i.e., data that has been transmitted but not yet received.) When designing LAN switches, manufacturers set the buffer size to some multiple of the TCP window size. Lowering the latency in the data center LAN switches that connect to the Top of Rack (ToR) switches reduces the number of buffers that are needed in the ToR switches, which in turn reduces the cost of those switches.
There is clearly a monetary value in being able to support east-west traffic without having to access a core switch as this saves switch ports. Relative to switch latency, the way that we look at this is that it is difficult to put an absolute dollar value on cutting the latency of a data center switch by a few microseconds. Our position is that the value of low latency switching is largely determined by the application. For example, there is a lot of interest in low latency switching from IT organizations that want to reduce cost by converging their LANs and SANs. We also see interest in low latency LAN switching from the health care and financial sectors. However, in a lot of cases reducing the latency of data center switching by a few microseconds doesn’t provide any monetary benefit.
For certain financial applications even a nanosecond of reduced delay has a monetary value. In similar fashion, in many instances of high performance computing (HPC) it is possible to put a monetary value on cutting data center switch latency by a few microseconds.