How Wireless LANs Handle Video


For months now, we've been reading predictions that video traffic is about to flood corporate networks. Meanwhile, wireless LANs (WLANs) are quickly becoming employees' default access network. Video traffic consumes significant bandwidth and is sensitive to delay, packet loss and jitter. These metrics are particularly challenging to control in Wi-Fi's interference-prone and shared-access RF environment.


As WLANs and video applications become de rigueur in the enterprise, then, how can network administrators ensure high-quality, reliable performance of multimedia applications? Let's explore this question with Manju Mahishi, Director, Wireless Products Strategy at Motorola Solutions.



What capabilities has the IEEE built into 802.11 standards to help video operate well in RF environments?

First and foremost, the advent of 802.11n technology has significantly improved the handling of video over WLAN. 802.11n introduces enhancements at the PHY and MAC layers that deliver higher throughput and more reliable wireless transmission, making it well suited to video applications.

In addition, 802.11e defines the QoS enhancements for multimedia applications. One of the QoS schemes introduced by 802.11e, Enhanced Distributed Channel Access (EDCA), which the Wi-Fi Alliance certifies as WMM (Wi-Fi Multimedia), defines four "access categories" of traffic to which prioritization levels can be assigned: Background, Best Effort, Video and Voice. This enables latency-sensitive voice and video traffic to be prioritized over other traffic in the network. EDCA/WMM essentially allows the minimum and maximum backoff slots to be tuned for each traffic stream, creating an advantage for a packet marked with higher priority.

Mapping wired side Layer 2 (802.1p) or Layer 3 (Differentiated Services Code Point, or DSCP) traffic priorities to the video access category enables an end-to-end prioritization.
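As an illustration, the wired-to-wireless mapping described here amounts to a lookup from DSCP value to WMM access category. The specific DSCP-to-AC assignments below follow common conventions (EF for voice, AF4x for video) and are illustrative assumptions; real deployments configure their own mappings.

```python
# Sketch: map wired-side DSCP markings to WMM access categories.
# The DSCP-to-AC table follows common (not mandated) conventions;
# actual mappings are deployment-specific.

DSCP_TO_AC = {
    46: "AC_VO",  # EF: voice
    34: "AC_VI",  # AF41: interactive video
    36: "AC_VI",  # AF42: streaming video
    0:  "AC_BE",  # default: best effort
    8:  "AC_BK",  # CS1: background
}

def classify(dscp: int) -> str:
    """Return the WMM access category for a DSCP value (best effort if unknown)."""
    return DSCP_TO_AC.get(dscp, "AC_BE")
```

With a table like this at the AP, a packet arriving from the wired side already marked AF41 lands in the video queue without any per-application configuration.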

You mention that there are four "access categories" - Background, Best Effort, Video and Voice.

Can you please explain a bit more here? In particular, how do "voice" and "video" differ if both are real-time? Also, does "video" assume real-time or non-real-time (streaming) traffic?

As per the WMM spec, each access category (queues in the radios) is characterized by certain parameters that define when the packets in that category are transmitted over the air - essentially controlling the priority. By default, WMM gives higher priority to voice over video.

The WMM parameters can be tuned based on the application requirements - whether it is real-time or non-real-time video. Motorola, for example, distinguishes between interactive and streaming video and gives higher priority to the former. The retry mechanisms are also less stringent for interactive/real-time video. The ability to distinguish between real-time and non-real-time video helps optimize the handling of these applications over the wireless network.

Playing devil's advocate to a certain extent, but when you say "Mapping wired side Layer 2 or Layer 3 traffic priorities to the video access category enables an end-to-end prioritization," doesn't this work only if the end-points are wired?

I can see how this would be helpful, for example, when streaming video from YouTube because the video would be prioritized "half duplex" in coming from YouTube to the device.

But this is streamed - in which case (imho) the performance is not such a big issue - and I'm not sure what this gets me in a full duplex video chat.

The mapping of Layer 2 and Layer 3 priorities works with wireless end points.

In the scenario of wired to wireless traffic, all packets from the wired side are mapped to the appropriate access category (AC) by the wireless access point (AP) and transmitted with the right priority level. For example in the case of a wireless client viewing a YouTube channel, the AP receives video traffic from the wired side marked with the appropriate priority. All those packets are sent to the video queue and will get priority over best effort and background transmissions.

In the case of wireless to wired traffic, the application on the client device should mark the packets with the right priority. Based on that priority, the client will put the video packets in the appropriate video queue and schedule it for transmission. For example, if the application is video chat and those packets are marked appropriately, then they will be queued in the video queue and as per the WMM spec, these packets will get priority over best effort and background traffic.

Every 802.11n client should have WMM enabled by default (per spec). This will ensure end-to-end prioritization for the interactive video chat use case.


Something to note here about mapping L2/L3 priorities is that tagging uplink traffic at L2 (.1p) requires that each AP support VLAN tagging. Not doing so breaks end-to-end L2 QoS.

How do options added by 802.11n, like channel bonding, block acks, frame aggregation, and transmit beamforming, benefit video?

Channel bonding increases the operating bandwidth to 40MHz, thus enabling more capacity for HD streaming. Channel bonding is more useful in the 5GHz band, where more channels are available. (Channel bonding in the 2.4GHz band, which has just three non-overlapping channels, is not recommended.)

With frame aggregation, the time between frames is reduced. Also, if every packet is acknowledged, bandwidth is wasted because the ACKs go at the highest basic rate (24Mbps for a/g networks), which is a much lower rate than the data traffic.

Block ACK enables acknowledgment of aggregated frames and saves precious bandwidth, freeing capacity for HD video streaming.
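A back-of-the-envelope calculation shows the scale of the savings. The frame counts and sizes below are illustrative assumptions (a 64-frame aggregate, ACKs at the 24Mbps basic rate mentioned above), and the figure understates the real gain, since it ignores the PHY preamble and interframe spacing that each individual ACK also costs.

```python
# Rough airtime spent on acknowledgments, with and without block ACK.
# Ignores preambles and interframe spacing, which add further fixed
# per-ACK overhead in practice (so real savings are even larger).

ACK_BITS = 14 * 8         # an 802.11 ACK frame is 14 bytes
BLOCK_ACK_BITS = 32 * 8   # a compressed block ACK is ~32 bytes
ACK_RATE = 24e6           # ACKs sent at a basic rate (24 Mbps here)
FRAMES = 64               # frames in one aggregate (illustrative)

individual = FRAMES * ACK_BITS / ACK_RATE * 1e6  # one ACK per frame, in us
block = BLOCK_ACK_BITS / ACK_RATE * 1e6          # one block ACK for all 64

print(f"64 individual ACKs: {individual:.1f} us, one block ACK: {block:.1f} us")
```

Even with these simplifying assumptions, a single block ACK replaces roughly 300 microseconds of acknowledgment airtime with about 11.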

The net effect of beamforming is to improve the overall signal level at the client. Transmit (Tx) beamforming techniques fall into two categories: explicit and implicit. Explicit beamforming requires client-side support, and today there is not a significant deployment of clients with Tx beamforming support. Implicit beamforming does not require client support but is suboptimal, since there is no feedback from the client.

With real multipath and mobile clients, the Tx beamforming system gain is really only about 2 to 3dB - if it works properly. An AP with high Tx power or a good antenna would suffice in most cases.

Overall, with regards to Tx beamforming, while it certainly has some merits, we do not see it benefiting video a whole lot in a well designed network that provides good signal quality where the clients operate.


Manju is right about TxBF. It's not used at short range because clients are already at max data rates and not used at long range because there are too many reflective paths for the DSP to decode. That means that it's only good at mid range, and only really useful if it's supported on both sides (client and AP). Even if it is supported, the net effect is only about 2-3 dB, which may yield 1 data rate if you're lucky. I see Cisco advertising their proprietary version of TxBF as adding "65%", but that's just marketing spew since that theoretical 65% is a bump of one data rate (e.g. 12 Mbps to 18 Mbps) best case. Until the Wi-Fi Alliance gives us a certification that includes TxBF, it's not worth much. Additionally, using TxBF means that you can't use multiple spatial streams in the current 3-spatial stream silicon (the first generation to support TxBF), so TxBF is only good at slow data rates. Which do you prefer, fast data rates using multiple spatial streams or some tiny bit better signal with a single spatial stream? :)


Doesn't MIMO/MRC on the receiver recoup the impact of multipath, whether the signal was beamformed or not, and wouldn't this mean it also extends the higher data rates further out to the edge?

Are 802.11 QoS standards alone adequate? In other words, how effective is WMM by itself at supporting high-quality video?

802.11n together with EDCA/WMM QoS enhancements certainly improve the handling of video but by themselves are not sufficient to guarantee reliable delivery under dynamic and heavily loaded RF conditions. Deploying video over wireless requires several network enhancements.

EDCA/WMM, since it is based on a shared access/fairness model, cannot guarantee that low-priority traffic will not be transmitted. It cannot guarantee bandwidth, latency and jitter required for acceptable video performance. It works well in the case where the network is not heavily loaded. It also relies on the wireless client devices and access points to control access to the medium and prioritization.

Some of these limitations can be overcome by implementing admission control mechanisms. The basic idea behind admission control is to limit the number of streams within an access category to preserve bandwidth, limit the latency and protect voice/video streams already in play.

When, where and by whom do the QoS characteristics get marked?

Something like using the IP DiffServ fields? This would seem to imply that each individual device marks the traffic type?

Also, this would seem to imply that the wireless device accessing the network needs to have sufficient intelligence to make these markings.

If the devices (such as a SmartPhone, for instance), lack this intelligence, do APs have sufficient intelligence for doing deep packet inspection in order to determine the traffic type?

The priority of the packets can be marked at the following levels:

• Application residing on the client or server: For example, most VoIP phones mark the packet before giving it to the radio for queuing.

• Client or server device marking packets: For example, Windows 7 has the capability of changing packet priorities originating from different applications.

• Infrastructure level:

o Wireless AP: The AP can change the priority of a packet based on a match with configured firewall rules. Motorola APs support the capability to inspect traffic based on multiple rules including any packet with specific port numbers, such as 5060 (SIP), or packets from specific source/destination MAC and IP addresses.

Motorola also has an Application Level Gateway (ALG) for protocols such as SIP and SCCP. The AP can recognize the SIP streams and mark them with appropriate priority in either downstream or upstream direction.

o Infrastructure devices like wired routers and switches: These devices have the capability to mark the packets with L2 (802.1p) or L3 (DSCP/DiffServ) markings/priorities.
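For the application-level marking described above, a client application can set the DSCP bits on its own traffic through the standard IP_TOS socket option. This is a minimal sketch marking a UDP socket for video-class treatment (AF41 is an assumed, commonly used video code point); whether the marking survives to the AP depends on the operating system and network policy.

```python
import socket

# Mark a UDP socket's traffic with DSCP AF41 (decimal 34), a common
# class for interactive video. The TOS byte carries DSCP in its upper
# six bits, so the DSCP value is shifted left by two.
AF41 = 34
tos = AF41 << 2  # 0x88 = 136

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Verify the option took effect on this host.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Every datagram sent on that socket then carries the AF41 marking that an AP's DSCP-to-access-category mapping can act on.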


While I don't disagree with any of this, I think there's more to it than this really. With everyone going to a distributed architecture, each AP has to have a much more granular and powerful QoS engine and both uplink and downlink airtime fairness mechanisms. Rate limiters, classifiers, markers, schedulers, and per-user queues should all be part of a strong QoS engine within each AP.

How might the call admission control component of 802.11e work with WMM to make video experiences better?

The WMM Traffic Specification (TSPEC) standard specifies a means for client devices in the network to request bandwidth and priority. The process is for the AP to advertise in the beacon whether admission control is mandatory for a particular access category, and for the client device and AP to negotiate a TSPEC that results in the client getting a TXOP (Transmit Opportunity). The TXOP specifies how long the client device can use the medium without contention. This negotiation lets the client determine whether the AP can provide the required bandwidth.

Again, how does a client device request bandwidth? And what classes (or percentages) of client devices have the intelligence to do this?

Also, what happens when there is not enough bandwidth for a "call" to be admitted? In the old voice world, we would call this a "busy signal."

If the client is compliant with the TSPEC part of WMM, it requests bandwidth using TSPEC mechanisms. Basically, the client sends an 802.11 action frame specifying its airtime requirement to the AP. The AP then factors in these airtime requirements to implement an admission control mechanism in which calls/streams are not admitted if there is insufficient bandwidth.

There are not many client devices available today that support TSPEC. For non-TSPEC-capable clients, Motorola APs have the intelligence to calculate airtime usage over a period of time and include it in the admission control mechanism. This avoids the problem of non-TSPEC-capable clients degrading performance and oversubscribing the AP's capacity.
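The airtime-based admission control described here can be sketched as a running budget: each admitted stream reserves a fraction of the medium, and new requests are refused (the "busy signal") once the budget is spent. The 80% ceiling and per-stream airtime fractions below are illustrative assumptions, not parameters from any product.

```python
# Sketch of airtime-based admission control: admit a stream only if its
# requested share of the medium fits under a configured ceiling. The 0.8
# ceiling (20% headroom for management and best-effort traffic) and the
# per-stream airtime fractions are illustrative assumptions.

class AdmissionControl:
    def __init__(self, ceiling: float = 0.8):
        self.ceiling = ceiling
        self.used = 0.0  # fraction of airtime already reserved

    def request(self, airtime_fraction: float) -> bool:
        """Admit the stream if it fits; otherwise refuse (a 'busy signal')."""
        if self.used + airtime_fraction <= self.ceiling:
            self.used += airtime_fraction
            return True
        return False

ac = AdmissionControl()
print(ac.request(0.3))  # True
print(ac.request(0.3))  # True
print(ac.request(0.3))  # False: would exceed the 80% ceiling
```

For TSPEC-capable clients the requested fraction comes from the action frame; for the non-TSPEC clients mentioned above, the AP would substitute its own measured airtime estimate.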

How does Motorola enhance WMM QoS to improve video performance?

Motorola WLANs allow the tuning of the WMM backoff parameters to better optimize them for video transmissions.

Motorola also enables bandwidth limits to be placed on specific application traffic to facilitate the coexistence of all applications and to allow enough bandwidth for video.

Can enterprises use IP multicast technology over WLANs like they do over wired switched networks to conserve capacity and alleviate congestion? If so, how does a net admin set it up in a Layer 2, shared-medium environment?

Multicast video is generally more problematic to handle over wireless LANs, because there is no acknowledgment of packets. This leads to some level of packet loss. Given that, the multicast traffic has to be sent at lower data rates compared to unicast traffic. In general, then, it is desirable to use unicast where possible.

Can you say a bit more about "because there is no acknowledgment of packets"? Do I understand that this is a wireless-specific issue?

Seems to me that with the limited available bandwidth with wireless (as compared to any wired environment), the availability of multicasting/broadcasting would be of great interest.

There is no acknowledgment of multicast packets even on wired networks; this is not a wireless-specific issue. However, the problem is compounded on wireless because RF is a shared medium and transmissions are more error-prone.

Multicast is of great interest in wireless. But it is limited to basic rates, and there are no acknowledgments, so multicast requires special handling over wireless networks. Converting multicast video to unicast (as Motorola does at the AP level) adds reliability and can deliver higher data rates.

Does Motorola do anything specifically to help enable multicast or at least allow multicast and unicast traffic to peacefully coexist?

For enterprises wishing to use multicast, Motorola offers enhancements to multicast video that include the following:

• Tuning of multicast transmit speed: The administrator can choose among four policies that determine the multicast transmit speed:
o Lowest basic rate
o Highest basic rate
o Dynamic basic rate: we look at the rate at which unicast frames were sent successfully to all clients associated to a BSSID [basic service set identifier] and choose the rate that would work for all. This is usually higher than the lowest basic rate and is very often equal to the highest basic rate.
o Dynamic rate: same algorithm as above except the range of transmit rates is limited to legacy Wi-Fi connect rates (up to 54Mbps)

• IGMP snooping – This is used to decongest the RF environment by not sending multicast streams that do not have receivers.

• Multicast-to-unicast conversion – Converting incoming multicast streams to unicast for the clients interested in a given stream solves the problem of unreliability of multicast streams in WLANs. This works in conjunction with IGMP snooping.

• Multicast mask configuration – WLAN administrators can configure multicast mask settings so that multicast streams travel over the air without any buffering, thereby removing the delay in multicast stream delivery inherent in the WLAN world.
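The "dynamic basic rate" policy in the list above can be sketched as: track the last successful unicast rate for each client associated to the BSSID, then transmit multicast at the highest basic rate that every client can sustain. The rate values below are illustrative assumptions.

```python
# Sketch of the "dynamic basic rate" multicast policy: choose the fastest
# basic rate no higher than what every associated client has recently
# sustained on unicast. Rates in Mbps; the basic-rate set is illustrative.

BASIC_RATES = [6, 12, 24]  # basic (mandatory) rates configured for this BSS

def dynamic_basic_rate(last_good_unicast_rates: list) -> int:
    """Highest basic rate no faster than the slowest client's unicast rate."""
    slowest_client = min(last_good_unicast_rates)
    eligible = [r for r in BASIC_RATES if r <= slowest_client]
    # Fall back to the lowest basic rate if even that exceeds the slowest client.
    return max(eligible) if eligible else min(BASIC_RATES)

print(dynamic_basic_rate([54, 130, 24]))  # 24: every client can sustain 24 Mbps
print(dynamic_basic_rate([54, 130, 11]))  # 6: the slowest client limits the choice
```

This matches the observation in the text: the chosen rate is usually higher than the lowest basic rate and often equals the highest one.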


Something doesn't quite sound right about this:

"Multicast mask configuration – WLAN administrators can configure multicast mask settings so that multicast streams travel over the air without any buffering, thereby removing the delay in multicast stream delivery inherent in WLAN world."

Does this mean that your AP doesn't wait for the DTIM? If that's the case, then clients could (and most likely would) be asleep when multicast frames arrive. That would be disastrous. Does it mean something else?



Your understanding is correct, and I completely agree that if the clients receiving the "prioritized" multicast traffic have power save enabled, the net result is a performance hit, not a benefit. If this feature is to be used, the clients must be set to CAM. There are, of course, drawbacks to that requirement, chiefly mobile device battery life. In my experience working in the Motorola TAC, this feature was most widely used with applications and deployments that did not yet support WMM but needed to reduce some of the latency inherent to 802.11 transmission of multicast data, such as some of the old Vocera badges.

Is all video 'created equal?' If not, what are the various types and different performance considerations that accompany them?

Broadly speaking there are two categories of video: interactive video (like video conferencing) and streaming video. These two types have different latency tolerance levels.

For interactive video such as telepresence, latency is of high importance. WMM sets a 64msec end-to-end delay budget; anything beyond this degrades the video performance.

For streaming video, latency is of relatively less importance, as the receiver has a buffer and can compensate for late-arriving packets. A delay of 200msec end to end is considered OK in the case of streaming video. It is more important that packets do not get lost. But streamed video still needs a QoS advantage over other background and best-effort network traffic. Streaming video might also contain higher resolution video than video conferencing, and thus might require higher bandwidth.

Motorola distinguishes between these two types of video and treats them differently. For example, interactive video frames are prioritized over streaming video; rate adaptation is more aggressive to ensure that a frame gets through quickly, but there are fewer retries, because frames that are delivered late are useless.

Why does streamed video need enhanced QoS? It seems to me as if one could tolerate low numbers of seconds if watching, say, a video on demand.

Also, what percentage packet loss can typical streamed video (and I realize it may depend on the codec) encounter and still produce acceptable quality? In the seemingly parallel world of digital voice, many algorithms can deal with high occasional packet loss - even on the order of several percent. (Of course, it also depends, I would assume, on whether the "lost" packets occur randomly or in a burst.)

For video, the quality of experience (QoE) matters more. Any pixelation or loss of voice/video synchronization at the client level becomes immediately apparent. To ensure better QoE, we have to ensure that the jitter and latency are under control, that the video stream is prioritized, and there is low packet loss. These are all measures of QoS that are quantifiable.

If you are simultaneously downloading a large file using FTP and watching video, the video quality will most likely suffer if there is insufficient wireless bandwidth. In this case, QoS becomes important, and you would want to throttle background traffic and prioritize video.

Besides QoS, there are others factors that affect the viewing experience, including video encoding formats. Given the same QoS parameters, we have seen noticeable differences in the QoE when different video encoding and delivery options (for example HTTP, UDP, MMS, etc.) are chosen. Awareness of these factors and making changes as needed are also important to ensure good video over wireless experiences.

According to industry experts, less than 0.1% packet loss over a 5-minute period is desirable for video/real-time traffic.

Can you suggest any general best practices for supporting the different flavors of video?

The key to handling video over wireless is to understand that besides being sensitive to latency, video consumes a lot of bandwidth. Also note that it is better to use unicast as opposed to multicast.

The following are some of the best practices for running video over WiFi networks:

• With regard to bandwidth, it is preferable to use the 5 GHz band with channel bonding, especially for handling High Definition (HD) video streams. The 5 GHz band is also preferred for SD video in high-density deployments.

• It is always advisable to use a planning tool to determine the number and location of APs needed to handle the required bandwidth that the video applications demand.

• Use QoS mechanisms to prioritize voice and video traffic over other traffic in the network. The use of WMM Unscheduled APSD (Auto Power Save Delivery) is recommended to save battery life.

• Map wired side Layer 2 (802.1p) or Layer 3 (DSCP) traffic priorities to the video access category to enable an end-to-end prioritization.

• If the incoming video stream is bursty and overshoots the reserved bandwidth, packets will be dropped at random. HD video streams with average bit rates of 20Mbps can burst as much as 10Mbps above that average, so it is essential that the bandwidth reserved using WMM TSPEC be high enough to accommodate these bursts and avoid the random packet drops that degrade video.
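The burst-headroom advice in the last bullet amounts to reserving for the peak rather than the average. Reading the text's 10Mbps burst figure as headroom above the 20Mbps average, a minimal sizing sketch (the 10% safety margin is my assumption, not a figure from the text) looks like this:

```python
# Sketch: size a TSPEC-style bandwidth reservation to cover bursts above
# the stream's average bit rate. The 20 Mbps average and 10 Mbps burst
# headroom come from the text; the 10% safety margin is an assumption.

def reservation_mbps(avg_mbps: float, burst_mbps: float, margin: float = 1.1) -> float:
    """Reserve the burst peak plus a safety margin, rounded to 0.1 Mbps."""
    return round((avg_mbps + burst_mbps) * margin, 1)

print(reservation_mbps(20, 10))  # 33.0 Mbps for a bursty 20 Mbps HD stream
```

Reserving only the 20Mbps average would leave every burst exposed to the random drops described above.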

In addition to pre-deployment planning, post-deployment testing can be used to verify video service delivery - for example, using simulated or live video to measure metrics like throughput, media loss rate and delay factor to derive a Media Delivery Index (MDI). Does Motorola provide or recommend any tools to accomplish this?

Post-deployment testing to measure video performance can indeed be useful, at least to establish some baseline. However, since RF environments tend to be dynamic, the video performance cannot be guaranteed to be consistent.

Motorola uses the Veriwave test suite for quantitative measurements and has also established a qualitative baseline for video performance using popular tools like VLC media player. This is not an endorsement of any specific tools, however.

Regarding use of 5 GHz channels, when should channel bonding be used for video? Also, how does DFS on UNII-2 5 GHz channels impact video?

Channel bonding should definitely be used when streaming HD video.

In general, it is best practice to not use DFS channels in sites that are in close proximity to radar stations (you can determine that by pre-deployment surveys). It should be noted that DFS is only a concern when the AP detects radar and has to move to a different channel, thereby impacting video quality.

Can you shed some light on what kinds of processors can handle all these software features for transmitting streaming video and/or videoconferencing traffic? Should dual-core be used over single-core? If yes, how should the features be organized between the cores? What do you think of multi-threaded processors for these APs? Do they help improve performance?

Generally, what kinds of RF tools should enterprises wishing to run video over Wi-Fi make sure are present in their 802.11 access points and/or controller(s)?

Useful tools for deploying video applications in enterprises include:

• A comprehensive planning tool that takes into account the floor plan layout, building material characteristics and the physics of 802.11n WLAN. Such tools provide valuable information on the number – and optimal locations – of Wi-Fi APs required to optimally support video applications.

• Centralized visibility into comprehensive, real-time RF network conditions

• Tools to remotely or automatically troubleshoot network connectivity issues

I'm presuming Motorola offers tools such as those you've recommended. Are there other distinctive tools that Motorola offers?

Motorola has integrated a number of features to further optimize the handling of video over WLAN networks which are summarized below:

• Fair scheduling to ensure that legacy 802.11a/b/g devices don't adversely affect the performance of 802.11n clients. Fair scheduling can also be used to provide 802.11n devices more airtime than legacy devices. This capability extends to providing extra airtime for specific clients or groups, less airtime for .11b clients, or even guaranteed bandwidth for specific clients and groups.

• We simulate admission control for clients that do not support it. This way, those clients can receive voice/video even when admission control is enabled, and we control how much voice/video traffic is allowed on the medium.

• Delayed aggregation: This feature increases the amount of aggregation of downstream traffic and therefore reduces airtime use of the AP. Video that uses TCP as a transport would benefit by letting the clients have more airtime to send their TCP acknowledgements, which should reduce TCP retries, and as a whole support more video clients.

• SmartRF: The Motorola SmartRF tool suite includes interference-resisting capabilities such as automatic tuning of channel and transmit power levels in response to changing RF conditions or the loss of an AP. Other SmartRF features pertinent to video:

o Spectral load balancing of client devices across wireless APs that accounts for bandwidth and RF utilization characteristics at the domain level.
o AP load balancing across a geographically collocated or distributed cluster of Motorola WLAN controllers.

• Path optimization so that applications such as peer-to-peer video (for example, Apple's FaceTime) don't have to send traffic through the controller and back down to another AP.
• Inspection of Real-time Transport Protocol (RTP) packets and automatic assignment of appropriate priority and security for video traffic.

Briefly, what role do Session Initiation Protocol (SIP) and other standards play in wireless video?

For example, how fluent do network managers need to be in SIP (and/or H.323) signaling? In H.264 compression standards? What does a network administrator need to know about codecs and video formats when video becomes part of the wireless network mix?

The knowledge of codecs and video formats is useful in wireless network planning. The encoding determines the required bandwidth; some video formats such as YouTube-style video require 1 to 2Mbps, for example, while HDTV with MPEG2 could require up to 20Mbps.

For example, it is possible to use a "selective packet forward" (SPF) Application Layer Gateway (ALG) on constant-bit-rate MPEG2-TS encoded streams in a densely populated WLAN environment. In this scheme (for which Motorola holds intellectual property), non-critical video frames such as the B or P frames of an MPEG2-TS stream can be selectively dropped to reduce the required bandwidth without affecting the output video quality.

In a dense WLAN environment, such as a classroom receiving a constant-bit-rate multicast video stream (a lecture) as unicast streams to all the students, the SPF scheme can help increase the total number of unicast streams supported.

It should be noted that real-time video conferencing / video surveillance etc., are controlled essentially by another device, and the role of the WLAN AP is limited to delivering the video content over the WLAN. From the WLAN point of view, sufficient bandwidth must be available, and if the input video is multicast, it should be converted to unicast.

The same video stream could be delivered to various types of recipients such as desktops, HDTVs, smartphones, tablet PCs, etc., each having different capacity and capabilities. The HDTVs may be capable of receiving high-definition MPEG4 (H.264) encoded streams while the handheld may be capable of decoding MPEG2 streams. The network administrator should know the format of the video source (such as MPEG2, MPEG4 etc), the stream rate and expected bursts in the stream. The administrator should also know what types of clients would be receiving the stream and the codec supported by each of the clients so that source transcoding can be done if required.

It seems that a lot of corporate video traffic - intentionally or unintentionally - is going to be coming from the latest generation of smartphone/iPhone and tablet/iPad devices. Additionally, specialized products such as the Avaya Flare could have a major impact.

To what extent are these various devices tested/certified for interoperability?

How reasonable is it to expect that these devices could take advantage of Motorola's capabilities (and/or the standard capabilities) discussed here?

Steve, partial answer: you can discover which products and features are Wi-Fi certified as interoperable on the Wi-Fi Alliance's certified-products search page, where you can check off the characteristics that you wish to ensure have been interoperability tested. Note that it's one thing for a product to be Wi-Fi certified, say, for 802.11n interoperability. But WMM, power save, WPA2 security and other Wi-Fi "features" require separate tests and certifications.

For example, if I go to the site and filter on "Apple," "WMM" and "WMM Power Save," no products show up as certified. Yet if I search on just "Apple" and "802.11n," the iPad (first version) and the iPhone4 are listed as certified (as well as a number of other interfaces, laptops and devices from Apple). So while some of today's most popular devices have been certified for basic interoperability, they don't necessarily support the QoS capabilities described in this discussion - or at least, haven't been tested and certified for their support of them.

BTW - It seems that Avaya currently doesn't appear on the Wi-Fi Alliance's list of companies having any Wi-Fi-certified devices.

At Motorola, we have a comprehensive test bed that includes the latest generation of smartphones (Android-based phones, iPhone, etc.) and tablets (iPad, Xoom, etc.) tested for video performance - for both real-time and non-real-time video applications. All these devices can take advantage of the Motorola-specific optimizations mentioned.


Let's consider a large coffee shop with about 50 people on laptops with basic 802.11n capabilities (no EDCA/WMM) simultaneously watching videos from a popular website like YouTube, and therefore using TCP to transmit the data. TCP always probes the available bandwidth, so it is always going to have losses, typically from 0.1% to as much as 10%, and throughput depends on those losses. Let's assume my connection to my ISP is around 500Mbps.

If I want to use a Motorola AP with the most advanced technology available to give good YouTube service to my customers, what would I have to do? Should IP packets from YouTube be assigned to the video class? Do I have to fine-tune the video class in WMM according to some complex process, or will the videos go smoothly without any additional effort? And what if I also want reasonable service for the non-video data downloads? Thank you!!

Good question.

First of all, it should be noted that if the laptops have 802.11n capabilities, then they will automatically support WMM.

Secondly, even if WMM is enabled, the packets may not get put in the priority queue because they may not be marked with DSCP or 802.1p tags. In that case, we can do the following with Motorola's solution:

Option 1:

Use firewall rules to inspect the packets and apply the appropriate packet-marking rule on a match. The firewall rule can inspect traffic based on IP address, protocol or port number. If packets are coming from YouTube, we can modify either the 802.1p VLAN user priority or the DSCP type of service (TOS) bits in the IP header. This way the YouTube traffic can be put in the right video priority queue.

Option 2:

We can force the entire WLAN to be video priority class. In this case all packets, whether YouTube or Gmail, will go in the video queue and be handled with the same priority.

Note that in both cases this applies only to downstream packets. For upstream, it is up to the client to do the right thing (assuming it is WMM capable).


You mentioned: "It is always advisable to use a planning tool to determine the number and location of APs needed to handle the required bandwidth that the video applications demand."

We are looking at an 802.11n MESH network as a possible infrastructure for video surveillance (megapixel cameras). What tools are available to estimate bandwidth and performance?
