The Necessity for End-to-End QoS Experiments

One phrase you'll see time and again in these posts is the necessity for "end-to-end" experiments and measurements. Because I repeat this phrase so often, I figured someone would eventually ask why (so I'll beat you to the punch).


End-to-end: adj: with the end of one object in contact lengthwise with the end of another object.


The dictionary gives us the general idea, but to be more explicit (in networking terms), end-to-end refers to the entire network path between a data source and a data sink, including every network element along the way: NICs, switches, hubs, bridges, and so on. From a QoS perspective, end-to-end experiments and measurements are necessary because (1) network elements don't always do the right thing with packets, and (2) no single entity on a shared network (such as WiFi) can, by itself, accurately deduce the available bandwidth.


(1) Because the QoS program is part of core Windows networking, we've had the opportunity to observe many strange behaviors from networking products. One such example: some network switches drop packets (or crash!) when a frame grows beyond the standard maximum size, e.g., by the four extra bytes an 802.1Q priority tag adds to the Ethernet header. Because every element in the path between a source and sink has the opportunity to mangle (or drop) a packet (even though, in most cases, it isn't supposed to), it doesn't make sense to rely on whether the NIC alone supports priority tagging. If a tagged packet will never reach the sink because an intermediate switch drops it, or worse, the packet causes that device to crash, it stands to reason the packet shouldn't have been tagged in the first place. The qWAVE subsystem hides these complexities from the application and does the right thing behind the curtain: if the end-to-end path does not support prioritization, a packet will not get tagged (even if the application requests it). This protects both the network and the application.
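
To make that concrete, here is a rough sketch of what this looks like from an application's point of view using the qWAVE APIs in qos2.h (link with qwave.lib and ws2_32.lib). The sink address and port are placeholders and error handling is trimmed; the point is that the application only states the traffic type it wants, and qWAVE decides whether the packets actually get tagged based on what the end-to-end path supports.

```cpp
// Sketch: ask qWAVE to prioritize a media stream. Placeholder addresses,
// minimal error handling. Link with qwave.lib and ws2_32.lib.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <qos2.h>
#include <stdio.h>

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    // Connected UDP socket to the media sink (placeholder address/port).
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in sink = {};
    sink.sin_family = AF_INET;
    sink.sin_port = htons(5004);
    InetPtonA(AF_INET, "192.168.1.20", &sink.sin_addr);
    connect(s, (sockaddr*)&sink, (int)sizeof(sink));

    // Open a handle to the qWAVE subsystem.
    QOS_VERSION ver = { 1, 0 };
    HANDLE qosHandle = NULL;
    QOSCreateHandle(&ver, &qosHandle);

    // Request audio/video treatment for this flow. qWAVE probes the
    // end-to-end path; packets only get tagged if the path supports it.
    QOS_FLOWID flowId = 0;
    QOSAddSocketToFlow(qosHandle, s, NULL,
                       QOSTrafficTypeAudioVideo, 0, &flowId);

    // Optional: ask what priority values are actually applied on the wire.
    QOS_PACKET_PRIORITY prio = {};
    ULONG size = sizeof(prio);
    if (QOSQueryFlow(qosHandle, flowId, QOSQueryPacketPriority,
                     &size, &prio, 0, NULL))
    {
        printf("Conforming DSCP: %lu, 802.1p: %lu\n",
               prio.ConformantDSCPValue, prio.ConformantL2Value);
    }

    QOSRemoveSocketFromFlow(qosHandle, s, flowId, 0);
    QOSCloseHandle(qosHandle);
    closesocket(s);
    WSACleanup();
    return 0;
}
```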


(2) Every WiFi-connected device (PC or otherwise) has a different experience of the network. A laptop in the same room as the AP will have a better signal (and data throughput) than a laptop in a far-away room with walls and/or floors separating it from the AP. Even the furniture placement in a room, or people moving around, can affect the wireless signal and therefore the throughput capability of devices in that room. Because every WiFi-connected device has a different experience of the network, it makes no sense for one device to act as an authority and tell all the other devices what the shared network's throughput capability is at any moment. In line with the example, if the laptop in the same room as the AP were to ask the laptop in the far-away room (with connectivity degraded by obstructions) for the available network bandwidth, the answer would not be accurate. The only way to accurately estimate available bandwidth for a data stream is to measure end-to-end between its source and sink, and to keep monitoring in real time for the duration of the stream. This accounts both for the bandwidth consumed by other ongoing streams (not necessarily between the same source and sink) and for how well connected the source and sink devices are.
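
As an illustration of querying for those end-to-end estimates, here is a small sketch (the helper name is mine, and it assumes a qWAVE handle and flow ID obtained as in the earlier snippet). The numbers it reads describe the specific path between this source and its sink, not a network-wide figure reported by some other device.

```cpp
// Sketch: read qWAVE's end-to-end estimates for an existing flow.
// Assumes qosHandle and flowId came from QOSCreateHandle/QOSAddSocketToFlow
// as in the earlier snippet. Link with qwave.lib.
#include <winsock2.h>
#include <qos2.h>
#include <stdio.h>

void PrintFlowFundamentals(HANDLE qosHandle, QOS_FLOWID flowId)
{
    QOS_FLOW_FUNDAMENTALS fund = {};
    ULONG size = sizeof(fund);

    // These estimates describe the path this flow actually traverses,
    // measured end-to-end between this source and its sink.
    if (QOSQueryFlow(qosHandle, flowId, QOSQueryFlowFundamentals,
                     &size, &fund, 0, NULL))
    {
        if (fund.BottleneckBandwidthSet)
            printf("Bottleneck bandwidth: %llu bps\n", fund.BottleneckBandwidth);
        if (fund.AvailableBandwidthSet)
            printf("Available bandwidth:  %llu bps\n", fund.AvailableBandwidth);
        if (fund.RTTSet)
            printf("Round-trip time:      %u microseconds\n", fund.RTT);
    }
}
```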


The qWAVE subsystem leverages several lightweight techniques to estimate end-to-end bandwidth and allows calling applications to access the results in real time. Further, applications may register for change notifications (congested and uncongested) so they can adapt to variable network characteristics.
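
For the notification side, here is a similarly hedged sketch (again assuming the handle and flow ID from the earlier snippets, and again the helper name is mine). Blocking calls keep it short; a real application would pass an OVERLAPPED structure and handle the notifications asynchronously.

```cpp
// Sketch: wait for qWAVE to report the end-to-end path as congested,
// then wait for it to become uncongested again.
// Assumes qosHandle and flowId from the earlier snippets.
#include <winsock2.h>
#include <qos2.h>
#include <stdio.h>

void WaitForCongestionCycle(HANDLE qosHandle, QOS_FLOWID flowId)
{
    // Completes when the flow's end-to-end path becomes congested.
    if (QOSNotifyFlow(qosHandle, flowId, QOSNotifyCongested,
                      NULL, NULL, 0, NULL))
    {
        printf("Path congested: back off, e.g., drop to a lower bit rate.\n");
    }

    // Completes when the path is no longer congested.
    if (QOSNotifyFlow(qosHandle, flowId, QOSNotifyUncongested,
                      NULL, NULL, 0, NULL))
    {
        printf("Path uncongested: ramp the stream back up.\n");
    }
}
```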


There is much more to write about packet tagging, so I'll save it for another post. As always, questions, comments, and feedback are welcome.


- Gabe
