We've long been pretty comfortable with TCP/IP networking down to the bit level, but our conversion to 802.11b wireless has been very recent. We love the functionality, of course, but we've been puzzled by how all the pieces fit together. What are "channels"? How does "sniffing" work? What's the over-the-air protocol?

We've found bits and pieces of this on the internet, but not a comprehensive overview of the standard, bottom to top, at just the right level of detail. Hence this Guide (though it's going to take a while to get properly fleshed out).

Note - we're not any kind of expert on this; we're only summarizing what we've picked up from Google searches. Those more in the know are encouraged to set us straight where appropriate.

802-dot-what?

(to be filled in)

802.11b Channel Assignments

The 802.11b standard defines 14 frequency channels in the 2.4 GHz range, but the FCC allows only eleven of them for unlicensed use in the US. Each channel uses "Direct Sequence Spread Spectrum" (DSSS) to spread the data across a band extending 11 MHz on each side of the center frequency, and - this is important - the channels overlap. This table shows the center frequency of each channel.

    Channel   Center Frequency      Channel   Center Frequency
    -------   ----------------      -------   ----------------
       1         2.412 GHz             8         2.447 GHz
       2         2.417 GHz             9         2.452 GHz
       3         2.422 GHz            10         2.457 GHz
       4         2.427 GHz            11         2.462 GHz
       5         2.432 GHz            12         2.467 GHz *
       6         2.437 GHz            13         2.472 GHz *
       7         2.442 GHz            14         2.484 GHz *

    * not permitted for unlicensed use in the US
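
The spacing pattern is simple - channels 1 through 13 sit 5 MHz apart starting at 2.412 GHz, with channel 14 an outlier at 2.484 GHz - so the whole table reduces to a few lines of code. A minimal sketch (Python is just our choice of notation here):

    def center_freq_mhz(channel):
        """Center frequency, in MHz, of an 802.11b channel (1-14)."""
        if channel == 14:
            return 2484                        # Japan-only outlier
        if 1 <= channel <= 13:
            return 2412 + 5 * (channel - 1)    # 5 MHz spacing from 2412 MHz
        raise ValueError("802.11b channels run from 1 to 14")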

To the extent that channels overlap, they interfere with each other and reduce the available bandwidth, so for a single access point it pays to survey your area first to find which channels your neighbors are using, then pick the channel that overlaps least with them.
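
Two of these 22 MHz-wide signals overlap whenever their centers are less than 22 MHz apart - that is, whenever the channel numbers differ by less than five. A toy version of the survey-then-pick idea, with a made-up neighbor list:

    def overlap_mhz(ch_a, ch_b):
        """Rough spectral overlap in MHz between two channels (1-13)."""
        sep = 5 * abs(ch_a - ch_b)     # centers are 5 MHz per channel apart
        return max(0, 22 - sep)        # each signal is 22 MHz wide

    # Hypothetical site survey: neighbors seen on channels 1, 3 and 6.
    neighbors = [1, 3, 6]
    best = min(range(1, 12),
               key=lambda ch: sum(overlap_mhz(ch, n) for n in neighbors))
    print(best)    # 11 - the farthest we can get from everything in use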

For installations that need to cover a wider area with multiple access points, three access points on channels 1, 6 and 11 have no overlap and provide a theoretical aggregate of 33 Mbit/s of bandwidth. Larger installations have to work a lot harder to plan signal and channel overlap. This is beyond the scope of this guide, but we found this article describing how it's possible to use four channels (1, 4, 8 and 11) and take advantage of the non-linearity of the overlap to get more than 33 Mbit/s of aggregate bandwidth.
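
As a sanity check on the 1/6/11 rule, one can brute-force every triple of US channels and keep only those with no pairwise overlap; under the same "centers less than 22 MHz apart" test from above, 1/6/11 turns out to be the only such triple:

    from itertools import combinations

    def overlaps(ch_a, ch_b):
        # 22 MHz-wide signals on 5 MHz channel spacing collide when the
        # channel numbers differ by less than five.
        return 5 * abs(ch_a - ch_b) < 22

    clean = [t for t in combinations(range(1, 12), 3)
             if not any(overlaps(a, b) for a, b in combinations(t, 2))]
    print(clean)   # [(1, 6, 11)] - the classic three-access-point layout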

Physical layer / How Fast?

Though a channel has 11 Mbit/s of theoretical bandwidth available, this is utterly impossible to achieve in practice. In addition to the obvious issues such as TCP/IP overhead, interference, and distance from the base station, the media access layer itself imposes overhead of its own.

All stations sharing an access point share the same channel, and it's a strictly half-duplex protocol: a station is either transmitting or receiving, but never both. In this respect the media access is similar in spirit to Ethernet's CSMA/CD, but 802.11 uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA): a radio can't reliably detect a collision while it's transmitting, so stations work to avoid collisions up front.

When a station wishes to transmit, it first listens to see if the channel is in use. If not, it waits a small random amount of time and listens again. If the channel is still free, it jumps on and does its business. If the channel is busy, it backs off a while and tries again. When collisions happen - and they're inevitable - all the stations involved back off a random while and try again later. Anything sent during a collision is lost.
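
The listen/back-off dance is easier to see laid out as code. This is a heavily simplified sketch of the idea - real 802.11 uses slot times, contention windows that double on retry, and link-level ACKs, none of which is modeled here - and channel_busy and transmit are hypothetical stand-ins for the radio:

    import random
    import time

    def send_frame(channel_busy, transmit, max_attempts=7):
        """Toy CSMA/CA: listen, pause a random beat, listen again, send."""
        for attempt in range(max_attempts):
            if channel_busy():
                # Channel in use: back off (longer each attempt), then retry.
                time.sleep(random.uniform(0, 0.010 * 2 ** attempt))
                continue
            time.sleep(random.uniform(0, 0.001))    # small random wait
            if not channel_busy():                  # still free?
                transmit()                          # jump on, do our business
                return True
        return False    # gave up for now; the caller can try again later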

It's not clear just how much each of these factors contributes to the reduction in available bandwidth, but in practice it's common to see 3-4 Mbit/s of throughput with consumer-grade equipment.

Power Saving Mode

This really works

The wireless protocol itself has a power-saving mode designed in, which provides a kind of polling mechanism. In power-saving mode, the remote stations go to sleep and turn on their radios at predetermined intervals - it looks to be a dozen or two times per second. During this brief wakeup they listen for a "Traffic Indication Map" (TIM) packet that tells the card whether data are buffered for it on the access point. When data are waiting, the card turns its radio on full-time until everything has been transferred, then goes back into power-saving mode.
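
Our mental model of the station side is the little loop below - a sketch, not the real 802.11 state machine. tim_says_buffered and drain_frames are hypothetical stand-ins for "this beacon's TIM flags us" and "receive until the access point's buffer is empty":

    import random
    import time

    def tim_says_buffered():
        # Stand-in for "this beacon's Traffic Indication Map flags us".
        return random.random() < 0.2

    def drain_frames():
        pass    # stand-in for "radio on full-time until the buffer is empty"

    def power_save_loop(wake_interval=0.05,    # ~20 wakeups/sec: "a dozen or two"
                        beacons=200):
        """Toy power-save cycle: doze, wake briefly, check the TIM."""
        for _ in range(beacons):
            time.sleep(wake_interval)    # radio off - dozing
            if tim_says_buffered():      # brief wakeup to hear the beacon
                drain_frames()           # stay awake until the data is drained
            # ...then drop straight back into doze mode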

All the cards served by the access point synchronize some kind of internal clock, since every client has to wake up at the same time to listen for these TIM packets. The "time" - probably not the "time of day" - is synchronized via broadcast beacons to within about 4 microseconds.

A card entering "doze mode" must inform the WAP, so that the WAP buffers data destined for that card. Whether one card is in doze mode doesn't seem to have any effect on any other card - mix-and-match is OK.
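
The other half of the contract is per-station buffering at the access point, which is presumably why one dozing card has no effect on the others. A minimal sketch of that bookkeeping (our illustration, not actual access-point internals):

    from collections import defaultdict, deque

    class ToyAccessPoint:
        """Per-station power-save buffering, as we understand the contract."""

        def __init__(self):
            self.dozing = set()                  # stations that announced doze mode
            self.buffers = defaultdict(deque)    # frames held for dozing stations

        def set_doze(self, station, dozing):
            # Cards must tell the WAP when they enter (or leave) doze mode.
            if dozing:
                self.dozing.add(station)
            else:
                self.dozing.discard(station)

        def deliver(self, station, frame):
            if station in self.dozing:
                self.buffers[station].append(frame)   # hold it; the TIM will flag it
            else:
                self.transmit(station, frame)         # awake cards get it at once

        def tim_bitmap(self):
            # Stations the next beacon's TIM should flag as having data waiting.
            return {s for s, q in self.buffers.items() if q}

        def transmit(self, station, frame):
            print(f"-> {station}: {frame}")           # stand-in for the radio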

Tests by Network Computing Magazine indicate power savings of up to 1000% depending on traffic - presumably meaning something like a tenfold reduction in power draw - but we don't know how to measure this ourselves. The effect on our Orinoco PCMCIA card is that the radio lights are off most of the time, with frequent blinks as the receiver comes on.

What we can verify is that we see no difference in user experience between the two modes: web browsing, Yahoo! Instant Messenger, and even interactive secure shell to a remote Linux box all "just work" - no added latency (and we are very sensitive to telnet latency). We'd presume that a highly interactive game might see some degradation, but we've not seen it yet.

We also suspect that wardriving requires full-power mode, there being no real central WAP to offer up the beacons.