Target UDP throughput 15-20 Mbps (54 Mbps bursts) sustainable throughout a 5000 square foot (460 sq meter) public area
After poring over so many pages of "suggestions" and "tips," I was beginning to wonder if I would ever see an actual number! Finally, a concrete, measurable figure had presented itself; it was time for some well-deserved elucidation.
Knowing the area covered by an access point is all well and good, but we're chiefly interested in its range, and therefore, the radius. With the information given, there are a couple of ways to determine this: the first is the quick and dirty way; the second is to solve a basic equation. Since I believe the quick and dirty way to be more effective at conveying the idea at hand, I'll start with it, though it turns out to be less accurate.
Draw a square. Next, inscribe a circle within the square. Find the center of the circle, and draw a radius from the center out to the point where the circle touches the square.
Assume that the area of the square is 5000 sq. ft. The closest whole number approximation of the square root of 5000 is 71, as 71 x 71 = 5041--close enough for our purposes. Therefore, one of the sides of this square should be roughly 71 ft. in length. As the radius of the inscribed circle is half the length of one of the sides of the square, the radius should be roughly 35.5 ft. Therefore, a sustained UDP throughput of 15-20 Mbps should be attainable within about 35.5 ft. of the access point.
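The quick-and-dirty arithmetic is trivial to reproduce in code; here's a small sketch (the helper name is mine, not anything standard):

```python
import math

def quick_radius(area_sqft: float) -> float:
    """Approximate the inscribed circle's radius: take the side of a
    square with the given area (rounded to a whole number, as above),
    then halve it."""
    side = round(math.sqrt(area_sqft))  # 71 for 5000, since 71 x 71 = 5041
    return side / 2

print(quick_radius(5000))  # 35.5 ft
```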
Now, for the more accurate method: solve the area formula for the radius.

A = πr²
5000 = πr²
5000/π = r²
r = √(5000/π)
r ≈ 39.894

The actual radius is approximately 39.894 ft. The first method misses the mark by less than five feet--which I don't consider to be terribly significant--but it conveys, visually, how misleading a unit like square feet can be when trying to determine the coverage of an access point. That 5000 sq. ft. doesn't look quite so impressive now, does it?
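That derivation is easy to check in code; a quick sketch (my own helper name, nothing standard):

```python
import math

def exact_radius(area: float) -> float:
    """Solve A = pi * r**2 for r: the radius of a circle
    with the given area."""
    return math.sqrt(area / math.pi)

print(round(exact_radius(5000), 3))  # 39.894 ft for 5000 sq. ft.
```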
With our improved numbers, it would appear that a sustained UDP throughput of 15-20 Mbps at 54 Mbps bursts is attainable within ~40 ft. of the access point. That's not too bad until you install it inside a building with walls, furniture, and all kinds of potential sources of interference--not to mention what happens when more than one host tries to use this access point at the same time. That's not all, though: it gets much worse.
The 802.11 specifications dictate that power levels must remain below certain amounts for various parts of each channel. This is called a spectrum mask, and if you exceed it, then you're likely to raise the noise floor on adjacent channels. That is, if you blast your transmit signal at a high enough level, you'll be out of spec, creating all sorts of interference for other wireless access points. In fact, Cisco access points actually enforce a maximum transmit power of 30 mW, or just under 15 dBm, when using 802.11g because OFDM transmissions continue to radiate considerable energy beyond the channel boundary. Again, this means that, well past the point on the spectrum where a channel's signal is supposed to drop off, there remains signal in excess of what it should be, bleeding over into adjacent channels.
In a presentation by Cliff Skolnick for the 2003 O'Reilly Mac OS X Wireless Conference, it was stated that 802.11g has poor enough ACR (Adjacent Channel Rejection) to warrant using an AACh (Alternate, Adjacent Channel) scheme--that is, using channels 1 and 11 as opposed to channels 1, 6, and 11--when deploying a wireless LAN. Though this may be due, in part, to the spectral efficiency issues in OFDM that Cisco mentions, it seems reasonable to consider this a separate issue until I look more deeply into the ACR characteristics of 802.11g.
Another category of issues has to do with the differences in receive sensitivity and transmit power between a wireless access point and one or more connecting hosts. There are a number of especially well-known, well-documented phenomena that arise from such differences--the hidden and exposed node problems being the most notable--but for now, I'd like to focus on a much more obvious problem: transmit power mismatches.
As the demand for wireless access increases, wireless deployments become more ubiquitous, and as it happens, demands turn into expectations. As such, nearly every business, great or small, is being pressured to deploy wireless solutions, and most of them will attempt to deploy these as cheaply as possible. This isn't a problem.
What is a problem is that users expect high availability from a ubiquitous technology, and there simply is no such thing as a cheap, high-availability network--not even a wireless one. What's more, users see no reason for there to be problems unless a device is malfunctioning. To many users, there is a signal in the air, and with the right equipment, it can be captured, but that is typically the extent of their knowledge. They don't realize that their neighbor, their cordless phone, and even their own computer affect the signal to the degree they do.
It is because of this expectation that such cheap installations always seem to become enormous headaches. If those providing the wireless access knew, beforehand, what the expectations for their new network would be, they might think twice before bothering with the idea at all. Doing things correctly can be rather cost-prohibitive.
Specifically what, though, makes a proper deployment so cost-prohibitive? Equipment. It's always, always equipment. Equipment requires initial capital, and initial capital cannot be considered without also considering the return on investment. Say an apartment complex is pondering the installation of an autonomous WLAN. They aren't going to raise the rent to fund the project, as that would clearly make their residents unhappy, but it's becoming difficult to retain residents without providing some kind of wireless Internet service; kids just expect it. The property makes a compromise: they decide to provide wireless Internet, and the residents don't pay for anything. Great. They go with the cheapest possible solution: fewer access points, all configured for maximum transmit power. Now one access point can service twice the users, right? Not quite.
The trouble, now, is that the boosted transmit power allows twice the number of users to "hear" the access point, but the most distant users have a very hard time getting the access point to hear them, no matter how loud they yell. Good luck trying to figure out what's wrong from the user's point of view: they're seeing three bars of signal! There can't possibly be anything wrong on their end! The access point, however, may not even know that the user exists. In the end, your average laptop can only do so much, and consumer-grade access points are just as capable of creating this asymmetry as commercial equipment.
Consider: an ORiNOCO Gold a/b/g card is capable of producing up to 85 mW, or about 19 dBm, of transmit power when using 802.11b (60 mW, or ~17.8 dBm, for 802.11g/a). Most access points, however, are fully capable of delivering 250 mW (~24 dBm) of transmit power, and given the chance, it seems almost any user is more than willing to crank up the juice without first considering the consequences. After all, the manufacturers certainly don't warn users about excessive power settings, and the users certainly wouldn't listen if they did. I wonder if customer support representatives for some of these products ever tell callers to turn the Tx power down. If they did, I think it would be interesting to listen in on some of the customers' reactions.
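The milliwatt-to-dBm figures above follow directly from dBm = 10 · log10(P / 1 mW); here's a quick sketch to verify them (the function name is mine):

```python
import math

def mw_to_dbm(milliwatts: float) -> float:
    """Convert a power level in milliwatts to dBm
    (decibels relative to one milliwatt)."""
    return 10 * math.log10(milliwatts)

for mw in (85, 60, 250, 30):
    print(f"{mw} mW ~= {mw_to_dbm(mw):.2f} dBm")
# 85 mW -> ~19.29 dBm, 60 mW -> ~17.78 dBm,
# 250 mW -> ~23.98 dBm, 30 mW -> ~14.77 dBm
```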
Of course, decreasing the transmit power on a user's access point so that it better matches the transmit power of their laptop is really just a case of doing the right thing for the wrong reason. Sure, they should try to reduce interference as a courtesy to their neighbors, but that doesn't actually solve the user's real problem. The only reason a home user will go to the trouble of finding their long-lost manual, risk breaking their wireless router, and spend hours on the phone with tech support is if they're not getting the coverage or performance they need, and that can only mean one of two things: either they need less interference or more access points.