As we near 4G networks, the concept of usage density becomes key to understanding the variable speeds users will actually see.
The other day, I downloaded a large file – about 100 MB (800 Mbit) – to my office computer using my “basic level” DSL service. With no other load on my DSL line at the time, the download took approximately 15 minutes at an average speed of just under 1 Mbps.
While I was waiting, I contemplated how much quicker the process would be with the promised performance of 4G wireless data networks. Since I was at a fixed location, the applicable 4G performance target would supposedly be 1 Gbps. Making the very dubious assumption of no other bottlenecks in the source server, on the Internet or in my computer, the entire 100 MB download would take about 1 second. Even if I were downloading to a portable computer while speeding along on the interstate, the 4G mobile performance target of 100 Mbps would have allowed the download to be completed in only about 10 seconds. Wow!
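For the curious, here is that napkin math as a quick Python sketch; the speeds and file size are simply the illustrative figures above, not measurements:

```python
# Back-of-the-envelope download times for a 100 MB (800 Mbit) file.
# The speeds are the illustrative figures from the text, not measurements.
FILE_MBIT = 800.0

for label, speed_mbps in [("basic DSL", 1.0),
                          ("4G fixed target", 1000.0),
                          ("4G mobile target", 100.0)]:
    seconds = FILE_MBIT / speed_mbps
    print(f"{label:17s} {speed_mbps:6.0f} Mbps -> {seconds:6.1f} s")
```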
Of course, nobody really expects that individual users are going to see anything close to 1 Gbps data speeds over a practical wireless channel. For one thing, if you assume an optimistic modulation efficiency of 10 bits per second per hertz, the required channel bandwidth to deliver 1 Gbps, without any channel coding overhead, would be 100 MHz.
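In code form, that bandwidth requirement is just throughput divided by spectral efficiency; the 10 b/s/Hz figure is the optimistic assumption above, and coding overhead is ignored:

```python
# Channel bandwidth needed for a target throughput at a given spectral efficiency.
# 10 b/s/Hz is the optimistic assumption from the text; coding overhead is ignored.
def required_bandwidth_mhz(throughput_mbps: float, bps_per_hz: float) -> float:
    return throughput_mbps / bps_per_hz

print(required_bandwidth_mhz(1000.0, 10.0))  # 100.0 MHz for 1 Gbps
```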
That’s more than the combined bandwidth for both uplink and downlink of all five channel blocks in the recently auctioned AWS spectrum. Nevertheless, 4G networks will be launched amid user expectations for exceptionally high throughputs, and delivering real-world performance that satisfies those expectations will be a difficult task for network operators. But the far bigger challenge will surely be in maintaining user satisfaction as network usage grows, because the real wireless data network design issue isn’t so much channel speed as usage density.
DENSITY DRIVES OPERATION
To illustrate how usage density drives network operation, consider the following scenario. Let’s assume that in some large metropolitan area a certain 4G wireless data operator has 10 MHz it can devote to the downlink channel.
Now consider a particular base station in that 4G network. With the OFDM technology used in all current 4G downlink channel schemes, it is unlikely that a given base station will be able to use all 10 MHz for its downlink channel, but let’s put that aside for now. In fact, suppose we assume that with advanced beam-forming technologies this particular base station can, on average, simultaneously use the entire 10 MHz channel twice for meeting the download demands of the users it is serving. To top it off, let’s also optimistically assume that the net channel throughput rates are going to average 5 bits per second per hertz. Our hypothetical 4G base station is then capable of a total downlink channel throughput of 100 Mbps. That’s nowhere near the 1 Gbps anticipated for 4G fixed, but in the context of what is considered “high speed” Internet access by today’s standards it’s very impressive.
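Strung together, those assumptions work out as follows; this is a sketch of the hypothetical numbers above, not a measurement of any real base station:

```python
# Hypothetical base-station downlink budget from the text.
CHANNEL_MHZ   = 10   # downlink spectrum available to the operator
SPATIAL_REUSE = 2    # channel used twice at once via beam forming (assumed)
BPS_PER_HZ    = 5    # optimistic average net spectral efficiency

total_mbps = CHANNEL_MHZ * SPATIAL_REUSE * BPS_PER_HZ
print(f"base-station downlink throughput: {total_mbps} Mbps")  # 100 Mbps
```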
SHARE AND SHARE ALIKE
Of course, that sterling 100 Mbps downlink throughput is for the entire base station, and it must be shared by all of the data terminals being served by it at a given time. By conventional wisdom, it is here that the nature of a packet data channel is supposed to pay big dividends.
Specifically, it is generally assumed that users want blazing speeds so that downloads of large files are very quick, but that on average (over, say, one hour) the per-user demand for throughput is much, much lower. For example, a user may want to download a 50 MB (400 Mbit) photo album, and will be very pleased if that process only takes a few seconds. But during the next 20 minutes or so, while the user is looking at the pictures, his or her throughput demand is essentially zero. For that user, average demand over the 20-minute period is about 333 kbps.
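The arithmetic behind that figure is simple enough to show as a sketch, using the example numbers above:

```python
# Average per-user demand for the photo-album example.
FILE_MBIT = 400.0       # 50 MB album
PERIOD_S  = 20 * 60     # 20 minutes spent looking at the pictures

avg_kbps = FILE_MBIT * 1000 / PERIOD_S
print(f"average demand: {avg_kbps:.0f} kbps")  # about 333 kbps
```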
A casual analysis might suggest that if that’s the typical average demand per user, then the available 100 Mbps throughput capacity of our hypothetical base station could support 300 simultaneous users. Unfortunately, as anyone experienced in traffic engineering intuitively knows, this won’t be the case. In order to deliver the very high expected peak per-user speeds, average throughput on 4G channels will have to be kept far below maximum potential rates.
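One rough way to see why: 300 users averaging 333 kbps is an offered load of exactly 100 Mbps, or 100 percent of the base station's capacity, and no shared channel behaves well at 100 percent load. The toy simulation below (all parameters are illustrative assumptions, not data from any network) gives a feel for how far per-download speeds fall from the hoped-for peak:

```python
import random

random.seed(1)

CHANNEL_MBPS = 100.0    # total base-station downlink throughput
USERS        = 300      # the naive "supported" population
FILE_MBIT    = 400.0    # 50 MB photo album per download
CYCLE_S      = 1200.0   # each idle user starts a download about once per 20 min
PEAK_MBPS    = 50.0     # per-user speed users are hoping to see
SIM_S        = 3600     # simulate one hour in one-second steps

remaining = {}          # user id -> megabits still to download
speeds = []             # per-second speed seen by each active download

for t in range(SIM_S):
    # Idle users start a new download with probability 1/CYCLE_S each second.
    for u in range(USERS):
        if u not in remaining and random.random() < 1.0 / CYCLE_S:
            remaining[u] = FILE_MBIT
    if remaining:
        # Processor sharing: active downloads split the channel equally,
        # each capped at the per-user peak rate.
        share = min(PEAK_MBPS, CHANNEL_MBPS / len(remaining))
        speeds.append(share)
        for u in list(remaining):
            remaining[u] -= share
            if remaining[u] <= 0:
                del remaining[u]

print(f"average speed seen by an active download: {sum(speeds)/len(speeds):.1f} Mbps")
print(f"hoped-for peak speed: {PEAK_MBPS:.0f} Mbps")
```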
Just how much such backoff will be needed, and how many simultaneous users can be supported, will depend to a large extent on the characteristics of user activity. That is extremely difficult to predict because the nature of Internet usage seems to be constantly changing. Experience does suggest, however, that when users are given higher data speeds they tend to find and use applications that require them.
For example, you can get streaming video of marginal quality that runs at 300 kbps, but if your connection can support, say, 10 Mbps, why not take advantage of it with a full HD picture and stereo sound? It won’t take many users streaming 10 Mbps video to gobble up the capacity of a 4G channel.
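The arithmetic is brutal; a quick sketch using the illustrative rates above:

```python
# How many concurrent streams fit in the hypothetical 100 Mbps downlink.
BASE_STATION_MBPS = 100.0

for label, rate_mbps in [("marginal-quality video", 0.3),
                         ("full HD video", 10.0)]:
    streams = BASE_STATION_MBPS / rate_mbps
    print(f"{label:23s} {streams:5.0f} concurrent streams")
```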
TARIFFS
There are lots of ways to address this problem, the most obvious being to tariff 4G service on the basis of peak data rate. You want 50 Mbps speed? Fine, but it will cost you big bucks. “Consumer level” pricing might only get you 1 Mbps or so. Another idea would be to allow users to enjoy extremely high speeds, but only for very short and widely spaced periods of time.
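That last idea amounts to something like a token bucket, in which a user slowly accumulates "burst credit" and can spend it at very high speed. The sketch below is purely illustrative, with assumed parameters; it is not a description of any actual 4G scheduler or tariff plan:

```python
import time

class BurstAllowance:
    """Token bucket: credit accrues at avg_mbps, bursts up to burst_mbit.

    Purely illustrative; parameters and behavior are assumptions, not a
    description of any real 4G scheduler or tariff."""

    def __init__(self, avg_mbps: float, burst_mbit: float):
        self.avg_mbps = avg_mbps
        self.burst_mbit = burst_mbit
        self.credit = burst_mbit          # start with a full bucket
        self.last = time.monotonic()

    def allowed_mbit(self) -> float:
        """Credit available right now, in megabits."""
        now = time.monotonic()
        self.credit = min(self.burst_mbit,
                          self.credit + self.avg_mbps * (now - self.last))
        self.last = now
        return self.credit

    def consume(self, mbit: float) -> bool:
        """Spend credit for a transfer; False means the user must wait."""
        if self.allowed_mbit() >= mbit:
            self.credit -= mbit
            return True
        return False

# Example: 1 Mbps average entitlement, 400 Mbit (one photo album) of burst credit.
bucket = BurstAllowance(avg_mbps=1.0, burst_mbit=400.0)
print(bucket.consume(400.0))   # True: the first album downloads at full speed
print(bucket.consume(400.0))   # False: must wait roughly 400 s for credit to refill
```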
Somehow, though, 4G network operators are going to have to find a way to manage usage density. Trying to build enough base stations to give everybody all they want just doesn’t seem economically or technically practical.
Drucker is president of Drucker Associates.
He may be contacted at edrucker@drucker-associates.com.


