Tech Insights - Strategies for Meeting 4G Performance Objectives
Sooner or later, the industry will need to react to expectations for 4G, so what are the options?
With Verizon Wireless accelerating its schedule for LTE network rollouts and other carriers committing billions of dollars to play catch-up, I think it’s safe to say that the 4G era is well under way. So it’s only a matter of time before users become disappointed when the performance they receive from 4G networks falls far below what current hype suggests they should expect.
Why is this inevitable? Because the sterling data speeds anticipated for LTE, particularly on the downlink, are predicated on extremely high carrier-to-noise ratios (CNR) in the radio access network (RAN) that only exist when the user and serving base station are in close proximity and levels of co-channel interference are very low.
In the real world of cellular RANs, this fortunate combination of circumstances is comparatively rare, so the throughput speeds actually realized are generally much lower than the vaunted “up to” figure. To make matters worse, the throughput available on a particular network sector, already diminished by less-than-optimal channel conditions for the various users being simultaneously served, must be shared among those users. So, in a “mature” 4G network – meaning one that serves enough users to turn a profit for its operator – spectrum that is theoretically able to sustain a downlink throughput of “up to” perhaps 50 Mbps per sector might actually provide speeds of about 1 Mbps to a typical user during peak usage times.
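The back-of-the-envelope arithmetic behind that estimate can be sketched as follows. All of the figures here – the fraction of peak rate that survives real channel conditions and the number of simultaneous users – are illustrative assumptions, not measurements:

```python
# Rough per-user downlink throughput at peak times (illustrative figures only).
sector_capacity_mbps = 50.0   # the theoretical "up to" downlink rate per sector
realized_fraction = 0.4       # share of peak rate surviving real channel conditions (assumed)
active_users = 20             # simultaneous users sharing the sector at peak (assumed)

per_user_mbps = sector_capacity_mbps * realized_fraction / active_users
print(per_user_mbps)  # 1.0 Mbps, consistent with the estimate above
```

Change any of the assumed inputs and the per-user figure moves proportionally, which is why peak-hour experience varies so widely between sectors.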
One Mbps isn’t going to deliver the fantasy of streaming high-definition 3D video that the public is being led to believe is just around the corner with 4G service. Clearly, the wireless industry is sooner or later going to have to deal with this disconnect between hype and reality. As I see it, there are five main strategic moves that can be used in this regard. They are:
• Control peak demand for data throughput.
• Utilize additional spectrum for 4G networks.
• Increase the concentration of network base station deployment.
• Improve spectrum efficiency through technology advancement.
• Improve spectrum efficiency through optimization of RAN configuration and channel (frequency) utilization.
Controlling peak demand really means limiting per-user throughput, at least during times of peak demand. AT&T and Verizon are already doing this to some extent, mainly on their 3G networks, with monthly data caps. This is a rather crude approach which has the dual disadvantages of only indirectly (and poorly) controlling peak loading and being unpopular with users. Better tools will probably be introduced over time, but the real key to limiting peak demand is managing user expectations for the service.
The fact is that streaming video (other than short “clips”) is a positively terrible application for wireless data networks. Even if the industry excels at all of the other four network enhancement approaches discussed below, there will never be enough spectrum available to afford everyone the luxury of streaming Netflix videos to their smartphones whenever and wherever the mood strikes. Prohibiting these sorts of bandwidth-hog apps, or at least making them prohibitively expensive, will go a long way toward allowing broadband networks to deliver broadband performance.
Procurement of additional spectrum for 4G networks is going to be a long and difficult process, and results will obviously be limited by the laws of physics as well as FCC regulations. Spectrum is a scarce resource, and I expect that operators will eventually go to extreme measures to meet growing demands. Already, it is widely believed that AT&T’s prime motive for acquiring T-Mobile USA is access to 4G spectrum. At this writing, that move has been at least temporarily thwarted by a government antitrust suit, but I expect that for the next few years we will see intense, and ever more expensive, jockeying and lobbying among industry players to grow and carve up the spectrum pie. If everything goes right, we might end up doubling, or perhaps even tripling, the amount of spectrum available for 4G, but that will not come close to keeping pace with anticipated demand. Other measures will almost certainly be needed.
Historically, wireless network capacity has been increased most effectively through the use of “cell splitting,” which reduces the average physical spacing between RAN base stations. The goal is to allow available spectrum to be reused more intensively in a given area. Theoretically, cell splitting can be done indefinitely, but there are practical limits. For example, if a 4G RAN in a given area grows from 100 base stations to 200, capacity will double only if the average throughput per base station remains constant. That requires that average channel quality – meaning interference levels – also remains constant, something that is very difficult to achieve as cells get closer together.
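The cell-splitting arithmetic can be made concrete with a deliberately simple model; the per-cell throughput figures below are assumptions for illustration, not field data:

```python
def area_capacity_mbps(base_stations, avg_cell_throughput_mbps):
    """Aggregate downlink capacity in an area: cell count times the
    average per-cell throughput (a deliberately simple model)."""
    return base_stations * avg_cell_throughput_mbps

baseline = area_capacity_mbps(100, 20.0)   # before the split (assumed figures)
ideal = area_capacity_mbps(200, 20.0)      # split with interference held constant
realistic = area_capacity_mbps(200, 15.0)  # per-cell rate eroded by closer neighbors (assumed)

print(ideal / baseline, realistic / baseline)  # 2.0 vs. 1.5
```

Doubling the cell count only doubles area capacity in the ideal case; if tighter spacing raises inter-cell interference and shaves the per-cell rate, the gain falls well short of 2x.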
For data networks, a more effective means of cell splitting will probably be the extensive use of femtocells – very low-powered base stations deployed mainly indoors or near street level where they can serve concentrations of traffic without contributing much interference to the overlying “macro” RAN.
A good deal of hope for future 4G network capacity increases is being pinned on development of advanced RAN technologies. In particular, the industry expects multiple-input, multiple-output (MIMO) transmission systems and “smart” antenna array technologies to provide vast improvements in spectrum efficiencies. Perhaps they will, but right now I am more than a bit skeptical. The main problem is that, for optimal performance, these systems rely on certain “theoretical” characteristics for the RF path between base station and user device that are easy to simulate on a computer but that often do not exist in the real world. My guess is that technical advances in femtocell utilization will prove to deliver more spectrum efficiency, at much lower cost, than MIMO and smart antenna systems.
That brings us, finally, to good old RAN optimization, the primary objective of which is to maximize capacity by minimizing mutual RF interference among simultaneous users of the network. As with all cellular systems, a critical part of 4G optimization will deal with RAN configuration, primarily in the form of base station locations and, most critically, antenna heights. Given the dearth of automated tools available today, configuration optimization is tedious, labor-intensive and expensive, but it is necessary for maximizing the capacity of available spectrum. It is also an important part of the cell splitting process.
In addition to configuration optimization, and unlike CDMA-based 3G networks, LTE RANs require frequency planning, mainly in the form of reuse management for downlink OFDM subcarrier groups. Frequency optimization in LTE RANs is a particularly complex proposition because of the nature of packet data service. In the ideal, every assignment of “resource blocks” (which includes one or more specific subcarrier groups) for downlink packet transmission, and every decision on transmit power levels to be used, is made so as to minimize mutual co-channel interference levels throughout the RAN. That’s a task of enormous complexity.
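To convey the flavor of the problem, here is a toy greedy scheme that hands each sector the subcarrier group least used by its already-assigned neighbors. The sector names, the neighbor map, and the three-group reuse pattern are all hypothetical simplifications; a real LTE scheduler makes this kind of decision per resource block, per transmission interval, with power levels in the mix:

```python
def assign_groups(sectors, neighbors, n_groups=3):
    """Greedy reuse planning: give each sector the subcarrier group
    appearing least often among its neighbors assigned so far.
    A toy sketch, not an LTE scheduler."""
    assignment = {}
    for s in sectors:
        used_nearby = [assignment.get(n) for n in neighbors.get(s, [])]
        # Pick the group with the fewest occurrences among neighbors.
        assignment[s] = min(range(n_groups), key=used_nearby.count)
    return assignment

# Three mutually interfering sectors (hypothetical topology).
neighbors = {"A": [], "B": ["A"], "C": ["A", "B"]}
print(assign_groups(["A", "B", "C"], neighbors))  # {'A': 0, 'B': 1, 'C': 2}
```

Even this toy version shows why the real problem explodes: the quality of each assignment depends on every other assignment, and in a live network those assignments change millisecond by millisecond.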
The good news about RAN optimization is that it can be hugely effective. It doesn’t take all that much optimization to realize a 3 dB improvement in average downlink C/I, which will roughly double throughput capacity. That’s why I believe the best hope for LTE technology advances isn’t in systems like MIMO but rather in tools that promote RAN configuration and frequency optimization. The other strategies I’ve discussed will certainly be important, but in the long run, LTE capacity and performance will only go as far as RAN optimization will take it.
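The “3 dB roughly doubles throughput” rule of thumb can be checked against the Shannon capacity formula; it holds best for low-SINR (cell-edge) users, where capacity grows nearly linearly with SINR. The bandwidth and SINR values below are assumptions for illustration:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, sinr_db):
    """Shannon capacity C = B * log2(1 + SINR), returned in Mbps
    when bandwidth is given in MHz."""
    sinr_linear = 10 ** (sinr_db / 10)
    return bandwidth_mhz * math.log2(1 + sinr_linear)

# Hypothetical cell-edge user in a 10 MHz channel, before and after
# a 3 dB improvement in downlink C/I.
before = shannon_capacity_mbps(10, -3)  # ~5.9 Mbps
after = shannon_capacity_mbps(10, 0)    # ~10 Mbps
print(after / before)
```

At these low SINRs the 3 dB gain yields roughly a 1.7x capacity increase, close to the doubling cited above; at high SINRs the same 3 dB buys much less, which is another reason optimization effort pays off most at the cell edge.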
Drucker is president of Drucker Associates. He may be contacted at email@example.com.