
From the Magazine: LTE Network Self-Optimization: Present and Future

Wed, 05/15/2013 - 10:56am
Elliot Drucker

One of the more interesting trends in the wireless industry is development of technologies for the so-called self-optimizing network (SON). Particularly in the case of LTE, it is widely accepted that successful implementation of SON features will be crucial if network capacity is to expand to meet anticipated growth in demand. With this in mind, it might be useful to take a look at where SON stands now and what it might provide in the future.

The Third Generation Partnership Project (3GPP) is supposed to be developing standards that support SON features in a multi-vendor LTE network environment. 3GPP first published TR 36.902 in late 2009 as part of the general LTE standards Release 9. A few revised versions of that Release 9 SON document have been issued, but the most recent dates back to mid-2011. It provides only basic descriptions of “use cases” that illustrate the potential functionality of certain SON features, and few if any specifics on how they might actually work. Since then, 3GPP has issued general Releases 10 and 11 of its suite of LTE standards, neither of which included an update of TR 36.902.

This seeming lack of progress on SON standards should not be interpreted as a lack of interest in SON technology development. Indeed, major infrastructure vendors like Ericsson and Huawei strongly promote their existing and planned SON capabilities on their public websites and at industry trade shows. These offerings provide an assortment of features that address virtually every aspect of network deployment and operation, but those dealing specifically with the LTE radio access network generally follow the key capabilities discussed in the last published version of TR 36.902: configuration of physical cell ID; optimization of handoff parameters; determination of cell neighbor lists; optimization of random access channel (RACH) parameters; and configuration and optimization of inter-cell interference coordination (ICIC) features.

The first four of these “standard” SON features for LTE RANs can generally be thought of as automation of basic engineering functions in wireless networks. When a new base station is deployed, or an existing one is reconfigured, functions such as identifying non-conflicting physical cell IDs and determining neighbor lists have traditionally been handled as part of the engineering process. Automation obviously has the advantage of reducing the engineering effort (and thus the cost) involved in eNodeB deployment and reconfiguration, but more importantly it makes practical the widespread use of very small cells. Consider, for example, the technical and business model of allowing a customer to buy a femtocell device at a local retail outlet, take it home, plug it in, connect it to some sort of broadband Internet access, and turn it on with the expectation that it will operate properly, fully integrated with the surrounding macro network. That obviously doesn’t work if the process requires case-by-case human engineering of air interface parameters. Automatic configuration of cell ID, RACH and handoff parameters, and neighbor lists is therefore crucial to the “plug and play” model of small cell deployment, which in turn is a big part of the attraction of “heterogeneous networks,” or “Het-Nets,” another area currently the subject of intensive technical development in the industry.
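
To make the automation idea concrete, here is a minimal sketch of how conflict-free physical cell ID assignment might work, treated as a greedy graph-coloring problem. The cell names and neighbor map are hypothetical; a real SON implementation would build the neighbor graph from UE measurement reports and operator topology data, and would handle far more constraints than shown here.

```python
# Minimal sketch: conflict-free physical cell ID (PCI) assignment as greedy
# graph coloring. The neighbor map below is hypothetical; a real SON system
# would derive it from UE measurement reports and OSS topology data.

def assign_pcis(neighbors, num_pcis=504):
    """Greedily assign each cell the lowest PCI not used nearby.

    LTE defines 504 PCIs (0..503). Two rules are enforced: a cell must not
    share a PCI with a direct neighbor (collision), and two neighbors of a
    common cell must not share a PCI (confusion).
    """
    pci = {}
    for cell in sorted(neighbors):
        # PCIs already taken by direct neighbors (collision avoidance)
        taken = {pci[n] for n in neighbors[cell] if n in pci}
        # PCIs taken by neighbors-of-neighbors (confusion avoidance)
        for n in neighbors[cell]:
            taken |= {pci[m] for m in neighbors.get(n, ()) if m in pci and m != cell}
        pci[cell] = next(p for p in range(num_pcis) if p not in taken)
    return pci

# Hypothetical overlapping macro/femto cluster
neighbors = {
    "macro1": {"macro2", "femto1"},
    "macro2": {"macro1", "femto1", "femto2"},
    "femto1": {"macro1", "macro2"},
    "femto2": {"macro2"},
}
print(assign_pcis(neighbors))
```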

These four elements of self-configuration and self-optimization are certainly important. However, in my opinion, by far the biggest potential “win” that might ultimately be derived from SON technology will come from management of inter-cell interference. I am not referring here to the currently used models for ICIC, or even enhanced ICIC (eICIC), which are static configurations based on simplistic RF propagation and interference models. (See “Are Het-Nets the Answer?” Wireless Week, August 2012.) Instead, I am envisioning a future LTE interference management system that responds to the dynamically changing potential interference situation in the network.

To see how such a dynamic interference control system might work, we can start by imagining an extensive LTE network serving a vast distribution of mobile users. At any given time, a random fraction of these users will be actively engaged in data communications sessions. Most of the active UEs will be subject to potential mutual uplink or downlink interference. From moment to moment, the mix of active users, their geographic locations, and their data throughput demands will change. That is largely what makes the potential interference situation dynamic.

Now let’s assume that within this network there are three cells – we’ll call them A, B, and C – with geographical RF propagation footprints that significantly overlap. Under the current ICIC model, to avoid potential inter-cell interference, UEs in these three cells would have to be assigned resource blocks from three different portions of the available network spectrum, at least for UEs located near the largely mythical “cell edge.” (See “The Myth of the ‘Cell Edge’,” Wireless Week, December 2011.) But there will be plenty of cases where, for example, an active UE in cell A is located where it would be interfered with by downlink co-frequency use in cell B, thus putting it at the “cell edge,” but would not be bothered by co-frequency use in cell C. At the same time there may very well be an active UE in cell C that could tolerate simultaneous co-frequency use in cell A. Those two UEs – one each in cells A and C – could then be allocated common downlink resource blocks without excessive mutual interference. Extended to every active UE in an LTE network, such resource allocation based on real-time UE positioning could effect a substantial increase in average spectrum reuse density and a corresponding increase in network throughput capacity and QoS. In its ultimate form, the dynamic system would optimize every downlink and uplink resource block allocation in every cell, in both the frequency and time domains, so as to minimize RF interference.
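
As a rough illustration of the A/B/C example, the sketch below checks whether two UEs served by different cells could share the same downlink resource blocks, using a simple SINR test on their reported signal levels. The RSRP figures, noise floor, and threshold are all hypothetical numbers chosen to mirror the scenario above, not values from any standard.

```python
import math

# Sketch: two UEs served by different cells may reuse the same downlink
# resource blocks if each still clears a minimum SINR with the other UE's
# serving cell transmitting co-frequency. All numbers are illustrative.

def db_to_mw(dbm):
    return 10 ** (dbm / 10.0)

def sinr_db(serving_dbm, interference_dbm, noise_dbm=-100.0):
    """Downlink SINR implied by reported RSRP levels (all in dBm)."""
    s = db_to_mw(serving_dbm)
    i = db_to_mw(interference_dbm) + db_to_mw(noise_dbm)
    return 10 * math.log10(s / i)

def can_share(ue1, ue2, min_sinr_db=3.0):
    """True if each UE tolerates the other's serving cell as an interferer."""
    s1 = sinr_db(ue1["serving_rsrp"], ue1["rsrp_from"][ue2["cell"]])
    s2 = sinr_db(ue2["serving_rsrp"], ue2["rsrp_from"][ue1["cell"]])
    return s1 >= min_sinr_db and s2 >= min_sinr_db

# Hypothetical measurement reports for the three-cell example:
ue_in_A = {"cell": "A", "serving_rsrp": -85.0,
           "rsrp_from": {"B": -88.0, "C": -115.0}}  # B is strong, C is weak
ue_in_C = {"cell": "C", "serving_rsrp": -80.0,
           "rsrp_from": {"A": -110.0, "B": -95.0}}

print(can_share(ue_in_A, ue_in_C))  # True: A and C can reuse the same RBs
```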

This level of dynamic interference control will require two key elements. The first is complete real-time information about potential interference situations that have to be avoided. The most practical way to get this information is for each active UE to measure and report downlink signal levels from all potentially interfering cells, something that is actually within the capabilities of today’s LTE UEs.
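
Translating those reports into usable engineering data is straightforward in principle: if the cell’s reference signal transmit power is known, the reported RSRP implies a path loss. A minimal sketch, with an assumed (purely illustrative) transmit power:

```python
# Minimal sketch: a UE's reported RSRP implies the path loss to the measured
# cell, given that cell's reference signal transmit power. The power figure
# here is an illustrative assumption, not a standard value.

RS_TX_POWER_DBM = 18.0  # assumed per-resource-element reference signal power

def path_loss_db(reported_rsrp_dbm):
    """Path loss (dB) implied by a UE measurement report for one cell."""
    return RS_TX_POWER_DBM - reported_rsrp_dbm

print(path_loss_db(-95.0))  # -> 113.0 dB between the UE and the measured cell
```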

The second required element is a mammoth system of high-speed processing that takes the downlink measurements from UEs, translates them into RF path loss, takes into account the current throughput demand for each UE and each cell, and then, in real time, optimally allocates resource blocks for upcoming frame periods.
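
A toy version of the allocation step might look like the sketch below: given a pairwise compatibility table (derived from SINR tests like the one sketched earlier), it packs UEs onto resource block groups first-fit. The table, the group count, and the greedy strategy are all illustrative assumptions; a production scheduler would also weight throughput demand, schedule in the time domain, and handle uplink, at vastly higher speed.

```python
# Toy sketch of the allocation step: pack UEs onto resource block groups so
# that no two co-scheduled UEs are mutually incompatible. The compatibility
# table and first-fit strategy are illustrative assumptions only.

compatible = {              # symmetric: True means co-scheduling is acceptable
    ("UE_A", "UE_B"): False,   # A's UE sits at B's "cell edge"
    ("UE_A", "UE_C"): True,    # the A/C pair from the example above
    ("UE_B", "UE_C"): False,
}

def ok(u, v):
    return compatible.get((u, v), compatible.get((v, u), False))

def allocate(ues, num_rb_groups):
    """First-fit: place each UE on the first RB group whose current occupants
    are all compatible with it."""
    groups = [[] for _ in range(num_rb_groups)]
    schedule = {}
    for ue in ues:
        for g, members in enumerate(groups):
            if all(ok(ue, m) for m in members):
                members.append(ue)
                schedule[ue] = g
                break
    return schedule

print(allocate(["UE_A", "UE_B", "UE_C"], num_rb_groups=2))
# -> {'UE_A': 0, 'UE_B': 1, 'UE_C': 0}: A and C share a group, B gets its own
```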

Such real-time dynamic optimization may seem impossibly complex. It probably is today, but in a few years such a system might be completely practical. It’s also possible that slightly less optimal results could be achieved with much less complexity. In any event, I believe that dynamic interference control is where future SON development efforts will reap the greatest rewards.
