Carriers often make bold statements about the strengths of their networks, from data speeds and reliability to coverage area, as a way to attract new customers. They hire third-party firms to gather performance metrics on their networks, as well as on competitors' networks, testing them the way everyday consumers use them, and in turn use that data to fuel marketing and advertising campaigns.
The thing to keep in mind, however, is that all of these claims are based on data, and data is malleable. The strength of any claim rests on how the data was collected; that methodology is what underpins a carrier's ability to create provocative taglines touting "America's number one network."
The devil is in the method
There are multiple methodologies employed today to capture carrier performance metrics, and they fall into two camps. One focuses on collecting hundreds of thousands of samples of network activity in a controlled fashion (whether by app testing or by drive/walk testing with benchmarking equipment); the other is a crowdsourced approach, in which an app is downloaded to a phone and either the user initiates a test or an autonomous agent collects data in the background.
In terms of usefulness, crowdsourced testing is good for capturing a high-level snapshot of carrier performance, whereas controlled testing is an engineering-driven approach used when carriers want to monitor or perform detailed assessments of existing networks as well as network upgrades and expansions. Both are used to make carrier network claims and both appear in the media, but controlled testing clearly provides the more in-depth look at a network's performance.
Like controlled app testing, crowdsourced testing uses a smartphone to collect performance data; the primary difference is that in controlled testing a trained engineer operates the app to collect samples systematically. Crowdsourced apps, on the other hand, collect information in the background during normal phone activity or when prompted by the user (for example, when users run tests to check their upload and download speeds, along with other high-level performance details).
Crowdsourced testing offers a glimpse into the network: from this view it's possible to get a general sense of how a network is performing, but it does not capture the information needed for a detailed assessment of network performance. Beyond the data's lack of depth, the approach is limited by being uncontrolled. Any number of variables can produce random results that distort true performance: data is generated at random locations and times, and from a myriad of users with different devices, firmware versions and operating systems.
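To see why an uncontrolled sample can distort results, consider a minimal sketch (all numbers here are hypothetical, not real measurements): if a crowdsourced pool mixes flagship phones with older handsets whose modems cap out at lower speeds, the blended average reflects the device mix as much as the network itself.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def sample_throughput(device_cap_mbps, n):
    # Simulate n speed-test results on a network offering 20-150 Mbps,
    # capped by what the device's modem can actually achieve.
    return [min(device_cap_mbps, random.uniform(20, 150)) for _ in range(n)]

new_devices = sample_throughput(150, 800)  # flagship phones, no cap hit
old_devices = sample_throughput(40, 200)   # older handsets capped at 40 Mbps

mixed = new_devices + old_devices
print(f"mixed-population average: {sum(mixed) / len(mixed):.1f} Mbps")
print(f"new-device-only average:  {sum(new_devices) / len(new_devices):.1f} Mbps")
```

The same network yields two different "averages" depending purely on who happened to run the test, which is exactly the kind of variable a controlled methodology pins down.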
It is, however, cost effective, as no trained specialist is required to perform the tests and the only real equipment involved is a user's smartphone. The data is easy to generate and present to media, so unsurprisingly reports built on crowdsourced data can gain quite a bit of attention. While the information should be taken seriously, it should also be taken with all of the caveats that come with uncontrolled data.
Controlled testing can be further broken down into testing with an app or testing with a device integrated with specialized benchmarking equipment. Both involve a benchmarking specialist, with app testing gathering surface-layer data and the latter capturing layer 3 data (data that contains critical network engineering-level information). App testing can yield details about a network's average throughputs, maximum achievable throughputs, signal strength, task success rates and channel information across voice and data networks. Testing with benchmarking equipment, whether drive or walk testing, captures detailed metrics covering voice and packet data. Think of app testing as a way to understand the "frame" of a network; testing with dedicated equipment is required to understand the "soul" of any given network.
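The app-level metrics mentioned above reduce to straightforward arithmetic over a log of test tasks. A minimal sketch, using made-up sample values rather than any real test data:

```python
# Hypothetical log from a controlled app test: each entry is one download
# task with its measured throughput (Mbps) and a success flag.
samples = [
    {"mbps": 87.2,  "ok": True},
    {"mbps": 110.5, "ok": True},
    {"mbps": 0.0,   "ok": False},  # failed task: no throughput achieved
    {"mbps": 95.1,  "ok": True},
]

successful = [s["mbps"] for s in samples if s["ok"]]

avg_mbps = sum(successful) / len(successful)   # average throughput
max_mbps = max(successful)                     # maximum achievable throughput
success_rate = len(successful) / len(samples)  # task success rate

print(f"avg {avg_mbps:.1f} Mbps, max {max_mbps:.1f} Mbps, "
      f"success {success_rate:.0%}")
# → avg 97.6 Mbps, max 110.5 Mbps, success 75%
```

Real benchmarking tools compute far more than this, of course; the point is that app-level results are summary statistics, while the layer 3 data behind them is what engineers dig into.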
In order to collect detailed information on voice, for example, testing with benchmarking equipment is a necessity. App testing simply can't deliver comparable information on voice, which is crucial: according to recent research commissioned by Global Wireless Solutions, voice still ranks as one of the most popular uses of phones today. To properly determine voice quality, benchmarking equipment running standard call tests and scoring algorithms must be deployed, such as MOS (Mean Opinion Score) and POLQA (Perceptual Objective Listening Quality Analysis).
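In practice, an algorithm such as POLQA scores each test call by comparing the received audio against the reference clip, producing a per-call MOS on a scale running roughly from 1 (bad) to 5 (excellent). Aggregating those scores is simple; a sketch with invented per-call values (the 3.0 "degraded" threshold is illustrative, not a standard):

```python
# Hypothetical per-call MOS values from an automated voice test run.
call_mos = [4.2, 3.9, 4.4, 2.1, 4.0]

avg_mos = sum(call_mos) / len(call_mos)
degraded = [m for m in call_mos if m < 3.0]  # illustrative quality threshold

print(f"average MOS: {avg_mos:.2f}")
print(f"degraded calls: {len(degraded)} of {len(call_mos)}")
# → average MOS: 3.72
# → degraded calls: 1 of 5
```

The hard part is not this arithmetic but generating the per-call scores, which is precisely why the dedicated equipment is needed.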
There is a trade-off, though: why not always capture performance metrics on data and voice through this sort of testing? The main hurdle is that it is resource intensive. The equipment is expensive, and if drive testing is employed to gather data across regions, costs relating to fleet maintenance must be considered. Beyond that, this method collects gigabytes of data, which is then processed, reviewed and analyzed by teams of wireless engineers, QA technicians, IT specialists and more. It's a lot of manpower, but it is required to thoroughly dig through the collected data to identify trends, assess performance and compare results against previous collection periods. Data collected from app testing gets a similar analytical treatment, although the resulting information isn't as in-depth as an assessment built from measurements collected with dedicated test equipment.
In the end, one maxim should guide anyone reading material on a network's strengths and weaknesses: trust, but verify. Reports on network performance should be examined closely to see where the data comes from and what methodology was used to capture it.
Dr. Paul Carter is president and CEO of Global Wireless Solutions, a network benchmarking, analysis and testing company. Global Wireless Solutions has worked with a number of prominent businesses in the wireless industry, including AT&T and Sprint. Prior to GWS, Dr. Carter directed business development and CDMA engineering efforts for LCC, an independent wireless engineering company.