Press Release -- August 29th, 2017
Source: Verizon

A primer on network testing

By Roger Entner, Guest Columnist, roger.entner@reconanalytics.com

It’s that time of year again. No, not holiday season. It’s the season when wireless network testing results are released and companies use the often conflicting data to jockey for media and customer attention. To help you navigate the basis of these claims, we’ve put together a primer on network testing that compares the strengths and weaknesses of the three key approaches: drive testing, crowd testing, and surveys.

Drive testing

Drive testing is done by a fleet of cars and small trucks that carry test devices for multiple network providers on board. The vehicles drive for potentially thousands of miles in a given city and its surrounding area, making non-stop calls and data connections and measuring, for each provider, which network – 3G, 4G LTE, etc. – they can access and how reliable and fast the connection is. For voice, the test measures whether the call goes through, whether it is dropped, and the voice quality of the connection. For data, it measures whether a connection can be established and maintained, and the speed and latency of that connection.
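To make these metrics concrete, here is a minimal sketch in Python of the kind of data-side probe a drive-test rig might run repeatedly on each carrier’s device, logging connection success, latency to first byte, and throughput. The test URL is a placeholder, and real rigs use dedicated test servers and far more elaborate instrumentation; this is an illustration of the measurements, not an actual drive-testing tool.

```python
# Illustrative sketch only: logs the data-side metrics a drive test records
# (connection success, latency, throughput). TEST_URL is a placeholder.
import time
import urllib.request

TEST_URL = "https://example.com/"  # hypothetical test endpoint

def probe(url: str) -> dict:
    """Attempt one data connection and record how it performed."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            first_byte = time.monotonic()  # latency: time until the server responds
            payload = resp.read()          # download the payload to gauge speed
            done = time.monotonic()
        return {
            "connected": True,
            "latency_s": first_byte - start,
            "throughput_bps": len(payload) * 8 / max(done - first_byte, 1e-9),
        }
    except OSError:
        # Failed attempts matter too: reliability is part of the score.
        return {"connected": False}

if __name__ == "__main__":
    print(probe(TEST_URL))
```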

Contrary to what you might assume, not all devices connect equally well to a network. To account for this, drive tests are usually done with the same device, or devices as similar as possible, on every network tested. Most of the time, Android devices are used, as Android allows much greater access to the underlying mechanics of the connection than iOS does.

While drive testing is among the most scientific of approaches, the weakness of some drive-testing results is that the metrics are recalibrated every year. When done right, this does not distort the rankings, but it makes it extremely difficult to accurately track the progress the various carriers have made from year to year.

Crowd-sourced data

Crowd-sourced data is gathered via an app that consumers download to their devices. The tests are run whenever users choose, on any device – old or new, perfectly working or slightly damaged. If a customer with a defective device repeatedly runs a crowd-testing application to verify his or her experience of a slow connection, is the network or the device to blame?

People usually run such a test when they want to show off how fast their connection is, or to find out why the connection is slow. This often leads to the extremes being recorded rather than a true sense of the average connection. Additionally, because of the nature of crowd-sourced data, certain demographics are over-represented. For example, urban areas with younger consumers tend to generate more tests, meaning networks that are less built out in rural areas can be favored in the results.
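A small worked example, using made-up numbers, shows how this over-representation skews the headline figure. Here a network that is strong in cities but weak in rural areas looks considerably faster in the raw crowd data than it does once results are reweighted to match the actual population:

```python
# Hypothetical figures, for illustration only: average measured download
# speed (Mbps) by demographic group, the group's share of the population,
# and its share of crowd-sourced tests (urban/young users run more tests).
population_share = {"urban 18-34": 0.15, "urban 35+": 0.35,
                    "rural 18-34": 0.10, "rural 35+": 0.40}
test_share       = {"urban 18-34": 0.40, "urban 35+": 0.30,
                    "rural 18-34": 0.10, "rural 35+": 0.20}
speed_mbps       = {"urban 18-34": 45.0, "urban 35+": 40.0,
                    "rural 18-34": 18.0, "rural 35+": 15.0}

raw      = sum(test_share[g] * speed_mbps[g] for g in speed_mbps)
weighted = sum(population_share[g] * speed_mbps[g] for g in speed_mbps)

# The raw crowd average overstates what the typical customer experiences.
print(f"raw crowd average:       {raw:.2f} Mbps")       # ~34.8
print(f"population-weighted avg: {weighted:.2f} Mbps")  # ~28.6
```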

An advantage of this approach is that it is an ongoing measure. Customers run these tests every day, whereas, because of the significant effort involved, drive tests and surveys are conducted monthly, semi-annually, or even annually.

Crowd-sourced data’s strength comes from the fact that it is gathered from real users in real-life situations, often via millions of tests – but that is also its central issue. Such tests are not repeatable or verifiable, and anomalies in the data shake my faith in it. For example, a crowd-sourcing provider a few years ago broke out its results for a big-brand carrier and its two sub-brands. Although all three were running on exactly the same network, the provider presented three vastly different results. The data is also uncontrolled and not immune to outside manipulation.

Customer experience studies

The third most common way to measure the performance of a network is through a survey asking consumers about their experiences and perceptions.

The advantage of surveys over crowd sourcing is that surveys are generally conducted with a panel that mirrors the age, race, gender, socio-economic, and geographic distribution of the underlying population.

Though it holds this benefit over crowd sourcing, survey testing has issues of its own. Survey results can confuse the performance of a device with the performance of the network. In addition, surveys rely on consumers’ recollection of how the network has performed, which may be imperfect, as people take into account not only how the network is behaving now but also how it behaved in the past. Finally, people’s opinions are highly subjective: what is fast and reliable to one person is not necessarily fast and reliable to someone else.

Summary

While marketing teams can cite whichever test type best suits their purposes, it’s instructive to look at what network engineers use when making decisions about network improvement. Because drive-testing results are more indicative of actual network behavior than those of any other methodology, engineers rely on drive testing as the most scientific approach. Crowd-sourced and customer-survey data are of course important and insightful, and should not be ignored by any means. However, when it comes to truly evaluating a network’s performance, I’d look first and foremost at what the drive tests reveal.

How the three tests compare

                                    Drive Testing    Crowd Testing           Survey
Consistent & repeatable             Yes              No                      No
Controlled measurement of network   Yes              No                      No
Sample size                         Millions         Millions to billions    Thousands
Device bias                         No               Yes                     Yes
Self-selection bias                 No               Yes                     Yes
Geographic & socio-economic bias    No               Yes                     Yes
Subject to manipulation             No               Yes                     No
Ongoing testing                     No               Yes                     No
Longitudinal analysis               No               Yes                     Yes

Roger Entner is the Founder and Lead Analyst of Recon Analytics. Roger’s main focus is the competitive telecom marketplace, and he is a leading expert on researching the wireless experience. More information at: http://reconanalytics.com
