How should regulators measure broadband quality?

One of the surprises in the Federal Communications Commission’s proposed “network neutrality” rules is a requirement for ISPs to report packet loss rates. This additional rule is significant, since measurement is always de facto regulation; in this case, regulation of ISP service quality.

Buyers of broadband services wish to select the best-value service provider and plan for their needs. To make informed choices, they need a clear view of prospective service performance, and thus its likely fitness-for-purpose. How should the quality of a broadband service be represented?

Speed is the current QoE proxy

“Speed” has, until now, been an adequate general proxy for quality of experience (QoE): one simple metric has allowed the market to function. However, users now place an increasingly diverse set of performance demands on broadband networks, and peak speed is an ever weaker technical proxy for application performance and user experience.

If the market is to keep functioning in the face of this increasingly sophisticated demand, regulators are charged with finding appropriate new supply-side performance metrics. Yet holding service providers to account is difficult.

The end-to-end supply chain is complex and hard to manage. A user’s poor experience may be caused not by their access provider, but by another supplier further along the value chain. Little “hard evidence” is available of where problems exist or what really causes them, and what data does exist only weakly reflects the true customer experience. The attribution of the root cause of performance problems is riddled with uncertainty, and attempting to pin blame could embroil regulators in endless lawsuits.

This lack of standard industry performance metrics means citizens must rely on third-party benchmarking services. These typically indicate how well a service is optimised for peak burst speed, not how well it performs for essential interactive applications.

The FCC picks the wrong metric

In the face of this need for better metrics to make the market more transparent, the FCC has chosen to focus on average packet loss. Unfortunately, this is a very poor choice of metric, for two main reasons:

  1. There is no quality in averages. The vital detail of the “tail” of the probability distribution, in this case the level of burst packet loss, is invisible. Without that detail, the measure tells us little about the true customer experience (see the first sketch after this list).
  2. Load, loss and delay form a system with only two degrees of freedom. Decreasing loss means (all other things being equal) increasing delay, or reducing the load the network will accept. By fetishizing low loss, we paradoxically make the customer experience worse (see the second sketch after this list).
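
To make the first point concrete, here is a minimal sketch (in Python, with made-up loss traces; the numbers are illustrative, not measurements) of two links that report an identical 1% average packet loss yet behave very differently:

    # Two illustrative loss traces: 1 = packet lost, 0 = delivered.
    # Both average exactly 1% loss over 1,000 packets.
    import random

    random.seed(42)

    # Link A: 10 losses scattered at random across the trace.
    trace_a = [0] * 1000
    for i in random.sample(range(1000), 10):
        trace_a[i] = 1

    # Link B: the same 10 losses in one contiguous burst, e.g. a
    # momentary outage that freezes a video call or VoIP session.
    trace_b = [0] * 1000
    for i in range(500, 510):
        trace_b[i] = 1

    def average_loss(trace):
        return sum(trace) / len(trace)

    def longest_burst(trace):
        # Length of the longest run of consecutive lost packets.
        longest = current = 0
        for lost in trace:
            current = current + 1 if lost else 0
            longest = max(longest, current)
        return longest

    for name, trace in [("A (scattered)", trace_a), ("B (bursty)", trace_b)]:
        print(f"Link {name}: average loss = {average_loss(trace):.1%}, "
              f"longest burst = {longest_burst(trace)} packets")

Both links would comply with any rule of the form “average loss below 1%”; only the burst figure separates benign background loss from a user-visible outage.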
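
To illustrate the second point, here is a toy single-queue simulation (again only a sketch, under simplifying assumptions: slotted time, bursty arrivals of at most two packets per slot, one packet served per slot). At a fixed offered load, enlarging the buffer trades loss for delay:

    import random

    random.seed(1)

    def simulate(buffer_size, load=0.95, slots=200_000):
        """Discrete-time FIFO queue. Each slot offers two independent
        arrival opportunities (so arrivals can burst) and serves one
        packet. Packets arriving at a full buffer are lost."""
        queue = arrivals = losses = accepted = wait_total = 0
        for _ in range(slots):
            for _ in range(2):                   # bursty arrivals
                if random.random() < load / 2:
                    arrivals += 1
                    if queue < buffer_size:
                        wait_total += queue      # waits behind the queue
                        queue += 1
                        accepted += 1
                    else:
                        losses += 1              # buffer full: drop
            if queue:                            # serve one per slot
                queue -= 1
        return losses / arrivals, wait_total / accepted

    for buf in (5, 50, 500):
        loss, wait = simulate(buf)
        print(f"buffer={buf:4d}: loss={loss:6.2%}, mean wait={wait:7.2f} slots")

Nothing about the network “improves” between the runs: the same offered load is simply converted from loss into delay. A regulator that rewards only the loss number invites providers to over-buffer and worsen latency for interactive applications.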

Hence this metric is unfit for purpose, and may drive undesirable market behaviour. Indeed there are many unanswered questions: Over what network elements is the reporting to happen? How should this loss be reported? What does any particular loss pattern mean? What happens during events like DDoS attacks?

So if not this, what is a good regulatory metric?

Properties of a good regulatory QoE metric

A QoE metric that enables true market transparency must satisfy many requirements. It should:

  • Be a strong proxy for QoE. We want an accurate view of network performance as seen through the eyes of the customer.
  • Be able to isolate problems in supply chains. The methodology for attribution of blame has to be robust in order to stand up to challenge in court.
  • Offer an auditable evidence chain. We need to be able to prevent data tampering, and to trace the input measurements through to the QoE output measures (a sketch of one approach follows this list).
  • Be non-intrusive. We don’t want a “Heisenberg effect” where the act of measurement distorts performance.
  • Work for all types of bearer. It’s no good for regulatory use if it can’t help consumers compare all offers.
  • Be cheap to gather and operate. The costs of collection, storage and analysis have to scale sub-linearly with network size, subscriber count, data usage, and time.
  • Be non-proprietary. There need to be multiple potential suppliers, and the metric must be suitable for incorporation into standards.
  • Have a scientific basis. This means a firm, documented mathematical foundation.
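
As an illustration of what an auditable evidence chain could look like (a minimal sketch of one standard technique, hash-chaining, not a specific regulatory proposal), each measurement record can incorporate the hash of its predecessor, so that any retrospective tampering breaks the chain:

    import hashlib
    import json

    def record(measurement, prev_hash):
        """Append-only log entry: the hash covers both the measurement
        and the previous entry's hash, chaining the records together."""
        body = json.dumps(measurement, sort_keys=True)
        digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        return {"measurement": measurement, "prev": prev_hash, "hash": digest}

    def verify(log):
        """Recompute every hash; an altered record breaks the chain."""
        prev = "genesis"
        for entry in log:
            body = json.dumps(entry["measurement"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

    log, prev = [], "genesis"
    for m in [{"t": 1, "burst_loss": 0}, {"t": 2, "burst_loss": 7}]:
        entry = record(m, prev)
        log.append(entry)
        prev = entry["hash"]

    print(verify(log))                       # True
    log[1]["measurement"]["burst_loss"] = 0  # tamper with the evidence
    print(verify(log))                       # False

A real deployment would also need trusted timestamps and independent collection points, but tamper-evidence itself is cheap to provide.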

There is (and can be) only one such measure. We call it “ΔQ”.

In an upcoming article I will summarise what this (new) ΔQ measure is all about. For a sneak preview, see this deck.

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.