Why “neutrality” is too weak to protect broadband buyers

Multiple regulators have begun to take an interest in broadband quality metrics. These efforts are a strong hint that the regulatory approaches proposed by “net neutrality” campaigners are insufficient. That is because “neutrality” is too weak to give broadband buyers the service quality protection that they need.

Based on my conversations with Dr Neil Davies of Predictable Network Solutions Ltd, I present here the eight reasons why “neutrality” cannot work, and why a quality floor is needed instead.

Broadband policy is full of fallacies

To grasp the problem of “neutrality”, you first need to understand (at a high level) how packet networks work. Three key properties matter:

  • They are stochastic, i.e. based on probabilistic processes.
  • There is a “predictable region of operation” (PRO) for application performance outcomes.
  • The PRO (and hence the performance outcomes) is an emergent property.

See the article Three under-appreciated facts about broadband for more details.
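To make those three properties concrete, here is a toy sketch (in Python, purely illustrative and not our measurement tooling): a single bottleneck link fed with random arrivals. Each packet’s delay is random, yet the distribution of delays at a given load is stable; it is this emergent pattern, not any individual packet, that determines whether an application stays inside its PRO.

    import random

    def simulate_delays(load, n_packets=50_000, service_time=1.0, seed=1):
        """Per-packet delays through a single FIFO bottleneck at a given load (0..1)."""
        rng = random.Random(seed)
        clock = 0.0        # arrival clock
        free_at = 0.0      # time at which the link next becomes free
        delays = []
        for _ in range(n_packets):
            clock += rng.expovariate(load / service_time)  # random inter-arrival gap
            start = max(clock, free_at)                    # queue if the link is busy
            free_at = start + service_time                 # transmit the packet
            delays.append(free_at - clock)                 # total time in the system
        return delays

    if __name__ == "__main__":
        for load in (0.5, 0.8, 0.95):
            d = sorted(simulate_delays(load))
            median, p99 = d[len(d) // 2], d[int(0.99 * len(d))]
            print(f"load {load:.2f}: median delay {median:.1f}, 99th percentile {p99:.1f}")

Run it with different seeds: individual delays change, but the overall pattern at each load is far more stable. Push the load towards 1.0 and the tail explodes.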

Absolutely all of the academic, regulatory and analyst literature we have ever encountered on the subject of “net neutrality” fails to grasp the implications of these three facts. Its authors make the same basic errors, over and over: the broadband “game of chance” doesn’t work the way they assume it does.

Fallacy #1: There is no PRO

A key failing of typical analyses is that they ignore the existence of the PRO, whether for any individual application or for the collective service. Instead, they presume a Panglossian utopia of perfect experiences for all, with no consideration of behaviour in overload (i.e. when the network is driven outside the PRO).

Why care? It doesn’t matter how (un)neutral a network is if it is being driven outside of its PRO for the application(s) being delivered! This is not a theoretical concern: consumers experience it all the time, and we measure it for a living. For example, every YouTube buffering event on a wired internet-ready TV is an application going outside of its PRO.
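An idealised queueing model (a sketch under textbook M/M/1 assumptions, not a model of any real ISP) shows why the PRO has an edge: the chance of blowing an application’s delay budget stays small over a wide range of loads, then collapses quickly.

    import math

    def p_delay_exceeds(budget, load, service_rate=1.0):
        """P(packet delay > budget) for an M/M/1 queue at the given utilisation."""
        if load >= 1.0:
            return 1.0  # overload: delay grows without bound
        return math.exp(-(service_rate - load * service_rate) * budget)

    if __name__ == "__main__":
        budget = 10.0  # delay budget in units of the average service time (illustrative)
        for load in (0.5, 0.7, 0.8, 0.9, 0.95, 0.99):
            print(f"load {load:.2f}: P(delay > budget) = {p_delay_exceeds(budget, load):.3f}")

Inside the PRO, extra load is barely noticeable; just outside it, every additional percent of utilisation is catastrophic. That cliff is what a buffering viewer is experiencing, and no amount of “neutrality” moves it.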

This one issue alone is “game over!” for the idea of “neutrality”.

Fallacy #2: Local “neutral” behaviour matters

Typical analyses fixate on false concepts of “congestion” and “QoS”. This fetishises the individual throws of the “network dice” at myriad switches and routers along the end-to-end journey. That focus comes at the expense of understanding the emergent stochastic “pattern” and PRO of the whole system.

For example, an ISP might have traffic management rules that give local “priority” to VoIP. Yet it may have a worse service quality for VoIP than a lightly-loaded rival that does not “prioritise”. It’s only the PRO of the end-to-end service that matters to the user, since that’s all they ever experience.
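A rough simulation sketch of exactly this situation (with invented parameters, not measured data): a heavily loaded link that gives VoIP strict non-preemptive priority over large bulk packets, versus a lightly loaded link that treats every packet the same.

    import random

    VOIP_SERVICE, BULK_SERVICE = 1.0, 10.0   # small VoIP packets, large bulk packets

    def voip_delays(utilisation, prioritise, n_packets=100_000, seed=7):
        """Simulate one bottleneck link; return the delays experienced by VoIP packets."""
        rng = random.Random(seed)
        mean_service = 0.5 * VOIP_SERVICE + 0.5 * BULK_SERVICE
        rate = utilisation / mean_service      # Poisson arrival rate giving this load

        t, arrivals = 0.0, []
        for _ in range(n_packets):
            t += rng.expovariate(rate)
            arrivals.append((t, rng.random() < 0.5))   # (arrival time, is_voip)

        voip_q, bulk_q, delays = [], [], []
        free_at, i = 0.0, 0
        while i < len(arrivals) or voip_q or bulk_q:
            # Admit everything that has arrived by the time the link is free.
            while i < len(arrivals) and arrivals[i][0] <= free_at:
                (voip_q if arrivals[i][1] else bulk_q).append(arrivals[i][0])
                i += 1
            if not voip_q and not bulk_q:
                free_at = arrivals[i][0]               # idle until the next arrival
                continue
            if prioritise and voip_q:
                q, service = voip_q, VOIP_SERVICE      # strict priority to VoIP
            elif not prioritise and voip_q and (not bulk_q or voip_q[0] <= bulk_q[0]):
                q, service = voip_q, VOIP_SERVICE      # plain FIFO: oldest packet first
            else:
                q, service = bulk_q, BULK_SERVICE
            arrived = q.pop(0)
            free_at = max(arrived, free_at) + service  # serve it (non-preemptively)
            if q is voip_q:
                delays.append(free_at - arrived)
        return delays

    def summarise(label, delays, budget=5.0):
        mean = sum(delays) / len(delays)
        late = sum(d > budget for d in delays) / len(delays)
        print(f"{label}: mean VoIP delay {mean:.1f}, share over a {budget:.0f}-unit budget {late:.0%}")

    if __name__ == "__main__":
        summarise("hot link, VoIP prioritised  ", voip_delays(0.95, prioritise=True))
        summarise("cool link, no prioritisation", voip_delays(0.30, prioritise=False))

The “priority” knob only changes who waits behind whom at one box; the outcome that matters is the emergent end-to-end delay pattern, and in this sketch the cooler network wins without touching a single traffic management rule.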

Fallacy #3: ISPs have intentional semantics

The design philosophy of the Internet explicitly side-steps defining specific performance intentions and executing on them. There is not a single IETF standard for defining general application PROs, and no mechanism to deliver them. The emergent service quality is not an intentional behaviour of the broadband service provider.

The “best effort” approach (by definition) makes no assurances about the PRO, since there is no concept of a performance requirement. The system is not (usually) engineered to meet the needs of any specific application. That also means that as the network gets “hotter”, applications may fail in any order. No order is ever “neutral”.

For instance, the equipment behind many broadband offerings is optimised for maximum headline speed. That means large buffers, which cause VoIP to go outside its PRO and fail at relatively low loads. And if all you care about is running speed tests, that’s the right behaviour! (Is it “neutral”? Who cares!)
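The arithmetic is simple enough to do on the back of an envelope (the figures below are assumed for illustration, not a vendor specification):

    PACKET_BYTES = 1500        # a typical full-size packet
    BUFFER_PACKETS = 256       # an assumed "speed-test friendly" buffer depth
    LINK_MBPS = 10             # link rate in megabits per second

    buffer_bits = BUFFER_PACKETS * PACKET_BYTES * 8
    full_buffer_delay_ms = buffer_bits / (LINK_MBPS * 1_000_000) * 1000

    print(f"Queueing delay when the buffer is full: {full_buffer_delay_ms:.0f} ms")
    # -> roughly 307 ms of added delay before a VoIP packet even leaves the box,
    #    around double a typical ~150 ms one-way budget for interactive voice.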

Fallacy #4: Operational behaviour is always intentional

There is a false assumption of naturally emergent, deterministic behaviour: the fallacy that we live in a self-regulating network Gaia. It suffuses the literature. The Goddess of “neutrality” directs us to a networking Eden.

There’s one problem: it just ain’t so. Historical operational behaviour is not necessarily the result of any engineered intention. This is really hard for people to wrap their heads around: the “it works!” of the past was “good luck” at the network casino, and is not a reliable guide to the future.

The PRO of the past was solely an emergent property of the stochastic properties of the past. Those stochastic properties are constantly changing. The PRO of the future will change, too, whether “neutral” or not.
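A toy sketch of that point (synthetic traffic, purely illustrative): feed a link the same average load, first smoothly and then in bursts. The average is identical; the emergent delay tail, and hence whether applications stay inside their PRO, is not.

    import random

    def delays(load, burst_size, n_bursts=20_000, seed=3):
        """Single unit-rate FIFO link; packets arrive in bursts of `burst_size`."""
        rng = random.Random(seed)
        clock, free_at, out = 0.0, 0.0, []
        for _ in range(n_bursts):
            clock += rng.expovariate(load / burst_size)  # same average load either way
            for _ in range(burst_size):                  # the whole burst lands at once
                start = max(clock, free_at)
                free_at = start + 1.0
                out.append(free_at - clock)
        return out

    if __name__ == "__main__":
        for burst in (1, 10):
            d = sorted(delays(load=0.7, burst_size=burst))
            print(f"burst size {burst:2d}: 99th-percentile delay {d[int(0.99 * len(d))]:.0f}")

Yesterday’s applications “worked” because yesterday’s traffic happened to look like the first case; nothing in the network promises that tomorrow’s traffic will.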

Fallacy #5: Local “neutral” behaviour is fair

Typical analyses presume that some kind of fairness will emerge from configuring these stochastic processes in (yet-to-be-defined) “neutral” ways. They are always wrong, because this confuses the stochastic cause with the emergent effect.

“Fairness” is a property experienced by people, not packets. The only fairness we need to worry about is at the level of the emergent pattern, not of the individual stochastic processes.

This means that regulations forcing ISPs to reveal their traffic management policies are misguided (at best).

Fallacy #6: The network understands intent

There is an implicit (and delusional) belief in a “network deity” that will somehow intuit the “right” emergent intentional semantics. You might laugh, but that’s the only possible explanation for the belief that “neutral” local behaviours will turn into the desired global ones!

Indeed, the absence of “intentional semantics” means that the assumed “neutral” behaviour simply does not exist. The particular packet sizes and phasing of every flow vary, and any good outcome is always the result of good luck.

What “design” there is with traffic management merely biases the dice; it doesn’t engineer any particular intentional outcome for any application or user.

Fallacy #7: Any intent is benevolent

Even if “neutral” stochastic behaviour existed, and there were a network deity, there would be no reason to assume that it would act for the good of all! Which users and applications should be (dis)favoured as load increases?

There is no agreed technical definition of “neutral” stochastic behaviour. Indeed, there can’t be, since needs vary; my “good” is your “bad”.

We see a regulatory and policy-making community suffering from a collective gambler’s fallacy: any lack of joy must be someone’s intentional negative action. Don’t you know, ISPs are eeeeeevil! In reality, you have merely experienced an unkind roll of the dice at the network casino.

Fallacy #8: “Violations” can be easily detected

The policy literature is full of the implicit belief that there is an accurate, scalable, industry-standard way of measuring “non-neutral” behaviour. What you won’t find is a scientific analysis of the tools and techniques available, and their relationship to the stochastic and emergent properties of broadband.

As you might guess, we have some news for you. This assumption of the ease of detecting “violations” is not grounded in mathematical and operational reality.
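Here is a tiny illustration of why (the numbers are synthetic): the very same delay record can look fine or awful depending on how you summarise it, and neither summary tells you who, if anyone, “violated” anything.

    import random

    rng = random.Random(11)
    # One hour of per-second delay samples: usually ~20 ms, with brief severe spikes.
    delays_ms = [rng.gauss(20, 3) if rng.random() > 0.02 else rng.uniform(200, 400)
                 for _ in range(3600)]

    average = sum(delays_ms) / len(delays_ms)
    p99 = sorted(delays_ms)[int(0.99 * len(delays_ms))]
    bad_share = sum(d > 150 for d in delays_ms) / len(delays_ms)

    print(f"average {average:.0f} ms | 99th percentile {p99:.0f} ms | "
          f"seconds over 150 ms: {bad_share:.1%}")
    # The average says "all is well"; the tail says "voice and gaming are broken".

An averaged speed-test panel and a percentile-based quality measure are looking at the same stochastic process and reaching opposite verdicts. Neither one is a “violation” detector.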

The hell of regulating “neutrality”

So when a broadband user screams “neutrality violation!”, what is a regulator to do? It could just be that a network is being driven too hard, and the user doesn’t like the resulting performance. It could be that the accidental service outcome doesn’t happen to fit the user’s desires. It could just be an artefact of how they measure performance.

Even supposing there was some unacceptable outcome, you can’t be sure where in the supply chain it comes from. Maybe it’s a result of their home router’s interaction with their neighbour’s wireless network?

Merely looking at the configuration of all the stochastic processes doesn’t help you. A printout of every traffic management rule doesn’t tell you how those stochastic interactions delivered the particular objectionable emergent outcome.

This makes regulating “neutrality” a hell for regulators, since it wrongly presupposes that there is an objective means of defining and detecting “violations”.

The stronger alternative: a “quality floor”

There is a way out of this hell. Forget “neutrality”. It’s an invented irrelevance and depressing distraction. Instead, you should only be concerned with the emergent outcome, and ignore the stochastic inputs.

That means setting a minimum quality level, so that network operators configure their networks appropriately and don’t run them too “hot”. Fortunately, this quality level can be objectively measured, cheaply and non-intrusively.

Make operators say what the PRO is, and then make them do what they say. They then compete on their promises of quality, and on how well they fulfil them. It’s that simple.
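What might that look like in practice? A minimal sketch (the field names and floor values below are invented for illustration, not a proposed standard): the operator declares its PRO as a small set of bounds, and independent measurements are checked against them.

    from dataclasses import dataclass

    @dataclass
    class QualityFloor:
        """An operator's declared PRO for a service class (illustrative fields only)."""
        max_p99_delay_ms: float   # bound on 99th-percentile one-way delay
        max_loss_rate: float      # bound on packet loss over the measurement window

    def meets_floor(delays_ms, lost, sent, floor):
        """Check one measurement window against the declared floor."""
        p99 = sorted(delays_ms)[int(0.99 * len(delays_ms))]
        return p99 <= floor.max_p99_delay_ms and (lost / sent) <= floor.max_loss_rate

    if __name__ == "__main__":
        floor = QualityFloor(max_p99_delay_ms=50.0, max_loss_rate=0.001)
        window = [18.0 + (i % 7) for i in range(1000)]    # stand-in measurements
        print("floor met:", meets_floor(window, lost=1, sent=100_000, floor=floor))

Note what is absent: nothing in the check cares how the operator configured its queues, only whether the emergent outcome stayed inside the promised bounds.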

There is a transformation process required, by both operators and regulators, to put a quality floor in place. It requires you to reframe your measurement systems around a new way of thinking; to start characterising demand; and to create the quality floor that matters to customers and citizens. If you would like to discuss how to do this for both fixed and mobile networks, please get in touch.

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.