The real reason why network ‘neutrality’ is impossible

In “Net Neutrality: Discrimination, Competition, and Innovation in the UK and US”, Alissa Cooper and Ian Brown explore the relationship between two broadband regulatory regimes and their practical outcomes. The paper is of (paradoxical) interest as it (unintentionally) demonstrates how policy is being made without sufficient understanding of packet network performance.

This paper contains many common fallacies about performance. These fallacies are fuelling misdirected conflicts over broadband regulatory policy. The underlying (false) assumption is that ‘neutral’ (aka ‘non-discriminatory’) networks exist.

I am highlighting this paper as an exemplar of an endemic gap in scientific understanding. Bridging this gap will, I believe, transform the regulatory debate for the better.

Networks have performance constraints

The performance constraints of broadband need to be understood and respected, much like spectrum policy needs to fit within the immutable limits of the physics of electromagnetism.

The first error of the paper is to implicitly argue for a world that lies well outside the mathematical constraints of statistical multiplexing. There is then an inevitable real-world failure to deliver on the authors’ utopian vision.

This reveals a second error, which is a mischaracterisation of the relationship between the ISP’s service performance intentions, the described traffic management rules, and the delivered operational experience.

The confluence of these errors leads to an unhelpful blame game between users, application developers and ISPs. As ISPs are stuck in the middle, they are unfairly singled out as the baddies.

The underlying issue is that the universe of discourse of the paper fails to reflect actual networks in operation. Whether its subsequent policy-related claims are true therefore becomes a moot question.

There is a schedulability constraint

The essence of packet networking is to statistically share resources, resulting in contention. Networks have multiple performance constraints, and one of them is the schedulability of that contention. The effectiveness of packet scheduling is in turn limited by two factors: our knowledge of the demand for performance, and the sophistication of the mechanisms that collectively construct a matching supply.
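
To make this concrete, here is a toy illustration (mine, not the paper’s) using the textbook M/M/1 queueing formula, with an assumed service rate. Mean queueing delay grows without bound as utilisation approaches 100%, which is why some choice about how to schedule contention is always being made, whether or not we admit it:

```python
# Toy illustration: mean queueing delay for a single M/M/1 queue as load rises.
service_rate = 100_000  # packets per second the link can serve (assumed figure)

for utilisation in (0.50, 0.80, 0.90, 0.99, 0.999):
    arrival_rate = utilisation * service_rate
    # Textbook M/M/1 mean waiting time in queue: W_q = rho / (mu - lambda)
    wait_seconds = utilisation / (service_rate - arrival_rate)
    print(f"load {utilisation:6.1%}: mean queueing delay {wait_seconds * 1000:8.4f} ms")
```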

The authors posit the existence of a world of good and predictable performance from ‘non-discriminatory’ networks. Such networks are presumed to have minimal knowledge of differential performance demand, and minimal mechanisms for differential supply.

For this world to sustainably exist, it requires one of two impossible things to happen. Either contention is always negligible (so the schedulability constraint doesn’t matter), or appropriate scheduling of contention happens by magic (so the constraint appears never to bind).

In the former case you have to believe not only in a cornucopia of resources but also that more capacity always solves all performance issues. Neither is true. In the latter case, you have to believe in the unbounded self-optimisation of networks. How else do you explain that the right scheduling choices are made?
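
A back-of-the-envelope sketch (with invented figures) shows why capacity alone cannot buy you out of the problem. When a synchronised burst hits an otherwise idle FIFO link, the tail of the burst has to wait, and each doubling of capacity (and cost) merely halves that wait:

```python
# If a synchronised burst of B packets arrives at an otherwise idle FIFO link
# of capacity C packets per millisecond, the last packet waits roughly B/C ms.
B = 50_000  # burst size in packets (an assumed figure)
for multiplier in (1, 2, 4, 8, 16):
    C = 12_500 * multiplier  # base capacity scaled up (assumed figures)
    print(f"{multiplier:>2}x capacity: worst wait in the burst = {B / C:5.2f} ms")
```

Sixteen times the capacity still leaves a queueing tail; and in the real world, bursts grow along with the user population.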

A semantic model of ISP service

The authors repeatedly refer to ‘(non-)discrimination’ as the presumed technical means by which ‘neutrality’ is achieved. What does this mean?

The concept of ‘discrimination’ is only relevant to the extent that someone, somewhere is not getting the performance that they might have desired. It contains a philosophical trap for the unwary.

To see this trap we need to relate:

  • What performance is ‘best effort’ broadband supposed to deliver? This is the ‘intentional’ service level. Who do we want to ‘win’ or ‘lose’ when demand exceeds supply?
  • What did we describe it as delivering? This is the ‘denotational’ service level. What did we write about the ‘rules of the statistical game’ (e.g. product description, traffic management rules)?
  • What did it actually deliver? This is the ‘operational’ service level. Who was given performance delight or despair in practice?

For example, we might have intended to satisfy all the performance needs of a typical family with a home worker; described the service as offering 1Gbps with no differential traffic management; and found in practice that the service is unusable for interactive gaming during the evening video-watching peak.
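
A minimal sketch of that gap, using a toy FIFO bottleneck with invented numbers (none of this is drawn from the paper): the denotational rule is identical off-peak and at peak, yet the gamer’s operational experience is radically different:

```python
import random

CAPACITY = 125  # packets per millisecond the shared bottleneck can serve (assumed)

def game_delay(bulk_load, seed=7):
    """A FIFO bottleneck shared by bursty bulk traffic and a small game flow."""
    random.seed(seed)
    backlog, delays = 0, []
    for ms in range(60_000):  # one simulated minute
        backlog += random.randint(0, int(2 * bulk_load * CAPACITY))  # bursty bulk
        if ms % 20 == 0:                       # the game sends one packet every 20 ms
            delays.append(backlog / CAPACITY)  # it queues behind the whole backlog
        backlog = max(0, backlog - CAPACITY)   # the link drains in FIFO order
    d = sorted(delays)
    return d[len(d) // 2], d[int(0.95 * len(d))]

for load in (0.5, 0.99):  # off-peak vs evening peak (assumed loads)
    median, p95 = game_delay(load)
    print(f"bulk load {load:4.0%}: median game delay {median:5.1f} ms, p95 {p95:5.1f} ms")
```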

The truth of the matter is that not all performance demands can be simultaneously satisfied at all times at any feasible cost. Holding this in mind, what are the intentional, denotational and operational behaviours of a ‘non-discriminatory’ ISP service? Is such a thing even meaningful to discuss?

Uncovering false assumptions about semantics

You can guess the answer. The concept of ‘discrimination’ on offer has no objective, measurable technical meaning.

Broadband is stochastic, and performance is an emergent phenomenon. The reality is that ‘best effort’ networks offer arbitrary performance, and indeed may behave non-deterministically under load. That means any behaviour is a legitimate one! That’s the deal we regrettably made with the Internet architecture devil.

The authors appear unaware of this. For instance, they assume that localised denotational information about differential traffic management rules provides users with meaningful information about the global (emergent) operational behaviours. It does not.
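
A simple sketch (again my own toy model, not the paper’s) makes the point. Hold the published rule and the average load fixed, and vary only the burstiness of the traffic mix, something no traffic management disclosure describes; the tail of the delay distribution swings across orders of magnitude:

```python
import random

def p99_delay(burst_size, seed=42, ticks=200_000, rate=100):
    """Same published rule (FIFO at a fixed service rate) and same average load;
    only the burstiness of the traffic mix differs between scenarios."""
    random.seed(seed)
    mean_load = 80                        # 80% average utilisation in every case
    backlog, samples = 0, []
    for _ in range(ticks):
        if random.random() < mean_load / burst_size:
            backlog += burst_size         # arrivals come in clumps of this size
        backlog = max(0, backlog - rate)  # drain: the one disclosed 'rule'
        samples.append(backlog / rate)    # queueing delay, in service ticks
    return sorted(samples)[int(0.99 * len(samples))]

for burst in (80, 800, 8_000):  # identical rule and average load in every run
    print(f"burst size {burst:>5}: p99 queueing delay {p99_delay(burst):8.1f} ticks")
```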

By turning the (ahem) neutral term ‘differential’ traffic management into the judgemental ‘discriminatory’, the paper effectively asserts a belief in the intentionality of statistical flukes. The assumed relationship between the intentional and the operational does not exist!

The network casino doesn’t care about you

It’s like going to the casino several times, coming out a winner, and concluding that the purpose of casinos is to fund your family’s luxury lifestyle. This mistakes prior benevolent operational randomness at the statistical multiplexing casino for intentional intelligent design.

This incorrect assumption that operational behaviours are intentional is absolutely pervasive, even among telecoms and networking cognoscenti. Humans are hard-wired to adopt the ‘intentional stance’, so we unconsciously imagine there is a ‘homunculus’ in the network doing good on our behalf whenever we experience goodness.

The real reason why ‘neutrality’ is impossible

Any specific operational behaviour is not intentional, no matter how strongly your intuition might feel it is. (It is theoretically possible to construct packet networks where the operational behaviours are intentional, but that is not how ISPs are currently designed or managed.)

The idea of ‘neutrality’ has focused attention on local scheduling mechanisms and whether they are ‘discriminatory’. But an impenetrable mathematical labyrinth separates the local mechanisms from the global user experience.
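
To get a feel for the scale of that labyrinth, consider a back-of-the-envelope count (my illustration, not the paper’s): the number of distinct orders in which packets from a handful of flows can emerge from a single shared queue is a multinomial coefficient, and it is astronomical even for trivially small traffic volumes:

```python
from math import factorial

def interleavings(*flow_sizes):
    """Number of distinct orders in which packets from several flows can emerge
    from one shared queue: the multinomial coefficient (sum n_i)! / prod(n_i!)."""
    count = factorial(sum(flow_sizes))
    for n in flow_sizes:
        count //= factorial(n)
    return count

# Four flows of fifty packets each meeting at one queue admit an astronomical
# number of possible schedules, each with its own pattern of per-flow delays:
print(f"{float(interleavings(50, 50, 50, 50)):.2e}")  # ~9e116 interleavings
```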

Ultimately we only care about the user experience, and not packets, since we want fairness for people. So regulation has become concerned with the ‘wrong’ side of the labyrinth. This is a subtle issue, but it cuts to the core of the intractable ‘neutrality’ firestorm.

The blame game: who stole my performance?

In the authors’ worldview, the inability of networks to manufacture universal and eternal delight in performance gets treated as a fault of ISPs. The idea that it is merely a mathematical scheduling constraint is unthinkable.

Without ISPs as the baddies, the fallen angels of broadband might include users and developers, for their greedy demands and negligent engineering. Or maybe even lawyers and economists for having encouraged poor market incentives for use of a scarce resource.

To protect against the anxiety of these dangerous thoughts, we have to instead invent a universal entitlement to good performance. If I don’t get the performance I want, then someone, somewhere, is denying me my due. No, it’s worse than that! They are… DISCRIMINATING NEUTRALITY VIOLATORS!

The only alternative in the current network architecture model is to believe that arbitrary allocation of disappointment is fair and desirable. In an Alice in Wonderland twist, this performance caprice has been relabelled as ‘non-discriminatory’ in the paper.

This is to assert that equality of opportunity for unplanned misery trumps effective service delivery. A moment’s thought tells you that having all applications fail unacceptably, but all fail equally often, is not the basis of good broadband policy. So ‘neutrality’ is not merely impossible, it’s also absolutely undesirable!

The missing framework for reasoning

The underlying issue the authors face is the absence of a framework to even begin to talk about the problem. As a result, the paper’s position is akin to writing about spectrum policy in terms of the luminiferous aether. None of the conclusions can be depended upon, since the system under discussion is fundamentally misdescribed. This is a systemic problem, and not the fault of the authors.

I can now reveal I have pulled a naughty trick on you. This is not a review of the paper listed in the first sentence. I haven’t cited a single line of it. You could substitute practically any paper or book on ‘net neutrality’ into the opening line, and make exactly the same valid critique. (That said, yes, I did read their paper, and yes, this is a valid critique of their argument.)

We regrettably have an increasingly large body of self-referential literature on the subject that makes identical technical and reasoning errors. This literature has collectively disconnected from the reality of network performance by ignoring the mathematical ‘labyrinth’. Instead, it has created an alternate universe of fantasy ‘neutral’ networks.

The time has now come to rethink our approach to broadband policy. The first step is to abandon ‘neutrality’, both as a term and as a concept.

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.