The one reason net neutrality can’t be implemented

Whilst people argue over the virtues of net neutrality as a regulatory policy, computer science tells us that its regulatory implementation is a fool’s errand.

Suppose for a moment that you are the victim of a wicked ISP that engages in disallowed “throttling” under a “neutral” regime for Internet access. You like to access streaming media from a particular “over the top” service provider. By coincidence, the performance of your favoured application drops at the same time your ISP launches a rival content service of its own.

You then complain to the regulator, who investigates. She finds that your ISP did indeed change its traffic management settings right at the point the “throttling” began. A swathe of routes, including the one to your preferred “over the top” application, has been given a different packet scheduling and routing treatment.

It seems like an open-and-shut case of “throttling” resulting in a disallowed “neutrality violation”. Or is it?

Here’s why the regulator’s enforcement order will never survive the resulting court case and expert witness scrutiny.

The regulator is going to have to prove that the combination of all of the network algorithms and settings intentionally resulted in a specific performance degradation. This is important, because in today’s packet networks performance is an emergent phenomenon. It is not engineered to known safety margins, and can (and does) shift continually with no intentional cause.
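To make that concrete, consider a minimal sketch of a single shared bottleneck queue (the flow names, load and seeds below are purely illustrative, not a model of any real ISP). Nothing in the configuration changes between runs, yet the tail delay each flow sees moves around anyway, simply because the arrivals happen to interleave differently:

```python
# Two flows share one FIFO bottleneck. The configuration is identical in
# every run; only the random interleaving of packet arrivals differs.
import random

def run_once(seed, n_packets=5000, service_time=1.0, load=0.95):
    rng = random.Random(seed)
    clock = 0.0          # current arrival time
    queue_free_at = 0.0  # when the bottleneck next becomes idle
    delays = {"video_flow": [], "other_traffic": []}
    mean_gap = service_time / load   # keeps total utilisation near `load`
    for _ in range(n_packets):
        for flow in delays:
            clock += rng.expovariate(1.0 / mean_gap)
            start = max(clock, queue_free_at)      # wait if the queue is busy
            queue_free_at = start + service_time
            delays[flow].append(queue_free_at - clock)
    # Report the 99th-percentile queueing delay per flow.
    return {f: round(sorted(d)[int(0.99 * len(d))], 1) for f, d in delays.items()}

for seed in range(3):
    print(seed, run_once(seed))   # same settings, different emergent tails
```

Run it with a few different seeds and the 99th-percentile delay wanders about, with nobody having “throttled” anything.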

That means it could just be a coincidence that performance changed at that moment. (Any good Bayesian will also tell you that we’re assuming a “travesty of justice” prior.)

What net neutrality advocates are implicitly saying is this: by inspecting the code and configuration (i.e. more code) of millions of interacting local processes in a network, you can tell what global performance is supposed to result. Furthermore, that a change in one of those settings deliberately produced a different and disallowed performance, and you can show it’s not mere coincidence.

In the 1930s, Alan Turing proved that you can’t even (in general) inspect a single computational process and tell whether it will ever stop. This is the Halting Problem, and it is not an intuitive result. The naive observer without a background in computer science might assume it is trivially simple to inspect an arbitrary program and quickly tell whether it would ever terminate.
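A concrete taste of why (the function below is just an illustration, not anything from the regulatory record): the loop is a handful of lines, yet whether it terminates for every positive integer is the still-open Collatz conjecture. If inspection cannot settle even this toy case, the general result should come as less of a surprise.

```python
def collatz_steps(n: int) -> int:
    # Halve even numbers, triple-and-add-one for odd ones, until reaching 1.
    # Whether this loop terminates for *every* positive n is the open
    # Collatz conjecture: nobody can tell just by reading the code.
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))   # happens to terminate, after 111 steps
```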

What the telco regulator implementing “neutrality” faces is a far worse case: the Performance Problem. Rather than a single process, we have millions of interacting ones. And instead of a simple binary yes/no on halting, we have a complex, multi-dimensional space of network and application performance to reason about.

I hardly need to point out the inherently hopeless nature of this undertaking: enforcing “neutrality” is a monumental misunderstanding of what is required to succeed. Yet the regulatory system for broadband performance appears to have been infiltrated and overrun by naive observers without an undergraduate-level understanding of distributed computing.

Good and smart people think they are engaged in a neutrality “debate”, but the subject is fundamentally and irrevocably divorced from technical reality. Basic ideas like non-determinism barely rate a mention in the net neutrality literature.

It’s painful to watch this regulatory ship of fools steam at full speed for the jagged rocks of practical enforcement.

It is true that the Halting Problem can be solved in limited cases. It is a real systems management issue in data centres, and a lot of research work has been done to identify those cases. If some process has been running for a long time, you don’t want it sitting there consuming electricity forever with no value being created.
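In practice the “solution” in those limited cases is a policy, not a decision procedure. A minimal sketch of the kind of watchdog a data-centre operator might run (the command and the one-hour threshold are illustrative only): it never learns whether the job would eventually finish; it simply stops waiting for the answer.

```python
import subprocess
import time

MAX_RUNTIME_SECONDS = 3600   # illustrative policy threshold, not a magic number

def run_with_timeout(cmd):
    """Run a job, but kill it if it exceeds the runtime budget.
    This does not decide halting; it just stops paying for the answer."""
    proc = subprocess.Popen(cmd)
    started = time.monotonic()
    while proc.poll() is None:                        # still running
        if time.monotonic() - started > MAX_RUNTIME_SECONDS:
            proc.kill()                               # assume it is stuck
            return "killed"
        time.sleep(5)
    return "finished"
```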

Likewise, the Performance Problem can be solved in limited cases. However, the regulator is not in a position to insist that enforcement actions are restricted to those narrow cases. It is unavoidably faced with the general case. And the general case is, in a very strict sense, impossible to solve.

The Halting Problem is a special case of the Performance Problem: whether a single process ever halts is just one narrow question about how it performs over time. If you could solve the latter then you could solve the former. You can’t solve the Halting Problem, so the general Performance Problem is also unsolvable. QED.
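For the technically inclined, the reduction can be sketched in a few lines (everything here is hypothetical: `performance_oracle` stands in for the regulator’s imagined tool, and no such function can exist). Wrap any program inside a forwarding element that behaves perfectly neutrally unless that program halts; asking whether the element ever degrades performance is then exactly asking whether the program halts.

```python
import threading

def make_element(program, program_input):
    """A forwarding element that runs `program` in the background and
    forwards packets untouched unless that program halts, after which it
    drops everything. Its performance degrades if and only if `program` halts."""
    halted = threading.Event()

    def background():
        program(program_input)   # may run forever
        halted.set()             # reached only if the program halts

    threading.Thread(target=background, daemon=True).start()
    return lambda packet: None if halted.is_set() else packet

def decide_halting(performance_oracle, program, program_input):
    # If the (hypothetical) oracle can answer "does this element ever
    # degrade traffic?", it has answered "does `program` ever halt?",
    # which Turing proved no algorithm can do in general.
    return performance_oracle(make_element(program, program_input))
```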

This single reason from computer science is enough to tell us that “net neutrality” is a technical and regulatory dead end. The only option is to turn around and walk away. You can argue as much as you like about its moral merits, but mathematics has already decided it’s not happening in the real world.

So if not “neutrality”, then what else?

The only option is to focus on end-to-end service quality. Local traffic management is an irrelevance and a complete distraction. Terms like “throttling” are technically meaningless. The lawgeneers who have written articles and books saying otherwise are unconsciously incompetent at computer science.

We computer scientists call this viable alternative “end-to-end” approach a “quality floor”. The good news is that we now have a practical means to measure it and hard science to model it.
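What might that look like operationally? A minimal sketch (the thresholds, percentiles and sample numbers below are illustrative only, not the actual measurement science): express the floor as bounds on what the user’s traffic experienced end to end, then check measurements against it, without ever asking how any individual box was configured.

```python
def meets_quality_floor(latencies_ms, losses, floor):
    """Judge the service by end-to-end outcomes. `floor` is an illustrative
    spec, e.g. {"p50_ms": 30, "p99_ms": 100, "max_loss": 0.01}."""
    ordered = sorted(latencies_ms)
    p50 = ordered[len(ordered) // 2]
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    loss_rate = sum(losses) / len(losses)    # losses: 1 = lost, 0 = delivered
    return (p50 <= floor["p50_ms"]
            and p99 <= floor["p99_ms"]
            and loss_rate <= floor["max_loss"])

# Purely illustrative measurements:
sample_latencies = [22, 25, 31, 28, 90, 24, 27, 26, 33, 29]
sample_losses = [0] * 10
print(meets_quality_floor(sample_latencies, sample_losses,
                          {"p50_ms": 30, "p99_ms": 100, "max_loss": 0.01}))
```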

Maybe we should consciously and competently try it?
