Ofcom publishes scientific report on net neutrality

Imagine for a moment that a regulator, prior to issuing potentially controversial rules about “network neutrality”, got its technical house fully in order. Imagine that regulator hired the leading experts in the field for scientific advice, so that its rulings were grounded in technical reality. Imagine a process that was open and subject to scrutiny by the whole technical community. Could this ever happen, or are these hopes just the ravings of a deranged fantasist?

I am pleased to reveal that the UK telecoms regulator, Ofcom, has just published such a landmark scientific report. It is written by my colleagues at Predictable Network Solutions Ltd.

The download link is here.

The title of the report is “A Study of Traffic Management Detection Methods & Tools”. Whilst that sounds rather dry and academic, its contents are transformative for the “network neutrality” debate. Why so? The report identifies the many false technical assumptions being made about detecting “discrimination” or “throttling”.

Summary of key report findings

Here are some of the top findings and highlights, from my own interpretation and summary of the report:

  • Not a single one of the traffic management (TM) detection tools on offer is fit for regulatory use in the UK (where localisation of issues in the supply chain is especially important). They all have limited utility, relevance, accuracy and/or scalability. (“…we must conclude that there is no tool or combination of tools currently available that is suitable for Ofcom’s use”). There is a long explanation of their (often embarrassingly common and severe) individual and collective failures to deliver on their promise.
  • You cannot conflate ‘equality of [packet] treatment’ with delivering equality of [user application] outcomes. Only the latter matters, since ordinary users don’t care what happened to their packets in transit (a toy illustration follows this list). Yet the relevant academic literature fixates on the local operation of the mechanisms (including TM), rather than their global aggregate effect.
  • You cannot legitimately assume that good or bad performance was due to the absence or presence of TM. (“The absence of differential traffic management does not, by itself, guarantee fairness, nor does fairness guarantee fitness-for-purpose.”) The typical chain of reasoning about how TM relates to QoE is broken, confusing intentional and unintentional effects.
  • Networks cannot choose whether to have traffic management or not. (“…since quality impairment is always present and always distributed somehow or other, traffic management is always present.”) This ends the idea of a “neutral” network being one free from TM.
  • There is a fundamental false assumption that any observed performance outcome is the result of intentional TM. (“…even if an outcome is definitely caused by e.g. some specific configuration, this does not prove a deliberate intention, as the result might be accidental.”) This instantly blows apart most current discussions of “discrimination”, which imply an intentional semantics to broadband that does not exist. It also eliminates the possibility of an oath of “do no harm”, since there is no intentionality behind the emergent outcome anyway.
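To make the second bullet concrete, here is a toy Python sketch of my own (not from the report). It generates two synthetic packet-delay traces with the same average delay and compares them against a hypothetical application that needs each packet within 100 ms. The traces, the deadline and all figures are illustrative assumptions, chosen only to show that packet-level “equality” metrics say little about whether the application actually works.

```python
# A minimal, illustrative sketch: equal average packet treatment,
# unequal application outcomes. All numbers are hypothetical.
import random

random.seed(1)

DEADLINE_MS = 100  # hypothetical per-packet deadline for the application

# Trace A: steady delays around 60 ms
trace_a = [random.gauss(60, 5) for _ in range(10_000)]

# Trace B: the same mean delay (~60 ms), but bursty, with occasional large spikes
trace_b = [random.gauss(40, 5) if random.random() < 0.9 else random.gauss(240, 20)
           for _ in range(10_000)]

def summarise(name, delays):
    mean = sum(delays) / len(delays)
    miss_rate = sum(d > DEADLINE_MS for d in delays) / len(delays)
    print(f"{name}: mean delay {mean:.1f} ms, deadline misses {miss_rate:.1%}")

summarise("Trace A", trace_a)
summarise("Trace B", trace_b)
# Similar averages, very different outcomes: only the latter is what users experience.
```

Both traces look “equal” on a mean-delay measure, yet one misses the application’s deadline roughly a tenth of the time while the other almost never does. That is the gap between packet treatment and user outcomes.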

Traffic management is not the real issue

Some TM detection techniques are effectively denial-of-service attacks on the network. The regulatory approach of asking ISPs to divulge their TM policies is of limited value. Entities like M-Lab (funded by Google) are offering potentially misleading ISP performance data. The bottom line is that an alternative approach is needed.

TM transparency and detection are the wrong questions to focus on, since they don’t tell you what matters: the application QoE actually being delivered. A footnote captures the issue, arguing instead for an enforceable quality floor:

“By its nature, the intention behind any TM applied is unknowable; only the effects of TM are observable. It may be worth noting that, due to this, the best way to ensure end-users receive treatment in line with expectations may be two-fold: to contract to objective and meaningful performance measurements; and to have means to verify that these contracts are met.”

This would sit within a new framework that gives users transparency about the fitness-for-purpose of each service for different types of application use.
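To make that footnote concrete, here is a toy Python sketch of what “contract to objective and meaningful performance measurements, and verify that these contracts are met” might look like in its simplest form. The choice of metrics (99th-percentile delay and loss rate) and the threshold values are my own illustrative assumptions, not figures from the report.

```python
# A minimal sketch of a "contract and verify" quality floor.
# All metrics and thresholds are hypothetical, for illustration only.

contract = {
    "p99_delay_ms": 150,    # hypothetical bound on 99th-percentile delay
    "max_loss_rate": 0.01,  # hypothetical bound on packet loss rate (1%)
}

def percentile(samples, p):
    """Simple percentile of a list of samples (0 <= p <= 1)."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(p * len(ordered)))
    return ordered[index]

def meets_quality_floor(delays_ms, loss_rate, contract):
    """Return True if the measured sample satisfies the contracted floor."""
    return (percentile(delays_ms, 0.99) <= contract["p99_delay_ms"]
            and loss_rate <= contract["max_loss_rate"])

# Example usage with made-up measurements:
measured_delays = [35, 42, 38, 51, 47, 120, 44, 39, 41, 36]
print(meets_quality_floor(measured_delays, loss_rate=0.004, contract=contract))
```

The point of the sketch is the shape of the arrangement, not the numbers: the user-facing promise is expressed as measurable bounds on delivered performance, and compliance is something that can be checked from observations, without needing to know anything about the operator’s internal TM intentions.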

The significance of this report

In summary, my (highly biased) view is that this report is a masterpiece. It gives a rigorous and scientifically defensible analysis rooted in a profound understanding of the mathematics of statistical resource sharing. It reframes the regulatory performance measurement issue in a way that makes it possible to escape the quagmire.

Speaking of the mucky quagmire, it’s time to wake up and smell the technical manure of the net neutrality debate. It’s all around us, and the clear-up job is a big and dirty one. These findings render obsolete several papers and books on the subject by legal scholars. Their understanding of network performance is unsound, and they have been unintentionally fuelling the conflict as a result.

Furthermore, it is an industry embarrassment that we cannot yet properly measure the services we offer in a customer-centric way; the technical weakness of these tools is a cause for shame, and should chasten all of us in the broadband business into doing much better.

On a more positive note, Ofcom looks heroic, at least compared to other regulators whom we won’t name. They now have their technical facts in order before proceeding towards making new rules. Regulators worldwide should be thankful, and need to pay close attention to this report. The issues it addresses will only grow in importance.

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.