Network neutrality: nasty or nice?

Network neutrality is the problem child of the telecoms regulatory and policy classes. The idea’s origins are in the imbalance of power between network users and owners, and the very real potential for abuse of that power by telcos and ISPs. The freedom to connect and communicate is both precious and fragile.

The neutrality concept takes as its starting point a reasonable desire: fair user access to the network, on fair terms, and at a fair price. However, it then engages in a philosophical error: it anthropomorphises packets, treating what are merely arbitrary divisions of flows of data as if they were people or physical packages.

This mistaken treatment then results in an inappropriate application of previous common carriage principles to a fundamentally incompatible type of communications system. We have been seduced into this error because the same principles did work for telecoms data networks before broadband arrived.

Whilst the intent of ‘neutrality’ is good, this category error has unfortunate paradoxical consequences: the outcome is unfair, unreasonable and discriminatory for network users – to the point of being manifestly unjust.

There is a nice way out of this nasty problem. Read on…

Not a black and white issue

The supply of early data networks was based on Time Division Multiplexing (TDM) technology, originally developed for circuit voice. These are “black and white” networks. Every flow was either rejected (“black”), causing 100% loss, or found a network that gave the appearance of being empty (“white”). You had a reserved path that nobody else could use, just like a voice call. Data loss was seen as a network fault, and delay was fixed by the technology in use. This approach was continued with consumer technologies like ISDN.

In this black and white world we had complete phase isolation between the chunks of data being sent (i.e. no overlaps in arrival times) and complete flow isolation (i.e. no competition for transmission slots between users). This meant you got great quality, albeit at a high cost.

In this world, ‘neutrality’ had a clear observable outcome: non-discriminatory network interconnection; standardised pricing according to published tariffs; and a busy signal to tell you when your call was being rejected. Any other kind of performance issue denied the service provider the ability to charge.

This has now all changed: the essence of how modern broadband networks work is packet-based statistical multiplexing. Data is bursty, with big peaks, so reserving paths for single flows would result in a network that is mostly empty. Instead, we multiplex all the traffic together to improve efficiency and offer low prices.
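To make that efficiency gain concrete, here is a minimal sketch (the traffic figures are invented for illustration, not measurements): for bursty flows, reserving every flow’s peak rate needs far more capacity than covering almost all of the combined demand.

```python
import numpy as np

rng = np.random.default_rng(0)

n_flows = 100          # bursty users sharing one link (illustrative number)
peak = 10.0            # Mb/s each flow sends while bursting (assumed)
duty_cycle = 0.05      # fraction of time each flow is actually bursting (assumed)
samples = 100_000      # time slots to simulate

# In each slot a flow is either silent or bursting at its peak rate.
active = rng.random((samples, n_flows)) < duty_cycle
offered_load = active.sum(axis=1) * peak              # total demand per slot, Mb/s

circuit_capacity = n_flows * peak                      # reserve every flow's peak
statmux_capacity = np.quantile(offered_load, 0.999)    # cover 99.9% of slots instead

print(f"circuit-style capacity: {circuit_capacity:.0f} Mb/s")
print(f"stat-mux capacity (99.9th percentile of demand): {statmux_capacity:.0f} Mb/s")
```

The exact numbers don’t matter; the gap between the two capacities is the whole economic rationale for multiplexing traffic together.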

Packets are bets on delivery

That means IP networks generally lack phase and flow isolation, as we no longer inhabit this black and white world. We’ve instead moved to a greyscale world, where flows encounter varying amounts of loss and delay, depending on what else is going on.

Hence broadband networks are a shared resource, with a finite capacity, and with users competing for their unfair share. The supply of broadband is a diminishable good, because all use always diminishes what remains for others.

Note that this doesn’t change even if we allow ‘priority lanes’ for video or voice: users are still in a stampede to trample one another and displace rival flows. Indeed, all those clever ‘adaptive’ voice and video codecs sending multiple copies of data can be seen as denial-of-service attacks on rival flows.

Internet Slack Providers

Broadband networks therefore require considerable ‘slack’ through over-provisioning to restore enough isolation so that users experience the illusion of having the place to themselves. This ‘slack’ reduces the chance of packets encountering too many other contending packets on their journey. This helps to ensure enough ‘good coincidences’ happen, so data can get through in a timely manner; and not too many ‘bad coincidences’, that might cause application failure.
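A rough way to see why slack matters is the textbook single-queue approximation, in which mean delay grows without bound as utilisation approaches 100%. The service time and utilisation values below are assumptions chosen purely to show the shape of the curve, not a model of any real network.

```python
# Classic M/M/1 queueing approximation: mean time in system = S / (1 - rho),
# where S is the service time and rho the utilisation. A simplification,
# but the blow-up near full utilisation is the point about needing 'slack'.

service_time_ms = 1.0   # assumed time to serve one packet at the bottleneck

for utilisation in (0.5, 0.7, 0.9, 0.95, 0.99):
    mean_delay_ms = service_time_ms / (1.0 - utilisation)
    print(f"utilisation {utilisation:>4.0%}: mean delay ~ {mean_delay_ms:6.1f} ms")
```

Keeping utilisation well below the ceiling is what keeps the ‘bad coincidences’ rare.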

Network neutrality fundamentalists pretend that the diversity of demand is irrelevant, and that the constraints on supply do not exist. It is an extremist view that effectively outlaws tipping the odds so that more good coincidences happen, and fewer bad ones.

That’s fine if you’re willing to stay up to 4am so the network is empty enough that your online gaming works; not so great if your kids fall asleep at school every day as a consequence.

Right intent, wrong implementation

You can see this wrong thinking at work in the FAQ from SaveTheInternet.com:

Net Neutrality means that Internet service providers may not discriminate between different kinds of online content and apps. … With Net Neutrality, the network’s only job is to move data — not choose which data to privilege with higher-quality service and which to demote to a slower lane.

This statement explicitly denies the inherent nature of statistically multiplexed networks as trading spaces that are subject to schedulability constraints; it rejects the possibility of the network doing trades that result in a net benefit to both users and the network owner.

It is technically and statistically illiterate.

The road to hell is paved with ‘good’ market interventions

With ‘network neutrality’, we outlaw the use of better scheduling to address schedulability constraints. You are instead forced to keep adding capacity. This is very expensive, assuming it even works, or is even possible.

The structure of demand is constantly changing, and we keep adding more quality-demanding flows, like VoIP, gaming and femtocells; even rich interactive web pages are tough to deliver well. As a result, you need ever more ‘slack’, and as a service provider you have to increasingly over-provision your network. Meanwhile, that same overhead is also applied to bulk flows: you can’t tell whether a packet belongs to a two-way video call or an overnight backup, so every packet has to be treated as if it were the former.

So treating all data as equal forces every flow to carry the over-provisioning cost structure of the flow with the tightest scheduling constraints that you are willing to deliver. This over-provisioning doesn’t just mean more routers, but more of all the transmission inputs: fibre, spectrum, power, towers and trenches. You need to upgrade your copper network to fibre earlier, and densify your mobile towers further.
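As a crude back-of-the-envelope sketch of that cost effect (the traffic classes, loads and ‘safe utilisation’ ceilings below are all invented for illustration): if every packet must be given the headroom that the most delay-sensitive traffic needs, the bulk traffic drags that headroom along with it.

```python
# Illustrative only: compare capacity needed when each class gets only the
# headroom it needs, versus when all traffic is treated like the tightest class.

demand_gbps = {"interactive video": 2.0, "overnight backup": 8.0}
safe_utilisation = {"interactive video": 0.4,    # needs lots of headroom for low delay
                    "overnight backup": 0.95}    # happy to run the link nearly full

# If each class could be provisioned on its own terms:
per_class = sum(load / safe_utilisation[name] for name, load in demand_gbps.items())

# If every packet must be treated like the most delay-sensitive class:
neutral = sum(demand_gbps.values()) / min(safe_utilisation.values())

print(f"capacity with differentiated scheduling: {per_class:.1f} Gb/s")
print(f"capacity when all traffic is 'equal':    {neutral:.1f} Gb/s")
```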

The net effect of network neutrality is to enforce the highest possible cost structure and the worst possible quality of experience onto users.

Unfair, unreasonable, and discriminatory

This outcome becomes most noticeable as networks saturate. That is why highly constrained networks that saturate easily, such as on planes and trains, are never ‘neutral’. The flows are highly managed, and some flows are blocked entirely. These networks are not some special exception, but are demonstrative of a general reality of broadband networks as a shared and contended medium.

With network neutrality we have a policy that:

  • allocates the most resource to the wealthiest users with the fastest links, premium speed plans, and most devices, who can most easily displace rival flows; and harms those with the slowest and cheapest connectivity. It is the antithesis of ‘fair’.
  • rewards the users who make the most greedy claim on the shared resource, using the most aggressive protocols; and disfavours those who are more frugal and collaborative. It is the antithesis of being ‘reasonable’.
  • imposes a one-size-fits-all structure on supply, and renders infeasible applications which are sensitive to cost or quality, even though they could in principle be scheduled within the available capacity. It is the antithesis of ‘non-discriminatory’.

If you think that’s bad, try this for size. By encouraging the most ‘packet polluting’ uses and users, who bear no costs for their behaviour, it creates the most network overload and instability. This pushes the network into its least-predictable region of operation, where it exhibits non-deterministic and arbitrary behaviour – not even random. Bad things happen capriciously.

Packets are not people

Why do people fall for the ‘obvious truth’ of network neutrality, when in fact it is a mathematical insanity?

Problematically, our beliefs and attitudes are still unconsciously rooted in the black and white world. Common carriage regulations ignore issues of contention, and implicitly assume sufficient isolation from other users and services. Those assumptions are rooted in ideas of fairness that are effective in the physical world, where you have queues of people, or packages following relatively regular delivery patterns, and relatively homogeneous demand.

We take those ideas around fairness and try to allocate a false idea of network ‘bandwidth’ on the same basis, whilst ignoring the existence of contention between users and uses. We express concern about ‘priority’, as if someone were jumping a queue. We focus on packets, not flows, because in our heads we picture them as physical things, not as the creations of warring software programs grappling for more resources. We misapply ideas of fairness from humans in queues, and pretend that first-in-first-out is somehow neutral and natural for packets.

Neutralising neutrality

All of this is false reasoning based on a misconception of networks. The idea of “network neutrality” fails because it does not align with – and exploit – the fundamental trading structure of statistical multiplexing.

Networks are systems with two degrees of freedom among load, loss and delay. This seems like a secondary technical detail, but in fact it is central and crucial: it is what is true and common to every such network. If you were regulating the steam engine market, you’d want to work with pressure, volume and temperature. A law that tried to regulate the sizes of the lumps of coal would rightly be seen as bonkers.

At a fixed load, we can choose to give a flow more loss, and less delay, or vice versa; and we can trade loss and delay between the flows to get better outcomes. Furthermore, flows based on TCP/IP require the relationship between loss and delay to be stable.
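Here is a toy single-link simulation of that trade (every parameter is invented; this is a sketch of the idea, not anyone’s actual scheduler): both flows offer exactly the same load, but one is configured to pay for contention in loss, the other in delay.

```python
import random
from collections import deque

random.seed(1)

# Two flows share one link that serves one packet per time slot. The load is
# identical for both; the scheduler only chooses HOW each flow pays for
# contention. All numbers are illustrative assumptions.
SLOTS = 200_000
P_ARRIVAL = 0.48                     # per-flow arrival probability per slot

queues = {
    "low-delay": deque(maxlen=3),    # tiny buffer: contention shows up as loss
    "low-loss":  deque(maxlen=300),  # deep buffer: contention shows up as delay
}
stats = {name: {"sent": 0, "dropped": 0, "delay": 0, "served": 0} for name in queues}
turn = 0

for t in range(SLOTS):
    for name, q in queues.items():
        if random.random() < P_ARRIVAL:
            stats[name]["sent"] += 1
            if len(q) == q.maxlen:
                stats[name]["dropped"] += 1     # buffer full: packet is lost
            else:
                q.append(t)                     # remember arrival time
    # Round-robin: alternate which flow gets first claim on the single slot.
    order = list(queues.items())
    if turn % 2:
        order.reverse()
    turn += 1
    for name, q in order:
        if q:
            stats[name]["delay"] += t - q.popleft()
            stats[name]["served"] += 1
            break                               # only one packet leaves per slot

for name, s in stats.items():
    print(f"{name:9s} loss {s['dropped'] / s['sent']:6.2%}  "
          f"mean delay {s['delay'] / max(s['served'], 1):6.1f} slots")
```

Same load, same link, two very different experiences: that is the trading space the text describes.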

Any scheme for ‘fairness’ that doesn’t align with this mathematical reality is forced to allocate the fundamental network resources via second-order effects. This in turn grossly misallocates loss and delay among the flows whenever instantaneous demand exceeds supply.

This is neither good for the user, nor for the network owner. Everybody loses by making bad trades.

Eliminate erroneous entitlements

Unlike in the physical world, users experience no direct cost from sending a packet and causing contention to other users. There’s no stamp to buy, and you don’t have to stand in line for hours. Broadband networks are rationing systems for a finite resource, but ones where the users have been given the ability to print unlimited ration coupons at home. Those who turn up with the most coupons the most often get the most hot and yummy packets.

There is no entitlement for all of these flows to encounter a network of the “black and white” kind; we live in a greyscale world. In particular, the user is not entitled to communicate at maximum speed with any other user as if they had a dedicated circuit. That’s simply not how broadband networks work.

So what is the user entitled to?

Broadband service represents an ‘option to communicate’. When they exercise that option, users of broadband services should be entitled to receive something that is ‘fit for purpose’, across the reasonable and normal range of ‘purpose’ that was promised to them at the outset.

Restoring reason

There is an alternative framing of ‘neutrality’ that is fairer, more reasonable, and non-discriminatory. However, this requires a fundamental re-think of the nature of the contract between the user and the broadband service provider. The Hayekian free marketeers have an even bigger problem than the Marxist proponents of regulatory network neutrality. The only idea madder than ‘all packets are created equal’ is that this is a ‘fair and free market’.

The underlying regulatory problem remains an imbalance of power. This has many sources.

Firstly, we have broadband services that traverse monopoly or oligopoly paths, both fixed (local loops) and mobile (licensed spectrum). You would hardly think of the bread market as “competitive” if the only two kinds of loaves on offer were sliced white and bran-filled brown. It wasn’t always this bad. When we had dial-up, consumers had real choices. Rather than upgrade that model to higher speeds, we instead got sold a “broadband” pig-in-a-poke, and consumer choice was eviscerated.

Secondly, the broadband service provider is free to overload and over-contend the network by adding more users. You have no recourse, and you are left paying out your 24-month contract lock-in come what may. Of course, if you don’t like white ISP bread, you can always sign a contract for only brown bread for two years, which the provider is free to adulterate at any time.

Thirdly, the other users (with whom you are competing) have no incentive not to be as aggressive and greedy as possible.

When you buy a broadband service, you have no knowledge of the intent of the provider, nor of the other users. Your payments are fixed, but your quality of experience is not assured, and can undergo an unlimited decline. There is a huge asymmetry of power and information between the user and the ISP. This deserves intervention to make it transparent.

Introducing Quality Transport Agreements

What is fair and reasonable is in the eye of the beholder: there is no single perfect social and technical outcome; the “right” trades in the network are not fixed in stone. However, what is non-discriminatory can be modelled using mathematics.

We can instead make ISPs declare the statistical properties of the connectivity (and classes of transport) they offer. This creates transparent and enforceable contracts that allow a diversity of ideas and trade-offs.

These are called ‘Quality Transport Agreements’ (QTAs).

QTAs rightly hold the network provider’s feet to the service delivery fire, by making network providers say what they do, and then do what they say. They can be used for both fixed and mobile networks. Such contracts also fix the worry about ‘blocking’ or ‘degrading’ of service: you know exactly what you are getting when you sign up.

There is a quid pro quo. If users are to get fitness-for-purpose, they have to declare the purposes for which they are willing to pay for fitness. In return for demand being quantified, the service provider is held to account. However, users cannot expect to receive more than the QTA offers as a natural right. If you want more, expect to pay more.
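How a QTA would be written down and policed is not specified here; purely as a hypothetical sketch, one could imagine a declared class of transport as a small set of statistical bounds, plus a check of measured performance against them. The field names and figures below are invented, not part of any published QTA format.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the QTA concept is the author's, but this data
# structure and compliance check are invented here for illustration.
@dataclass
class QTA:
    name: str
    max_loss_ratio: float          # e.g. at most 0.1% of packets may be lost
    delay_bound_ms: float          # delay that the given percentile must beat
    delay_percentile: float = 0.99

    def is_met(self, delivered_delays_ms: list[float], packets_sent: int) -> bool:
        """Check measured loss and delay against the declared statistical bounds."""
        lost = packets_sent - len(delivered_delays_ms)
        if packets_sent and lost / packets_sent > self.max_loss_ratio:
            return False
        ranked = sorted(delivered_delays_ms)
        if not ranked:
            return True                      # nothing delivered, nothing to measure
        idx = min(int(self.delay_percentile * len(ranked)), len(ranked) - 1)
        return ranked[idx] <= self.delay_bound_ms

# Example: the provider declares a 'voice-grade' class, and is held to it.
voice_grade = QTA("voice-grade", max_loss_ratio=0.001, delay_bound_ms=30.0)
measured_delays = [12.0, 14.5, 9.8, 22.1, 18.3]      # delays of delivered packets, ms
print(voice_grade.is_met(measured_delays, packets_sent=5))   # within its bounds
```

The point of the sketch is simply that ‘say what you do, do what you say’ can be checked mechanically once the statistical bounds are declared up front.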

A maturing market

The broadband market (driven by illiterate boardrooms) is still pursuing a misjudged phony war for market share. This is the immature tactic of the early market that we saw in the early ’90s (for fixed broadband) and 2000s (for mobile). It made sense then, but it doesn’t make sense in the saturating market we see today.

In future, ISPs should be competing on their ability to characterise demand, and configure their (most emphatically non-neutral but QTA-compliant) network to deliver the outcomes they have promised. In that market, there would be a huge diversity of providers, to reflect the diversity of users and needs.

For more information, see the “Broadband Market Evolution” presentation: markets evolve, and users won’t need maths degrees to access the benefits of QTAs, because the market will offer proxies for fitness-for-purpose.

Regulating reality

This kind of statistically literate approach is possible today, but policy-makers continue to treat broadband as if they were still regulating circuits. The evidence is apparent everywhere: a fixation on peak speeds (i.e. capacity constraints), whilst ignoring the equally important contribution of stationarity and bounds on loss and delay (i.e. schedulability constraints).

We can never escape the inherent reality of the system we are dealing with. Networks are trading spaces for a contended resource; they have two degrees of freedom; and they don’t have a control lever with a position marked ‘neutral’.

We therefore have to transcend the unhelpful split of users as ‘saints’ and broadband service providers as ‘sinners’. Users want to trample each other, and their unreasonable behaviour has to be curbed to create a fairer outcome for everyone. ISPs are referees who should be free to make up the rules of their game, as long as they publish them first.

Let’s regulate broadband networks for what they are, not a fantasy based on how they used to be. “Neutrality” is currently rather nasty. When framed and facilitated differently, it can be really nice.

To keep up to date with the latest fresh thinking on telecommunication, please sign up for the Geddes newsletter