Broadband – what kind of ‘network freedom’ do users want?

The US appeals court has rightly thrown out an attempt by the Federal Communications Commission to impose common carriage regulation on broadband ISPs. The language and conceptual tools used by both sides to argue their case fail to reflect the actual nature of broadband. Specifically, historical ideas of ‘carriage’ for physical objects are being inappropriately extrapolated to virtualised distributed computing systems.

I have been discussing these issues with Dr Neil Davies and would like to share some insights from our conversation.

The history of communications, and of the law governing it, originates with the movement of people and their goods. In the physical world, we have long recognised two key freedoms of communication: that of association, and that of movement. The freedom of association is about making choices over whom we relate to (and whom we don’t); the freedom of movement allows us to embody those associations.

Everything in the physical world is fundamentally constrained by its geographical space and context. Indeed, for most of history the only way to associate with other people was to move into their physical proximity. Any blocking of movement was therefore also an inhibition of your right to associate. Because of the need to move in order to associate, we tended to conflate these two distinct communications freedoms.

We are social creatures largely defined by our associations. Historically, who you are has been intimately tied to where you are, since location defines which associations are possible. You are known as, say, ‘the village priest’. The history of humanity is largely one of the densification of associations (into towns and cities), because more associations allow for greater trade and wealth. Closer physical connection is just a means to an end: greater speed and diversity of association.

The circuit model of telecommunications followed this same geographically-constrained pattern. Its structure was tied to the land you occupied (local, national and international calling regimes). Your network identity was geographically based, and pricing related to geography and distance. Association was made by initiating a connection between two geographically addressed network nodes. The resemblance to the physical world was strong, and the same regulatory concepts worked.

The success of the global data internetwork concept was to break this link to geography. You are no longer defined by where you physically are, but by whom you choose to logically associate with. This very newsletter is a logical association of people, and has been distributed globally. Geography still matters, but it is now greatly subservient. We can create new forms of association on top of the internetwork that are not constrained by the underlying geographic end-point addressing system.

Yet the Internet, as a specific instantiation of that concept, is a very early and limited one. It has two baked-in assumptions. The first is a global address space, so that everyone is automatically associated by default with everyone else. Thus the act of association is hidden, and the choice not to associate is lost. The second is that there is no native concept of the timeliness of the arrival of data, since there are no agreed bounds on any impediment to its movement.
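As a loose illustration (my own sketch, not part of the original argument), the standard sockets API reflects both assumptions: any host can attempt to reach any public address by default, and nothing in the interface lets an application express a bound on loss or delay.

```python
import socket

# Assumption 1: a single global address space.
# Any host can try to open a connection to any public address; being
# reachable at all is the default, so the act of association is implicit.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5.0)                # a purely local timeout, nothing more
sock.connect(("example.com", 80))   # no step here expresses consent to associate

# Assumption 2: no native concept of timeliness.
# sendall() hands bytes over for 'best effort' delivery; no parameter in
# the API lets the application request a bound on loss or delay.
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
reply = sock.recv(4096)             # arrives whenever it arrives, if at all
sock.close()
```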

As industry luminary John Day notes: “If the Internet were an Operating System, it would have more in common with DOS, than UNIX.” DOS has a global address space and little understanding of the differential performance needs of applications; UNIX isolates applications and allows for different performance needs. We therefore need to bear in mind that what we are regulating is an early form of internetwork. We should not mistake its present implementation, based around TCP/IP, for a universal set of principles.

A central fallacy in the network neutrality debate is to continue to conflate these freedoms of association and movement. The resulting ‘freedom to connect’ mirrors the primitive nature of the Internet, and imposes legacy circuit-like thinking upon it. It is as if we designed PCs around a ‘freedom to compute’ that failed to reflect any modern understanding of security layering and process scheduling. The regulatory debate is thus mired in DOS-level thinking, when it needs to leap to the UNIX level.

This subtle conflation fallacy can be seen in the legal theory underpinning the network neutrality court case: a ‘virtuous circle’ of more users and content providers, which maintains the growth of the Internet and demand for broadband. In other words, both sides generally bought into the theory of the beneficial nature of the ‘freedom to connect’.

This ignores the fact that there is a cost to association, and that it must be borne by someone. It shows up in the need for larger router address spaces, IPv6 upgrades, carrier NAT devices, firewalls, intrusion detection systems, anti-DDoS systems, and so on. We can argue over how large that cost is, but it is not zero. It is therefore false to assume that having everyone associated by default with everyone else is automatically optimal.

This distinction between association and connection is not an academic question. Machine-to-machine technologies are exploring alternatives to TCP/IP (for example, RINA) precisely because of the technical and security costs of (involuntary) over-association. It is why a local configuration change in China can accidentally redirect vast amounts of that country’s traffic to an address in Wyoming. In a computer, the cost of association shows up as shared memory space for inter-process messaging. There is a good reason why association is only done ‘on demand’, and only to the extent needed: it can be a significant expense.
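A rough operating-system analogy (again my own illustration): inter-process communication is normally set up on demand, between exactly the parties that need it, rather than every process being reachable by every other by default. A minimal Python sketch:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # This process can only talk to whoever holds the other end of the pipe;
    # the association is explicit, scoped, and paid for only when needed.
    conn.send("hello from the worker")
    conn.close()

if __name__ == "__main__":
    # The pipe (the 'association') is created on demand, for exactly two parties.
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    print(parent_end.recv())   # only the holder of this end receives the message
    p.join()
```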

The fallacy of conflating these freedoms shows up in a second way. The deal with ‘best effort’ broadband is that you get whatever you get. There is no implied fitness for any purpose whatsoever, and thus no promise of any bound on packet loss or delay. If you don’t like what you get, find another supplier, or go without. This has been positioned by proponents of the Internet as a benefit, since it is supposed to somehow prevent the network provider from usurping power over its users. In practice, it leaves users defenceless against the predations of other users competing for a shared resource.

The equivalent of a ‘freedom of movement’ in the packet networking world is the freedom to contract with the network for a specific delivery outcome. Under such a contract, the network isolates you, to an agreed extent, from the effect of other users contending for that shared resource. That freedom is being denied to users today.
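To make the idea concrete, here is a purely hypothetical sketch (the class and its fields are invented for illustration; no such broadband API exists today, which is precisely the point) of what contracting for a delivery outcome, rather than for raw connectivity, might look like:

```python
from dataclasses import dataclass

@dataclass
class DeliveryContract:
    """A hypothetical 'quality contract' between a user and the network.

    Instead of 'best effort', the user states the outcome required and the
    network either accepts the obligation up front or declines it.
    """
    max_loss_rate: float   # e.g. 0.001 means at most 0.1% of packets lost
    max_delay_ms: float    # upper bound on delay for delivered packets
    duration_s: int        # how long the assurance is needed for

# A home worker's video conference might ask for tight bounds:
video_call = DeliveryContract(max_loss_rate=0.001, max_delay_ms=100.0, duration_s=3600)

# A bulk backup could accept being time-shifted in exchange for a lower
# price, simply by asking for much looser bounds:
overnight_backup = DeliveryContract(max_loss_rate=0.01, max_delay_ms=5000.0, duration_s=8 * 3600)
```

The particular fields are not the point; what matters is that the user states the outcome needed, and the network takes on (or declines) a bounded obligation, rather than offering no promise at all.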

Again, this is not an academic issue. There are socially and economically important applications for which ‘best effort’ broadband is unsuited: small cell backhaul, interactive sign language for the deaf, home workers who need high-quality video conferencing, and so on.

In these examples, there is a clear opportunity cost to excluding others from the shared resource. Thus users who want assurance of higher quality should pay more; those willing to yield, and have their demand time-shifted, should pay less. A ‘freedom to connect’ ignores the costs users impose on other network citizens and the impact on those citizens’ needs. It is really a ‘freedom to pollute’ the shared resource, but without the polluter paying.

The current approach to broadband technology and regulation is denying users these basic freedoms of association and contract. Whilst the intent of those advocating an unbounded ‘freedom to connect’ may be benevolent, their impact is not. The consequence of continued ‘DOS thinking’ in networking is to inflate prices, damage security, harm stability, misallocate costs, reduce choice, and perpetuate technologies that are past their use-by date.

Future network users will look back at the ‘network neutrality’ conflict in puzzlement. Why were so many people campaigning so hard for an insecure, expensive and under-performing data networking system?

To keep up to date with the latest fresh thinking on telecommunications, please sign up for the Geddes newsletter.