Rise of the Sensible Network – exploiting the statistical multiplexing gain

A decade ago I worked at Sprint in Kansas City. During that period in my professional life I experienced a painful inner conflict. The executives in the telecoms business were busy pushing high-margin “controlled” network services like voice, SMS, IPTV and WAP. These services were colliding with an onrush of what later came to be known as “over the top” services.

To my simple mind, there was an obvious potential to disintermediate the telcos’ services. The inevitable result would be a disastrous meltdown in telco profitability. Yet few executives at Sprint seemed to notice or care.

Thus I was relieved at the time to come across the seminal and influential 1997 paper from David Isenberg, “The Rise of the Stupid Network”.  This paper legitimised my internal conflict, making it clear that I was not alone in seeing that the telco emperor was facing a business model wardrobe malfunction. David’s uncomfortable position as a dissident within AT&T was one I could readily empathise with.

I recently had cause to re-read David’s original (and sometimes controversial) work. It was a pleasant surprise to rediscover the subtlety of his arguments, which subsequent detractors have overlooked. With the advantage of a great deal of hindsight, education and experience, I would like to offer an informed critique. Specifically, I’d like to draw out what I believe are the key and valuable insights from the paper; some limitations to its arguments; and lessons we can draw for the future.

Three timeless insights

The paper asserts three timeless insights scattered within its text. Here is my interpretation and summary:

1. Grab the gain: There was an artificial scarcity imposed by a circuit-centric mentality. The telco cult of over-reliability insisted on delivering only high-quality data flows, regardless of actual user need or the resulting cost. The move to packet-based statistical multiplexing offered an efficiency gain, at the price of (potentially) less reliability; a small sketch of this gain follows the list. The Stupid Network was better matched to users’ ability to pay, and thus democratised communications.

2. No gatekeepers: Tying applications to the network provider creates friction for developers. If service creation requires business development or marketing approval from a third party, then that is a gatekeeping function which depresses innovation and extracts high rents. The Stupid Network eliminated the need to get special permission to have your application distributed & billed by any and every telco. This unleashed a torrent of new ideas and applications.

3. Prefer generality: The generativity of a public network depends on being able to support unknown and unknowable future application demands. Optimisation of supply to the demands of one era can inhibit meeting subsequent unanticipated demands. A network typically has more value if it can adapt to do many things well enough over a long period, rather than a few things to perfection for a short while (to the extent there is a trade-off between these). The Stupid Network has proven to be fitter – in a Darwinian sense – than its rivals and alternatives.
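
To make the first point concrete, here is a minimal sketch of the statistical multiplexing gain. It is a toy Monte Carlo calculation in Python with invented numbers (100 bursty sources, each active 10% of the time at a 1 Mb/s peak), not a model of any real network:

```python
import random

# Illustrative assumptions, not measurements: 100 bursty sources,
# each active 10% of the time at a 1 Mb/s peak rate.
SOURCES, ACTIVITY, PEAK_MBPS = 100, 0.10, 1.0
TRIALS = 100_000

# Circuit mentality: reserve every source's peak rate, all the time.
circuit_capacity = SOURCES * PEAK_MBPS

# Packet mentality: share one link and accept a small chance of overload.
samples = []
for _ in range(TRIALS):
    active = sum(random.random() < ACTIVITY for _ in range(SOURCES))
    samples.append(active * PEAK_MBPS)
samples.sort()
packet_capacity = samples[int(0.99 * TRIALS)]  # covers demand in 99% of sampled instants

print(f"Circuit provisioning: {circuit_capacity:.0f} Mb/s")
print(f"Packet provisioning (99th percentile of demand): {packet_capacity:.0f} Mb/s")
print(f"Statistical multiplexing gain: ~{circuit_capacity / packet_capacity:.1f}x")
```

The gain is real (here roughly five-fold), but so is the residual 1% of instants when demand exceeds supply. That residue is the contention the rest of this article is about.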

For getting these core principles right, I believe future historians of the Internet and broadband revolution will look back kindly on what David wrote. Users can access a wide range of online services at very low cost; the innovation cycle is timed in days and weeks, not years; and the ability of intelligent devices to work around limitations of the Stupid Network has thus far proven to be considerable.

The price of packets is costly contention

There is an intellectual struggle evident in David’s paper, notably in a (subsequently deprecated) section on “idiot savant” network behaviours. The efficiency gain of packet networks comes at a well-known price: by foregoing the user and flow isolation of circuits, we encounter contention between users and uses. Therefore a central question about packet networks is how best to allocate that contention.

I believe the basic principles David outlines are sound. I also believe these insights have since been interpreted in ways that are problematic for the future of a networked society. Specifically, we have confused these timeless ends with the current Internet (and TCP/IP) as one transient means. Rather than being a distracting side-issue, those “idiot savant” behaviours are central.

Let us look again at the key principles, holding in mind this central role of contention for shared resources.

Grab the gain: Application quality problems cannot endlessly be relieved (at any sustainable cost) by simply throwing bandwidth at them, even if that approach worked historically. The structure of network demand and supply is changing, invalidating simplistic extrapolations. One reason is that small amounts of real-time traffic drive high cost, not because they consume “bandwidth”, but because they demand exclusion from the network of rival flows. Another is that networks can’t tell the difference between real-time and bulk data (since they are monoservice), so the bulk data inherits the undesirable cost structure of real-time data. Furthermore, keeping TCP/IP networks stable also means keeping them needlessly empty. Over-provisioning is thus an ineffective and unaffordable long-term approach to resolving schedulability and resilience issues. Consequently, the economic gains of statistical multiplexing are being lost to an increasing and costly waste of resources in access networks.
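
A toy queueing calculation shows why a monoservice network has to be kept “needlessly empty”. Using the textbook M/M/1 formula purely as an illustration (the numbers and the model are my assumptions, not a description of any real access network), mean delay explodes as utilisation approaches 100%:

```python
# M/M/1 mean sojourn time T = 1 / (mu - lambda), with utilisation rho = lambda / mu.
# Toy figure: a link that serves 1,000 packets per second (1 ms per packet).
MU = 1000.0  # service rate in packets per second

for rho in (0.50, 0.70, 0.90, 0.95, 0.99):
    lam = rho * MU
    mean_delay_ms = 1000.0 / (MU - lam)
    print(f"utilisation {rho:4.0%} -> mean delay {mean_delay_ms:7.1f} ms")
```

When real-time and bulk flows share one undifferentiated queue, the only way to hold delay down for the real-time flows is to hold utilisation down for everyone, which is exactly the waste of the multiplexing gain described above.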

No gatekeepers: The argument against tying applications to the network ended up conflating two orthogonal issues. The economic relationship between the application provider and the network is distinct from the technical one. In losing the gatekeeper, we also lost a ‘referee’ between competing claims on the network. That referee has two related functions: helping the network make good resource choices on behalf of the user for intra-user contention; and arbitrating inter-user competition for resources. We can create better networks that empower users by acting as their agents: servants of user choice, not capricious and exploitative masters. Failure to address this issue places us in danger of re-creating gatekeepers to future networked services, in order to get the referee function back.
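
To make the ‘referee’ idea concrete, here is one hypothetical sketch (the structure, names and weights are my own illustration, not anything from David’s paper): inter-user contention is settled by a weighted split of capacity across users, and intra-user contention by each user’s own declared priorities.

```python
# Hypothetical two-level "referee": a weighted split across users (inter-user
# contention), then each user's own priority order within their share
# (intra-user contention). A real referee would also redistribute unused
# shares, enforce policy, and so on; this is only the skeleton of the idea.

def referee(capacity_mbps, users):
    """users: {name: {"weight": w, "flows": [(flow, demand_mbps, priority)]}}
    Lower priority number = more important to that user."""
    total_weight = sum(u["weight"] for u in users.values())
    allocation = {}
    for name, user in users.items():
        share = capacity_mbps * user["weight"] / total_weight   # inter-user split
        remaining = share
        for flow, demand, _ in sorted(user["flows"], key=lambda f: f[2]):
            granted = min(demand, remaining)                     # intra-user choice
            allocation[(name, flow)] = granted
            remaining -= granted
    return allocation

demo = referee(20, {
    "alice": {"weight": 1, "flows": [("video call", 3, 1), ("backup", 50, 2)]},
    "bob":   {"weight": 1, "flows": [("web", 2, 1), ("game update", 50, 2)]},
})
for (user, flow), mbps in demo.items():
    print(f"{user:5s} {flow:12s} {mbps:4.1f} Mb/s")
```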

Prefer generality: Users have been sold an entitlement to network usage, regardless of the effects on other network users: “communication rights without contention responsibilities”. As a result, the networks we are building are unsuitable for applications that are sensitive to cost or quality. The harsh reality of Internet access is that it offers a failure-prone user experience, with reliability not on sale at any price. Future digital services critical to society’s functioning – such as teleworking, e-healthcare, smart grids, automotive applications – all demand that missing dependability. You can only build these higher-order systems from sufficiently predictable sub-components. The continued generativity of public data networks is at risk, since they lack suitably predictable costs and experiences. Indeed, we are repeating the very mistake David warned about, attempting by technology and regulatory policy to ossify a network optimised for long-lived file transfers and Web browsing.
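
A quick composition sum (illustrative numbers, and assuming failures are independent) shows why “sufficiently predictable sub-components” is not a pedantic requirement:

```python
# If each element in a chain succeeds 99% of the time, the chain as a whole
# falls well short of what a critical service needs (independence assumed).
component_success = 0.99
for parts in (1, 3, 5, 10):
    print(f"{parts:2d} components at 99.0% -> end-to-end {component_success ** parts:.1%}")

target = 0.999  # e.g. what a tele-health session might require
print(f"To reach {target:.1%} across 5 components, each needs {target ** (1/5):.3%}")
```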

The job of the network is to deliver data in a timely manner, and you can’t endlessly mask delayed or missing information by performing more clever computations at the network edge. The ability of intelligent devices to work around limitations of the Stupid Network is not unbounded, as they cannot reverse the arrow of time.

Hence we’re at an impasse due to our historical technology choices, and it is time for a re-think.

Same ends, different means

In my view, the three key principles point us in a very different direction to the current Internet.

Grab the gain: We must fully exploit the potential of packet-based statistical multiplexing, distancing ourselves even further from our circuit past. We need access networks that we can run ‘hot’, whilst successfully delivering a wide mix of different application types, and that are stable under load. That means using more advanced techniques to trade contention and isolate users and applications from one another.
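
As one sketch of what “running hot while isolating applications” could look like, here is a toy slotted-time simulation (all parameters invented for illustration). A trickle of real-time packets shares a roughly 95%-loaded link with bursty bulk transfers; giving the real-time class strict priority keeps its delay low, where a single FIFO queue does not:

```python
import random
from collections import deque

random.seed(1)
SLOTS = 200_000
P_RT = 0.05                      # real-time: a packet in 5% of slots
P_BURST, BURST_SIZE = 0.03, 30   # bulk: bursts of 30 packets, ~90% of capacity

def mean_realtime_delay(isolate):
    rt_q, bulk_q = deque(), deque()
    delays = []
    for now in range(SLOTS):
        if random.random() < P_RT:
            rt_q.append(now)
        if random.random() < P_BURST:
            bulk_q.extend([now] * BURST_SIZE)
        # The link sends one packet per slot.
        if isolate and rt_q:
            # Polyservice: real-time is served first, isolated from bulk bursts.
            delays.append(now - rt_q.popleft())
        elif not isolate and rt_q and (not bulk_q or rt_q[0] <= bulk_q[0]):
            # Monoservice FIFO: real-time waits behind whatever arrived earlier.
            delays.append(now - rt_q.popleft())
        elif bulk_q:
            bulk_q.popleft()
    return sum(delays) / len(delays)

print(f"Mean real-time delay, monoservice FIFO:   {mean_realtime_delay(False):7.1f} slots")
print(f"Mean real-time delay, real-time isolated: {mean_realtime_delay(True):7.1f} slots")
```

The bulk transfers still get essentially the whole link over time; what changes is who absorbs the waiting. That is the contention-allocation choice the monoservice Internet currently makes by accident rather than by design.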

No gatekeepers: The tying of applications to the network remains an issue. We should not chase the mirage of “neutrality”, with its false premise that all packets and flows are equivalent. Demand is heterogeneous, and there is no one-size-fits-all form of supply. Rather, we should embrace a diversity of supply, with varying assured service levels and payment models. However, where networks use public rights of way, these capabilities should only be offered with fair, reasonable and non-discriminatory terms. Taking the “contention referee” out of the statistical game of chance is a recipe for long-term disaster, since all users are greedy scoundrels and will cheat! But the referee can’t have side-bets on the game’s outcome, or shares in the teams that are playing.

Prefer generality: The generativity of the network is dependent on delivering successful application outcomes. Merely having the right to inject as many packets as possible is not a source of value, and all packets are pollution of a shared resource. If we aim for successful outcomes at an affordable cost, we must allocate that pollution to where its effect is least toxic. To continue to enjoy the statistical multiplexing gain, the polluter must also pay for the contention pain. That means applications must express their needs and preferences for cost and failure, so the toxicity can be traded about wisely.
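
What might “expressing needs and preferences” look like? Here is one hypothetical illustration (the field names and the shedding rule are mine, invented for this sketch, and are not any existing protocol or API):

```python
from dataclasses import dataclass

@dataclass
class FlowIntent:
    """What a flow declares it is willing to accept; illustrative fields only."""
    name: str
    max_delay_ms: float    # beyond this, delivery is worthless to the user
    loss_tolerance: float  # fraction of data the application can lose gracefully
    value_per_mb: float    # what a successful outcome is worth to the user

def choose_victim(flows):
    # Under contention, shed load where the declared harm ("toxicity") is lowest.
    return max(flows, key=lambda f: f.loss_tolerance / f.value_per_mb)

flows = [
    FlowIntent("voice call",   max_delay_ms=150,   loss_tolerance=0.01, value_per_mb=5.0),
    FlowIntent("video stream", max_delay_ms=2000,  loss_tolerance=0.05, value_per_mb=1.0),
    FlowIntent("cloud backup", max_delay_ms=60000, loss_tolerance=0.30, value_per_mb=0.1),
]
print("Degrade first:", choose_victim(flows).name)  # -> cloud backup
```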

The network cannot satisfy every user demand, everywhere and always. The job of intelligent devices is to express what trade-offs they are willing to accept when not all demands can be simultaneously satisfied. The network then becomes an agent of the users, negotiating inevitable “failure” in a way that works to the greater good.

Today’s networks fail unpredictably and let down vulnerable users. Over-provisioning contributes to our billowing CO2 emissions. Our network protocols reward greed, and punish grace. These are not the values we wish to promote, and are not a basis for a sustainable infrastructure.

Rise of the ‘Sensible Network’

In his paper’s concluding section, David wrote:

The changes that now portend the Stupid Network are likely to shift the telecommunications value proposition from “network services” to something else. If I knew what it was, I would not be wasting my time writing these words.

I believe we can now clearly see what “something else” looks like: a utility distributed computing service, to support an information society. This information utility must offer both quality-assured and unassured data flows, to meet a wide variety of cost and quality needs. It offers the dependability and simplicity in use that we take for granted with other utilities, like power and water.

The progressive way to maintain an affordable, open, generative utility public network infrastructure is to embrace the mathematically and technologically inevitable. We must adopt a polyservice network that can both exploit the statistical multiplexing gain (for low cost) and isolate the flows (to get dependable experiences). This is the essential “idiot savant” behaviour we are missing.

The alternative of maintaining the monoservice status quo is not attractive: a proliferation of application-specific overlay networks at huge expense to the public and environment; closed managed broadband services vertically-integrated back into the network; and a withering of the public and open application distribution space due to severe “packet pollution”.

Failure to embrace this change could deliver the antithesis of David’s vision: a world where networked application distribution is entirely and expensively controlled by a few giant cloud services providers like Apple, Amazon and Google working in close cahoots with an oligopolistic telco industry.

Conversely, we can vigorously pursue the valued ends of “grab the gain”, “no gatekeepers” and “prefer generality”. However, to do so we must re-think the means. The time has come to transcend the now-unhelpful “stupid” vs “intelligent” framing of data networks.

I invite you to welcome the Rise of the Sensible Network.