Future of Networking – The 3 fallacies of packet networks

Statistically-multiplexed resources like packet networks are relatively common in everyday life — if you’re in a queue, you’ve met one. However, the vernacular in which we understand them lets us down when it comes to packets, as they aren’t physical “things” and don’t fit our “thingy” intuition. With my PNSol colleagues I have been developing a simple model involving, of all things, a ski-lift to demonstrate how existing approaches to managing the resource fail under load, and how Contention Management algorithms do a better job.

I would like to share with you three insights that came out of the workshop to develop this ski-based educational model of statistically-multiplexed resource management. Each relates to a fallacy that pervades the telecoms industry. The effect of these fallacies is to make broadband needlessly expensive, and often unfit for purpose.

Fallacy #1: “Transmit” is always better than “Drop”

All packet networks today are built on an assumption of “if you can send it, do send it”. This seems logical; after all, if you have a packet at the front of a queue, and the transmission link is idle, why not send it on? The problem is that this packet may end up stuck in some later queue, where a fast link feeds a slow one. The application is still stuffing packets in, and filling that queue. Should a more urgent packet come along, it is now stuck in a longer queue, or gets dropped. Delivering the ideal customer experience may instead require dropping some packets at the earliest opportunity, to signal back to the application (via TCP) to slow down and avoid overloading the downstream bottleneck.
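To make this concrete, here is a toy discrete-time sketch — not PNSol’s model, and all the figures are invented for illustration. A fast ingress feeds a slow bottleneck link, and we compare “always transmit” against dropping at the ingress once the downstream queue passes a threshold:

```python
from collections import deque

def bottleneck_delay(ticks=20, ingress_per_tick=3, drain_per_tick=1, drop_above=None):
    """Queue where a fast link feeds a slow one; returns how many packets
    an urgent packet arriving after `ticks` ticks would find ahead of it."""
    queue = deque()
    for t in range(ticks):
        for _ in range(ingress_per_tick):
            # "If you can send it, do send it" -- unless we drop early
            # once the downstream queue exceeds a threshold.
            if drop_above is None or len(queue) < drop_above:
                queue.append(t)
        for _ in range(drain_per_tick):
            if queue:
                queue.popleft()
    return len(queue)

print(bottleneck_delay())              # always transmit → 40 packets queued
print(bottleneck_delay(drop_above=5))  # drop early → 4 packets queued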

In our skiing model, some selected classes of skiers get sent to the bar when they arrive at the ski lift, rather than shivering in the queue beyond their tolerance. The overall effect is an increase in customer satisfaction, even if sometimes people miss the chance to ski.

Fallacy #2: QoS delivers Quality

Why not just add priority to the above model for the urgent packet via Quality-of-Service features? Well, apart from the network needing to know what to prioritise, there is a deeper problem. When you prioritise one packet, you delay another. If you keep prioritising, you keep adding delay to the non-priority application. Eventually, you end up over-prioritising one (e.g. voice) while the user experience of another (e.g. Web) goes out of acceptable bounds. Furthermore, you may end up generating unwanted packet loss in loss-sensitive applications as queues fill up with delayed traffic behind priority traffic.
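A toy strict-priority simulation (illustrative figures only — this is a sketch, not any real scheduler) shows the effect: as the priority load approaches link capacity, the delay experienced by a single non-priority packet grows without bound.

```python
def nonpriority_delay(priority_pct, burst=5):
    """Strict-priority link serving one packet per tick. A non-priority
    packet arrives at tick 0 behind a small burst of queued priority
    packets; fresh priority packets keep arriving at `priority_pct` of
    link capacity. Returns how long the non-priority packet waits."""
    backlog = burst      # priority packets ahead of us
    credit = 0
    for t in range(1_000_000):
        credit += priority_pct
        while credit >= 100:     # deterministic arrivals at the given rate
            credit -= 100
            backlog += 1
        if backlog:
            backlog -= 1         # priority is always served first...
        else:
            return t             # ...so we only go when none is waiting
    return None                  # effectively starved

for load in (50, 80, 90, 99):
    print(load, nonpriority_delay(load))   # delays: 10, 25, 50, 500 ticks
```

Doubling the priority load from 50% to 99% of capacity multiplies the non-priority wait fifty-fold; the “gain” for the priority flow is paid for, with interest, by everything else.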

All kinds of complex schemes have been proposed to deal with this, to make priority “fair”. However, the idea of QoS and priority is not underpinned by any meaningful and enforceable conceptual model of what “quality” really is in terms of fundamental properties, such as loss and delay. The end result is that EVERY attempt to add priority via QoS ends up decreasing the overall value-carrying capacity of the network. Furthermore, the more time-sensitive the traffic, the greater the shrinkage. What QoS delivers is not “quality”, because quality (by definition) requires some predictable outcome with respect to some desired reference.

Indeed, QoS can never deal with the complex phasing and interaction of packet flows; it can’t cope with managing multiple competing types of priority; and can never make acceptable trade-offs between loss and delay. It just adds very costly loss and delay randomly to non-priority applications.

So whilst QoS attempts to offer guarantees for individual flows, Contention Management works at the system level, managing the trade-offs for the resource as a whole. CM uniquely understands what the macro (end-user experience) effects are likely to be of the micro (per-packet) decisions you make. The consequences of complex emergent properties – such as the inherent dangers of phasing – can for the first time be modelled so that these emergent properties are stable and managed.

In our skiing model, we can pick any skier to be next to go on the lift. There is no “queue”, but rather a pool of skiers “ready and waiting to go”, and we pick the right skier to go (and who to send to the bar for a rest).
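As a sketch of that “pool, not queue” idea (illustrative only — this is not PNSol’s actual algorithm): hold waiting customers in a pool ordered by how close each is to exceeding their tolerance, serve the most urgent first, and send anyone already past tolerance to the bar rather than serving them uselessly late.

```python
import heapq

def serve_pool(arrivals, capacity_per_tick=1):
    """Toy pool-based scheduler. `arrivals` is a list of
    (arrive_tick, tolerance) pairs. Each tick we serve the waiting
    customer closest to their deadline; anyone whose tolerance has
    already expired goes 'to the bar' instead of being served late."""
    served, to_bar = [], []
    pool = []                        # heap ordered by deadline
    arrivals = sorted(arrivals)
    i = 0
    last = max(a for a, _ in arrivals) + len(arrivals)
    for t in range(last + 1):
        while i < len(arrivals) and arrivals[i][0] == t:
            a, tol = arrivals[i]
            heapq.heappush(pool, (a + tol, a))
            i += 1
        for _ in range(capacity_per_tick):
            # retire anyone already past their tolerance
            while pool and pool[0][0] < t:
                to_bar.append(heapq.heappop(pool)[1])
            if pool:
                served.append(heapq.heappop(pool)[1])
    return served, to_bar

# Two impatient skiers and one patient one arrive together; with
# capacity for one per tick, one impatient skier is sent to the bar.
served, to_bar = serve_pool([(0, 0), (0, 0), (0, 10)])
print(len(served), len(to_bar))   # → 2 1
```

A FIFO queue has no way to express this choice: it can only serve whoever happens to be at the front, however mismatched their tolerance.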

Fallacy #3: Quality is composable between network links

This leads us to our third fallacy, which is that by managing priority at each link, you can manage priority across the network as a whole. This can never work, for a simple reason: the speed of light. You can never build control loops between the different queues fast enough to signal and co-ordinate congestion management. What we are attempting to manage are transient effects of momentary contention between two packets in queues waiting to be transmitted. The time to service a packet (which is holding up some other contended packet) and the time to synchronise between elements of the network differ by perhaps a thousand-fold. By the time you have signalled “more” or “slower”, it is all way too late.
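A back-of-envelope check on that ratio, using assumed but typical figures (a full-size 1,500-byte packet on a 1 Gbit/s link, and a 1,000 km signalling path through fibre):

```python
# Time to service one contended packet vs. time for queues to
# coordinate across the network. All figures are assumptions
# chosen for illustration.
packet_bits = 1500 * 8                  # one full-size Ethernet frame
link_bps = 1_000_000_000                # 1 Gbit/s link
service_time = packet_bits / link_bps   # 12 microseconds

path_km = 1000                          # long-haul control loop
speed_km_per_s = 200_000                # light in fibre, roughly 2/3 c
signalling_rtt = 2 * path_km / speed_km_per_s   # 10 milliseconds

print(round(signalling_rtt / service_time))     # → 833, i.e. ~1000x
```

The contention event is over hundreds of times before any cross-network signal about it could possibly arrive — and that is before queuing delay is added to the signalling path.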

Taking our skiing model, with CM we could build a multi-stage lift system, but manage the entirety of the contention at the base lift; nobody queues in the cold at altitude for subsequent lift stages. In contrast, QoS would attempt to manage “priority” at every stage.

These may seem like three fairly esoteric ideas. Considered together, what they tell us is that telecoms even today remains a crude form of packet alchemy, ripe for change through advances in packet chemistry.

To keep up to date with the latest fresh thinking on telecommunication, please sign up for the Geddes newsletter.