How to do network performance chemistry

In this article I’d like to present a simple tool that can be used to make sense of a wide range of networking phenomena. It’s a genius idea cooked up by my colleagues at Predictable Network Solutions Ltd, and is the first step in networking chemistry: being able to identify the components from which everything is made. It is particularly useful for getting beyond the idea of ‘bandwidth’, a term that should be associated with the ‘Earth, Water, Air, and Fire’ level of scientific sophistication.

Network performance

We like to think about networks as if they are conveyor belts for moving packets along a path, or “beads on a string” moving along to their destination. In this model, what matters is delivering as many beads as fast as possible.

An alternative (and paradoxical) framing is to consider them instead as machines that generate disappointment, in the form of loss and delay. Why? Well, the ideal network delivers data instantly and perfectly, and real networks are always disappointing in comparison to that ideal. Networks decide, if only implicitly, how to allocate that disappointment to the flows of data.

The best possible network is one which firstly minimises the disappointment; then for the disappointment that does (and must) occur, it impairs each flow with just as much disappointment as it can tolerate, and no more! This is a subtle re-framing of the same situation, but by shifting our vantage point we see network operation with a very new and useful slant.

Decompose the ‘disappointment’ in network performance

In this alternative (quality-centric) framing of networks, we can begin to decompose the ‘disappointment’ (posh name: ‘quality attenuation’) into its constituent parts. One of the core learning points from our Fundamentals of Broadband workshop is how to go about this. Since this idea is too important to keep secret, here is what you too need to know.

Let’s imagine for a moment we watch a few hundred packets flow along a network path of one or more links. We plot the delay each packet incurs as it passes along that path. (We’re going to ignore loss for now; a generalised model is for the advanced class.) The result is a scatter plot of the level of disappointment over time:

A scatter plot of the level of disappointment over time

Spot the pattern? Me neither! So we sort this same data by the size of each packet:

Scatter plot of delay versus packet size

Now that’s a lot clearer. What do we see?

  • As packets get longer (to the right) the delay goes up (the red dots are higher up the chart). This should not be surprising: longer packets take longer to transmit.
  • The amount of delay is bounded below by roughly the green line; as you make packets longer, there is a minimum time to deliver each packet. This minimum time also gets longer as the packet length increases.
  • Most packets take longer than that minimum time to deliver. They are well above the green line.
  • There is also a quantisation effect from the discrete packet sizes (the gaps between the vertical bands of points), which for the sake of simplicity we’re not going to dwell on – it’s not relevant to this networking chemistry lesson.
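
If you want to see how such a plot comes about, here is a minimal sketch with synthetic numbers – the fixed delay, link rate and contention are all made up for illustration, not taken from a real capture:

```python
# Synthetic illustration of the delay-versus-size scatter plot above.
# Per-packet delay = G (fixed path delay) + size/rate (serialisation) + queuing.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
G = 0.005                        # 5 ms of fixed 'geographic' delay (illustrative)
rate = 10e6 / 8                  # a 10 Mbit/s link, expressed in bytes per second

sizes = rng.choice([64, 576, 1000, 1500], size=500)   # a few common packet sizes
serialisation = sizes / rate                          # S grows with packet length
queuing = rng.exponential(scale=0.003, size=500)      # V: contention, never negative
delay = G + serialisation + queuing

plt.scatter(sizes, delay * 1000, s=8)
# The 'green line': the minimum possible delay for each packet size.
plt.plot([0, 1500], [G * 1000, (G + 1500 / rate) * 1000], color="green")
plt.xlabel("packet size (bytes)")
plt.ylabel("delay (ms)")
plt.show()
```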

Unfortunately the y-axis of this plot doesn’t start at zero; note that the green line, traced back to the y-axis, would cross it at a value (just) above zero. Let’s re-draw the diagram in its conceptual form, and annotate it:

How is the disappointment of one packet constituted?

We’ve plotted a small number of packets (in grey) in the same way, bounded by the green line. The red dot is one example packet we’re going to consider. How is the disappointment of that one packet constituted?

  • The green line has been extended back to the y-axis; the intercept is the delay that a hypothetical packet with a zero-length payload would encounter. This is due to the speed of light, plus address look-up in routers and fixed encapsulation overheads. It is the intrinsic ‘geographic’ (G) delay of the path.
  • Then we have the additional delay due to the length of the packet, because it takes time to turn a packet into a bitstream over any link. This is the ‘serialisation delay’ (S) of the path.
  • Finally, we have the additional delay: since the network isn’t empty, packets spend time in queues waiting for other packets to be transmitted. This ‘variable contention delay’ (V) is the result of applying load to the network.

Any packet traversing the network will experience ‘disappointment’ either as loss, or as delay due to G, S and V. And that’s it! There are your protons, neutrons and electrons of network performance chemistry (with a black hole for the loss).
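
To make the decomposition concrete, here is a rough sketch – my own illustration, not the formal quality-attenuation mathematics – of one way to estimate G, S and V from a set of (packet size, delay) samples, assuming the fastest packet of each size came close to the uncontended minimum:

```python
# Split per-packet delays into G, S and V by fitting the lower envelope.
import numpy as np

def decompose(sizes, delays):
    """sizes in bytes, delays in seconds; returns (G, seconds_per_byte, V array)."""
    sizes = np.asarray(sizes, dtype=float)
    delays = np.asarray(delays, dtype=float)

    # Lower envelope: the fastest observed packet for each distinct size.
    env_sizes = np.unique(sizes)
    env_delays = np.array([delays[sizes == s].min() for s in env_sizes])

    # Fit delay_min(size) = G + size * seconds_per_byte through that envelope.
    seconds_per_byte, G = np.polyfit(env_sizes, env_delays, 1)

    # V is whatever each packet suffered beyond its size-dependent minimum.
    V = delays - (G + sizes * seconds_per_byte)
    return G, seconds_per_byte, V
```

The intercept approximates G, the slope gives S for any packet length, and whatever is left over is that packet’s share of V.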

Some points to note:

  • The ‘disappointment’ is a mathematical entity with a number of representations, and it always and everywhere comprises G, S and V. It isn’t some empirical thing that may or may not happen depending on the situation or network.
  • Delay due to the speed of light isn’t going down. Indeed, G has been rising over time. Marconi’s radio communicated at the full-on speed of light. An IPv4 address lookup adds an overhead, and an IPv6 one even more.
  • Copper is ‘faster’ than fibre for a single bit (G), because the speed of light in copper is higher than in glass.
  • We have been making networks ‘faster’ by dropping S. So when fibre is said to be ‘faster’ than copper, people mean we can both serialise packets faster, and also transmit more packets.
  • On a traditional time-division telecoms network, V was always effectively zero, since we reserved uncontended paths and contemporaneous time slots for the data. On broadband, V can be very large – when you add up all the buffer sizes along the path it can become tens of seconds (see the back-of-the-envelope sketch after this list).
  • V is not a fault, or due to ‘bad broadband’; it is intrinsic to flows from many applications and users attached to a single point all contending for a shared resource, as well as upstream contention between many users. However, too much toxic disappointment from V does cause applications to fail.
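
As a back-of-the-envelope illustration of how S shrinks with link rate and how buffers create V (the numbers are made up, not measurements):

```python
# Illustrative arithmetic only: serialisation delay (S) and queuing delay (V).
PACKET_BYTES = 1500

def serialisation_delay(link_bps):
    return PACKET_BYTES * 8 / link_bps   # seconds to clock one packet onto the wire

for rate in (10e6, 100e6, 1e9):
    print(f"S at {rate / 1e6:.0f} Mbit/s: {serialisation_delay(rate) * 1e6:.0f} µs")

# V from a single deep buffer: 1,000 packets queued ahead of you on a 10 Mbit/s link.
queued = 1000
print(f"V behind that queue: {queued * serialisation_delay(10e6):.1f} s")
```

Repeat that last calculation for every buffer along a path and ‘tens of seconds’ stops sounding far-fetched.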

We can also see trends of how networking is changing over time:

  • Before the telegraph, everything was dominated by ‘G’. Thereafter we could communicate at the speed of light (plus any intermediate relay operators), but tapping Morse code was slow. The world of data since then has been dominated by reducing ‘S’.
  • There are decreasing returns to dropping S for many applications. As you drop S, the delay on your ‘fire gun’ packet in an online game gets closer and closer to the base delay of G. Going from 1 Mbps to 1 Gbps on your access network doesn’t change much for this application (see the sketch after this list).
  • In future, the story will all be about managing V. That ‘fire gun’ packet has no value if it takes a second to arrive, because it spent ages queued up behind video downloads.
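
Here is the diminishing-returns arithmetic sketched with made-up numbers – a small game packet against a fixed 20 ms of G:

```python
# Illustrative only: how much a faster access link helps a small 'fire gun' packet.
G_MS = 20.0                  # fixed geographic delay, in milliseconds (assumed)
PACKET_BYTES = 100           # a small game packet (assumed size)

for rate_bps in (1e6, 10e6, 100e6, 1e9):
    s_ms = PACKET_BYTES * 8 / rate_bps * 1000
    print(f"{rate_bps / 1e6:>6.0f} Mbit/s: S = {s_ms:.4f} ms, G + S = {G_MS + s_ms:.4f} ms")
```

Once S is a fraction of a millisecond, further increases in link rate vanish into the noise of G – and, on a loaded network, of V.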

What ultimately matters to any application is not G, S and V separately, but their combined ‘disappointment’, and its effect on quality of experience. Each of them degrades the flows across the network, but applications don’t know or care what the underlying components are, only the total. That means a religious focus on any one of these is inappropriate. Fibre, copper, cellular and satellite all have different characteristics, and that’s OK. What matters is whether each transmission system is fit-for-purpose for the user’s needs.

It is common to believe that all you need is ‘bandwidth’ to get good user outcomes. Yet what people call ‘bandwidth’ often conflates G, S and V, as well as the overall network capacity. Does ‘bandwidth’ mean the average delivery bitrate (a hard drive in the post delivers a lot of ‘megabits per second’, but its G is measured in days)? Does it mean a transmission medium with low S, but perhaps with a tiny overall capacity allocated to your flows? Does it mean a medium which is relatively uncontended, so that flows experience little disappointment from V?
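
The hard-drive-in-the-post example is worth doing as arithmetic (the capacity and shipping time are made-up round numbers):

```python
# Illustrative only: a 2 TB drive couriered over two days.
drive_bits = 2e12 * 8              # 2 TB payload, in bits
shipping_seconds = 2 * 24 * 3600   # two days in transit

print(f"average bitrate: {drive_bits / shipping_seconds / 1e6:.0f} Mbit/s")  # ~93 Mbit/s
print(f"'G' for every byte: {shipping_seconds / 3600:.0f} hours")            # 48 hours
```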

When someone uses the word ‘bandwidth’ about broadband, you can be pretty certain that they are neither thinking nor communicating clearly!

I’ll no doubt be coming back to this G/S/V diagram in future, as it helps to explain a lot about networks and the mistakes people make when reasoning about broadband and bandwidth.

For further fresh thinking on broadband, bandwidth and network performance please get in touch.

To keep up to date with the latest fresh thinking on telecommunication, please sign up for the Geddes newsletter