More network quality measurements with ∆Q metrics

I am having fun running around taking measurements of broadband access using high-fidelity ∆Q metrics. Here are a few readings I have recently taken.

Most network performance metrics are based on latency, jitter and packet loss. This choice of metrics is unfortunate: they are averages (and there is no quality in averages), and they introduce three variables with an unknown coupling between them.
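To see why averages destroy the information that matters, consider a toy example (all numbers invented): two latency traces with identical means, where one is steady and the other suffers queueing spikes.

```python
import statistics

# Two hypothetical latency traces (ms) with identical means but very
# different user experience: one steady, one with queueing spikes.
steady = [20.0] * 100
spiky = [10.0] * 90 + [110.0] * 10   # 10% of packets stuck in a queue

for name, trace in [("steady", steady), ("spiky", spiky)]:
    mean = statistics.mean(trace)
    p99 = sorted(trace)[98]          # crude 99th percentile of 100 samples
    print(f"{name}: mean={mean:.1f} ms, p99={p99:.1f} ms")
```

Both traces report a mean of 20ms; only the tail reveals that one of them would be unusable for interactive applications.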

In contrast, quality attenuation can be expressed as a single ∆Q metric. Indeed, ∆Q is the “ideal” metric for network performance. As I have recently written, I have a ∆Q measurement demo running on my laptop (as well as on an iMac at home), and have publicly posted some example measurements.

Here are a few more high-fidelity quality attenuation measurements using ∆Q metrics to share. I hope they are of educational interest.


BT’s broadband infrastructure is very lightly utilised

You don’t need a PhD in stochastics to understand that the packet trace below is for one awesomely stable and high-quality service. It was taken downstream over my ADSL line at home at 6.30am today, with Zen as my retail supplier, delivered over BT Wholesale’s infrastructure. (That I can’t be bothered to pay extra for “superfast” FTTC tells you all you need to know about how much extra value I perceive in it.)

[Figure 2: downstream packet trace from the home ADSL line]

Here’s essentially the same data, translated from human terms into mathematics. What it says is that nearly all the packets arrived with utterly negligible variable queueing delay due to load (V).

[Figure 3: ∆Q analysis of the same trace]

What this means is that some network equipment salesperson got a great bonus for selling BT far more capacity than it actually needs. This level of quality cannot be maintained in the long run (or through the daily usage cycle), so it also sets an unsustainable expectation with end users, which will eventually drive dissatisfaction and churn.

If you are an investor in BT, I thank you for your munificence and generosity.

Using mobile broadband on the train really sucks

Staying with our kindergarten level of network analytics, I think you can tell immediately that the trace below represents a seriously sucky service. It’s my Huawei 4G mobile hotspot with a Three broadband SIM being accessed via my MacBook Pro laptop. This was captured whilst I was on a commuter train from Staines into London this morning.

[Figure 4: round-trip packet trace over the 4G hotspot]

The spikes are the classic pattern of queues forming. They go up quickly, and drain at a steady rate. We can break this round-trip packet trace into the upstream…

[Figure 5: upstream trace]

…and the downstream:

[Figure 6: downstream trace]
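That “jump up fast, drain at a steady rate” shape falls out of even a toy FIFO queue model with bursty arrivals and a fixed drain rate (every number below is invented for illustration):

```python
# Toy single-queue model: bursts arrive periodically, the link drains
# at a constant rate, so queueing delay spikes sharply and then ramps
# down linearly -- the sawtooth shape seen in the traces.
SERVICE_RATE = 10                 # packets drained per tick
backlog = 0
delays = []                       # queueing delay (in ticks) per new arrival
for tick in range(100):
    arrivals = 100 if tick % 25 == 0 else 5   # periodic burst of packets
    backlog = max(0, backlog + arrivals - SERVICE_RATE)
    delays.append(backlog / SERVICE_RATE)
```

Plot `delays` and you get exactly the spike-and-ramp pattern: the burst fills the buffer almost instantly, and the fixed-rate link can only work it off gradually.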

The combination of buffering in my laptop, the hotspot, and the mobile network is resulting in a pretty yuk experience overall. For some periods, when there is little load and continuous coverage, the service is great. But when it’s bad, it’s really horrid, and in both directions (but not always at the same time).

When it’s in both directions simultaneously, it could be poor or absent coverage. We would need additional probe points to decompose this into its causal factors.

These ‘spikes’ are the result of an infrastructure that has been constructed with a single class of service that is optimised for throughput and peak ‘speed’. Each part of the system is working as advertised, but the collective effect is barely usable for any kind of interactive service.

It also reflects the mobile operator’s determination to deliver your packets once they have passed the rating point for billing, no matter how uselessly late they arrive.

London and Dublin are not the same place

For my next amazing network performance analytics trick, I am going to prove that London and Dublin are not the same place.

Now some people accuse us network scientists of being a bit detached from reality, but that’s not really fair. We always have a good sense of geography because we know how many milliseconds we are away from the nearest fixed Amazon AWS location.

First, here’s the round-trip to London AWS. It was taken over the WiFi from my laptop at home last night around midnight. It’s got a lot of ‘noise’, as you can see. Imagine if all you had was some average ‘ping time’ over five minutes: all this detail and its structure would be lost!

[Figure 7: round-trip trace to London AWS]

Now we take this same data, munge it about a bit, and split it into (G)eographic delay, packet (S)ize-related delay, and (V)ariable delay due to load. The G is the bit between the arrows. (For more on G, S and V, see the Fundamentals of Network Performance slide deck and webinar.)

[Figure 8: G, S and V decomposition]
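The split can be sketched in a few lines (this is a simplification, not the actual ∆Q tooling, and the sample numbers are invented): for each packet size, the minimum observed delay approximates G + S×size, so a line through those minima yields G (the intercept) and S (the slope per octet), with whatever remains per packet being V.

```python
# Simplified G/S/V split over (size_octets, delay_ms) samples.
samples = [
    (100, 27.0), (100, 35.0), (100, 27.2),
    (1500, 29.8), (1500, 60.0), (1500, 30.1),
]

# Minimum delay per packet size approximates the load-free path: G + S*size.
minima = {}
for size, delay in samples:
    minima[size] = min(delay, minima.get(size, float("inf")))

(s1, d1), (s2, d2) = sorted(minima.items())
S = (d2 - d1) / (s2 - s1)                  # serialisation delay, ms per octet
G = d1 - S * s1                            # base geographic delay, ms
V = [d - (G + S * s) for s, d in samples]  # variable delay due to load, ms
```

With real traces you would fit over many packet sizes rather than two, but the principle is the same: G and S come from the structure of the minima, and V is everything the load adds on top.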

Here is the data from home to London AWS. The base geographic delay is around 27ms round-trip.
| Path | G (ms) | S (µs/o) | V mean (ms) | V stddev (ms) | Loss rate (%) |
| --- | --- | --- | --- | --- | --- |
| boris-s001 → london → NHC → london → boris-s001 | 26.88 | 21.23 | 48.19 | 71.38 | 0.16 |
Now we look at the time to Dublin AWS and back, considering just the ‘speed of light’ base delay (plus routing overheads). This is not the total latency, which conflates the base delay with other effects, such as link speed and scheduling. In this case, the G is around 36ms.
| Path | G (ms) | S (µs/o) | V mean (ms) | V stddev (ms) | Loss rate (%) |
| --- | --- | --- | --- | --- | --- |
| boris-s001 → dublin → NHC → dublin → boris-s001 | 36.10 | 21.53 | 56.48 | 78.13 | 0.16 |
So as you can see, Dublin AWS is about 9-10ms further away from my home than London AWS, no matter how fast your links are or how you schedule your traffic.
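A back-of-envelope sanity check supports this (rough figures: light in fibre travels at about two-thirds of c, roughly 200 km per millisecond, and the great-circle distance from London to Dublin is around 460 km):

```python
# Rough lower bound on the extra round-trip delay to Dublin vs London.
FIBRE_KM_PER_MS = 200.0      # ~2/3 of the speed of light in vacuum
LONDON_DUBLIN_KM = 460.0     # approximate great-circle distance

extra_rtt_ms = 2 * LONDON_DUBLIN_KM / FIBRE_KM_PER_MS
print(f"minimum extra RTT: {extra_rtt_ms:.1f} ms")
```

That gives a floor of about 4.6ms; the observed 9-10ms difference is consistent with real routes that do not follow the great circle, plus per-hop equipment delays.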

In the interests of science, I flew today from London City Airport (yes, it’s my photo):

[Figure 9: photo of London City Airport]

And came to Dublin:

[Figure 10: photo of Dublin]

I have been sat here in Dublin airport for the last hour or so taking measurements of their rather fabulous WiFi Internet service.

[Figure 11: Dublin airport WiFi measurements]

Here’s the G, S and V to London AWS:
| Path | G (ms) | S (µs/o) | V mean (ms) | V stddev (ms) | Loss rate (%) |
| --- | --- | --- | --- | --- | --- |
| boris-s001 → london → NHC → london → boris-s001 | 15.38 | 0.77 | 29.14 | 50.03 | 0.00 |
And here’s the same to Dublin AWS:
| Path | G (ms) | S (µs/o) | V mean (ms) | V stddev (ms) | Loss rate (%) |
| --- | --- | --- | --- | --- | --- |
| boris-s001 → dublin → NHC → dublin → boris-s001 | 4.02 | 1.34 | 28.74 | 50.25 | 0.00 |
As you can see, Dublin is now around 11ms closer to me than London is.

So we have conclusively proved that London and Dublin are not the same place, at least as far as computer scientists and network performance engineers are concerned.

I think you’ll agree, that’s progress.

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.