Fab or f#^*%d? How to tell if your broadband is any good

How can broadband users select a supplier that is good for their needs? How can regulators ensure that there is an open and transparent market with rewards for better quality? How can customers hold their broadband service provider to account for the performance on offer?

I am so glad you asked! Here’s how…

1. Characterise your key application’s Predictable Region of Operation (PRO)

Every application has a “performance surface” that describes how it responds to network service quality. That quality can be expressed as the “quality attenuation” in each direction, i.e. packet loss and delay viewed as a single phenomenon.

Here, for example, is the map for HTTP time to load a web page, and how it varies with the loss rate (vertical axis) and delay (horizontal axis, ms).

HTTP time to load web page

The “acceptable” contour defines the application Predictable Region of Operation (PRO). For instance, we might decide that 10 seconds is our target limit for an OK customer experience for Web apps.
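To make step 1 concrete, here is a minimal sketch in Python. The page-load model is a deliberately crude toy (the `page_load_time()` formula, the object count and the 10-second target are all illustrative stand-ins for a real measured performance surface); it simply scans a grid of loss and delay values and reports which combinations fall inside the PRO.

```python
# Toy sketch of step 1: mapping an application's Predictable Region of
# Operation (PRO). The page_load_time() model is purely illustrative,
# not a real HTTP performance surface.

def page_load_time(loss_rate, one_way_delay_ms, objects=30, rtts_per_object=2):
    """Crude load-time estimate: each object costs a few round trips,
    and packet loss inflates that via retransmissions."""
    rtt_s = 2 * one_way_delay_ms / 1000.0
    retransmit_penalty = 1.0 + 10 * loss_rate  # assumed cost of loss
    return objects * rtts_per_object * rtt_s * retransmit_penalty

TARGET_S = 10.0  # the "acceptable" contour: 10 seconds to load

if __name__ == "__main__":
    for loss in (0.0, 0.01, 0.05, 0.10):
        for delay in (10, 50, 100, 200):
            t = page_load_time(loss, delay)
            verdict = "inside PRO" if t <= TARGET_S else "outside PRO"
            print(f"loss={loss:4.0%} delay={delay:4d}ms -> {t:5.1f}s ({verdict})")
```

In practice the surface comes from measuring or modelling the actual application, not from a formula like this one.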

2. Turn this PRO into a Quantitative Timeliness Agreement (QTA)

Packet networks are probabilistic systems, so everything is about the odds of an outcome, not absolute certainty. That means we need to define the requirement as the probability of the “good enough” outcome being met, i.e. a bound on the rate of failure to deliver it.

The packet loss and delay bound needed to meet this application performance level is the technical demand specification for network performance for this application.

It is both a necessary and sufficient condition to deliver enough packets that are sufficiently “fresh”, and hence uncontaminated by the awful chronotoxic “fungus” of time.

This can be done with a “quantitative timeliness agreement” that defines a limit on that loss and delay.

The demand QTA

The simple version of one of these QTAs is a boundary line between a “good enough” distribution of “freshness” (within the desired PRO) and “not good enough” (outside of that PRO).
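As a rough illustration of what such a boundary might look like in machine-readable form, here is a sketch that writes a demand QTA as a handful of quantile/delay bounds plus a loss ceiling. The structure and the numbers are my own invention for illustration, not a standard format; a real QTA would be derived from the application’s PRO.

```python
# Minimal sketch of step 2: a demand QTA as percentile/delay bounds
# plus a loss ceiling. Structure and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class QTA:
    delay_bounds: list    # (quantile, max_delay_ms) pairs
    max_loss_rate: float  # ceiling on packet loss

web_qta = QTA(
    delay_bounds=[(0.50, 40.0),    # median delay no worse than 40 ms
                  (0.95, 80.0),    # 95th percentile no worse than 80 ms
                  (0.99, 120.0)],  # 99th percentile no worse than 120 ms
    max_loss_rate=0.01,            # at most 1% loss
)
print(web_qta)
```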

Fab or f#^*%d?

There are several PhDs waiting to be claimed for the non-simple cases… and some caveats on statistical stability that I’ve glossed over. Turning PROs into QTAs is bleeding-edge network science. You can watch proper network scientists with real beards talk about this kind of thing.

3. Measure the “freshness” of the packets (i.e. quality attenuation) of your broadband service supply

Here’s one I did just now on my home broadband. Easy!

Broadband measurement

I have shown the round-trip for illustration, but you can view the original charts here [PDF] and the raw data is even embedded inside the PDF (if you have Adobe Reader).
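For readers who want to poke at their own line, here is a crude sketch that collects round-trip delay samples using TCP connection setup time to a reference host (`example.com` is an arbitrary placeholder). This is only a stand-in for proper per-packet ∆Q measurement, which needs dedicated multi-point test traffic.

```python
# Rough sketch of step 3: sampling round-trip delay from my own line.
# TCP connect time is used as a crude proxy for per-packet delay.
import socket, time

def sample_rtt(host="example.com", port=443, timeout=2.0):
    """Return one round-trip estimate in ms, or None on loss/timeout."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # treat a failed probe as a loss

if __name__ == "__main__":
    samples = [sample_rtt() for _ in range(20)]
    delays = [s for s in samples if s is not None]
    lost = samples.count(None)
    if delays:
        print(f"{len(delays)} samples, {lost} lost, "
              f"min={min(delays):.1f} ms, max={max(delays):.1f} ms")
    else:
        print("all probes lost")
```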

(Incidentally, I have long had a fault on my DSL line that BT and Zen Internet between them have failed to resolve. Every 20 seconds or so there is some kind of bearer signal fubar mess that I’ve not been able to isolate. If I can’t do it, what hope is there for the everyday member of the public?!?)

ACHTUNG!

You can only proceed through the rest of this recipe if you have this high-fidelity raw data. Only multi-point measurements of probability distributions of packet loss and delay (using ∆Q metrics) are up to the task! No peeking!

4. Turn that supply data into a delivered quality probability function

OK, so you peeked, darn you! But don’t say I didn’t warn you…

An “improper” cumulative distribution function “summarises” this raw latency data. It is “improper” because it doesn’t reach 100% (and not because it went to the wrong schools and has a socially impaired accent). The “gap” at the top is the packet loss.

The supply CDF

Here’s that same data from my home broadband turned into a CDF (just for the variable contention delay, but you get the general idea).

Cumulative distribution
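If you have raw delay samples and a loss count to hand, turning them into an improper CDF is mechanical. Here is a minimal sketch using invented sample values; note how the curve tops out below 100%.

```python
# Small sketch of step 4: raw delay samples plus a loss count become an
# "improper" empirical CDF that tops out below 100%. Data is invented.

def improper_cdf(delays_ms, lost_count):
    """Return (delay, cumulative_probability) points; the final
    probability is 1 - loss_rate, so the 'gap' at the top is the loss."""
    total = len(delays_ms) + lost_count
    return [(d, i / total) for i, d in enumerate(sorted(delays_ms), start=1)]

delays = [22.1, 23.4, 24.0, 25.2, 26.8, 30.5, 41.7, 55.0]
lost = 2  # two probes never arrived
for delay, prob in improper_cdf(delays, lost):
    print(f"P(delay <= {delay:5.1f} ms) = {prob:.2f}")
# The curve ends at 0.80, i.e. 20% of packets were lost in this toy data.
```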

5. Compare the QTA (demand) with the delivered quality (supply)

If the supply is “better” than the demand requirement (i.e. the CDF is to the left of the QTA), your service is “fab!” for that application use.

FAB quality network

If the supply is “worse” than the demand requirement (i.e. the CDF clips the corners of the QTA), your service is “f#^*%d!” in terms of meeting the expected performance demand.

Bad quality network

Again, I am glossing over some caveats and details; apologies to my scientifically-minded colleagues, who may be wincing right now.
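With those caveats noted, here is a toy version of the comparison, reusing the illustrative QTA-style bounds and invented samples from the earlier sketches: the supply passes only if every quantile bound is met and the loss rate stays within its ceiling.

```python
# Toy sketch of step 5: is the supply CDF "to the left" of the demand QTA?
import math

def quantile(sorted_delays, q, total):
    """Delay within which a fraction q of ALL sent packets arrived;
    returns inf if losses alone already push the CDF below q."""
    k = math.ceil(q * total)
    return sorted_delays[k - 1] if k <= len(sorted_delays) else math.inf

def fab_or_not(delays_ms, lost_count, delay_bounds, max_loss_rate):
    total = len(delays_ms) + lost_count
    if lost_count / total > max_loss_rate:
        return False
    delays = sorted(delays_ms)
    return all(quantile(delays, q, total) <= bound
               for q, bound in delay_bounds)

delays = [22.1, 23.4, 24.0, 25.2, 26.8, 30.5, 41.7, 55.0]
bounds = [(0.50, 40.0), (0.95, 80.0)]  # illustrative demand QTA
print("fab!" if fab_or_not(delays, 0, bounds, 0.01) else "f#^*%d!")
```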

6. Quantify the “slazard”

We can even create a synthetic metric to quantify how much “slack” we have due to over-delivery of quality…

Slack quality network

…or how much “hazard” we have due to under-delivery of quality…

Beware of hazard
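One simple way to put a number on this (my own simplification, not the formal ∆Q calculus) is to report the per-bound delay margin: positive margin is slack, negative margin is hazard.

```python
# Toy sketch of step 6: per-bound delay margin as a stand-in "slazard".
import math

def slack_and_hazard(delays_ms, lost_count, delay_bounds):
    """For each (quantile, bound) pair, report the delay margin in ms:
    positive margin = slack, negative margin = hazard."""
    total = len(delays_ms) + lost_count
    delays = sorted(delays_ms)
    report = []
    for q, bound in delay_bounds:
        k = math.ceil(q * total)
        measured = delays[k - 1] if k <= len(delays) else math.inf
        report.append((q, bound - measured))
    return report

delays = [22.1, 23.4, 24.0, 25.2, 26.8, 30.5, 41.7, 55.0]
for q, margin in slack_and_hazard(delays, 0, [(0.50, 40.0), (0.95, 80.0)]):
    kind = "slack" if margin >= 0 else "hazard"
    print(f"{q:.0%} bound: {abs(margin):.1f} ms of {kind}")
```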

You might want to ask yourself why this simple engineering of broadband demand and supply is not in the textbooks, and isn’t taught on undergraduate courses. Why haven’t we yet agreed on a basic thing like the metric space for supply and demand?

Now go ask all your industry friends what they are going to do about it…


Bonus! Here’s a charted example from the pioneering Kent Public Service Network comparing this “hazard arming metric” (red, derived from high-fidelity ∆Q metrics) with the standard five-minute load metric (green, derived from low-fidelity averages). There are very important differences!

High fidelity
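To see why the two curves can disagree so sharply, consider this synthetic example (the data is invented, not the Kent PSN measurements): a ten-second congestion burst barely nudges a five-minute average, but it dominates the delay tail that a hazard-style metric watches.

```python
# Synthetic illustration: averages hide short bursts that wreck the tail.
import statistics

# Five minutes of once-per-second delay samples: mostly 20 ms, with a
# ten-second congestion burst at 200 ms in the middle.
samples_ms = [20.0] * 145 + [200.0] * 10 + [20.0] * 145

mean = statistics.mean(samples_ms)
p99 = sorted(samples_ms)[int(0.99 * len(samples_ms)) - 1]

print(f"five-minute average: {mean:.1f} ms")  # looks almost fine
print(f"99th percentile:     {p99:.1f} ms")   # shows the burst
```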

I think you can tell by now that broadband “speed tests” are a waste of time and money. If you want the “real deal” you need to measure quality, not quantity, because it’s the (quantity of) quality that determines the user experience.

Complete my demand survey pronto for a chance to get access to high-fidelity measurements using your own laptop.

If you would like to book a training course on network performance measurement and management, contact me on training@martingeddes.com or hit contact. You never know, your competitors may already be upgrading their engineering to high-fidelity metrics! (You know who you are…)

 

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.