Five reasons why there is no Moore’s Law for networks

A common misconception in telecoms is that there is an equivalent of Moore’s Law for networks. Whilst it is true that we have seen exponential growth in data transmission rates – itself driven by past rapid improvements in opto-electronics – no such property holds for networks as complete systems.

The belief that there ought to be such a property is a pernicious one, since it perpetuates the idea that broadband networks are somehow like pipes. In this belief system, all we need to do is to keep increasing the rate of flow – i.e. supply ever more ‘bandwidth’. This false metaphor leads us into irrational design, marketing and operational decisions that are damaging both the telecoms industry and its customers.

So, why is this common belief wrong?

Reason #1: The ground is the limit, not the sky

With Moore’s Law, we are creating integrated circuits of ever more complexity, to maximise the computational capabilities of a device. To the best of our knowledge, there is no intrinsic upper bound to this process, bar those ultimately imposed by the physical resources of the universe.

Conversely, with networks the opposite is the case: we are aiming to minimise the latency of communications. After all, if all we want is raw capacity, and we don’t care about latency, then sending DVDs and hard drives through the postal service offers plenty of bandwidth!

This latency has a hard lower bound set by the speed of light. One of my colleagues heard a telco CTO instruct his staff that they were to reduce latency on their network by 10% every year. A moment’s thought tells you that cannot continue indefinitely!
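To make this concrete, here is a back-of-the-envelope sketch in Python. The route length, fibre speed, starting round-trip time and 10%-a-year target are all illustrative assumptions, not figures from any real network:

```python
# Sketch: the propagation-delay floor set by the speed of light in fibre.
# The distance, starting RTT and 10%-a-year target are assumptions chosen
# purely for illustration.

SPEED_IN_FIBRE_KM_S = 200_000  # roughly c divided by the refractive index of silica (~1.5)

def floor_rtt_ms(route_km: float) -> float:
    """Lower bound on round-trip time over a fibre path of the given length."""
    return 2 * (route_km / SPEED_IN_FIBRE_KM_S) * 1000

floor = floor_rtt_ms(5_600)               # rough transatlantic great-circle distance
print(f"physical floor: {floor:.0f} ms")  # ~56 ms, before any queueing or processing

rtt = 80.0                                # assumed current RTT in ms
for year in range(1, 8):
    rtt *= 0.9                            # the 10%-a-year target
    flag = "  <-- below the physical floor!" if rtt < floor else ""
    print(f"year {year}: {rtt:5.1f} ms{flag}")
```

Within a handful of years the target drops below what physics permits on that route.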

Reason #2: It’s not just about link speeds

Technology improvements decrease the time it takes to squirt a packet over a transmission link. However, when packets contend for that link, there is a delay whilst they wait in queues, which can easily offset any improvements in technology. Indeed, networks are changing structurally, making them more sensitive to contention delay. One reason is that the ratio between the capacity of the edge and core is changing.

For example, in the past it might have taken a thousand dial-up modem users offering a typical load to saturate their shared backhaul. Today, a one gigabit home fibre run may sit on a shared one gigabit backhaul, which means a single user can easily saturate it with a single device running a single application. Wireless technologies like beam-forming also work to increase contention on mobile networks, by allowing more users to operate concurrently on a single piece of backhaul. Indeed, we are moving from a world where it took multiple handsets to saturate the backhaul of one cell, to one where a single handset may be able to saturate the backhaul for multiple cells – simultaneously!
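A minimal sketch makes the point, using the textbook M/M/1 queue as a stand-in for a contended link (an idealised assumption; real traffic is burstier, which only makes the queues behave worse):

```python
# Sketch: contention delay on a contended link, modelled as an M/M/1 queue.
# Mean time in system W = S / (1 - rho), where S is the packet serialisation
# time and rho is the link utilisation. All figures are illustrative.

PACKET_BITS = 1500 * 8

def mean_delay_us(link_bps: float, utilisation: float) -> float:
    serialisation_us = PACKET_BITS / link_bps * 1e6
    return serialisation_us / (1 - utilisation)

for link_bps in (10e6, 1e9, 10e9):            # 10 Mb/s, 1 Gb/s, 10 Gb/s
    for rho in (0.5, 0.9, 0.99):
        print(f"{link_bps / 1e6:7.0f} Mb/s  rho={rho:.2f}  "
              f"mean delay = {mean_delay_us(link_bps, rho):9.1f} us  "
              f"(x{1 / (1 - rho):.0f} the uncontended time)")
```

The multiplier 1/(1 - rho) does not depend on the link speed at all: faster opto-electronics shrink the serialisation term, but once a single user or device can push utilisation towards one, contention delay dominates whatever the technology.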

There is no technological ‘get out of jail free’ card for contention effects, and no exponential technology curve to ride.

Reason #3: Demand is not fixed

When we increase supply in a broadband network, demand automatically increases to fill it. That’s the nature of TCP/IP and modern (adaptive) applications: they aggressively seize whatever resources are available. If the network is “best effort”, the applications are “worst desire”. Hence improvements in technology don’t automatically result in corresponding improvements in application performance.
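A toy model of TCP-style additive-increase/multiplicative-decrease behaviour illustrates the point (this is a deliberately crude sketch, not a faithful TCP implementation):

```python
# Toy sketch of additive-increase / multiplicative-decrease (AIMD): the sender
# keeps growing its congestion window until the bottleneck overflows, whatever
# the bottleneck capacity happens to be.

def aimd(capacity_pkts: int, rounds: int = 5000) -> list[int]:
    cwnd, carried = 1, []
    for _ in range(rounds):
        carried.append(min(cwnd, capacity_pkts))   # what the link actually carries
        if cwnd > capacity_pkts:                   # loss: the bottleneck queue overflowed
            cwnd = max(1, cwnd // 2)               # multiplicative decrease
        else:
            cwnd += 1                              # additive increase
    return carried

for capacity in (10, 100, 1000):
    carried = aimd(capacity)
    avg = sum(carried[-1000:]) / 1000
    print(f"capacity={capacity:5d} pkts/RTT  steady-state load ~ {avg:.0f} pkts/RTT")
```

Whatever capacity is offered, the senders probe until they find the new ceiling; the extra supply is consumed rather than turned into headroom.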

Indeed, in some cases adding more supply can make things worse – either by over-saturating the contention point, or moving it around. This isn’t a new phenomenon: data centre architects have long known that adding more CPUs to an I/O-bound server can make performance regress.
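The bottleneck-moving part of that logic fits in a couple of lines: the system only goes as fast as its tightest constraint, so adding capacity elsewhere buys nothing (the figures below are invented for illustration):

```python
# Sketch: adding CPUs to an I/O-bound server. Throughput is capped by the
# tightest constraint, so extra capacity elsewhere merely relocates the
# bottleneck. All figures are invented for illustration.

def throughput_rps(cpus: int, per_cpu_rps: float, io_limit_rps: float) -> float:
    return min(cpus * per_cpu_rps, io_limit_rps)

PER_CPU_RPS = 1_000.0    # assumed per-CPU processing rate
IO_LIMIT_RPS = 4_000.0   # assumed ceiling of the storage tier

for cpus in (2, 4, 8, 16):
    print(f"{cpus:2d} CPUs -> {throughput_rps(cpus, PER_CPU_RPS, IO_LIMIT_RPS):6.0f} req/s")
# Beyond 4 CPUs the bottleneck has simply moved to I/O.
```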

Reason #4: Demand is not just for volume

Computation can be measured by the number of logical operations performed, which is a simple scalar. Data networking is different: it requires latency and packet loss that are low enough – and that stay sufficiently steady – for applications to work. This ‘steadiness’ is called stationarity, and it is a statistical property that all applications rely on. When you lose stationarity, performance falters, and eventually applications fail.
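One crude way to see what stationarity means in practice is to compare the mean and spread of latency over successive windows of measurements. The sketch below is only an illustration; the window size and tolerance are arbitrary assumptions:

```python
# Sketch: a crude check for (non-)stationarity in latency samples, by asking
# whether the windowed mean and variance stay within a band. The window size
# and tolerance are arbitrary assumptions for illustration.
from statistics import mean, pvariance

def looks_stationary(samples_ms: list[float], window: int = 50,
                     tolerance: float = 0.25) -> bool:
    windows = [samples_ms[i:i + window]
               for i in range(0, len(samples_ms) - window + 1, window)]
    def drift(series: list[float]) -> float:
        lo, hi = min(series), max(series)
        return (hi - lo) / hi if hi else 0.0
    return (drift([mean(w) for w in windows]) < tolerance and
            drift([pvariance(w) for w in windows]) < tolerance)

steady   = [20 + (i % 5) * 0.5 for i in range(500)]                   # level and spread hold still
drifting = [20 + i * 0.05 + (i % 7) * (i / 100) for i in range(500)]  # both wander upwards

print(looks_stationary(steady))    # True: an application can rely on this
print(looks_stationary(drifting))  # False: the statistical 'contract' has broken
```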

It seems that many in the Internet community at large are waking up in the middle of the night screaming “non-stationarity!”, albeit in various technical dialects. Correspondingly, the IETF is kicking off a major working group right now with the aim of addressing non-stationarity issues due to the Internet’s (problematic) architecture. The resource crisis definitely isn’t bandwidth!

Hence the resource we are trying to create isn’t some simple scalar with a hyper-growth curve. We also need variance to stay bounded, and there is no Moore’s Law-like technology curve driving variance down. Indeed, growing demand acts to destroy the stationarity of statistically-multiplexed networks, and it does so ever earlier in the life cycle of each new generation of access technology.

Reason #5: Physics is not on your side

Even increasing link speed isn’t an endless process. As the head of Bell Labs Research says in Scientific American:

We know there are certain limits that Mother Nature gives us—only so much information you can transmit over certain communications channels. That phenomenon is called the nonlinear Shannon limit. … That tells us there’s a fundamental roadblock here. There is no way we can stretch this limit, just as we cannot increase the speed of light.

Both fixed and mobile networks are getting (very) close to this limit. We can still improve other bottlenecks in the system, such as switching speed or routing table lookup efficiency, but there are severely diminishing returns ahead.
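The quote refers to the nonlinear Shannon limit for optical fibre, which involves rather more machinery, but the classical Shannon–Hartley formula already shows the shape of the problem: capacity grows only logarithmically with signal power. A quick illustration (the bandwidth figure is an assumption):

```python
# Sketch: the classical Shannon-Hartley capacity C = B * log2(1 + SNR).
# (The 'nonlinear Shannon limit' for fibre in the quote is more involved, but
# the logarithm already shows the diminishing returns on brute-force power.)
from math import log2

def capacity_gbps(bandwidth_ghz: float, snr_linear: float) -> float:
    return bandwidth_ghz * log2(1 + snr_linear)

BANDWIDTH_GHZ = 50.0   # an illustrative amount of usable spectrum

for snr_db in (10, 20, 30, 40):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db:2d} dB -> ~{capacity_gbps(BANDWIDTH_GHZ, snr):4.0f} Gb/s")
# Every extra 10 dB is ten times the signal power, yet each step adds only
# about 166 Gb/s (B * log2(10)) - and in real fibre, nonlinear effects mean
# you cannot even keep raising the power.
```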

The bottom line

Moore’s Law is driving hyper-growth in volumetric application demand, but supply does not enjoy a corresponding hyper-growth decline in cost. That is because volumetric capacity is not the only concern – latency matters too, and latency is constrained both by the speed of light and by the schedulability limits of the network.

There is no magic technology fix through increasing link speeds.

Application performance is increasingly dominated by latency, not bandwidth. That is why Google employs a person as a “Round trip time Reduction Ranger”. His job is not to reduce the speed of light, or to cause technology miracles to occur. What he does is chop up and rearrange data flows, trading (self-contention) delay around, in order to get better overall application outcomes.

Similarly, the future of telecoms in general is firmly centred on managing latency due to contention between flows created by competing applications and users. This means scheduling resources appropriately to match supply and demand. That in turn allocates the contention delay to the flows that can best withstand its effects.
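One very simple way to picture ‘allocating the contention delay’ is a priority scheduler in which latency-sensitive flows are served first and bulk flows absorb the queueing. This is only an illustrative sketch of the idea, not a recommendation of strict priority as the mechanism (real schedulers must, for instance, also protect bulk traffic from starvation):

```python
# Sketch: a strict-priority link scheduler. Latency-sensitive flows jump the
# queue; bulk flows absorb the contention delay they can tolerate.
# Illustrative only - real schedulers are considerably more subtle.
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class Packet:
    priority: int                      # 0 = latency-sensitive, 1 = bulk
    seq: int                           # tie-breaker keeps FIFO order per class
    flow: str = field(compare=False)

class PriorityLink:
    def __init__(self) -> None:
        self._queue: list[Packet] = []
        self._seq = count()

    def enqueue(self, flow: str, latency_sensitive: bool) -> None:
        priority = 0 if latency_sensitive else 1
        heapq.heappush(self._queue, Packet(priority, next(self._seq), flow))

    def transmit(self) -> str | None:
        return heapq.heappop(self._queue).flow if self._queue else None

link = PriorityLink()
link.enqueue("backup", latency_sensitive=False)
link.enqueue("voice", latency_sensitive=True)
link.enqueue("backup", latency_sensitive=False)
link.enqueue("video-call", latency_sensitive=True)

print([link.transmit() for _ in range(4)])
# ['voice', 'video-call', 'backup', 'backup'] - the waiting lands on the bulk flows
```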

To believe otherwise is just a dumb pipe dream.

To keep up to date with the latest fresh thinking on telecommunication, please sign up for the Geddes newsletter