How to end the endless blame game?

Our struggle to build engineered broadband services has resulted in an endless blame game between industry players. How to blow the final whistle?

Everyone has experienced excessive web page load times, failed Internet calls, and streaming videos that buffer endlessly. Virtual working is already the widespread norm, yet enterprises struggle to buy affordable, high-quality access to unified communications for home offices and remote working.

Virtual reality beckons as the “new phone/conference call”, yet we have a capability chasm to cross to get there. We cannot even deliver reliable basic two-way voice and video everywhere it is needed.

We are collectively becoming dependent on smart sensors and their intelligent control systems for energy, transport and healthcare. Blockchain is revolutionising how we build those solutions, and whole new industries are waiting to be born. The demands for service reliability and predictable application performance are rising fast.

This drives the need for consistent quality in the component inputs to these higher-order systems, an issue already widely recognised in the communications industry. Industrial users demand SLAs, and if telcos won’t supply them, then a new industry will have to be invented to replace telecoms.

Whilst improved supply technologies like fibre and 5G are necessary, they are not on their own sufficient to meet future demand. If nothing else, they form only part of the whole digital supply chain, so cannot solve all quality problems.

We will continue to struggle with “microbursts”, “bufferbloat” and other network quality instabilities that damage the user experience. Merely increasing quantity and peak access speed does not fix this, and indeed can make quality worse.

At present, we have a profusion of workarounds for the underlying lack of digital experience quality management:

  • On the network supply side, we have over-provisioning, Active Queue Management (AQM), traffic management and data caps.
  • For enterprise demand, we see SD-WAN and WAN optimisation.
  • On the cloud application front, Google and others continually tweak browser protocols and codecs, or build overlay networks, to grab resources more aggressively (at the expense of others).

Whilst some of these workarounds may form multi-billion dollar markets, they fail to address the root issue. This leaves a pervasive mismatch between demand and supply, which creates an endless blame game: enterprise IT versus the network team, consumer versus ISP, telco versus vendor, device maker versus network operator, network operator versus cloud provider, regulator versus incumbent telco, and the general public versus the industry as a whole. Adding more workarounds makes this situation worse, not better.

The lights are green in the network operations centre, but the application is unusable, so the user sees red. The converse can also be true. We lack the “lenses” to see what’s going on in the user’s eyes, and the “levers” to deliver a truly engineered experience.

An example of this is SD-WAN for enterprises, which bonds and blends different bearer technologies and steers packets according to need. It can certainly improve the experience being delivered: its vendors are not snake oil salesmen.
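
To make the steering idea concrete, here is a minimal sketch in Python of the kind of per-class path selection an SD-WAN edge might perform. The traffic classes, thresholds and metrics are invented purely for illustration; no vendor’s actual policy engine works exactly this way.

    from dataclasses import dataclass

    @dataclass
    class PathStats:
        name: str          # e.g. "mpls", "broadband", "lte"
        latency_ms: float  # recent round-trip delay
        loss_pct: float    # recent packet loss

    # Per-class quality ceilings -- invented numbers, purely for illustration.
    REQUIREMENTS = {
        "voice": {"latency_ms": 150.0, "loss_pct": 1.0},
        "bulk":  {"latency_ms": 500.0, "loss_pct": 5.0},
    }

    def pick_path(traffic_class, paths):
        """Steer a class onto the lowest-latency path that meets its ceilings,
        falling back to the least-bad path if none qualifies."""
        req = REQUIREMENTS[traffic_class]
        ok = [p for p in paths
              if p.latency_ms <= req["latency_ms"] and p.loss_pct <= req["loss_pct"]]
        candidates = ok or paths  # fall back rather than black-hole the traffic
        return min(candidates, key=lambda p: (p.latency_ms, p.loss_pct))

    paths = [PathStats("mpls", 40.0, 0.1),
             PathStats("broadband", 25.0, 2.5),
             PathStats("lte", 90.0, 0.8)]
    print(pick_path("voice", paths).name)  # "mpls": broadband is faster but too lossy
    print(pick_path("bulk", paths).name)   # "broadband": lowest latency, within limits

Note the fall-back branch: when no path meets the ceiling, the box silently picks the least-bad one, and it is in exactly that silent degradation that the blame game begins.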

However, when the experience is poor, whom should the enterprise call? And if they call the telco to complain about their experience, how can the telco begin to isolate the problem and know whether it is at fault? The real cost of buying an SD-WAN solution is not the up-front purchase price, but the internalised and externalised costs of management and support.

There are hidden costs and risks to all of these workarounds. We want scientific models of cause and effect, but today we are forced to rely on weak and transient correlations as our guide. Making the supply chain more complex to manage merely invites ever-growing user dissatisfaction.

Somehow, we must “up our game” in networking to a “fit-for-purpose” approach if we wish to deliver engineered experiences. Public tolerance for our “best effort” excuses is wearing thin, and we will find ourselves under growing political and regulatory pressure to get our engineering house in order if we are to remain credible and retain legitimacy.

The telecoms industry dream is a general-purpose network supply that is configured to meet specific user application demands with a known performance “safety margin”. For this to happen, we need clear user experience “lenses” joined to precision configuration “levers” via tight management system “control wires”.
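
As a thought experiment, the “lenses, levers and control wires” idea can be expressed as a simple closed control loop. The sketch below is hypothetical Python: the metric, target, lever and safety margin are all invented for illustration, not a description of any real management system.

    import random

    TARGET_P99_MS = 100.0   # the performance the application demands
    SAFETY_MARGIN = 0.8     # engineer to 80% of the limit, not to the edge

    def measure_experience():
        """Lens: observe a user-facing quality metric (simulated here)."""
        return random.uniform(50.0, 150.0)  # pretend p99 delay in ms

    def control_step(scheduler_weight):
        """Control wire: compare the lens reading against the target and,
        if the safety margin is being eaten into, move the lever."""
        observed_p99 = measure_experience()
        if observed_p99 > TARGET_P99_MS * SAFETY_MARGIN:
            scheduler_weight = min(1.0, scheduler_weight * 1.1)  # lever: more resource
        return scheduler_weight

    weight = 0.5
    for _ in range(10):
        weight = control_step(weight)
    print(f"scheduler weight after 10 control steps: {weight:.2f}")

The point of the loop is the safety margin: the system acts before the user’s limit is breached, rather than reacting after the complaint call arrives.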

This in turn requires us to take the quality management and safety engineering methods we take for granted in other technical domains and apply them to networked access to cloud applications. Better technology for lenses and network levers alone cannot solve the problem: it needs a “management system upgrade” to cut through the complexity and know how to set the controls.

Indeed, the core issue is not technology at all: it is us humans. If you believe that engineered quality is possible, then you have a chance of achieving it, and of providing a tightly managed service quality to be proud of. If, on the other hand, you do not, then you will forever be stuck with today’s embarrassing level of uncontrolled quality variation, and the woeful user experience that inevitably results.

Do you believe network performance engineering is possible, or not? We already know the technical answer via an existence proof. So perhaps you should act on that knowledge when forming your forward-looking commercial strategy! After all, you don’t want to be to blame for our industry’s shame, do you?

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.