Who will step up to solve interoperable network quality?

The telecoms industry has many bodies who could, in principle, step up and help solve our core “missing units” problem. But who might actually do it?

As I have previously written a hundred times, the telecoms industry has a serious problem with its foundations. What we sell is (the absence of) latency, but we don’t have an agreed “universal unit” to contract service latency, or to internally manage the network resource to deliver it. Water has litres, electricity has kilowatt-hours, and we have… an embarrassing gap to fill.

The scientific answer is already baked into the universe we inhabit. You need to break latency into its basic components (propagation over distance, serialisation of the packet, and buffering delay), express these as probabilities, and (de)convolve the probability functions along the supply chain. It’s not hard, and ought to be on undergraduate computer science courses (but isn’t).
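To give a flavour of the convolution step: if each hop’s delay is modelled as a probability distribution, the end-to-end delay distribution is simply the convolution of the per-hop distributions. A minimal sketch, using made-up per-hop delay values discretised into 1 ms bins (the numbers are illustrative, not from any real network):

```python
import numpy as np

# Hypothetical per-hop delay distributions, discretised into 1 ms bins.
# Index i holds the probability that the hop adds i milliseconds of delay.
hop_a = np.array([0.0, 0.7, 0.2, 0.1])        # mostly 1 ms, tail out to 3 ms
hop_b = np.array([0.0, 0.0, 0.5, 0.3, 0.2])   # between 2 ms and 4 ms

# End-to-end delay across both hops is the convolution of the two PMFs.
end_to_end = np.convolve(hop_a, hop_b)

# Probability the path delivers within a 5 ms budget: sum bins 0..5.
p_within_budget = end_to_end[:6].sum()
print(f"P(delay <= 5 ms) = {p_within_budget:.2f}")  # 0.91 for these numbers
```

Deconvolution runs the same logic in reverse: given an end-to-end quality target and the distributions of some hops, you can derive the delay “budget” the remaining hops must honour, which is exactly the unit a supply chain contract needs.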

The human organisational answer, however, has yet to be found. As an industry, we need to confess to ourselves that we’ve let the technology stray well ahead of the conceptual foundations. The result is the nonsense excuse of “best effort” to cover for our collective failure to actually engineer performance. There is unconscious shame attached to this failure, which itself inhibits progress.

All we have to do is align the “timetables” of our “packet trains”. The potential service value uplift and cost efficiency benefits are enormous. So, who needs to step up and act? That’s a tricky one. Noting that I am not a “standards person”, here’s a quick (and possibly inaccurate) survey of some of the possibilities.

The IEEE has the right ethos and credibility, but this is a distributed computing problem, not an “electrical” (or radio) one, so it arguably falls beyond their remit. They could well be involved and provide “brand endorsement” for whatever emerges.

TMForum is an under-appreciated industry body that does great work standardising telecoms business processes. They could use an industry quality standard as an input to reengineer those processes, but don’t have the capability or desire to do the basic science and engineering. So whilst unlikely to be instigators, they are key stakeholders.

ETSI has the track record and authority to act. They are also involved in the right problems, like 5G slicing and low-latency services. Can they cope with the cultural tension between “telcoheads” (radio and transmission are primary) and “netheads” (it’s fundamentally a computing problem)?

ONAP is taking a much-needed initiative to bring together traditional standards development processes with open source software. They could well be the owner and repository for reference implementations of standardised network measurement and quality management tools for SDN. The idea of something led by “implementation pull” and not “standards push” does appeal.

The GSMA is, well, the GSMA. When not milking vendors for conference exhibition fees, they do good work in many areas. Their history around things like roaming and VoLTE means they are legitimate actors in creating interoperability. However, the GSMA’s drive is fundamentally commercial, so they don’t see it as their problem to solve basic science issues. There’s also the minor issue of dealing with past screw-ups, like IPX’s quality standard not actually specifying a working phone call. Maybe when “quality SLAs” drive the industry economics they will get more interested?

3GPP in theory should be on top of this problem, as they’ve given many years of thought to radio resource management and scheduling. It would require admitting that various aspects of systems performance engineering of 3G, 4G and (likely) 5G fall below what is desired or achievable, which is a loss of face. Issues of organisation design and psychology may dominate those of science and technology.

Broadband Forum has done a sterling job with TR-069 to define operational standards for telecoms equipment “out there” in the field. It’s an intriguing thought that the science adoption could be driven, say, by a pragmatic focus on “boring” things like fault isolation. A horse with long odds, but the muscle to win the race?

In principle the ITU has the authority to work on this kind of problem. It’s a genuine “universal” one, aligns to their history of managing trusted interop at (inter)national boundaries, and many of the stakeholders who might benefit most are in “classic” ITU territory (like Africa). The globalist UN context is problematic in a post-Trump world, since the USA is unlikely to play ball. But if the ITU didn’t exist, we might still have to invent it. Ultimately, interoperability of digital supply chains is an issue that governments will care about.

The IETF doesn’t really do engineering (even if participants think they do). Abandon all hope here, as far as I am concerned. That may be harsh, but the track record isn’t great. A scientific interoperable quality standard isn’t a “tinker and explore” kind of problem. But there are useful things to rescue from the rubble, like the IPPM and LMAP initiatives for performance monitoring.

Professional bodies like the IET have a role to play, and have taken the lead on issues such as demand-attentive networks. There is also a “protecting the public and industry professionals” issue, as our failure to manage the “safety margin” of what we create results in an ethical and financial liability. Our engineering legitimacy is on the line: as a society, we don’t allow “best effort” bridges or aircraft.

There’s also a role for policy makers (like DCMS in the UK), enlightened regulators (FCC, Ofcom… but not BEREC, who don’t grok science), consumer advocacy bodies (e.g. ACCAN in Australia with NBN), equipment vendors (who feel brave enough to tell their customers they forgot to agree a unit of supply and demand), cloud giants (Amazon, Oracle, Microsoft and Apple, are you listening?), and telcos themselves (at least those whose R&D departments haven’t been lobotomised by the bean counters).

The bottom line is that this interoperable quality standards problem is so systemic and endemic it is “too big” for any one party to solve it alone. It affects every operational and business system in some way, every business process, every product, every financial model, every role, every input technology, and every institution. This rapidly becomes overwhelming for the average company manager tasked with solving some near-term problem.

Even if we know the technical answer, turning it into practice is going to take decades. Just as the automobile industry took 50+ years to adopt a quality management culture, so it will take telecoms and cloud a long time to morph. We have barely begun to mature our quality control beyond the semi-managed chaos of an “oil patch” under every router.

Then again, as the Chinese say, the best time to plant a tree is 20 years ago, and the second best is right now. Do get in touch if you feel the urge to act! I can give you the map, and act as a guide, so at least you can walk confidently in the right direction. Who knows, you might even find fellow travellers from some of these institutions heading the same way…
