The journey to broadband experience visibility and control

The broadband industry is struggling to gain visibility and control over the experience it delivers to customers. How do we bridge the gap between the ideal and the present reality, and chart a realistic path to upgrade our capability to define and deliver service quality?

In my last article I discussed how the telecoms industry faces a long-term task to fully incorporate quality into its product definitions and service delivery. The question this naturally raises is how to get there.

The path can be broadly characterised by two mind-sets: the idealists, who see the end game; and the pragmatists, who are anchored in the present reality. Resolving the tension between these two offers a productive route to practical progress.

The ideal of visibility and control

Every broadband user has an objective experience that results from the performance of the applications they use. This in turn produces a subjective response of delight or disappointment.

To become a customer-centric business you need to understand both the objective experience and the subjective response, at some level of granularity, across your customer base. To whom are you over-delivering resources relative to their willingness to pay? Who is being unduly disappointed, and thereby put at risk of churn? Why is this so, and what choices do you have as a service provider to do things differently?

Broadband service providers make resource decisions that affect the overall satisfaction of their customer base. At long timescales they buy capacity, at shorter ones they light up that capacity and change routes, and at the shortest timescales they schedule packets. We need to have visibility of the impact of any resource allocation decision if we are to make good choices.

In our ideal world we could take network measurements that would give us a very strong proxy for the performance of the applications users care about, and we would understand how that performance translates into an economic response: to buy the service, or not. We would also be able to control the resources of the network to optimise the alignment of those resources with willingness to pay.

The ideal exists, but hasn’t spread very far yet

Over the last few years I have documented the new science of quality attenuation, and the ∆Q framework that provides the mathematics to quantify network performance and model the user experience. The framework is supported by breakthrough tools and technologies for measurement and management.
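
For readers new to the framework, its core object can be stated in one line. Quality attenuation (∆Q) is the delay and loss a network imposes on packets, treated as a probability distribution. To a first approximation (my summary here simplifies the published treatment), it decomposes as:

    ∆Q = ∆Q|G ⊕ ∆Q|S ⊕ ∆Q|V

where ∆Q|G is the geographic component (propagation delay, fixed for a given path), ∆Q|S is the serialisation component (varying with packet size and link speed), ∆Q|V is the variable component (queueing caused by contention with other traffic), and ⊕ denotes convolution of the component distributions. Because the components are separable, each can be measured and budgeted independently.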

These collectively implement something extremely close to the “ideal” in terms of both visibility and control. The measurement captures the instantaneous properties of the network, and the user experience is made up of continuously passing instants. The new mechanisms for control can work backwards from any desired (mathematically feasible) experience outcome and implement it under software control.
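
To make "working backwards from a desired outcome" concrete, here is a minimal sketch in Python. The percentile bounds and delay samples are invented for illustration; in the framework proper such an outcome would be captured as a quantitative timeliness agreement (QTA):

    # Minimal sketch: does a measured delay distribution satisfy an
    # outcome expressed as percentile bounds? Numbers are invented;
    # a real QTA would be derived from application modelling.

    def meets_outcome(delay_samples_ms, bounds):
        """bounds: list of (percentile, max_delay_ms) pairs."""
        samples = sorted(delay_samples_ms)
        n = len(samples)
        for percentile, max_delay_ms in bounds:
            # Empirical delay at this percentile.
            idx = min(n - 1, int(n * percentile / 100))
            if samples[idx] > max_delay_ms:
                return False  # Outcome not met on this path.
        return True

    # Hypothetical outcome for an interactive application: half of
    # packets within 20 ms, 99% within 50 ms, 99.9% within 80 ms.
    outcome = [(50, 20.0), (99, 50.0), (99.9, 80.0)]
    measured = [12.1, 13.8, 15.2, 16.9, 18.5, 19.0, 24.3, 31.7, 45.0, 49.0]
    print(meets_outcome(measured, outcome))  # True

If the check fails, the same bounds indicate which part of the distribution must improve, which is one way to see how outcome-driven control becomes tractable.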

This framework has been developed and deployed in multiple operators as a high-end consulting activity. As such, it is an applied R&D programme, and the practitioner pool is limited to a small number of individuals. The control mechanisms have been deployed in test networks, but not at full scale in commercial operation.

The problem is that to spread and scale this framework, the supporting tools need to be adopted by a much wider community of users across the whole ecosystem. This in turn requires productisation and dissemination of the tool chain, and adoption and endorsement of the underlying science by mainstream standards bodies.

Both take considerable time and money. The current reality therefore continues to fall well short of our demonstrated ability to execute against the ideal. It is likely to stay that way unless something dramatic and unexpected happens.

The pragmatic: pervasive, but lacking direction

In the meantime, we have a broadband industry that wants to upgrade the visibility and control it has over the customer experience, but lacks a clear path to do so. There are dozens of vendors offering network and application performance monitoring, cost and QoE optimisation, and traffic management systems.

Choosing between them is hard, as it is difficult to verify and validate their technical claims. There are plenty of network capex liposuction surgeons promising you big savings, but they have poor eyesight and shaky hands, meaning you can find yourself sucking away the customer experience faster than you save money.

What is happening is that we are accumulating ever more network measurements at finer granularity in space and time. There are systems to correlate this network data with other user and customer experience information (e.g. application logs, net promoter scores). This "big data" approach seeks control by identifying what are hoped to be persistent relationships between the network-centric and user-centric data sets.

This results in network management overwhelm: whilst there is plenty of data, and the analytic ability to turn it into information, we have little true insight. That is because the ability to abstract out only what really matters is weak, and the predictive value of the models that result is relatively low.
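
A toy example (with made-up numbers) shows why. Two delay series with the same mean look identical to an average-based dashboard, yet deliver very different experiences to an interactive application:

    # Toy illustration: identical means, very different tails.
    from statistics import mean

    steady = [20.0] * 100                # consistently 20 ms
    bursty = [15.0] * 95 + [115.0] * 5   # mostly fast, 5% spikes

    for name, delays in (("steady", steady), ("bursty", bursty)):
        p99 = sorted(delays)[int(len(delays) * 0.99)]
        print(f"{name}: mean={mean(delays):.1f} ms, p99={p99:.1f} ms")

    # steady: mean=20.0 ms, p99=20.0 ms
    # bursty: mean=20.0 ms, p99=115.0 ms

Models trained on averaged network data cannot distinguish these two cases, so their predictions of user experience are unreliable precisely where it matters most: in the tail.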

This means that the industry is often “flying blind” with respect to the experience it delivers, and the impact of the choices it makes over allocating network resources. That is not a recipe for long-term success if we are to take control over the quality of the service and responsibility for the experience we deliver.

Bridging the ideal to the pragmatic

What is missing is a common industry approach to evolving our quality management practices. This is not a problem confined to any one operator, vendor or market; it crosses the whole digital supply chain.

I see the need for the following:

  • For key industry bodies (like TMForum and IEEE) to step up and identify the “ideal” and describe what the destination of complete visibility and control looks like. What is possible, how can it be achieved, and what are the steps to get there?
  • For a framework to describe the management system to deliver the ideal, and a capability maturity model to help people locate their current reality and take appropriate next steps on the journey (a sketch of such a model follows this list). This documents the processes and practices required at each stage of development.
  • For analysts to be able to locate vendors and products within that framework, so that technologies are adopted at the appropriate level of maturity for an organisation and its ambitions.
  • For a technological capability to benchmark current measurement and management techniques to understand the limits of their application. We can then safely exploit what we already have, and take capability upgrades to the next level of visibility and control as and when necessary.
  • The way to bring the measurement and management “ideal” out of the lab into the mainstream is initially to disseminate it among the R&D community, and incorporate it into product development and evaluation processes in the lab. Only later can it become part of the operational tools used to measure and optimise deployed systems in live operation.
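
To make the maturity model idea tangible, here is a minimal sketch in Python. The level names borrow from the classic CMM ladder; the actual levels, criteria and associated practices would be for the industry bodies above to define:

    # Minimal sketch of a capability maturity ladder for quality
    # management. Level names borrow from the classic CMM; the
    # annotations are invented placeholders, not an agreed definition.
    from enum import IntEnum

    class QualityMaturity(IntEnum):
        INITIAL = 1      # ad hoc; no systematic experience measurement
        MANAGED = 2      # basic monitoring; reactive fault handling
        DEFINED = 3      # quality built into product definitions
        QUANTIFIED = 4   # experience outcomes measured and predicted
        OPTIMISING = 5   # resources steered to outcomes under software control

    def next_step(current: QualityMaturity) -> QualityMaturity:
        """One level at a time: no infeasible jumps of capability."""
        return QualityMaturity(min(current + 1, QualityMaturity.OPTIMISING))

    print(next_step(QualityMaturity.MANAGED))  # QualityMaturity.DEFINED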

Multiple consulting engagements with "big name" equipment vendors and operators have taught me that the industry is somewhat lost when it comes to managing quality. The mainstream consulting organisations lack the necessary scientific understanding and practical experience to help them. This opens up an opportunity for boutique consultancies like my own to fill the gap. My job is to define the "ideal" destination, and describe the journey to complete visibility and control, without forcing people to engage in infeasible jumps of capability on the way.

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.