Demand Attentive Networks

The Institution of Engineering and Technology (IET) has recently issued an important report on Demand Attentive Networks (DAN). (For details and to download a free copy, click here.) I believe this report reflects a major telecoms industry structural shift, from a supply-push to a demand-pull model, which may in turn trigger a significant industry restructuring.

I have interviewed one of the report authors, Gavin Young, who is Head of Fixed Access Competence Centre at Vodafone UK. Gavin is acting in his personal IET capacity for the purposes of this interview, and thus this does not represent any statement on behalf of Vodafone.

MG: What is the role of the IET, and its interest in this area?

GY: The IET evolved from the Institution of Electrical Engineers, and is the largest professional technology membership body in Europe. It has over 160,000 members based in 127 countries. It has policy panels that address a number of concerns, including telecommunications. The IET is often approached by government departments for impartial expert advice on matters of national importance, such as the upgrade of private police radio systems versus use of public mobile networks.

In the case of DAN, the IET wishes to move the broadband agenda on from an overly simplistic focus on access speed. There is still work to be done in rural areas with respect to speed, and the final 5%. However, there are other equally pressing requirements on our communications infrastructure that need to be addressed to meet society’s evolving needs.

This requires looking for greater efficiencies and value by working across industry silos: fixed, mobile, devices, etc. The path that brought us here, and continued local optimisation within these silos, will not take us to where we would like to go. It is naive to believe the market acting alone will deliver key societal goals in the absence of appropriate policy and regulatory structures.

How is the context of the industry changing, in terms of supply and demand?

It is no secret that demand for capacity has been growing rapidly, both for video download and user-generated content upload. From a technical perspective, we can see how to deliver on these needs with FTTx and 4G.

Yet as applications and services move to the cloud, we are ever-more dependent upon the resilience and performance of the broadband service. Indeed, broadband has become critical social and economic infrastructure: to work at home, VPNs to the office when on the road, and even for school work.

However, the cost of failure of the service falls on users, and this is not adequately reflected in the products on offer today.

As we move into next-generation access regimes, consolidation is reducing retail choice. There is also consolidation of the underlying infrastructure supply, for instance with mast and network sharing. Whilst smaller and innovative niche players remain, over time scale advantages are driving us towards a single-supplier monoculture. This introduces new resilience risks, and also fails to meet national goals for choice and service.

Thus in future, we will need a different combination of fixed and mobile broadband assets working in tandem to deliver high levels of application service continuity. Whilst we have some partial solutions today, such as bonding or manual cut-over, these are only interim steps on a longer journey.

In what way do users experience a failure of supply to meet their demand today, and in the near future?

When you look beyond basic connectivity speed, the symptoms of the problem show up in three ways.

The first is the structural latency that arises when you combine all the elements of the infrastructure, especially with shared access media like wireless or cable. This makes network impairment visible to users.

The second issue is the lack of schedulability end-to-end to deliver good application outcomes. People experience a “busy hour syndrome”, where the access line is fast enough, and the home network works OK, but there is an inability to cope with multiple concurrent users and uses. Applications do not behave as desired, despite the necessary capacity being there.

The third issue is the general availability level of the broadband service, and the speed to resolve failures. Examples range from DSL lines losing sync, to resolution of more major breaks and failures.

All of the above become less acceptable as networks become more mission critical.

If I were a CxO of a broadband service provider today, I would probably think that I am very attentive to demand! After all, I have megabits on offer by the bucket-load. What beliefs and actions are causing providers to fall short of the DAN vision?

The problem is that the industry’s marketing messages have not matured much over the last fifteen years. We have come a long way from dial-up and ISDN, and in the past speed was the major problem. The Application Service Provider model has been around for many years, and the cloud now makes that real, since broadband access has plugged the speed gap.

However, just throwing further capacity at the above performance and reliability problems doesn’t fix them. We need to look at the complete experience, which is not just about peak speed. It includes a long chain from home network, to access, to transit, to data centre.

At each interface, we have a built-in safety margin. There are efficiencies to realise along the whole of that chain by eliminating waste. Unfortunately, many network operators are using techniques like 15-minute usage counters, so they miss all the detail of the real customer experience. That means they often have too much safety margin, or too little.

That is because averaged or peak network measures are poorly related to the actual application outcomes that the customer experiences. Whilst that gets you a green tick in a network reporting system, it doesn’t mean we’ve done our job and actually made the customer happy.
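To illustrate why coarse counters mislead, here is a minimal sketch (the traffic figures are invented for illustration, not taken from the report): a link whose 15-minute average utilisation looks comfortably healthy, even though a meaningful fraction of 100 ms intervals are fully saturated, which is exactly when queues build and applications suffer.

```python
import random

random.seed(42)  # reproducible illustration

LINK_CAPACITY_MBPS = 100.0
SAMPLE_MS = 100                           # fine-grained sample interval
SAMPLES = 15 * 60 * 1000 // SAMPLE_MS     # one 15-minute counter window

# Synthetic load: mostly light, with short saturating bursts (invented numbers).
samples = []
for _ in range(SAMPLES):
    if random.random() < 0.02:            # ~2% of intervals carry a burst
        samples.append(LINK_CAPACITY_MBPS)     # saturated: queues build, apps suffer
    else:
        samples.append(random.uniform(5.0, 20.0))

avg_util = sum(samples) / len(samples) / LINK_CAPACITY_MBPS
saturated = sum(1 for s in samples if s >= LINK_CAPACITY_MBPS) / len(samples)

print(f"15-minute average utilisation: {avg_util:.0%}")   # looks comfortably low
print(f"Saturated 100 ms intervals:    {saturated:.1%}")  # the pain the counter hides
```

The averaged figure suggests plenty of headroom, while the fine-grained view exposes the moments of congestion that actually determine the customer’s experience.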

What technical or economic forces will drive change by operators and their suppliers?

Capital budgets aren’t rising as fast as traffic demand, nor as fast as the over-provisioning used to hide the underlying poor scheduling choices. Investors are finding it unsustainable to just throw bandwidth at the problem.

There are limits imposed by physics, mathematics and economics, and we are hitting them. Some suppliers are clearly disappointing users already, with poor streaming video experiences. There is increasing churn for enterprise suppliers who don’t maintain the required application experience.

Moving to a demand attentive model is an industry-wide imperative, since we can’t endlessly throw capital resources at the network whilst simultaneously failing to deliver the performance and continuity of service that users desire.

What kinds of technology or infrastructure changes are required to bring a DAN world into being?

The first step is to realise that speed is necessary, but not sufficient. What we need to focus on is the user experience. The DAN paper mentions more than 50 technologies, but DAN is not about a technology or architecture. It’s a philosophy about how we create value. There is no secret ingredient; rather it is more about how you combine the technologies and infrastructure together end-to-end to meet the customers’ needs.

DAN thus suggests a different way of working, which cuts across today’s industry boundaries. We have lots of local optimisation, but the pieces don’t join up. Too often, we throw a new technology at a point problem, which just moves the customer experience failure around.

Indeed, the barriers often have nothing to do with the technology, but can be planning rules or regulatory structures that haven’t caught up, and thus inhibit how we can use it. For example, there have been challenges with planning rules around small cells. The UK government has removed barriers to deployment by regulating only what is truly necessary, such as their physical size.

We have a lot of work to do with civil engineering, at all scales from street furniture to grand projects like the HS2 high-speed rail link. All civil engineering projects are opportunities to leverage existing construction for new communications infrastructure. If you are digging up town centres, there should by default be a proactive plan to put in fibre ducts, bringing fibre closer to customers.

Infrastructure is a long-term game, with 20+ year planning cycles, and we need to plan for the kind of pervasively connected world ahead. We have seen this kind of approach in other countries like Sweden, and it is known to work. The UK has a chance to learn from their experience of raw supply creation, but to also create a richer demand-led model.

In practice, how will applications express demand, and how will demand-attentive networks react?

The starting point is to understand what degradation is acceptable end-to-end for each application, in terms of packet loss and delay. The network can then follow the customer, maintaining an assured experience.
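As a sketch of what “acceptable degradation per application” might look like (the application names and budget values below are hypothetical, chosen for illustration rather than drawn from the report), a network could hold per-application loss and delay budgets and check measured performance against them:

```python
# Hypothetical per-application budgets: maximum packet loss fraction and
# maximum one-way delay in milliseconds. Values are illustrative only.
BUDGETS = {
    "voip":      {"max_loss": 0.01, "max_delay_ms": 150},
    "streaming": {"max_loss": 0.02, "max_delay_ms": 500},
    "backup":    {"max_loss": 0.05, "max_delay_ms": 5000},
}

def within_budget(app, measured_loss, measured_delay_ms):
    """Check whether measured network performance keeps this application assured."""
    b = BUDGETS[app]
    return measured_loss <= b["max_loss"] and measured_delay_ms <= b["max_delay_ms"]

print(within_budget("voip", 0.005, 120))   # True: experience is assured
print(within_budget("voip", 0.005, 300))   # False: delay budget exceeded
```

The point of such budgets is that the network can then allocate and re-allocate resources to keep each application within its envelope, rather than chasing a single headline speed figure.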

This is done using NFV and SDN, where resources are allocated via a distributed and dynamic architecture, rather than a centralised and static one. As market demand evolves, the resources can follow the customer demand. For mobile, this also implies more MIMO and beam-steering to focus resources on where the user is. Self-organising networks are another set of techniques in the DAN toolkit. Taken together, these are all about building capacity and extracting its full value, rather than leaving it sterile.

These many technologies act by matching supply to demand over three timescales:

  1. Sub-second. Engineers see this as “QoS engineering” and schedulability. It means giving real-time traffic a different treatment from a background trickle top-up of smartphone or DVR content. To achieve this, you need to know what has to get to the user right now versus what can be time-shifted.
  2. 24 hour/diurnal. ISPs have been using “social engineering” and pricing plans, for example to drive traffic into the 2-6am quiet zone. There is a schedulability and analytics issue to be solved here. For example, commuters come home and load up tablets with content for their commute next morning, but do it in the evening during peak hour. The device needs to be able to auto top-up in off-peak, but with the assurance the content they want is there and the battery will still be charged up. This has to be made easy.
  3. Weeks/Months. We need to improve the way we launch new products, and SDN and NFV make a key play here. When developing new services, we don’t need to have service-specific hardware any more. Demand-attentiveness is about quickly responding to the market with new capabilities. That means we need to turn 18 month product development cycles into just weeks.
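The sub-second timescale above is essentially a scheduling decision. A minimal, illustrative sketch (not an actual operator implementation): a strict-priority queue that always transmits real-time traffic ahead of background trickle top-up content.

```python
from collections import deque

# Two queues: real-time traffic (must arrive now) and background content
# that can be time-shifted, e.g. overnight DVR or smartphone top-ups.
realtime = deque()
background = deque()

def enqueue(packet, is_realtime):
    (realtime if is_realtime else background).append(packet)

def dequeue():
    """Strict priority: real-time traffic always transmits first."""
    if realtime:
        return realtime.popleft()
    if background:
        return background.popleft()
    return None

enqueue("voip-frame-1", True)
enqueue("dvr-chunk-1", False)
enqueue("voip-frame-2", True)

order = [dequeue() for _ in range(3)]
print(order)  # ['voip-frame-1', 'voip-frame-2', 'dvr-chunk-1']
```

A production scheduler would also need to prevent starvation of background traffic and to know, as the text says, which content can be time-shifted; this sketch only shows the core prioritisation idea.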

This last example adds major new hazards. We need strong controls before anyone can reconfigure a network, since a lot can go wrong. Once those controls are in place, we can do a soft launch, try new ideas, and then scale the ones that work. Whilst the programmable network is full of new hazards, it also removes the risk, pain and hassle of today’s “lift and shift” between tin, where customers can (and often do) get dropped in the transition.

I am excited by the potential to progress on all three of these fronts, but there are many small steps to be taken, one at a time.

The report notably proposes an increase in network sharing and national roaming. What drives this recommendation?

Thinking ahead to 5G, the IET foresees the potential for more network sharing. We already see lots of sharing at the wholesale level, and joint ventures for network and mast sharing. Nobody would have dreamt of these, even quite recently. So where does this trend lead us?

With 5G, we know that there will be limited spectrum available, and government and regulators are talking about dynamic spectrum sharing. Some of the estimates for 5G spectrum use imply a much more radical approach than what is currently on the table.

Add to this the “white space” radio for broadband and machine-to-machine applications, and this suggests a dynamically allocated resource coordinated using centralised databases. Such techniques may be an adjunct which could, for example, be layered on top of assured access to licensed spectrum. There are numerous permutations to explore here.

Thus with the prospect of 5G, it is not just a faster version of what went before. The network experience is just there when you need it, and does exactly what you need. That provokes the need for a sensible debate about what options are feasible, both technically and in terms of the business model. Furthermore, spectrum regulation is not something to be done by one country in isolation, since there is a whole global supply chain to align.

What are the key policy issues that regulators and governments need to re-examine for a DAN world?

The first and key thing is to get away from setting targets in terms of peak or average speeds. At the moment a 30 Mbit/sec minimum target could rule out valid technology approaches that could deliver a perfectly good enough solution for many people. The focus on speed rather than user outcomes is simplistic and becoming harmful, especially for the last 5% to be reached. Remove this artificial constraint, and you can make a lot of lives better.

Next, we need to re-think our network measurement regimes, to better reflect the actual customer application experience being delivered.

When local and central government authorities procure infrastructure, it tends to be very localised and disjointed in terms of SLAs and contract clauses. Aggregation of public sector demand could drive a lot more efficiency, since there are far too many cumulative safety margins and costs padding the system.

Finally, there are lots of local authority assets & buildings, and they need to use their buying power to drive change. The public sector needs to exploit communications as an integral part of a holistic civil engineering strategy. A key objective is not just to grow capacity, but also to grow resilience as society comes to depend on these services working continuously.

At the end of the day, it’s about customers, not technology or networks. How would users tell the difference between what they have today and a DAN world?

The end user would have the illusion that they are on an empty network, and a perception of infinite bandwidth, even though this doesn’t exist. When the feeling of unconstrained capacity cannot be delivered, applications degrade gracefully, and in the most appropriate order.

Today we have networks that are either empty (and costly) or congested (and getting in the way of a good experience). In future, the interaction will be ‘tactile’: things just do what you want, and you hardly notice there is a network in the way, just as the touch-sensitive screen on your smartphone works without lag.

This requires an alignment and coupling of the engineering and political and policy environment at all levels. Readers of this interview are encouraged to reflect on what it means for them, and to think beyond their own domain. Who is upstream and downstream from me? What conversations could I have, and what asks or offers can I make? How can I remove unneeded safety margins or risks, and maintain needed ones? How can I collaborate to aggregate demand and build matching supply?

If you would like to get in touch with Gavin to discuss any of the issues raised in this article, he can be contacted at

To download the Demand Attentive Networks report, click here.

To keep up to date with the latest fresh thinking on telecommunication, please sign up for the Geddes newsletter