My response to BEREC consultation on ‘net neutrality’ guidelines

To BEREC Board of Regulators

Consultation on document BoR (16) 94
Dear Sir/Madam,
I am a computer scientist who specialises in network performance. I consult to tier 1 operators, equipment vendors and NRAs. I am also involved in the technical and commercial development of quality-assured broadband services. I have undertaken (in my own time and at my own expense) a review of your proposed guidelines for implementing “net neutrality” in the EU. I am deeply concerned about the level of scientific competence of these guidelines.

The guidelines directly contradict and ignore credible research from one NRA (i.e. Ofcom) on traffic management detection. In my view they are not technically implementable, as no fit-for-purpose monitoring system exists. To continue to pretend otherwise is to harm users, service providers and application developers.

Even more concerning, they appear to place members of professional engineering bodies and NRAs in conflict with their duty to protect the public. I am particularly concerned about sections #96 to #123 on “specialised services”, which have very poor technical merit. They appear to be unethical, placing users and enterprises at direct risk of economic harm.

Taking a more generalist view than my industry specialisation, these guidelines also do not sustain their claim to support an open, innovative ecosystem of communications service and application providers. There is a diversity of user needs and applications, and the essence of these proposals is to treat them as homogeneous. The technical approach to network scaling is inherently unsustainable.

These guidelines appear to protect the commercial interests of some content and application providers (see #75 on advertising). However, they do not adequately protect end users and ensure ongoing fitness-for-purpose of their broadband service. They focus on irrelevant internal traffic management issues, neglecting the core problem of an undefined overall service quality level.

They fail to be technologically neutral, since broadband “specialised services” can (and must) compete against other ECS (like ISDN, MPLS and Ethernet). These guidelines (maybe unintentionally) place undue burdens on one access technology.

The number of internal contradictions and technical errors within these rules suggests they have not been given proper technical or economic scrutiny. The quality of your advisors seems to be inadequate for the task. Indeed, the weak understanding of networks on which these guidelines have been built has resulted in a flawed consultation.

I consider this an inappropriate use of industry resources. It is not the job of industry members to provide free scientific education through this retrospective mechanism. In my view this process has seriously damaged BEREC’s credibility and legitimacy as a regulatory body. At this sensitive time for all European institutions, few can welcome such a development.

In order to rectify this unfortunate situation, I urge you to address the following matters before proceeding to issue final guidelines:

  • There needs to be a concrete proposal on how NRAs can measure operational service performance, and determine (reliably and affordably) whether it is compliant with the ISP service’s technical definition. The absence of technical consideration of monitoring and enforcement proposals is a dereliction of BEREC’s duty.
  • The section on “specialised services” needs reconsideration and reconstruction. In its present form it creates perverse incentives. The artificial burdens placed on ECS providers seem unrelated to any tangible user harm. There needs to be consideration of pre-existing technologies and approaches that give a performance advantage for a fee (e.g. paid peering, content delivery networks).
  • These guidelines appear to have taken no account whatsoever of the UK market structure, or similar attempts to break vertical integration and enable competitive markets. The terms “unbundled” and “wholesale” do not appear once. There needs to be a consideration of how regulations are supposed to work in a market where there are multiple actors at the application, interconnect, wholesale and retail levels interacting.
These are not details to be delegated to the NRAs. The compatibility of these guidelines with the need to localise faults in the supply chain and enforce compliance is an intrinsic requirement upon BEREC. My strong recommendation is to explore and expand the requirement for minimum end-to-end quality of service levels, and drop requirements on traffic management transparency.

A detailed list of specific issues and comments follows below.

#43 and #44: You state “…whether it is the ISP that picks winners and losers…” and “Each of these factors may contribute to a material reduction in end-user choice…”

These statements are contradictory. As such, the determination to ban ISPs from being able to “pick winners and losers” cannot be sustained on the basis proposed.

For instance, one ISP might specialise in servicing the needs of users of Apple devices, and give preferential performance to iCloud applications; another to Microsoft users and to their Azure/Office365 applications. This differentiation and diversity would result in a material increase in end-user choice over a homogeneous ISP market with “one-size-fits-all” services.

The underlying false assumption is that any expression of intention by ISPs is by necessity aimed at rent extraction. In competitive markets (e.g. mobile in most EU countries, retail fixed broadband here in the UK), such practices are more likely to be a response to demand for more efficient and effective network resource allocation.

#50. “NRAs should take into account that equal treatment does not necessarily imply that all end-users will experience the same network performance or quality of service (QoS).”

These guidelines suggest that what matters is equality of technical input (at the traffic management level), not experience outcome (at the end-to-end service performance level). The problem (indirectly acknowledged here) is that ISP performance is an emergent outcome of a stochastic system in operation. Users and applications ordinarily get different and varying performance without any intentional action by the ISP.

This therefore raises the requirement to be able to distinguish “flukes” (unintentional bad performance) from “faults” (deliberate “slowing down”). Recognising this, Ofcom commissioned a report on traffic management detection, which concluded that no such tool exists that is fit for regulatory use. Therefore, what you have proposed cannot successfully be implemented by any NRA, and is open to legal challenge on technical grounds.
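
To illustrate the scale of this problem, here is a minimal simulation sketch in Python (a single FIFO queue with assumed arrival and service rates; not a model of any real ISP). Even with no throttling of any kind, the delay tail is many times the median:

```python
# Minimal sketch (illustrative rates): a single FIFO queue fed by Poisson
# traffic. Per-packet delay varies enormously with no intentional action.
import random

random.seed(1)

def simulate(arrival_rate, service_rate, n_packets=10_000):
    clock = 0.0           # current arrival time
    server_free_at = 0.0  # time the server next becomes idle
    delays = []
    for _ in range(n_packets):
        clock += random.expovariate(arrival_rate)  # next packet arrives
        start = max(clock, server_free_at)         # queue if server is busy
        server_free_at = start + random.expovariate(service_rate)
        delays.append(server_free_at - clock)      # queueing + service time
    return sorted(delays)

delays = simulate(arrival_rate=0.8, service_rate=1.0)  # 80% utilisation
print(f"median delay:    {delays[len(delays) // 2]:.1f}")
print(f"99th percentile: {delays[int(len(delays) * 0.99)]:.1f}")
# The tail is a "fluke", not a "fault": nothing in this model slows anyone down.
```

Any regulatory detection tool would first have to separate this natural variance from deliberate degradation, which is exactly what the Ofcom-commissioned work found to be infeasible with current technology.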

#59 In order to be considered to be reasonable, a traffic management measure has to be based on objectively different technical QoS requirements of specific categories of traffic. 

The performance of particular applications has an objective relationship to packet loss and delay. However, the customer experience is a subjective one, and different users place different subjective values on applications and content. This rule effectively eliminates the ability of ISPs to respond to the individual needs of their users with enhanced performance, using whatever customer usage insight they have gleaned. This appears to contradict the stated desire to serve the needs of users.

#56 “NRAs should require ISPs to provide transparent information about traffic management practices and the impact of these practices”

The end-to-end performance outcomes of ISP services are emergent. It is not possible for ISPs to know, in general and in advance, what the impact of their traffic management practices will be on end-to-end QoE. These practices have to continually evolve and adapt, as the emergent outcome of “best effort” is a perpetually shifting baseline.

#60 Traffic categories should typically be defined based on QoS requirements, whereby a traffic category will contain a flow of packets from applications with equal (similar) requirements. 

#63 Based on this, reasonable traffic management may be applied to differentiate between objectively different “categories of traffic”, for example by reference to an application layer protocol (such as SMTP, HTTP or SIP) or generic application types (such as file sharing, VoIP or instant messaging)…

This sets up an arbitrage that will be exploited. When applications can get a performance advantage by masquerading their traffic (by protocol, port number, etc.), they will do so. This can and will create a “tragedy of the commons”, with perverse incentives that undermine the efficient use of a finite shared resource.
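
A toy sketch makes the arbitrage concrete (the port-to-category mapping below is hypothetical): any classifier keyed on protocol or port rewards applications that misrepresent themselves.

```python
# Toy sketch (hypothetical rules): classification by destination port, and
# how a bulk application defeats it by masquerading as VoIP.
PRIORITY_BY_PORT = {5060: "VoIP (high)", 443: "web (normal)", 25: "mail (low)"}

def classify(dst_port):
    """Assign a traffic category purely from the port number."""
    return PRIORITY_BY_PORT.get(dst_port, "bulk (lowest)")

flows = [
    ("sip-call",    5060),  # genuine VoIP signalling
    ("web-browse",   443),
    ("bulk-backup", 5060),  # lies about its port to jump the queue
]
for app, port in flows:
    print(f"{app:12s} -> {classify(port)}")
```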

#61 “Encrypted traffic should not be treated less favourably by reason of its encryption.”

This is not technically implementable. Encrypted traffic (by definition) obscures the information needed to reverse-engineer its implicit “objective” quality needs using deep packet inspection.

#65 “In the event that traffic management measures are based on commercial grounds, the traffic management measure is not reasonable. An obvious example of this could be where an ISP charges for usage of different traffic categories.”

This would insist that there cannot be differential pricing for applications with objectively different resource demands on the network. (This is also profoundly odd, given that it has been a standard feature of common carriage regimes in the past, and many existing telecoms services.) In the absence of a price mechanism to allocate resources to those who value it, there has to be rationing. The failure to respond to diverse subjective user demand (e.g. watching a work training video vs entertainment for children) must result in user harm.

#68 “In assessing traffic management measures, NRAs should take into account that such measures shall not be maintained longer than necessary.”

Network contention is an instantaneous phenomenon. Queues may build up at very low loads if the arrival pattern is bursty; conversely, there may be little contention at a particular buffer even at high average loads, due to the packet arrival pattern. This regulation is not a technically meaningful one, since “longer than necessary” cannot be related to the risk of unacceptable QoE. Indeed, in new “polyservice” architectures, this guideline is meaningless and not implementable.
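
A small simulation sketch (illustrative numbers only, assuming a fixed-capacity slotted link) shows why: two arrival patterns with the same average load produce radically different queues.

```python
# Sketch: identical average load, very different contention. Capacity is
# 10 packets per slot; both patterns average 8 packets per slot (80% load).
import random

random.seed(1)

def peak_backlog(arrivals_per_slot, capacity_per_slot=10):
    backlog = peak = 0
    for a in arrivals_per_slot:
        backlog = max(0, backlog + a - capacity_per_slot)  # serve up to capacity
        peak = max(peak, backlog)
    return peak

slots = 10_000
smooth = [8] * slots                          # steady arrivals
bursty = [80 if random.random() < 0.1 else 0  # same mean, rare large bursts
          for _ in range(slots)]

print("smooth arrivals, peak backlog:", peak_backlog(smooth))  # stays at 0
print("bursty arrivals, peak backlog:", peak_backlog(bursty))  # large spikes
```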

#75 “By way of example, ISPs should not block, slow down, alter, restrict, interfere with, degrade or discriminate advertising when providing an IAS, unless the conditions of the exceptions a), b) or c) are met in a specific case.”

This appears to be little more than naked lobbying on behalf of the interests of certain content providers at the expense of other content providers, ECS providers and the public. If users don’t like ad-funded content, there seems to be little reason why ISPs should have to engineer their networks specifically to support it.

#84 Impending network congestion is defined as situations where congestion is about to materialise, i.e. it is imminent. 

“Impending congestion” is not a well-defined concept in network engineering. For instance, it is possible to construct networks that can be safely run in overload, i.e. are always “congested”, but still deliver predictable application performance. BEREC is a well-resourced body, so it is not unreasonable to expect it to have sound technical advice and use precise terminology that can be related to actual network operation.

#89: NRAs should monitor that ISPs properly dimension their network, and take into account the following: if there is recurrent and more long-lasting network congestion in an ISP’s network, the ISP cannot invoke the exception of congestion management (ref. Recital 15); application-specific congestion management should not be applied or accepted as a substitute for more structural solutions, such as expansion of network capacity. 

This is in fundamental opposition to the nature and purpose of broadband as a statistically multiplexed medium. The ability to use idle capacity to reduce contention is not an infinitely scalable process; there are structural, protocol and stochastic constraints. Indeed, the essential point of broadband is to enable sharing through allowing contention.

This requirement implicitly imposes a minimum quality level on “best effort” broadband where none has been contracted, insisting it be delivered through over-provisioning, whilst simultaneously placing high burdens on the scheduling of traffic. It puts the broadband ecosystem on a collision course with the limits of physics and mathematics, and is thus inherently unsustainable.
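
The stochastic constraint can be made concrete with a back-of-envelope sketch (all figures assumed): independent on-off sources multiplexed onto capacity provisioned with fixed headroom over the mean load.

```python
# Back-of-envelope sketch (assumed figures): N independent on-off sources,
# each active with probability p, demanding 1 unit when active. Capacity is
# provisioned at (1 + headroom) times the mean load.
from math import comb

def p_overload(n, p, headroom):
    cap = (1 + headroom) * n * p  # provisioned capacity
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if k > cap)

for n in (10, 100, 1000):
    print(f"N={n:5d}  P(overload)={p_overload(n, p=0.1, headroom=0.5):.3g}")
# Overload probability falls as N grows, but never reaches zero for finite N;
# and the independence assumption fails precisely when demand is correlated,
# which is when over-provisioning is needed most.
```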

To continue, BEREC needs scientific evidence of the suitability and scalability of this approach, of which none exists (nor can any). This puts NRAs at odds with their duty to protect the weakest members of society, who can least afford the cost and consequences of the technical failure in which this policy inevitably results.

#95 Beyond the delivery of a relatively high quality application through the IAS, there can [my emphasis] be demand for a category of electronic communication services that need to be carried at a specific level of quality that cannot be assured by the standard best effort delivery. 

There already is such demand, and it is being satisfied by use of alternative access technologies (e.g. ISDN, TDM) or overlay networks. This is not a speculative hypothesis, but a fact.

#96 “Such [specialized] services can be offered by providers of electronic communications to the public (PECPs), including providers of internet access services (ISPs), and providers of content, applications and services (CAPs).”
#97 “These providers are free to offer services referred to in Article 3(5), which BEREC refers to as specialised services, only when various requirements are met.”

These guidelines fail to capture that existing services from PECPs are being used to satisfy the demand for quality assurance. Special burdens are then placed on broadband access providers entering this competitive space, in contradiction to the claimed technological neutrality of the guidelines.

#97 “the optimisation is objectively necessary in order to meet requirements for a specific level of quality.”

This is a subjective judgement of the user as to whether assurance is necessary and worth paying for. It cannot be determined as an objective function of the application software itself. Furthermore, even if it were objective, the network simply does not necessarily have access to the relevant information.

#98 “the network capacity is sufficient to provide the specialised service in addition to [my emphasis] any IAS provided”

There is no reason to assume that IAS is always the dominant service. A consumer may see sign language video as more important; an enterprise may want assurance for a key transactional application, with IAS being incidental. Therefore, placing special burdens on IP-based quality assurance to protect IAS is inappropriate and acts against the user’s interests.

#98 “specialised services are not usable or offered as a replacement for IAS”

This would not seem to be an enforceable requirement. Who is going to police whether a buyer of a quality-managed enterprise VPN is using it illicitly to get Skype or Warcraft to work better than “best effort”?

#98 “specialised services are not to the detriment of the availability or general quality of the IAS for end-users.”
#100 “All these safeguards aim to ensure the continued availability and general quality of best effort IAS.”

This would seem to create a highly perverse incentive for ECS providers: by dropping “best effort” Internet access, or by offering it as a completely separate access line, they can take specialised services outside of these regulations! (The guidelines, by their own claim, apply only to IAS providers, not to general-purpose telecommunications services.)

#101 “NRAs should “verify” whether the application could be provided over IAS at the agreed and committed level of quality, and whether the requirements are plausible in relation to the application”

This is inherently impossible: the judgement of the risk of “best effort” versus the price of assurance lies in the subjective realm of the user. That “verify” is in quotes (with no technical proposals on how) suggests this is a matter of legal invention, not network engineering and operation.

#104 “Furthermore, the “specific level of quality” should be specified, and it should be demonstrated that this specific level of quality cannot be assured over the IAS.”

This statement reveals that BEREC simply has not grasped the nature of service assurance. No “best effort” IAS has such an assurance SLA. How can BEREC be in the business of making guidelines for a critical industry without an elementary understanding of the concepts involved?

#106 “If assurance of a specific level of quality is objectively necessary, this cannot be provided by simply granting general priority over comparable content.”

The network is a finite resource. Any reallocation of resources has to be to the benefit of one user/application flow, and to the detriment of others. This requirement appears to lack a basic understanding of network operation.
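
The conservation argument can be stated as toy arithmetic (the capacity and demand figures below are assumed): granting one flow priority necessarily takes throughput from the others.

```python
# Toy illustration: a fixed-capacity link shared by three flows.
CAPACITY = 100  # Mb/s (assumed figure)

def allocate(demands, priority=None):
    """Serve the priority flow first, then split what remains equally."""
    alloc, remaining = {}, CAPACITY
    order = sorted(demands, key=lambda f: f != priority)  # priority flow first
    for i, flow in enumerate(order):
        if flow == priority:
            grant = min(demands[flow], remaining)
        else:
            grant = min(demands[flow], remaining / (len(order) - i))
        alloc[flow] = grant
        remaining -= grant
    return alloc

demands = {"A": 60, "B": 60, "C": 60}
print(allocate(demands))                # ~33.3 each
print(allocate(demands, priority="A"))  # A gets 60; B and C drop to 20 each
```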

#106 “It is understood that specialised services are offered through a connection that is logically separated from the IAS to assure these levels of quality.”

This simply is not a meaningful technical requirement. Why is the addressing structure relevant to the performance isolation of applications, other than to artificially encumber “specialised services” with extra engineering requirements for no user gain?

#106 “The connection is characterised by an extensive use of traffic management in order to ensure adequate service characteristics and strict admission control.” 

This ignores that a “specialised service” can be for low-quality, high-volume uses (e.g. overnight backup) that do not require admission control. Furthermore, the load regulation can come from contractual terms, or other technical means (e.g. application gateways, device-based security).

#107 “To do this, the NRA should assess whether an electronic communication service, other than IAS, requires a level of quality that cannot be assured over an IAS.”
#108 “The internet and the nature of IAS will evolve over time. A service that is deemed to be a specialised service today may not necessarily qualify as a specialised service in the future due to the fact that the optimisation of the service may not be required, as the general standard of IAS may have improved.”

These guidelines push BEREC into dangerous ethical territory. They confuse the existence and possibility of “success” with a limit on the risk of “failure”. This collides with basic engineering principles.

There is no quality floor to “best effort” broadband for a given application; the service outcome is not engineered, but emergent. There is also no “safety case” for the scalability of either the architecture or any implementation. Indeed, there is copious evidence that the Internet’s architecture is not scale-free. The EU itself is funding new architectures (e.g. RINA for 5G via the PRISTINE and ARCFIRE projects) because these are known problems.

The “general standard” can not only improve, but also suddenly deteriorate at a rate faster than any mitigation can be put in place. By encouraging dependence on these emergent qualities without adequate warning, any member of a professional engineering body would potentially be in breach of their duty to the public. This is because they would be encouraging users and application developers to take on a hidden risk.

#112 “Specialised services shall only be offered when the network capacity is sufficient such that the IAS is not degraded (e.g. due to increased latency or jitter or lack of bandwidth) by the addition of specialised services. Both in the short and in the long term, specialised services shall not lead to a deterioration of the general IAS quality for end-users.”

If users express a preference for applications with assured quality, why should resources be artificially diverted into IAS? Indeed, given that “all traffic is equal” by default, the deterioration of general IAS quality is a given, since to get more resources you simply need to send more traffic! How is this to be objectively measured, given that IAS does not today have a minimum quality bound, and its performance is emergent?
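
The “send more traffic” point is simple arithmetic under per-flow equal sharing (as with parallel TCP flows; the figures below are assumed):

```python
# Under per-flow "equal treatment", opening more flows captures more capacity.
CAPACITY = 100  # Mb/s (assumed figure)

def my_share(my_flows, other_flows):
    return CAPACITY * my_flows / (my_flows + other_flows)

print(my_share(1, 9))   # 10.0 -- one flow among ten gets a tenth of the link
print(my_share(11, 9))  # 55.0 -- open ten more flows, take over half the link
```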

#113 “In a network with limited capacity, IAS and specialised services could compete for overall network resources. In order to safeguard the availability of general quality of IAS, the Regulation does not allow specialised services if the network capacity is not sufficient to provide them in addition to any IAS provided, because this would lead to degradation of the IAS and thereby circumvent the Regulation.”

All networks have limited capacity. (I hope BEREC grasps this; the statement above is troublesome, since it calls BEREC’s understanding into doubt). Users and applications inherently must compete for resources; broadband is a statistically shared system. The whole point of assurance is to provide preferential access to resources when the network is under load. The above requirement simply cannot be fulfilled; unbounded resources cannot be thrown at IAS when there is no economic justification.

This would also create a situation where individual end-users were disallowed from using the assured applications of their choice, because the “leftovers” for IAS are somehow deemed inadequate. Who is to tell users that their collective choices are somehow “wrong” (given that the stated aim is to protect them)?

#115 NRAs could request information from ISPs regarding how sufficient capacity is ensured, and at which scale the service is offered (e.g. networks, coverage and end-users). NRAs could then assess how ISPs have estimated the additional capacity required for their specialised services and how they have ensured that network elements and connections have sufficient capacity available to provide specialised services in addition to any IAS provided.
#117 Specialised services are not permissible if they are to the detriment of the availability and general quality of the IAS. 

#118 However, detrimental effects should not occur in those parts of the network where capacity is shared between different end-users. 

In today’s networks there is a common underlying transport, often Ethernet or MPLS. This is moving towards software control (e.g. SDN, SD-WAN). There is already a continual competition and reallocation of resources between multiple telecoms services (e.g. VoIP SIP trunks) and IAS. This guideline is placing special burdens on specific telecoms services (quality-assured packet data) which are potentially competing against other (often legacy) telecoms services, not just IAS.

Some content providers may wish to have an entitlement to delivery of their services without payment to ECS providers, with an inherent lower quality bound offered for free. These guidelines seem to have little to do with protecting users, and everything to do with the business interests of those content providers.

#119 “in mobile networks…the general quality of IAS for end-users should not be deemed to incur a detriment where the aggregate negative impact of specialised services is unavoidable, minimal and limited to a short duration.”

This would seem to contradict our experience of the use of circuit-switched voice and VoLTE, both of which must take non-minimal and long-term resources away from IAS.

#137  “In order to empower end-users, speed values required by the Article 4(1) letter (d) should be specified in the contract and published in such a manner that they can be verified and used to determine any discrepancy between the actual performance and what has been agreed in contract.”

The result of a speed test is an emergent outcome of an application, which is not under ISP control (e.g. includes OS stack, other networks). Optimising for peak burst speed also potentially pessimises network performance for other uses.

#159 “It would help make the rights enshrined in the Regulation more effective if NRAs were to establish or certify one or more monitoring mechanisms that allow end-users to determine whether there is non-conformity of performance and to obtain related measurement results for use in proving non-conformity of performance of their IAS.”

BEREC has chosen to abdicate responsibility for whether its guidelines can be implemented, knowing that Ofcom has already published scientific evidence that they cannot.

#174 “require an ISP to take measures to eliminate or remove the factor that is causing the degradation”

This assumes ISPs have the capability to isolate such performance issues in digital supply chains. In general, they do not.

#174 “impose minimum QoS requirements”

Establishing a “quality floor” (based on end-to-end packet loss and delay) would be a viable approach to resolving many of the service definition, monitoring and enforcement issues raised above.
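
As a sketch of what such a floor could look like (the thresholds below are hypothetical, purely for illustration), compliance can be checked from end-to-end measurements alone, with no need to inspect internal traffic management:

```python
# Hypothetical quality floor: check measured loss and delay against a
# contracted bound. Thresholds are illustrative, not proposed values.
FLOOR = {"max_loss_rate": 0.01, "max_p99_delay_ms": 100}

def meets_floor(delays_ms, lost_count, sent_count):
    loss_rate = lost_count / sent_count
    p99 = sorted(delays_ms)[int(0.99 * len(delays_ms))]
    return loss_rate <= FLOOR["max_loss_rate"] and p99 <= FLOOR["max_p99_delay_ms"]

# Example: 1000 probes, 4 lost; delays mostly 30 ms with occasional spikes.
delays = [30] * 990 + [120] * 10
print(meets_floor(delays, lost_count=4, sent_count=1000))  # False: p99 = 120 ms
```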

#175 “In the case of blocking and/or throttling, discrimination etc. of single applications or categories of applications”

As Ofcom have already established, this is not (in general) an implementable requirement using current technology. “Throttling” is not a meaningful concept in the context of an emergent performance outcome of a stochastic system.
