12 reasons why Virtual Quality Networks (VQNs) are inevitable

There is a new telecoms and cloud industry growth sector, Virtual Quality Networks (VQNs). My belief is that VQNs are due to become as pervasive for enterprise users as Virtual Private Networks (VPNs). They will be far more profitable as they are much more valuable.

Here are a dozen reasons why VQNs are a strong candidate for the “next big thing” in telecoms and cloud.

Have any of these things ever happened to you?

  • Broken audio on a Skype call
  • Streaming media “circle of death”
  • Stalled big download or upload
  • Blocky video conference
  • Jerky remote desktop interface

If so, then you’ll immediately know what problem VQNs solve!

VQNs isolate important traffic based on quality needs, just as VPNs isolate sensitive traffic based on privacy needs. VQNs take existing networks, and “reschedule” and “reroute” traffic to get the maximum value from them. Again, this mirrors how VPNs overlay existing networks without requiring them to be redesigned from scratch for better security.

I believe that VQNs are the “next big thing” in telecoms and cloud. I have even put my money where my mouth is as co-founder of a VQN start-up. Here is why I believe VQNs are a timely bet on the future.

1. Mathematics insists we use VQNs to isolate different flows

The structure of networks is changing over time, and so the factors that affect the user experience are changing too. The dominant driver of latency, and hence of user experience, is increasingly contention between flows (and especially its outliers). Over time, this makes the “best effort” Internet less attractive, and VQNs more attractive.

Whereas physical communications is dominated by distance, digital communications has in the past been constrained by link speed. This is now becoming relatively less important, as higher link speeds make a trivial difference to latency. Indeed, more bandwidth goes from being our friend to our enemy, as faster networks change state more quickly. As they become more dynamic they get harder to control from the edge.

What is happening is that we are just creating bigger bandwidth bazookas for users to fire at each other, causing rising “interference” (contention) and “instability” (non-stationarity). That means the old way of throwing more quantity at quality problems is reaching its scaling limits.
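
To make this concrete, here is a toy calculation in Python, with made-up numbers, under the assumption that the cross traffic you contend with speeds up along with the link: a sender that bursts at line rate for 20 ms delays you by roughly 20 ms whatever the link speed, so extra bandwidth does not tame the contention outliers.

```python
# Toy calculation (hypothetical numbers): if the cross traffic you contend with
# also speeds up with the link, a faster link does not shrink your wait.
# A sender bursting at line rate for 20 ms delays you ~20 ms at any speed.

PKT_BITS = 1500 * 8        # a full-size Ethernet frame, in bits
BURST_MS = 20.0            # cross-traffic sender bursts at line rate for 20 ms

for mbps in (100, 1_000, 10_000):
    bps = mbps * 1_000_000
    serialization_ms = PKT_BITS / bps * 1000               # clocking our own packet out
    burst_pkts = int(bps * (BURST_MS / 1000) / PKT_BITS)   # packets queued ahead of us
    wait_ms = burst_pkts * PKT_BITS / bps * 1000           # time to drain that burst
    print(f"{mbps:>6} Mb/s: serialization {serialization_ms:.4f} ms, "
          f"{burst_pkts} packets ahead, wait {wait_ms:.1f} ms")
```

The serialization gain from each speed-up is real but tiny; the contention term is untouched, which is why rescheduling (who waits, and when) matters more than raw capacity.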

VQNs can take control over this contention near the network edge, and also bypass congestion in the core of saturated backbones. This means VQNs can help us to overcome the performance constraints of the present Internet architecture.

These mathematical limits are certain to bite harder and harder, resulting in declining Internet performance versus need, making VQNs ever more necessary.

2. New technologies and architectures support VQN delivery models

Software-defined networks allow us to “slice” networks by quantity into pools of resources with different quality. Google’s SDN technology Espresso is effectively a form of VQN, albeit using Google’s own transport network. Companies like 128 Technology are building a better kind of “slicer” based on decades of experience of the shortcomings of TCP/IP.

There are encouraging results from trials of RINA, a full-blown replacement for TCP/IP based on solid computer science theory. RINA promises to be the best thing in networking since sliced bandwidth. [Disclosure: I am on the advisory board for the EU’s ARCFIRE R&D project.]

Finally, Contention Management scheduling technology can finely “dice” networks by quality, making for a full “slice and dice” solution. There is a theoretical limit to what you can achieve with packet scheduling and we are now very close to it with the most advanced mechanism. This can also deliver fully engineered and assured quality.

Before these technologies, VQNs were difficult and costly to construct. In the next few years they will become much easier and cheaper.

3. Technically misguided regulations make VQNs more attractive

The telecoms regulatory system has engaged in a fantasy fiction of “neutral” networks with a popular role-playing game of “Neutrality Violation!”. As long-time readers will know by now, “neutral” packet networks don’t exist, and never will, since they are the product of the imaginations of well-meaning lawgeneers. “Neutrality” is a futile effort to force a virtualised computing service into a circuit paradigm, based on the false application of the metaphor of carriage of physical objects.

The fallout from technically misguided “neutrality” regulations is to dissuade network operators from engaging in rational resource allocation strategies. This results in insane pricing and economics, with widespread quality arbitrages. Indeed, it forces networks towards their performance cliffs at the maximum possible rate, since this “quantity is quality” model is inherently unsustainable.

Whilst “neutrality” is bad news for ordinary consumers, it is great news for VQNs as their “fast lanes” and “toll bypasses” will certainly be in great demand even sooner!

4. Application demand changes drive VQN demand

We are seeing a strong shift towards SaaS and cloud-based application delivery. Traditional voice services are being replaced by Unified Comms, often delivered as a wide-area network service. Enterprises are tackling security problems and device management cost by using virtual desktops. New forms of demand like augmented reality will impose further fresh demand on networks for predictable performance.

These application trends are accompanied by shifts in devices: from PCs to tablets and thin clients; from fixed desktops and portable laptops to mobile smartphones; and from symbols and keyboards to sensory worn interfaces, such as wearables or VR headsets. These tactile interfaces in turn drive demand for unbroken interactivity and the powerful illusion of presence.

The growth in enterprise applications with demanding quality and/or quantity needs will drive additional VQN demand.

5. The hardware market needs VQNs to grow demand and differentiate

Players in the PC hardware market are all working off small margins, and seek to differentiate their products to resist commodification. Enterprises are moving to cloud and SaaS, with more thin clients and tablets. This makes it hard to differentiate based on the hardware alone, and the software platform is often controlled by another ecosystem player.

In the PC market it was possible to generate high profits from adjacent hardware markets, especially printers. Toner and ink have had huge margins, as consumables have made up for the small margins on OEM hardware.

Rather than putting dyes on paper, VQNs help light up pixels on screens. These “ephemerals” are to the cloud as “consumables” have been to PCs. That means VQNs are the new “ink and toner” for the cloud, just with zero inventory cost and low production cost.

Over time the full marketing and distribution capability of the hardware players will swing behind VQNs as they face up to the need to differentiate their experiences.

6. History points us towards VQNs

The current Internet is very much like the “break bulk” system of cargo carrying that dominated shipping for millennia. It is the “MS-DOS of networking”, offering a “single-tasking” undifferentiated data transport model.

VQNs are a bit like an “MS-DOS compatibility box”, as we begin the transition to a “multi-tasking” Internet. They let us run “legacy” applications that are unaware of quality choices in a managed container. This is a bit like how the US military first dabbled in containerised shipping in the Korean War with the CONEX box.

The end game is a full “intermodal digital logistics” revolution, with the telecoms equivalent of the 40ft cargo container. I believe that this will trigger major shifts in power and profit. Virtualisation technologies like RINA appear to match the requirement: a “container” technology that fully abstracts information transport with a standard API. As with 40ft containers, this is a full-on revolution, as the compute will go to where the VQNs concentrate high-value traffic.

VQNs do for telecoms something akin to what hypervisors did for computing, which took existing pre-cloud operating systems and virtualised their delivery. VQNs are a crucial step towards networks that are “made for cloud”.

7. VQNs align to what customers actually value

The telecoms industry is not the world’s craziest, by some way. However, we do have one ubiquitous and fundamental disconnect that keeps us well up in the leaderboard. What the customers value are applications that perform well enough. This in turn is dependent on the instantaneous quality of the network (and nothing else!).

Pretty much all of our models and management systems are designed to work with non-instantaneous metrics and measures of quality. As there is no quality in averages, that means the core parameters for our products (expressed as a quantity) do not reflect the quality of experience on offer. This is a major disconnect, because it means the telecoms industry is not in control of the user experiences it offers.
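
As a minimal sketch of why averages hide the experience (all numbers invented), the two latency traces below have exactly the same mean, yet one of them would ruin a voice call.

```python
# A minimal sketch of "there is no quality in averages": two latency traces
# with the same mean, but a very different user experience (numbers invented).
import random
random.seed(1)

steady = [20.0] * 1000                         # 20 ms, every single time
spiky = [10.0] * 990 + [1010.0] * 10           # usually 10 ms, with rare 1-second stalls
random.shuffle(spiky)

def p99(xs):
    return sorted(xs)[int(len(xs) * 0.99)]

for name, trace in (("steady", steady), ("spiky", spiky)):
    mean = sum(trace) / len(trace)
    stalls = sum(1 for x in trace if x > 200)  # e.g. threshold for an audible glitch
    print(f"{name}: mean {mean:.0f} ms, p99 {p99(trace):.0f} ms, glitches {stalls}")
```

A product defined by the mean treats these two traces as identical; the user does not.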

VQNs offer the ability to “shim” the current mad model, and engage in the process of transformation into a “quality-first” approach that reconnects our metrics with the actual experience being sold.

8. Systems theory demands that we build VQNs

Today’s broadband networks are generally built with a single class of service. Also, all networks have “performance cliffs” where quality drops suddenly as you increase the load. The problem with having a single class of service is that the system becomes fragile as it is driven into overload.

In contrast, VQNs typically offer multiple classes of service for different quality needs. After all, their reason to exist is to isolate traffic based on the demand for quality. At a very minimum the VQN offers a higher quality class than “best effort” for real-time or interactive applications. They may also offer classes for urgent and non-urgent bulk data.

When VQNs introduce multiple classes of service this creates optionality: applications (or traffic management rules) have to choose a class of service. There will be a cost to asking for better quality, hence an incentive to engage in highly cooperative resource management. This optionality demands that application developers or deployers learn how to yield when the system is stressed, as the sketch below illustrates.
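
Here is a toy simulation (not any particular vendor’s scheduler) of that idea: two classes share one link under strict priority, and because the bulk class yields, the interactive class keeps a short wait even when the link runs at roughly 95% load.

```python
# Toy strict-priority scheduler (not any specific VQN mechanism): one packet is
# served per tick, interactive traffic always goes first, bulk traffic yields.
import random
from collections import deque
random.seed(7)

interactive, bulk = deque(), deque()
inter_waits, bulk_waits = [], []

for t in range(100_000):                 # one tick = one packet service time
    if random.random() < 0.10:
        interactive.append(t)            # light interactive load (~10% of the link)
    if random.random() < 0.85:
        bulk.append(t)                   # heavy bulk load (~85% of the link)
    if interactive:                      # strict priority: interactive served first
        inter_waits.append(t - interactive.popleft())
    elif bulk:
        bulk_waits.append(t - bulk.popleft())

def p99(xs):
    return sorted(xs)[int(len(xs) * 0.99)]

print("interactive p99 wait:", p99(inter_waits), "ticks")
print("bulk p99 wait:       ", p99(bulk_waits), "ticks")
```

The interactive class stays fast only because something else agreed to wait, which is exactly the cooperative yielding described above.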

VQNs help enterprises and applications to learn from small stress to cope with big stress. This is the basis for an antifragile architecture, which is a prerequisite property for any system to have longevity.

9. Business theory and trends indicate VQN adoption

Manufacturing industries moved to “lean” and “pull” models decades ago, and service industries have recently been following close behind. Computing has also been applying the Theory of Constraints with proven DevOps methodologies.

Telecoms is the laggard in these matters. VQNs are an opportunity to apply these techniques to our industry at multiple levels. New mechanisms move from a bandwidth “supply push” model to a quality “demand pull” one. The enabling VQN business processes are redesigned around the elimination of “waste”. For instance, automated fault isolation of quality issues removes the need for many of today’s manual and imprecise hunt-and-fix methods.

The quality revolution has already happened in most industries, but has bypassed broadband until now. VQNs are an opportunity to try out new “quality first” approaches using well-tried business theory.

10. Computing trends require VQN service delivery

The move towards content delivery networks and IoT is pushing us to a more distributed computing model. Mobile edge computing and NFV promise to offer wide area compute-on-demand well away from the data centre and network core. At the same time some compute is moving from the edge to the cloud, such as some TV set top boxes or virtual desktops.

As we move towards more highly distributed models, the unavoidable result is that performance becomes more sensitive to quality. It is at the network edge that the critical and dominant contention effects are biggest, and these will increasingly impact the user experience.

The trend towards edge-based compute for many new applications will drive demand for VQNs. They enlarge the quality “budget” for the application, lowering the failure rate, and allocate it with more control to increase efficiency and safety.

11. Recent engineering science advances make assured VQNs possible

In the circuit era for voice, telcos had very reliable predictive models of how much supply they would need to meet a given user demand. The desire for rigorous engineering broke down when everyone was making lots of money from packet data, since they were all too drunk on statistical multiplexing gain to care.

New techniques enable us to apply ideas from safety-critical systems to engineer experiences with known QoE safety margins. These take long-proven engineering methods, and apply them to distributed computing for the first time. You may be surprised to learn that some basic foundational mathematics is missing from the textbooks. Your network equipment vendor cannot in general reliably measure or model the QoE “slack” or “hazard” in their deployed systems.
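
As a simplified illustration only (not the specific engineering method alluded to here), one can at least express quality as slack or hazard against an explicit performance budget rather than as an average; the sketch below uses invented delay samples, a hypothetical 150 ms budget and a 99.9% target quantile.

```python
# Simplified illustration only: express quality as slack or hazard against an
# explicit performance budget, instead of reporting an average. The budget,
# quantile and delay samples below are all invented for the example.
import random
random.seed(3)

BUDGET_MS = 150.0            # e.g. an interactive app must respond within 150 ms...
TARGET_QUANTILE = 0.999      # ...on 99.9% of interactions

# stand-in for measured end-to-end delays (ms); real figures would come from probes
samples = sorted(random.lognormvariate(3.5, 0.6) for _ in range(50_000))
observed_ms = samples[int(len(samples) * TARGET_QUANTILE)]
slack_ms = BUDGET_MS - observed_ms

if slack_ms >= 0:
    print(f"p99.9 = {observed_ms:.0f} ms: {slack_ms:.0f} ms of slack against the budget")
else:
    print(f"p99.9 = {observed_ms:.0f} ms: hazard, budget exceeded by {-slack_ms:.0f} ms")
```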

VQNs create a space for first-mover advantage in applying these breakthrough quality science techniques. What CIO moving to SaaS would choose a network with no performance safety case in preference to a VQN with one?

12. Economics means the money will be in VQNs

The broadband industry has unwittingly made a category error about its very nature. Whilst those very nice physicists help to construct transmission (using fibre and wireless), packet data is about how that transmission resource is divided up with computation. This means an ISP is a computing service built from transmission services; it itself is not a transmission service, even if it looks a bit like one.

Because the value of transmission is in bandwidth, and packet data looked a lot like transmission, it was assumed its value was also in quantity, with quality being important but secondary. The upshot is that we’ve sunk enormous amounts of capital into very idle networks. This is a bonkers economic model that delivers inverted economics: costs tied to real-time, revenue to bulk data.

VQNs by definition offer managed quality linked to an application performance outcome. They help us to know what slack (or under-delivery) we have, aligning resources to actual customer value and willingness to pay. The underlying technology allows us to contract transmission supply resources with a quality SLA, and then to trade off quality, cost and risk to match diverse customer demand.

VQNs help to restore rational economics to broadband and the Internet. They make it possible to create new trading platforms where supply and demand for “quantities of quality” can be matched, and given a true market price.

For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.