The Tao of Telecoms – A blueprint for a ‘lean’ business transformation

This is my “Tao of Telecoms”, the synthesis of my last decade working with (and learning from) a number of leading practitioners in network and management science. It is a first draft, and thus imperfect, over-long, incomplete and possibly even incomprehensible in places.

It offers the prospect of a viable path to a ‘lean’ business transformation for the telecoms industry. This breaks the historic trade-off between lower cost and better experience, allowing both to be improved simultaneously. When implemented, it fundamentally changes the economics of access to cloud-based applications.

Consider this a scribbled draft of a Lutheran note pinned on the door of the church of the profligate Cult of Bandwidth. The lavish indulgence of over-provisioning quantity to mis-deliver quality needs to be abandoned, for the sake of the poor customer as well as the exploited telco investor.

The telecoms industry is really two distinct businesses: an underlying data transmission industry (including the physical plant and infrastructure), and a networked computing services industry that also encompasses cloud (and hence data centres). I believe that the “telecloud” business is ripe for a ‘lean’ quality revolution that transforms the industry’s value and cost economics. Here is how it can be made to happen.

Over the past few decades, many industries have been transformed by ideas like Lean Kanban, Agile, the Theory of Constraints, Six Sigma, DevOps, and value-based Service Design. Telecoms has, thus far, largely proven resistant to these concepts. Instead it has followed a “fat pipes” paradigm in which quality has been controlled through quantity. Whenever you have a network quality problem, the first reaction is to throw more resources at it.

This has resulted in an industry with persistent patterns of gross over-delivery of quality, at high cost, to most users for most of the time; as well as uncontrolled and recurrent under-delivery of quality. It is a supply-led industry that forces a “purpose-for-fitness” model onto its customers. As one industry insider quips, “If the pain has always been there, you don’t know what it means for it to go away.”

We see this pain in many forms in our industry: products and technologies that are developed at great cost, yet were infeasible from inception, so fail in the market; an inability to accurately predict the user impact of capital expense upgrades, wasting money; an endless struggle to calculate the carrying capacity of any service in user-centric terms; and a maddening chase to isolate problems when, all too often, they do happen.

We all have tales to tell of how this industry falls well short of the quality we desire. Indeed, it has a variability in quality far worse than we would tolerate from any other industry. I personally have had to replace DSL modems and my internal house wiring in a fruitless attempt to fix the performance of my home broadband service. And I’m supposed to be a performance “expert”!

Our collective frustration as users is self-evident. Everybody has experienced the “circle of death” watching streaming video, had a glitchy VoIP call, or had Skype break up and drop the video session mid-way. The issue is what to do about it.

To resolve this undesirable situation, a new management paradigm is required. Such a ‘lean’ business transformation works to deliver a fit-for-purpose supply that does not under-deliver quality; that also eliminates the waste of over-delivery; and is agile in its response to inevitable changes in the nature and structure of demand.

The “golden thread” that runs through any “lean” transformation is to conceptualise the world in terms of dynamic flows (of single pieces), rather than static stocks (with fixed batch sizes). In today’s “fat pipe” networking paradigm, you reason about performance in terms of “megabits per second”. This is implicitly a “batch” paradigm with a batch size of “one second”. The quality you get is seen as a consequence of quantity: a “quantity with a quality”.

In a “lean quality” model we view things differently. At all timescales, you have to balance the demand for information flow with your ability to supply that flow. In terms of packet networks, that requires us to align the demand for timely information exchange with its supply. This means a network is in the business of supplying a “quantity of quality”, as that is what is being demanded.

The way to achieve this transformation to ‘lean’ is, in principle, quite simple. You need to:

  • understand what “fitness-for-purpose” is in the eyes of the customer;
  • translate this into a set of technical requirements for end-to-end information flow;
  • decompose this into a set of intermediate flow requirements through the service elements, and at any contractual or management boundaries; and then
  • make the appropriate operational trade-offs of resources in the network to deliver those flows.

This process is designed to align three essential aspects: what you wanted, what you asked for, and what you got. These have fancy names in computer science, being the service’s intentional, denotational, and operational semantics. The core tasks are to aim for the right thing in the first place; express the resulting requirements in the right paradigm (i.e. flows); and have the operational policies and mechanisms aligned to those requirements.

If you manage all of that, then you can achieve higher effectiveness and customer satisfaction whilst simultaneously improving efficiency and cost. That’s a ‘lean’ business!

A lean business transformation is ultimately rooted in a management system designed to deliver change. That change could be of a continuous improvement nature, which over time results in a qualitative and radical change; or it could be more of a “leap” in capability designed to be delivered as a whole. There is no right answer, as it depends on the context and need.

Whatever the improvement approach, a management system performs one essential task: to provide a theory of the most relevant issue to focus on, so as to achieve the desired outcome.

In our case, it is to switch from managing stocks (with unpredictable quality) to flows (with predictable quality). That in turn decomposes into three things: what to change (a new intent), what to change to (a new denotational requirement), and how to cause the change (a new operational behaviour).

The job of the manager is to structure the changes around the timescales and tempos at which the business can operate, while keeping within the externally imposed resource and policy constraints. As there will be many changes to implement over many timescales, there may need to be many concurrent changes in progress.

For the rest of this article, we will focus on one specific aspect of the problem, since it is a ubiquitous brake on progress: How can you engineer performance and ‘lean quality’? Specifically, how can you turn an intent for a particular experience for one end user into a written requirement for information flow, and then deliver upon that requirement in a predictable and affordable manner? And how to scale that to a portfolio of experiences for many users and a complete supply chain?

In order to answer these important questions, we need to note the context in which we are going to act as performance engineers. We work with digital supply chains, which by their nature involve information passing at or near the speed of light. That means we cannot make a literal translation of ‘lean’ practices from the physical world, where everything moves at the speed of sound or below. For instance, we cannot ‘stop the production line’ like Toyota did when we see a quality defect as cars crawl by.

Furthermore, our digital world is one of distributed computing systems, where we have duelling protocols and algorithms working to capture as much network resource as they possibly can. We need a means of dealing with the unique challenges of the context in which we work, just as say deep-sea oil and gas drilling has unique environmental challenges.

In telecoms networks we are working with the greatest possible geographical scope of operation, the fastest possible processes, with the most rapid imaginable rate of change, accompanied by the most heterogeneous known kind of demand, all atop the most diverse and irregular form of supply, being delivered using statistical resource sharing processes with innate variation. Managing quality in digital supply chains is hard for a good reason!

The first step in overcoming these steep challenges is to have a rich enough language in which to describe the problem domain. By the nature of ‘lean’ we start with a language located on the demand side, in the user experience. We then work backwards into the system to see what is required to supply it. Therefore, we must begin by describing ‘success’ in the terms of the user, not the network.

The act of “engineering” anything is primarily an ethical one. The tendency is to think of engineers as people who figure out how to “construct success”, but that is really mere craftsmanship. What professional engineers do is take full responsibility for unplanned failure. That means our first duty is to have a language in which to make the “safety case” for our performance engineering work, so we can make appropriate promises.

That safety case starts by describing the outcomes that users seek: suitably assured application behaviours that result from predictable service quality. This demand-side view then translates into a supply-side language of safety margins and performance ‘hazards’. This qualitative language can be used to make truthful promises about what a network service is capable of (and, equally, cannot do).

We want to be able to not just qualitatively describe the service capability, but also quantify it. How many users of which applications can we support? What is the rate at which we can expect to have glitches and failures, since these are inevitable features of the real world? How much resource capacity do I need to buy to deliver upon my service promises?

That means our qualitative language then needs to be turned into a quantitative engineering model. Most importantly, we wish to know how “at-risk” any particular user or use of the network might be of breaking our promises. This lets us manage our “portfolio” of quality of experience risks, and make rational choices about how to allocate resources. We can also make informed choices over when more resources are required, and the trade-off with risk.

In this way we can offer predictable outcomes to our other key stakeholders: the investors who supply us with the capital to construct the underlying physical data comms resources. They are just as concerned as end users to avoid unplanned surprises.

This quantitative “safety case” process needs a performance engineering model. Such a model must:

  • take a collective set of aspirations for user experience (our intent);
  • refine them into quantitative requirements for information flow (our denotation);
  • execute these via the network mechanisms (our operation); and finally,
  • assure the outcome (proving we fulfilled the intent) so we can bill for the predictability value we created.

Our engineering model also has to work across the whole lifecycle of our service delivery. The essence of any model is to tell us what might happen before we build something. In other words, we need to reason “ex ante” about performance, not “ex post” (or even “post mortem”).

This means relating quality to business processes at every stage:

  • (Product development) Ex ante reasoning about feasibility and cost.
  • (Sales, marketing and delivery) Ex ante reasoning about deployed performance and cost.
  • (Support and service) Ex ante automation of fault resolution; Ex post isolation of previously unknown or unexpected operational faults, and architecture or design problems.

To summarise thus far: we are in the domain of engineering quality of both the user experience as well as the network service. The quality of the outcome has to be aligned across intention, denotation and operation; as well as through the whole service lifecycle.

For this to happen, it would be really helpful to have a single universal concept of ‘quality’, both qualitative and quantitative. For if we have a variety of quality definitions and metrics for different stages and processes, then we will inevitably introduce a great deal of trouble in relating them to one another.

So what is ‘quality’ anyway, and how can we measure it in a consistent way? These questions may seem trivial, but are actually rather profound. The key to a lean business transformation for networking lies in reframing and reconceptualising the very nature of “quality”. In the process, we will adopt a whole new quantitative paradigm for measuring it.

In order to comprehend a new paradigm, you first need to become aware of the one you are presently working inside. This is a matter of making the unconscious and unexamined into the conscious and examined. It can be a difficult thing, as it may challenge our core beliefs, making us want to double-down on them to stay safe. It may also call into question our self-identity and status as “experts”; this is very uncomfortable, and can provoke intense internal resistance. You have been warned!

When we talk about the quality of a broadband “pipe”, it is usually located in a bandwidth paradigm and a “batch” over some period. Metrics for jitter and packet loss are usually expressed on a “per second” basis, as is throughput.

Indeed, we see a “batch size” for packet loss as long as a month in the GSM Association’s specification for IPX, the standard for quality-assured voice calls. You could have any pattern of loss you like in that month as long as the average is OK! Clearly this is not an effective standard, and the problem is not the GSMA itself, but the engineering framework we are all used to using.

The standard framing of network “quality” has several essential problems. The first is that the user doesn’t experience the “per second” properties of the network, but rather a continuous passing of instantaneous moments. There is no quality in averages, and when your quality metrics disconnect from the user experience, you can no longer reason about the “safety case” for that experience.

The second is that these metrics don’t “compose”, so you simply can’t add up the demand from different applications, and compare it against the supply. The numbers might “add” in an arithmetic sense, but that doesn’t mean you are doing a meaningful operation of rigorous performance engineering! You can’t reason about an end-to-end supply chain if you can’t reason about the quality of its elements and how they come together to form the overall quality.

The third, and most important reason, is the deepest and most significant. The standard view of “quality” is as a “positive” thing, and we consider giving more of this beneficial positive thing to some flows rather than others. You can think of this as a “quality augmentation” paradigm. Quality is something you want as much of as possible, and you can always go seeking more of it.

So you might offer “low latency” as a positivist benefit to sell, or (as many have done with Active Queue Management) search for a “more powerful” packet scheduling algorithm. This point of view ignores an unavoidable reality: when you make a choice to give better quality to one flow, you must give worse to another. You can’t destroy the “impairment”, only shift it around.

The search for an absolutely positive “better” form of quality is inherently futile, since it can only mean a “worse” quality for someone else. This hasn’t stopped many from trying, or a lot of poorly considered academic papers from being published.

We need a new engineering theory of “quality” that overcomes these issues. The most important requirement is that our concept of quality must be related to the delivered user experience. It must also be a metric that can be (de)composed along supply chains of many interconnected networks and their elements. Finally, it must also reflect the underlying physical reality of the systems we are dealing with, and be observable and measurable as such.

How to achieve these goals? The underlying reality is very simple: networks attenuate quality. This is possibly the single most under-appreciated statement in the whole engineering discipline of packet data.

That means the ideal network replicates information instantly and perfectly. All real networks are worse than that ideal quality level, and quality is a thing that can only be lost. Hence, in networks, quality is a negative thing, and we create value by having only a limited level of this negative impairment or “attenuation”.

To fulfil our ambitions, we need to step through Alice’s looking glass into network Wonderland, and see the world from a new perspective. The very origin of our “quality universe” needs to change, to instant and perfect replication of information. We then need to see everything in terms of the “impairment” the network causes, and the “disappointment” that results to the users.

This reframing from “quality augmentation” to “quality attenuation” feels strange because it is. It somewhat resembles entering the world of quantum mechanics when you are used to classical Newtonian motion. In this paradoxical reframing, success is having allocated the unavoidable impairment to cause the right kind of disappointment! We are also dealing with how the smallest-scale randomness inside the network accumulates to affect the macroscopic world of the end user.

Naturally, such negative language and mentions of uncertainty do not appeal to marketing departments who are used to selling fixed peak burst bandwidth. Yet it is the essential reality of the world we work in. When we re-frame everything in terms of “negativity”, the problem domain becomes tractable to “proper” engineering, because this aligns to the underlying reality.

The “impairment” and “disappointment” can be viewed at many levels: the overall satisfaction with the application; specific aspects of its performance; the individual user’s experience; the particular network service quality being delivered; and the underlying mechanisms and their functions. We can relate all of these together by maintaining a consistent “negativist” view of quality. (The technical term is to see it as a “privation”.)

Quality attenuation is an abstract concept that has two concrete forms in terms of packet delivery: information is delayed in the process of being copied, or a packet is erased and it is lost entirely. There are no other options! (Bit-errors are typically considered a form of loss.)

What we are doing is making this delay and loss figural, since it is what matters. A subtle but critical change we are making here is to view that delay or loss as two facets of one thing: quality attenuation. They are not completely separate phenomena with some coupling that is left unspecified.

This new idea of ‘quality attenuation’ is the breakthrough that makes ‘lean’ networking possible. Without this reframing of the nature of quality, the situation is both serious and hopeless. With the reframing, it is merely serious.

Packet networks are man-made worlds where we make the rules and are God-like; the “games of chance” inside networks are not a natural science like physics, and instead are an “unnatural science” of distributed computation. Just like in physics there are basic laws that we find useful to model the world, so are there basic “laws of ludics” that describe the quality attenuation “game”.

The first law is the most important, and often the hardest to grasp: Quality attenuation exists and (solely) determines application performance outcomes. It isn’t bandwidth or jitter that matters, it is quality attenuation and nothing else. In other words, performance of your application is solely a function of the loss and delay the packets encounter. Furthermore, when you have applied a load to a network, a certain amount of quality attenuation has to occur.

We introduce a new term, the “predictable region of operation” (PRO), to capture the acceptable level of packet data quality attenuation to deliver an acceptable level of application performance quality attenuation. Note the alignment of the “negative” framing. This PRO concept allows us to denotationally capture “good enough” quality attenuation.

The second law is that quality attenuation is conserved, both at single points and along paths. If there were a counter of the amount of attenuation each network element introduced over time, it would only ever go up. And as any packet crosses a network, if it had a counter attached, the attenuation it experiences only rises. Once you’ve delayed a packet, you can’t undelay it; and once erased, it can’t be “unerased”.

This conservation law is critical, because we can only reason about choices for properties that are conserved. How much housework should each of your children do if you allocate it fairly? You can only answer that if the size of the job is independent of the child (i.e. it is conserved)! If the work to be done varied as you changed the worker, you couldn’t reason about the system and how to allocate the supply (of child labour) to demand.

The third and final law is a bit trickier to understand. It says that “mutable” quality attenuation is tradeable, with two degrees of freedom. Let’s unpack that.

Some quality attenuation is a property of pure geography and the speed of light. Some is also the result of the technology we have deployed, and the time it takes to “squirt” (serialise and deserialise) a packet over a link. These are “immutable” properties of the universe and architecture we have chosen. In contrast, we can schedule resources in radios, and also put packets into buffers. This is “mutable” quality attenuation that varies with load.

The “two degrees of freedom” results from a unique aspect of packet data: we are allowed to erase packets (i.e. incur loss). This is a necessary aspect of operation, as the alternative is to require infinite memory with potentially unbounded delay. That’s not desirable, so we have choices to make. Rather like “pressure, volume, temperature” for a gas, we have “load, loss, delay” for quality attenuation. Set any two variables, and the third is automatically set for you.
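To make that trade concrete, here is a minimal sketch in Python (my own illustration, using a textbook M/M/1/K queue rather than any model from the approach described here; the function and variable names are mine): once the offered load and the buffer size (which caps how long a packet can queue) are fixed, the loss rate is no longer ours to choose.

```python
# A minimal illustration (textbook M/M/1/K queue, not a model from this article):
# fix the offered load and the buffer size (which caps queueing delay),
# and the loss probability is determined for you.

def mm1k_metrics(rho: float, k: int):
    """Return (loss probability, mean delay in service times) for an M/M/1/K queue
    with offered load rho and room for k packets."""
    if abs(rho - 1.0) < 1e-9:
        probs = [1.0 / (k + 1)] * (k + 1)          # occupancy is uniform when load equals capacity
    else:
        norm = (1 - rho) / (1 - rho ** (k + 1))
        probs = [norm * rho ** n for n in range(k + 1)]
    loss = probs[k]                                # arrivals are erased when the buffer is full
    mean_occupancy = sum(n * p for n, p in enumerate(probs))
    accepted_load = rho * (1 - loss)
    mean_delay = mean_occupancy / accepted_load    # Little's law
    return loss, mean_delay

# Same load, three buffer sizes: trading delay against loss, never escaping both.
for k in (4, 16, 64):
    loss, delay = mm1k_metrics(rho=0.9, k=k)
    print(f"K={k:3d}  loss={loss:.4f}  mean delay={delay:.2f} service times")
```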

This “quality attenuation” model is the basis of a new performance science. It relates the denotational requirement (for application or network performance) to the underlying operation of the application software or network mechanisms.

Now, all the above is actually relatively simple. This ought to appear in undergraduate textbooks on network performance engineering or computer science. That it doesn’t appear in the textbooks tells you something vital about how theory has lagged behind practice in our discipline. That it feels awkward and unfamiliar is a testament to the unlearning we have to perform to exit a paradigm that no longer serves our needs.

The core of this science is the switch of our resource model away from “bandwidth” (and “quantities with quality”) to “quality attenuation” (and “quantities of quality”). The focus thus goes from “quantity first” to “quality first”. This model is a far better approximation to reality, so it offers a far greater level of predictive capability. In technical terms, we are seeking a resource model that has low “junk and infidelity” to the real world. It captures what is there (high fidelity), and doesn’t introduce artefacts that are not there (low junk).

What that means in practice is that we need a quantitative model of quality attenuation that captures only what is relevant in the world, and abstracts away irrelevant variation. Since the user experience is an instantaneous phenomenon, we must inevitably use probability distributions to represent quality attenuation.

As a by-product, we also require that these probability distributions do not change too rapidly. This is called “stationarity”, and is an essential (but little-understood) prerequisite for networks to function. Any successful model of network performance must also quantify the obscure but integral issue of non-stationarity.
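As a hedged illustration of what quantifying non-stationarity might look like in its simplest form (this is my own sketch, not a prescribed method, and the names are illustrative), we can compare the empirical delay distribution in adjacent measurement windows and report the largest gap between them:

```python
# A simple sketch (my illustration, not a prescribed method): measure how far the
# delay distribution drifts between two measurement windows. Near 0 means the
# windows agree (stationary-looking); near 1 means they have little in common.
from bisect import bisect_right

def ecdf(samples, x):
    """Empirical CDF of `samples` evaluated at x."""
    ordered = sorted(samples)
    return bisect_right(ordered, x) / len(ordered)

def drift(window_a, window_b):
    """Largest gap between the two windows' empirical CDFs (a two-sample KS statistic)."""
    points = sorted(set(window_a) | set(window_b))
    return max(abs(ecdf(window_a, x) - ecdf(window_b, x)) for x in points)

# Per-packet delays (ms) from two adjacent windows; the second sees congestion onset.
window_1 = [4.1, 4.3, 4.2, 4.8, 5.0, 4.4]
window_2 = [4.2, 6.9, 7.4, 8.1, 4.5, 7.8]
print(f"distribution drift = {drift(window_1, window_2):.2f}")
```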

This quality attenuation framing in turn has a number of important corollaries. The first is that the scarce resource in a network is not capacity, but the timeliness of information delivery. If we have an infinite amount of time to replicate the information, capacity limits are irrelevant. Hence the quality attenuation framing allows us to model actual resource (opportunity) costs, and relate these to application outcomes.

Hence we can begin to answer questions like “what does it cost (in resources and real dollars) to carry a high definition voice call compared to a standard definition one?”. At long last we have the toolkit that MBA types need to do their job and relate commercial trade-offs to engineering ones. We can even “budget” performance for a digital supply chain, just as we budget money, when we express it through quality attenuation.

Another benefit of the quality attenuation framing is that we can now engage in reliable ex ante reasoning about performance. The conservation and composability of quality attenuation means we can “add it up”, and it is a meaningful operation. This can be done for demand, for supply, and for how they compare.

The most basic task of engineering now becomes possible: to answer questions about “slack” and any under- or over-delivery of performance. This requires us to quantify the “performance hazards”, i.e. the risk of under-delivery or over-delivery of the performance-related aspects of application quality of experience.

Finally, this model satisfies the criterion that it is measurable from observable events. We can see a single packet as it gets copied along a path, and measure the increase in quality attenuation. The arrival of a packet is an observable event; and the non-arrival is also an observable “non-event”! It is fine to have customer-centric metrics like net promoter score, but they can’t be observed from the network. Quality attenuation is simultaneously user-centric and also a network-centric measure, unifying these two domains.

Engineering is by its nature a quantitative act, and thus needs metrics and measures. This is not the place to present the mathematics behind the new science of quality attenuation. However, it is necessary to at least list the ingredients so that you have a sense of what needs to be learnt to gain mastery.

In order to model quality attenuation, there is an essential innovative leap required. This is to unify the probability of an outcome of an event (like a packet arriving with some delay) with the probability of a “non-event” (such as it not arriving at all). A new branch of mathematics is required that extends probability theory to include things that didn’t happen. Maybe it should be called “improbability theory”!

You can think of it like this: standard probability theory is rooted in the mathematics of modelling physical things and processes. When we roll a dice, we might have a model of whether it is biased. We don’t typically consider the dice never landing, or being stolen by your cat. Even if we do, we consider it as yet another form of “event”, rather than a “non-event”. That is problematic, as we now have multiple types of event to relate to one another.

That there is a major limit to commercial progress located right in the “basement” of network science is a difficult truth to grasp. It is a truth, nonetheless.

This new branch of mathematics is called ∆Q, which hints at its purpose: to quantify some change in quality attenuation. The building blocks of ∆Q metrics are “improper” random variables (i.e. ones that allow for “non-events”), and “improper” cumulative distribution functions (which, as a result, need not reach 100%).

These metrics can be broken down into three elementary “bases”, which represent the geographic (G), packet serialisation (S), and variable contention (V) contributions to the overall quality attenuation. The neat “trick” is that these “bases” can be “convolved”, which is a posh way of saying they “add up” in probability theory.
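As a toy illustration of that “adding up” (the representation, numbers and names below are my own, not part of the ∆Q formalism itself), each basis can be held as a probability mass function over a delay grid whose total mass may fall short of one, the shortfall being the chance that the packet never arrives at all:

```python
# Toy sketch (representation and numbers are mine): each of G, S and V is a probability
# mass function over 1 ms delay bins whose total mass may be below 1 -- the missing mass
# is the probability of the "non-event" (the packet never arriving).
import numpy as np

def compose(dq_a, dq_b):
    """Compose two components: delays add, delivered mass multiplies (plain convolution)."""
    return np.convolve(dq_a, dq_b)

G = np.zeros(6); G[5] = 1.0                            # geographic: fixed 5 ms, nothing lost
S = np.zeros(2); S[1] = 1.0                            # serialisation: fixed 1 ms to squirt the packet
V = 0.98 * np.array([0.5, 0.25, 0.15, 0.07, 0.03])     # contention: spread-out delay, 2% erased

delta_q = compose(compose(G, S), V)
delivered = delta_q.sum()
mean_ms = (np.arange(len(delta_q)) * delta_q).sum() / delivered
print(f"loss = {1 - delivered:.3f}, mean delay of delivered packets = {mean_ms:.2f} ms")
```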

This in turn gives us the basis for an algebra. Whilst many of us get a slightly sick feeling at the mention of the word, an algebra is a source of delight if you want to perform “proper” engineering. It is like we were trying to design aircraft before, but couldn’t add up the weights of all the parts.

This pure maths algebra then sets up a whole new calculus of performance as applied mathematics. We have a multi-level model (called a “morphism”) of how the network “impairment” and application “impairment” are related. Each “level” is framed in terms of quality attenuation, and quantified using the ∆Q calculus.

A specification language lets us define the desired relationship between supply and demand. The “performance contract” is called a Quantitative Timeliness Agreement (QTA). It imposes an acceptable ceiling on demand, and the resulting floor on quality that the service will deliver. This allows us to reason about fitness-for-purpose, and engage in questions of trade-offs of cost, capability and risk. These QTAs are expressed using ∆Q metrics.

In particular, we can use a QTA to directly quantify how “at risk” the experience is for a given deployment before it is built. This is just like how an architect and structural engineer calculate the safety margin for a skyscraper in an earthquake zone. We can model the static load (like the weight of the building) and the dynamic load (like wind and moving ground), and predict if our structure will “stand up”.

Today’s network engineering model is to build our “skyscrapers” and then see if they stand up! This is no better than the engineering of medieval cathedrals. If anything, it is worse, as we have yet to accumulate a solid body of craft knowledge.

Furthermore, we can take ∆Q-based measurements from a network and compare them to the QTA to see if we have slack (and potential for cost savings) or under-delivery (and the risk of unmanaged churn).
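To make that comparison concrete, here is an illustrative sketch (the QTA shape, the numbers and the names are my own invention) that checks measured per-packet outcomes against a QTA expressed as “at least this fraction of offered packets delivered within this many milliseconds”, reporting slack or breach for each bound:

```python
# Illustrative only (the QTA shape and numbers are invented): compare measured per-packet
# outcomes against percentile-style QTA bounds to reveal slack or under-delivery.

def delivered_fraction(delays_ms, bound_ms):
    """Fraction of *offered* packets delivered within bound_ms; None marks an erased packet."""
    return sum(1 for d in delays_ms if d is not None and d <= bound_ms) / len(delays_ms)

def check_qta(delays_ms, qta):
    for bound_ms, required in qta:
        achieved = delivered_fraction(delays_ms, bound_ms)
        margin = achieved - required
        verdict = "slack" if margin >= 0 else "BREACH"
        print(f"within {bound_ms:6.1f} ms: need {required:.1%}, got {achieved:.1%} -> {verdict} ({margin:+.1%})")

# Measured delays in ms; None is the observable "non-event" of a lost packet.
measured = [8.2, 9.1, 7.5, 25.0, 8.8, None, 9.9, 11.2, 8.1, 9.4]
qta = [(10.0, 0.50), (30.0, 0.90), (100.0, 0.99)]   # (delay bound, minimum delivered fraction)
check_qta(measured, qta)
```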

We have taken a long tour of this new world of possibility of ‘lean quality’. It started with considering a commercial transformation to a ‘lean’ model. In turn this resulted in a reframing of the nature of quality; a new engineering paradigm; a new performance science; and new branches of both pure and applied mathematics.

This may seem daunting to anyone considering going on this transformation journey. I have had many years of practice in this domain, with many failures and setbacks, and some successes. I have learned one essential thing.

There is a balance between human issues and technology issues. And the initial balance is 100% human and 0% technology. What matters most is not the mathematics, high-fidelity network measurements, or clever new mechanisms. What matters is you, the manager, and the quality management system you are operating.

Every business has a quality management system of some kind, and a process for improving it. The first step is to surface what your present system is, identify what is most unsatisfactory about it, and begin to understand what the true underlying root causes are. We always start with the people, then understand the processes they operate, and finally the technology gets attention.

There is a capability maturity model in terms of the visibility of quality and control over it. Wherever you are in that journey is a simple fact; your job is to orient yourself and your organisation towards the right future, and take the next step.

The project that everyone who wishes to ‘go lean’ needs to engage in is to upgrade their quality improvement processes. There is no point in enhancing the technology if the management system is incapable of absorbing its benefits. Instead, we need to understand what the real constraint is to improvement. It could be one of skills, policy, process, or metrics.

Initially, the problem is (everywhere and always) the skill of identifying the constraint to delivering more value. We have to work on the management system itself and its self-improvement! Thus better management science (and strong models of cause and effect of management action) must precede the use of network performance science.

You are what you measure, and inevitably we will eventually hit the point where the metrics are indeed the issue. At that point it makes sense to switch to quality attenuation analytics. This requires a new system of network event capture, both spatially as well as longitudinally over time. That data then needs suitable analysis to understand its temporal and spatial structure, and look for common failure modes.

This is an upgrade somewhat like going from poking a patient with stomach ache to instead doing a high-resolution functional MRI scan of their intestines. We can see the structure and motion, and make accurate diagnoses based on the result. Not only that, we can begin to use the advantage of the “superpower” of total experience visibility to rebuild business processes in ways that competitors who are “blind” cannot match.

The final stage is to fundamentally rethink the mechanisms of the network itself for ‘lean’ operation. New quality attenuation-aware mechanisms allow us to exploit all of the “resource trades” inside the network. These can deliver assured performance outcomes at the same time as fully saturating the network. In other words, it delivers the maximum possible value at the minimum possible cost. This is seen as “impossible” in the incumbent paradigm, but demonstrably exists in the world.

Being able to fully visualise the packet flow (in terms of both the user and the network, at once) is half of the ‘lean’ requirement; the new mechanisms that restrict “work in progress” complete that classic ‘lean’ transformation. I believe that those who master these skills will become the dominant players in the communications industry as it merges with the cloud and IT industry.

Are you ready to take the first step towards ‘lean quality’?