Dawn of the Hypercomputer

“The Internet” is a phrase we use to reason about a certain kind of universal interconnectedness. There’s something bigger in the background behind it; we just haven’t named it yet. In the absence of inspiration for something better, what may lurk in the mid-distance is the Dawn of the Hypercomputer.

What is the Internet?

The basis of the Internet is a distributed division of labour. Rather than having a central authority manage a unified network resource, it allows us to take autonomous networks and join them. When we internetwork like this, one network takes a packet and hands it off to another network, politely asking it to pass it on: “Please send this to there for me, much obliged!”

For that to work, there has to be a universal understanding of what ‘there’ means. The clever part of the design of the Internet was to manage the address space in a way that allowed distributed routing to work. You don’t need to keep referring to any central authority as to where everything is. A hierarchy of scopes of control allows the problem to be shared out: local networks need only understand local routing, and backbones need only know how to find other backbones and tier-1 attachments. (The reality doesn’t really match this idealised explanation, but that’s a bedtime story for another day.)
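
To make this concrete, here is a minimal sketch of hop-by-hop forwarding with longest-prefix matching. The forwarding table, prefixes and next-hop labels are my own invention for illustration; the point is simply that a node can forward traffic with only local knowledge plus a default route pointing upstream:

    # Illustrative sketch only: hop-by-hop forwarding with longest-prefix match.
    # Each node knows a handful of local routes plus a default route upstream;
    # no node needs a global map of the whole Internet.
    import ipaddress

    FORWARDING_TABLE = {
        ipaddress.ip_network("192.0.2.0/24"):    "deliver locally",
        ipaddress.ip_network("198.51.100.0/24"): "next hop: peer network",
        ipaddress.ip_network("0.0.0.0/0"):       "next hop: upstream provider",
    }

    def next_hop(destination: str) -> str:
        """Pick the most specific matching prefix (longest-prefix match)."""
        addr = ipaddress.ip_address(destination)
        matches = [net for net in FORWARDING_TABLE if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return FORWARDING_TABLE[best]

    print(next_hop("192.0.2.17"))    # deliver locally
    print(next_hop("203.0.113.5"))   # next hop: upstream provider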

There also has to be a mutual understanding that there is no guarantee on the packet getting to ‘there’ as it traverses a chain of networks, or on its arrival being bound by any time limit. That downstream resources will also fulfil the same deal of exchange is an aspiration only loosely assured by a set of peering or transit contracts. The ability to route globally is an emergent property of technical, economic, political and social forces and voluntary collaboration.

So to summarise, the basic properties of the Internet are:

      • You have a relationship with a local node
      • You make a resource request of that local node
      • The resource requested is transmission to a distant node
      • The outcome is typically good, despite you not having a relationship with that distant node
      • There are no guarantees, but it generally works, albeit with no bound on the degradation in transmission
      • This is all made to happen with a mixture of money, mutual peering, generosity and social obligation

The network is a computer

In my work with Dr Neil Davies of Predictable Network Solutions, I quickly learned one thing: networks are just large, distributed supercomputers. They store, process and transmit information, just on a different scale to ‘ordinary’ computers. It’s a profound mistake to think of them as ‘pipes’.

If networks are computers, it is reasonable to ask: what if we extend the above transmit model to include storage and computation? Why don’t we have an “Interstore” and an “Interprocess” to complement the “Internet”?

To a limited degree, we do. Cloud services offer computation and storage, as do content delivery networks. ISPs deploy home hubs with attached local storage. Projects like SETI@home aggregate computation resources. However, we have not solved the general case of being able to ask a local node ‘store this’ or ‘compute this’ and have the rest orchestrated collaboratively. This is a shame, because it would be really useful. Indeed, we may find an “Internet of Things” proves to be a tragically narrow vision of a connected world without complementary computation and storage capabilities.
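
As a thought experiment, the sketch below imagines what that general case might look like: a node accepts ‘transmit’, ‘store’ and ‘compute’ requests and, where it cannot satisfy one itself, politely hands it to a neighbour, just as packets are handed on today. Every name and interface here is hypothetical; it does not describe any existing system:

    # Hypothetical sketch: a local node that handles store/compute requests
    # itself when it can, and otherwise defers to an upstream neighbour.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Request:
        kind: str                            # "transmit", "store" or "compute"
        payload: bytes
        destination: Optional[str] = None    # only meaningful for "transmit"

    class Node:
        def __init__(self, name: str, upstream: Optional["Node"] = None):
            self.name = name
            self.upstream = upstream
            self.storage: list = []

        def handle(self, req: Request) -> str:
            # Serve the request locally if we have the capacity...
            if req.kind == "store" and len(self.storage) < 10:
                self.storage.append(req.payload)
                return f"{self.name}: stored locally"
            if req.kind == "compute":
                return f"{self.name}: computed over {len(req.payload)} bytes"
            # ...otherwise hand it on, as networks do with packets today.
            if self.upstream is not None:
                return self.upstream.handle(req)
            return f"{self.name}: cannot satisfy this request"

    home_hub = Node("home-hub", upstream=Node("isp-edge"))
    print(home_hub.handle(Request("store", b"family-video.mp4")))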

The nirvana of distributed computing

Ultimately, we would like all resources to be fungible – a kind of ‘liquid computation’ where software agents can shuffle around and dynamically combine what’s available to solve our problems. The moment we take a video of the kids with our smartphone, we’d like it to start being replicated locally without having to wait for it to be uploaded. We’d rather our private family photos were stored by us and our friends, rather than mediated and mined by Facebook.

As with the Internet, sharing storage and computation resources could be motivated by a mixture of payment, reciprocity, voluntary sharing or social obligation.

These ideas aren’t new. Researchers have been working for years on swarming, peering and meshing for distributed computing. We have software agents that slither among and across these resources. New trust models, such as cryptocurrency, enable new resource allocation patterns. What is new is thinking about it at a global scale, and as a single emergent entity.

Queues, queues and more queues

There are a lot of hard problems that would need to be solved to make this work. A core one is the fundamental difference between a network and a supercomputer. In networks, inter-processor communication happens over distances where the speed of light (plus any intermediate queues) constrains it in a way it does not constrain a densely-packed supercomputer. That is why we have protocols like TCP: to manage the distributed state of communicating processes.
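
Some rough, back-of-the-envelope figures (my own illustration, using the common approximation that signals travel through fibre at about two-thirds of the speed of light) show why distance changes the nature of the problem:

    # Back-of-the-envelope propagation delays, ignoring queueing entirely.
    # Signal speed in fibre is roughly 200,000 km/s (about two-thirds of c).
    SIGNAL_SPEED_KM_PER_S = 200_000

    def one_way_delay_ms(distance_km: float) -> float:
        return distance_km / SIGNAL_SPEED_KM_PER_S * 1000

    print(f"Across a rack   (~10 m):     {one_way_delay_ms(0.01):.5f} ms")
    print(f"Across a city   (~50 km):    {one_way_delay_ms(50):.2f} ms")
    print(f"Across an ocean (~6,000 km): {one_way_delay_ms(6000):.0f} ms")

Queues along the path only add to these floors; nothing can subtract from them.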

A fixed set of physical resources needs to be allocated to a large and dynamic load of transmission, I/O and CPU requests. What really matters is the maths we use to manage and coordinate these requests to get efficient and effective outcomes. That is the fundamental breakthrough we can look forward to, the one that lets us begin to transcend the limited framing of the Internet and the problems it can solve.

All those transmit, store and compute requests exist as part of a whole, rather than in isolation. Networking has much to learn from supercomputing about how resources are allocated to waiting work. In particular, the applied mathematics used to manage contended inter-processor communication can be used to solve a wider range of distributed computing problems. This maths will be about managing the probabilities of good outcomes as work contends for transmit, store and compute resources.
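
As a stand-in for that kind of mathematics (it is only the textbook M/M/1 queueing formula, chosen by me for illustration, not the specific models the author has in mind), the snippet below shows the basic trade-off: the mean time a job spends in a contended resource grows without bound as utilisation approaches 100%:

    # Textbook M/M/1 queue: mean time in system W = 1 / (mu - lambda),
    # which blows up as utilisation rho = lambda / mu approaches 1.
    def mm1_mean_time_in_system(arrival_rate: float, service_rate: float) -> float:
        if arrival_rate >= service_rate:
            raise ValueError("queue is unstable: utilisation >= 1")
        return 1.0 / (service_rate - arrival_rate)

    service_rate = 1000.0   # jobs per second the resource can serve
    for utilisation in (0.5, 0.9, 0.99):
        w = mm1_mean_time_in_system(utilisation * service_rate, service_rate)
        print(f"utilisation {utilisation:.0%}: mean delay {w * 1000:.1f} ms")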

The innovation needed is about how we manage trade-offs to achieve our aspirations for flow of work, utilisation of resources, cost and fairness. That requires us to provide trading spaces within and between these parameters, across transmit, store and compute resources. A whole slew of new protocols and technologies await invention and application, to take us beyond supercomputing to hypercomputing. (Given the large number of queues involved, there is a reasonable hope that the maths will be of British origin – we’re good at queues.)

Imagine there’s no Internet

The challenge is to stop thinking about the Internet, or the cloud – just as you don’t think about the motherboard or CPU of your computer in isolation. Instead, there is an emerging unified computing fabric of which every device is a cooperating node. What comes after the Internet isn’t a better kind of network, it’s the Hypercomputer of which every connected device will be an autonomous and collaborating part.
