Is computer science a bad idea?

We send our kids off to do degree courses in “computer science”, thinking that we’re doing them a favour. Maybe the opposite is true? Here’s why…

I don’t have a degree in computer science. Nor mathematics & computer science. Not even when capitalised.

Now, I did go to university. And I did hang out with computery folk. Some with beards, others with ponytails, and a few with both. I also spent a very large number of hours at university dedicated to matters not exactly on the curriculum. And this did indeed have an effect on the award of a degree.

I hereby publicly confess to the many dawns I saw in the college computer room, having been reading Usenet all night on the pre-Web Internet. I am condemned by my record of perusing The Economist in preference to my mathematics textbooks. And undoubtedly, I will spend considerable time in purgatory for having been the first person ever to desktop publish the hillwalking club term card, rather than revising for my final exams.

But those are not the reasons I don’t have a computer science degree. My detachment from academic achievement did not result in me failing my exams or being “sent down”. Indeed, despite my manifest lack of effort, I still got a 2:2 pass.

The reason I am not academically qualified as a “computer scientist” is that my degree is in Mathematics and Computation. There is a difference. And it really matters.

[Image: Oxford degree ceremony]

I did try to find my degree certificate to prove my case. I even rummaged around in the garage, because if you want an Ivy League type of degree, the first thing you should do is check in your old junk in case you already have one. This was the best I could find, though.

Let me explain why the error on my name badge at a recent alumni event is not a trivial matter.

Computers are physical artefacts, and they are engineered objects. They are the result of careful design processes to supply power, assemble the components, manage heat flow, and meet constraints of cost, size and battery life. Computing is what you do when you plug a computer in and press the ‘on’ button.

“Computer science” is a contradiction in terms. It’s not oxymoronic, just moronic. At best it might be the bit of physics dedicated to how we make transistors work. Nobody uses it that way, so it’s a very sloppy term. Sloppy thinking is our #1 enemy in science, and first steps are fateful, so it’s an awful name for the discipline. Calling it “computer science” is like calling thermodynamics “toaster science”.

Science is about modelling the world in ways that both have a strong congruence to reality, and also offer us predictive value. Building computers is about using those models as a designer, not creating them in the first place. It isn’t science. Engineering uses science, but is distinct from it.

Furthermore, engineering is different from craft, since engineers are professionally responsible for fitness-for-purpose and the ethical consequences of unexpected failure. What is being taught as “computer science” is typically a bit of engineering and a lot of (algorithmic) craft.

Programming itself has normally been taught as a craft activity. I grew up on 6502 and ARM assembler with a dash of BBC BASIC, so I know the craft space very well. I made a good living at it in the 1990s in banking and IT with Unix, C, Informix and Oracle skills. It was an easy way to get on in the world despite having acquired no immediately marketable skills at university.

After all, we spent all our time learning how to think. Knowing anything useful was incidental.

[Image: Oxford Uni 1]

Yes, attendance is a privilege that admits you to an elite club — and it comes at a heavy price. The buildings of Pembroke College are 100% craft construction, and better for it. What I did at Oxford a quarter of a century ago was very different to that boyhood craft. Computation (capital ‘C’, as it’s the real deal) is the abstract and mathematical reasoning about what those physical computers might be able to do in principle when powered up. It is, indeed, a real and proper science.

Computation is distinct from computers or computing. So much so that I used to joke my degree is in “Maths and Even More Maths”. We went all the way down into the underground basement of mathematics, examined the strange philosophical bedrock formations, and came back up to surface again for a more normal life.

Computation is also a really strange science. Natural scientists go off to examine the one and only universe their equipment seems able to sense. I am so pleased for them that God gave them something to keep them occupied, otherwise they could be a real nuisance in society.

In contrast, unnatural scientists, like myself, are God. We have the job of deciding what kind of (distributed computational) universe to create. It’s of a different order. We’ve created three new universes before breakfast, and a countably infinite number by teatime. If we seem a tad arrogant as a result, that’s because we’ve earned it!

[Image: Oxford]

I also studied software engineering at Oxford. No, not the stuff that passes for “engineering” on many computer “science” courses. My tutor, Carroll Morgan, literally “wrote the book” on the matter. We learned how to refine formal specifications into code, with a proof that the latter is an implementation of the former. After all, engineering is about containing failure and removing the human error in craft.
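
For a flavour of what that looks like, here is a toy sketch in Haskell, entirely my own and nothing like Morgan’s actual notation: the specification states what must be true of the result, the implementation states how to compute it, and the refinement obligation is to show that the second satisfies the first. In the calculus proper that obligation is discharged by proof, step by step; below it is merely written down as a checkable property.

```haskell
import Data.List (permutations)

-- Specification: the output is ordered and is a permutation of the input.
-- This says *what* must hold, not *how* to achieve it.
spec :: [Int] -> [Int] -> Bool
spec input output = isOrdered output && output `elem` permutations input
  where isOrdered xs = and (zipWith (<=) xs (drop 1 xs))

-- Candidate implementation: insertion sort, one of many possible refinements.
implementation :: [Int] -> [Int]
implementation = foldr insert []
  where
    insert x [] = [x]
    insert x (y:ys)
      | x <= y    = x : y : ys
      | otherwise = y : insert x ys

-- The refinement obligation, stated as a checkable property: every output
-- of the implementation satisfies the specification for its input.
satisfiesSpec :: [Int] -> Bool
satisfiesSpec xs = spec xs (implementation xs)

main :: IO ()
main = print (all satisfiesSpec [[], [3, 1, 2], [5, 5, 1], [9, 8, 7, 6]])
```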

Functional languages were the only permitted ingredient on the computer programming menu. I learnt Orwell (from Richard Bird, who also wrote the book), a predecessor of Haskell, which companies like Facebook use today to do the “hard stuff”. It is the computer equivalent of learning etiquette and manners. Everything else seems rather rough and uncultured by comparison.

It was around this time that I decided I had better teach myself C darn quick so I could actually get paid employment once I had left. As punishment for this deviation from the true path, I spent two years writing AI systems in Common Lisp as my first job.

[Image: Oxford SU card]

The inability to distinguish science, engineering and craft (however skilled and numerate) is endemic in the networking and cloud industry. I could fill a whole essay taking potshots at institutions, initiatives and technologies that manifest the resulting failure. No, make that a whole book! The absence of fundamental understanding and scientific thinking has widespread and severe consequences for the networks and cloud data centres we build.

What happens at places like Oxford, Warwick, Edinburgh, York, Bristol, and the bogland off somewhere near Norfolk, is that the brightest students of today are put in a room a few times a week with the brightest students of the past. These people are at the top of their field. During each tutorial they intellectually beat the crap out of you, week after week after week. Until eventually you learn self-defence to staunch the bleeding.

My younger brother, Robin, who followed in my footsteps — “Martin, you’re a tough act to follow” — memorably says: “Oxford is a place of great learning, but not necessarily great teaching.” You have to become a self-sufficient learner to survive. The professor is going to teach you about his pet subject of Bessel functions in the week of your arrival, regardless of whether they are on the curriculum, or whether you understood a word he said.

This process toughens you up mentally and emotionally. You slowly gain a “take no prisoners, cross all terrains” attitude to matters intellectual. (The alternative of dropping out was ego death, given what it took to get in, so was unthinkable.) This mindset stays with you for life. Eventually you even get the schadenfreude of beating the crap out of the next generation of learners to make them think properly, too.

[As an aside, here’s how I reckon I got into Oxford, apart from a shining academic record and cute looks. I went to the Pembroke College open day, and Dr Morgan unsurprisingly recommended his book. I read as much of it as I could understand on the school bus. Later, during the short walk from the college lodge to the interview room, I noted to him that it might be rather hard to come up with a correct specification for a complex piece of software like a word processor, let alone refine it to code. I was probably ‘in’ before I even answered a single question.]

[Image: Oxford uni 2]

I couldn’t be doing what I am doing today if I hadn’t gone through this mind-munging machine. I wouldn’t know the difference between faux-feeble “computer science” and real-deal “computation”. I would be too afraid to challenge “experts”, and would rely on authority and positional power as a source of proxy validity for their ideas, not my own reasoning effort.

So with all that in mind, this is what I and my many colleagues are really up to: putting in place the missing bits of the scientific and mathematical foundations of distributed computing.

People like Alan Turing in the 1930s put together a theory of computability. Claude Shannon and his mates cracked information theory (or “transmissibility”) in the 1940s. And Dr Neil Davies (who is possibly blushing but deserves the credit nonetheless) solved the last bit of the puzzle, “information translocatability”.

This “translocatability” is the core problem of networking. When you assemble the complete systems of computation and transmission, will the distributed system perform to the desired level? How do you size the “slack” or the “hazard” of over/under-performance? The answer is underpinned by the ∆Q calculus, developed over the last 15-20 years. (More on that at qualityattenuation.science).
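
For the curious, a deliberately minimal sketch of the core idea follows; it is my own illustration in Haskell, not the actual ∆Q calculus or toolset. Quality attenuation is treated as an “improper” probability distribution over delay, where any probability mass that is missing represents loss; composing two stages in series convolves their delay distributions, and the slack is how far the composed result sits above (or below) a performance requirement.

```haskell
import qualified Data.Map.Strict as Map

type Millis = Double

-- An "improper" delay distribution: probability mass per delay value.
-- If the masses sum to less than 1, the remainder is the loss probability.
newtype DeltaQ = DeltaQ (Map.Map Millis Double) deriving Show

-- Sequential composition of two stages: delays add, probabilities multiply.
compose :: DeltaQ -> DeltaQ -> DeltaQ
compose (DeltaQ a) (DeltaQ b) = DeltaQ $ Map.fromListWith (+)
  [ (da + db, pa * pb) | (da, pa) <- Map.toList a, (db, pb) <- Map.toList b ]

-- Probability that an outcome completes within a deadline.
withinDeadline :: Millis -> DeltaQ -> Double
withinDeadline d (DeltaQ m) = sum [ p | (t, p) <- Map.toList m, t <= d ]

-- Slack against a requirement such as "95% of outcomes within 100 ms":
-- positive is headroom, negative is the hazard of under-performance.
slack :: Millis -> Double -> DeltaQ -> Double
slack deadline target dq = withinDeadline deadline dq - target

main :: IO ()
main = do
  let hop1 = DeltaQ (Map.fromList [(20, 0.70), (50, 0.29)])  -- 1% loss
      hop2 = DeltaQ (Map.fromList [(30, 0.90), (80, 0.09)])  -- 1% loss
      path = compose hop1 hop2
  print (withinDeadline 100 path)   -- roughly 0.954
  print (slack 100 0.95 path)       -- a little headroom left
```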

From the moment I joined the weird and wacky world of telecoms in 2001, I was looking for the basic principles to help me cut through the fog of incomprehension and acronyms. I knew what science was, and was looking for real science, not engineering or craft.

Why so? Mathematical abstraction is a wonderful defence against ever having to engage in outdoors labour or agricultural activities, and it has worked wonders for me. May my farming ancestors rest in peace, as I type at my keyboard.

[Image: Oxford uni 4]

Once given a demo of his novel technology, I was quickly able (when Chief Analyst at Telco 2.0) to spot the exceptional nature of Dr Davies’s work (done with the collaboration of many extremely bright colleagues, I hasten to add). It’s been a long hard slog over many years to get that breakthrough science (and the resulting measurement and scheduling technologies) into the state where they are ready to “go mainstream”. Paradigm change isn’t quick.

I’d also like to add John Day to that hall of fame, for his profound observation that in distributed computing there are only two protocol layers, which simply repeat (creating as many layers as you need); that scopes, not layers, are how we divide up the resource allocation scaling problem; and that the fundamental nature of distributed computing is recursive. Pretty much everything is backwards in TCP/IP [PDF], which is why the Internet is a bit rubbish compared to everything else in modern technological life.
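
To make the shape of that observation a little more concrete, here is a toy rendering of my own in Haskell; RINA itself is specified quite differently and in far more detail. Each scope offers the same pair of mechanisms to the scope above it, moving data and managing the resources used to move it, and is built over smaller scopes of exactly the same shape, so “layers” fall out of recursion rather than from a fixed stack.

```haskell
-- A toy rendering of "the layers just repeat": one recursive structure,
-- not a fixed stack of different protocols.
data Scope = Scope
  { scopeName :: String
  , builtOver :: [Scope]  -- the same structure again, over a smaller scope
  } deriving Show

-- Every scope offers the same two repeating mechanisms to the scope above:
-- transferring data, and managing the resources used to transfer it.
mechanisms :: Scope -> [(String, String)]
mechanisms s =
  (scopeName s ++ ": data transfer", scopeName s ++ ": resource management")
    : concatMap mechanisms (builtOver s)

-- An example "stack" that is really just nesting: an application scope,
-- built over a site scope, built over two link scopes.
example :: Scope
example =
  Scope "application-to-application"
    [ Scope "site-to-site"
        [ Scope "link A" []
        , Scope "link B" []
        ]
    ]

main :: IO ()
main = mapM_ print (mechanisms example)
```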

There was a moment, maybe five years ago, when I had the “Feynman experience”. He talked about the feeling you get as the first human to know something deep about the world. Wobbling and teetering on the shoulders of the computational giants, I put together the ∆Q maths and RINA architecture in my head. I remember it being a spring day, as I walked along the bank of the Thames in central London.

Bam! I saw the bedrock of distributed computing: recursive with two degrees of freedom. It was an astounding and unforgettable vista. Turing would have loved the view. Even if the metaphors are mixed.

I quickly called Neil, who confirmed the sanity of my thought. And the Feynman moment was over.

[Image: Martin in Oxford. Topologically OK, geometrically less than ideal.]

You have to be looking for new science in order to find it, given it’s so well buried under the crud of misconceived technologies and malformed craft theories. Then, for years afterwards, you have to deal with the utter incomprehension of everyone else. What do you mean we’re trying to build networking skyscrapers without any foundations? Surely someone at Cisco has all the theory sorted out!
[Nope. He left in frustration.]

In retrospect, learning Computation (and not computer science) was indeed the right way to go, although it took me 20 years to appreciate it. I might have turned out to be a bit of an intellectual thug and industry hooligan, but I blame my extreme upbringing for it all. We were barely more than children at the time.

Rather than doing a refinement calculus for the functional requirements of software, I preach the need for a refinement calculus for the non-functional performance of networks. It turns out that if you don’t have the industry skills, you have to create a new industry for your skills.

I am merely an athletic intellectual pygmy who hopped up the right tower of giant geniuses to get a sneak peek at the impressive view into a new and unexplored landscape. That I went to the right gymnasium is something for which I only feel absolute gratitude. It was not an easy experience, either. But that’s another story.

Who knows, maybe someday, there might even be a degree at the University of Oxford in Mathematics and Distributed Computation. I doubt my daughters will be interested in it, though. Daddy is far too boring and embarrassing for them to be associated with his work.

I can cope with that, on one condition. The young dears must keep well away from limp and wimpy “computer science”. Else I might be forced to engage them in intellectual thrashings.

And why would anyone want that?

[Image: Together in Oxford]