Covid seen through a safety hazard framework

There is a language of risk that can be applied over many domains

Before being smeared by the corporate press and censored everywhere for pointing out that we have a problem with criminals in high places, I was part of a team doing pioneering work in telecoms. While I am not an expert in failure modes and effects analysis (FMEA), I did pick up some useful terminology and ideas that are transportable to other problem domains. I don’t take any personal credit for this understanding, but I do accept responsibility for the mistakes. It is just a helpful model and language to describe “how things can go right enough by not going too wrong too often”.

The language of safety cases

The basic concept is that there is little benefit in making mediocre experiences slightly better. When you fly with an airline, you don’t really care how smooth the landings are as long as they are safe. While a heavy landing might be uncomfortable, and generate extra wear on the undercarriage, it is not a significant part of the overall travel experience. In contrast, wrecking the aircraft by landing so hard it breaks the airframe is catastrophic — which is why the goal in safety management is always to “make bad experiences sufficiently rare”. You can’t eliminate them, but you can minimise them. It’s about the “tail” of the probability distribution of outcomes.
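The “tail” idea can be sketched with a toy Monte Carlo simulation in Python. All numbers here are invented for illustration, not real aviation data: ordinary variation in landing hardness barely matters, while the safety case hinges on the rare extreme events.

```python
import random

random.seed(42)

# Toy model: landing "hardness" drawn from a log-normal distribution.
# Units and the structural threshold are invented for illustration only.
STRUCTURAL_LIMIT = 9.0  # hardness beyond which the airframe is wrecked

landings = [random.lognormvariate(0, 0.5) for _ in range(100_000)]

# The mean experience is unremarkable; the safety case is about the tail.
mean_hardness = sum(landings) / len(landings)
tail_events = sum(1 for g in landings if g > STRUCTURAL_LIMIT)

print(f"mean landing hardness: {mean_hardness:.2f}")
print(f"catastrophic tail events: {tail_events} in {len(landings):,} landings")
```

The point of the sketch is that improving the mean (“smoother landings”) does almost nothing for safety; shrinking the count of tail events is what matters.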

In particular, you want to make “ruin risks” extremely rare. In the example of a telecoms network, it could be when you have a vast national power outage, and every part of the network fails at once. Then, when the power comes back and all the elements reboot at once, you can suffer a “signalling storm” and dependency loops that mean the network never recovers normal operation. A “black swan” event is an outlier trigger that doesn’t fit into the expected statistical distribution of demand — and these have to be planned for too.

There is a concept of a “hazard”, which is a characterised undesirable outcome. In packet networking, this could be as simple as the “spinning wheel” while watching a video as it buffers up. In an immersive virtual reality environment it could be a glitch that causes the viewer to lose their sense of presence in the synthetic world and realise they are wearing an awkward plastic headset. But the idea of “hazards” isn’t limited to such domains. Consider for a moment a cruise liner company that is cutting costs, and rather than use tugs to dock a ship it decides to just do it under the liner’s own power.

In this case the “hazard” is crashing into the jetty in poor weather. In our example, it is managed (and minimised) by the use of tugs, which give extra control over manoeuvring. The hazard is “latent” under most circumstances when the company policy changes to not using tugs, since the ship can equally easily dock without help using an expert pilot. The danger of collision arises when the wind is from a particular direction and the tide is running in a specific way. Under those conditions the hazard is said to be “armed”. Lastly, if you have a wrecked boat and a crushed pier requiring millions of dollars of repair work and a large insurance claim, it has “matured”.
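The latent/armed/mature lifecycle can be written down as a tiny state model. This is just my own illustrative sketch of the docking example, not any standard notation:

```python
from enum import Enum

class HazardState(Enum):
    LATENT = "latent"  # conditions for harm exist, but lie dormant
    ARMED = "armed"    # triggering conditions present; harm is now possible
    MATURE = "mature"  # the harm has actually occurred

def docking_hazard_state(adverse_wind: bool, adverse_tide: bool,
                         collided: bool) -> HazardState:
    """Classify the jetty-collision hazard for the cruise liner example."""
    if collided:
        return HazardState.MATURE    # wrecked boat, crushed pier
    if adverse_wind and adverse_tide:
        return HazardState.ARMED     # the dangerous conditions coincide
    return HazardState.LATENT        # dockings proceed safely without tugs

print(docking_hazard_state(False, False, False).value)  # latent
print(docking_hazard_state(True, True, False).value)    # armed
```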

There can be many kinds of hazards, which may be coupled together, and suffer correlated failure. A particularly good example of this is Qantas flight 32, which had an engine explode shortly after takeoff, with the shrapnel severing many systems in a way never anticipated during the aircraft design. If you do some hunting, there are superb interviews with the captain as well as the purser. When Apple pushes out a new iOS upgrade, it will generate a correlated load on a telecoms network. We normally take years to test drugs because the body is a complex “system of systems” with huge numbers of potential interactions and corner cases that have to be accounted for in order to make an overall safety case.

Any system of supply and demand will have a “predictable region of operation” (PRO), where a cap on demand guarantees a sufficiency of supply. A bridge will be designed to cope with a given static load (e.g. weight of cars and trucks) as well as a dynamic load (e.g. storms, earthquakes). An engineer is someone who takes responsibility for the PRO and its safety margin, and is morally and legally accountable for any error in its calculation. You must have “skin in the game” in order to be able to recommend that a project or product is fit for purpose and the risks are sufficiently well understood and managed.
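A PRO check is, at heart, a one-line inequality. A minimal sketch using the bridge example (the capacity and margin figures are invented):

```python
DESIGN_CAPACITY = 1000.0  # rated total load, in illustrative units
SAFETY_MARGIN = 0.25      # the engineer signs off 25% headroom

def within_pro(static_load: float, dynamic_load: float) -> bool:
    """True if total demand stays inside the predictable region of operation."""
    return static_load + dynamic_load <= DESIGN_CAPACITY * (1 - SAFETY_MARGIN)

print(within_pro(600.0, 100.0))  # True: 700 is inside the 750 envelope
print(within_pro(600.0, 300.0))  # False: 900 exceeds it
```

The engineer’s accountability attaches to the numbers in that inequality: get the capacity or the margin wrong, and the PRO is a fiction.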

The overall risk is the probability of any hazard maturing multiplied by the cost of its impact. In order to limit that cost we have the idea of mitigation, so that the effect of any hazard being armed or maturing is limited. So if you find your Skype call not working, you could make sure you have the person’s phone number in advance so you can just make a traditional phone call. In the case of the human body, we might have a limb with gangrene that threatens the patient with risk of death if we just use antibiotics, and the mitigation is amputation, with the cost of mitigation being loss of use of the limb. Tugs mitigate the hazard of a ship crashing into the dockside.
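The arithmetic above is simple enough to write out. A mitigation can attack either factor: the probability of the hazard maturing, or the impact when it does (all figures invented for illustration):

```python
def expected_loss(p_mature: float, impact_cost: float) -> float:
    """Risk = probability of the hazard maturing x cost of its impact."""
    return p_mature * impact_cost

# Skype call fails with no fallback: the whole meeting is lost.
baseline = expected_loss(p_mature=0.05, impact_cost=1000.0)

# Having the phone number to hand does not change the chance of the call
# failing, but it slashes the impact to a minor inconvenience.
mitigated = expected_loss(p_mature=0.05, impact_cost=50.0)

print(f"unmitigated risk: {baseline:.2f}")
print(f"mitigated risk:   {mitigated:.2f}")
```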

Finally, every safety system involves trade-offs of cost, benefit, and risk. This describes a trading space where we quantify these and optimise for some overall desired (moral or commercial or technical) outcome. Eliminating risk is impossible, so we have to make difficult choices over what kind of experience we want to deliver given what we can afford, and what our tolerance for bad outcomes might be given the mitigations on offer. Grown-up conversations acknowledge this necessity to make compromises in an uncertain world, and not deal in absolutes. Every choice you make is a trade-off of some kind.

Mapping the language to Covid

This gives us a way of breaking down the safety case (or lack of one) for these gene therapies marketed as vaccines. For the sake of didactic argument we will treat Covid as being a viral illness as “officially” described, and limit this to ordinary legitimate pharmaceutical regulation.

The case for deliberate (attempted) genocide and criminal action may be strong to overwhelming, but this article is about learning the safety language, not about a civilisation-shaking criminal endeavour. So we will put aside all the worrying data about graphene, nanotech, hydras, and bioweapons. Alternatively, Covid could possibly be a giant psyop to wake up humanity to the dangers of transhumanism — a subject for another day. Irregular and unrestricted warfare adds a whole new dimension to the problem space, and should only be considered once the baseline of civilian risk management is mastered.

So… let’s run through our safety case “ingredients list”: bad experiences, ruin risks and black swans, hazards (latent, armed, mature), coupling and correlation, predictable region of operation, risk and impact, mitigation, trading space.

Bad experiences

One “bad experience” we were dealing with was death or disability caused by Covid-19 or one of its variants. The other “bad experience” is iatrogenic harm from the vaccines. The public were endlessly told about the dangers of Covid, with deaths constantly hyped up, even if their classification was highly suspect. Meanwhile, the other bad experience was casually dismissed as an irrelevant concern — “safe and effective”. So we have reason to be concerned about the due consideration of the bad experiences in this safety system.

Ruin risks and black swans

The obvious ruin risk is a thalidomide-style pharmaceutical catastrophe, resulting in huge numbers of dead or injured, and on a timescale which does not reveal itself at the time the product is taken. This was ignored in the government advice — public demand was for a vaccine to end lockdowns (a state policy choice, not a medically driven one). No other intervention would be considered, no matter what the dangers.

Covid itself was already a “black swan” event, with only Spanish Flu as a comparable event in recent documented and cultural history. Add this to a novel form of treatment, mRNA gene therapies, and you already have a kind of risk multiplier that should have alarm bells ringing. Then do mass deployment across the population and around the globe at the same time. The absence of any track record makes this endeavour a spectacular gamble with safety.

The “purple platypus” of a global genetic genocide and enslavement via transhumanist technology is for the advanced class, as discussed earlier.


Hazards (latent, armed, mature)

Were the hazards of a blanket intervention into human immune systems properly characterised? Consider this paragraph from an article on Conservative Review (my emphasis):

Any fault in any of those factors can create auto-antibodies, Trojan horse antibodies (antibody dependent disease enhancement), or a misfiring of the immune system, which is some form of original antigenic sin or pathogenic priming that teaches the body to tolerate a specific strain of the virus or respond for a wrong strain. This is why vaccines take years to develop. And this is before we even discuss the fact that these shots are not even vaccines, but are gene therapies that code your body to produce a pathogenic spike that was the result of gain-of-function research and seems to potentially damage every organ system, particularly the cardiovascular system.

Here we can see several types of hazard — risks of immune system malfunction — being listed. Most people will have taken the shots with no understanding of these dangers, and that would also include the healthcare workers administering them. In order to have a safety case you have to know what can go wrong, and that was not done. The hazard space had not been fully explored or defined before mass adoption.

Coupling and correlation

In the fascinating article The trainwreck of all trainwrecks: Billions of people stuck with a broken immune response we have the following observation:

In other words: A homogeneous population-wide shift towards IgG4 for certain antibodies, can end up impacting our relationship to respiratory viruses other than SARS2 as well. You could expect for example, that vaccinated people may become better asymptomatic spreaders of other respiratory viruses, like RSV. We see evidence of cross-reactive antibodies between SARS2 and the human corona viruses. Do you want those to switch from IgG3 to IgG4? Probably not.

By deploying an artificial mutation to the immune system via genetic reprogramming you are making a correlated change to the herd, and that is coupled to the dynamics of how other diseases spread. The safety case for this kind of intervention is… “unavailable”. We would normally never allow this kind of untested software update to control systems in cars, let alone people.

Predictable region of operation

The immune system is wildly complex and not strictly of the same category as supply and demand systems. That said, we have a close parallel in dose-response. The data at How Bad Is My Batch shows wildly varying outcomes from different batches, suggesting that the dosages being offered may not be standardised. It has been suggested that this is a result of manufacturing, storage, and distribution problems for mRNA technology. Whatever the cause, we clearly have a failure to define the predictable region of operation of dosage (linked to the desired response) and manage the system to stay within it.
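One crude way to see a PRO failure in batch data is the spread of outcomes relative to their average, the coefficient of variation. The numbers below are invented for illustration, not actual How Bad Is My Batch figures:

```python
import statistics

# Hypothetical adverse-event counts per batch, invented for illustration.
standardised_product = [101, 98, 103, 99, 100, 102]  # tight clustering
unstandardised_product = [3, 540, 12, 1, 880, 47]    # wildly varying

def coefficient_of_variation(counts):
    """Spread relative to the mean; a standardised product stays small."""
    return statistics.stdev(counts) / statistics.mean(counts)

print(f"standardised CV:   {coefficient_of_variation(standardised_product):.2f}")
print(f"unstandardised CV: {coefficient_of_variation(unstandardised_product):.2f}")
```

A product whose batch outcomes vary this wildly is, by definition, being delivered outside any predictable region of operation.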

Skin in the game

The Twitter Files are showing how government agencies conspired with Big Tech to censor established and respected scientists raising ethics and safety concerns. Meanwhile, paid celebrities were used to promote the public uptake of the vaccines, reversing the ethical incentives. Indemnity was offered to those who participated in the rollout to the public, as well as those who produced the product. Medical licensing was used to cudgel anyone who questioned the narrative or deviated from the mass uptake of gene technology that was officially experimental.

In terms of “skin in the game” ethics, there could not have been more red flags.

Risk and impact

The uptake of these vaccines has been especially high in occupations that directly face the public, such as carers and health workers. Should anything go wrong, our ability to care for the sick will be gravely impacted. The secondary consequences and costs of caring for those harmed by the injections were never considered or calculated. The unemployed were not coerced into taking these injections, but the employed were — again, skewing the impact, and creating an extreme level of societal risk if the employed are injured en masse.

The genotoxicity across multiple generations will not be known until our children’s children are born, and we know their reproductive systems work correctly. The potential for destroying fertility alone should have been enough to prevent mass deployment for a disease that had no significant impact on overall mortality.


Mitigation

We mitigate the risks in two ways: either having less chance of a hazard maturing, or less impact when it does. We could have avoided using these gene technologies by relying on other therapies: ozone, chlorine dioxide, ivermectin, hydroxychloroquine, and monoclonal antibodies, for example. Or we could have simply done nothing and let Covid run its course and herd immunity build up. The dangers could have been qualified and quantified by a slower rollout. Key workers should have been last to be injected, not first. The absence of a risk mitigation plan screams very loudly.

Trading space

The need to inject yourself with something that was experimental and potentially poisonous was presented as a pre-determined answer. There was no need to “consult your physician” to see if it was right for you, given your medical history. There was moral pandering to “save grandma”, when these therapies were never promoted (formally) as preventing transmission of any virus. Vast amounts of money were spent on buying these pharmaceutical products, which could have been put to other uses.

Children and adults of childbearing age were injected based on little to no evidence of safety with respect to human reproduction. There was merely an extrapolation of “well, grandma is still alive, and she’s had two jabs already”. That is never how safety cases are made. The rational discussion on trade-offs, located in specific contextual risk profiles, is notable by its absence.


Even without looking into the possibility of fifth generation warfare, we already have a list of problems that shows the mass deployment of Covid mRNA vaccines was criminally irresponsible. This framework is uncontroversial and standard practice in safety-critical systems. I may not have the perfect model or presentation, and others are welcome to improve on my effort.

The purpose of this safety case framework is to take us away from an emotive debate located in politically and culturally driven beliefs. Safety is supposed to be “boring” because it is about learning from mistakes, and not about personal confidence or wild experimentation. Safety is only defined by final objective outcomes, not initial good intentions. Evidence and argument define the safety case, not opinions or ideology.

Terms like “anti-vaxxer” or “denier” are not science, and have no role in making a safety case. You are not “anti-surgery” for questioning whether it is appropriate to routinely lobotomise children, for instance. As the mega-disaster of these Covid vaccines keeps unfolding, such a rigorous safety framework can help us to understand what went wrong, and to make sure it never happens again.