No more automated suicide notes, please

The art of computer programming needs to put human concerns front & centre. That means software engineering needs to be reformed around new questions.

Extract from a (fuzzy) copy of a notice of impending eviction from a home.

As some of you will know, I have been supporting a fellow citizen who, for the past two years, has been struggling against poverty, illness (physical and mental), homelessness, and a disjointed and often broken set of state support services. For the first year my charity was private; then I ran out of resources, so I was forced to make it public to raise funds.

I have learnt a lot in the process, and I would like to share an insight that came to me today. The above image is from a formal notice of her impending eviction from her home. The details of the issuer and receiver are blacked out, since neither is material, or indeed relevant. The problem I wish to highlight is a systemic one in our technological society.

This person has been the victim of many assaults by fate during her life, some of a serious criminal nature. She is a vulnerable member of society, struggling to cope in difficult circumstances. Whilst she has inevitably contributed to her predicament, it is fundamentally the result of bad luck, not poor choices.

The discipline of Information Technology has brought us many benefits, removing drudgery from administrative tasks previously performed laboriously by hand. As a student I worked as a VDU data entry clerk, and it was awful tedium. That machines have replaced that job is to be welcomed.

However, there is a dark side to this. When you take the human out of the process, you can also remove the humanity. I cannot tell you whether the above eviction notice was truly automatically generated, but let us assume it was. What I can tell you is that it made someone who was already sick and poor genuinely suicidal.

Some of the messages I have received have been quite harrowing.

The underlying problem is that computer science is oriented entirely around the concept of programming and control over the machine. The core theory of computation and computability captures only what it means to process symbols.

The theoretical and practical deficit is that there is no real contextual understanding of where the symbols come from in the world, or where they go to. This state of affairs is reflected throughout the hardware and software stack, as well as in the tools we use to design and develop them. The human is not included in the model, except as a sideline of “UX” bolted on.

“Success” today in delivering an IT system to automate a business process is seen entirely in logical terms. Did you follow the flow of states and decisions to comply with the business policies and procedures? There are high rewards for delivering on the business outcome, and removing operational cost.

What we lack is a wider perspective on the cost to society. The symbols have a real impact in the real world, and can cause real harm to real people. Rather than the human being a peripheral to the computer, providing and consuming symbols, the human needs to be put back in central focus.

In contrast to the Information Technology paradigm, consider a complementary Human Technology one. This would require an “augmented conscience” to counterbalance the effect of all the “artificial intelligence” run amok. We need a yin and yang in harmony; at present the system is out of balance.

This in turn demands that we ask some additional questions when forming our software specifications. Specifically: what is the ethical outcome of our action? What is the feeling state that we intend to create? And if we do not know, should we fully automate the process at all, stripping all the humanity out?
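To make this concrete, here is a minimal sketch in Python of how such a question might be encoded into a dispatch pipeline. Every name in it is my own invention, not any real system’s API; the point is only that a specification can demand a human in the loop whenever the ethical outcome is unknown.

    from dataclasses import dataclass
    from enum import Enum

    class Impact(Enum):
        ROUTINE = 1        # e.g. a meter reading reminder
        SENSITIVE = 2      # e.g. a missed payment chaser
        LIFE_CHANGING = 3  # e.g. an eviction notice

    @dataclass
    class Notice:
        recipient: str
        body: str
        impact: Impact

    def dispatch(notice: Notice) -> str:
        """Refuse to fully automate where the human impact is severe."""
        if notice.impact is Impact.LIFE_CHANGING:
            # The feeling state we would create is unknown and possibly
            # catastrophic: route to a person, not a printer.
            return "queued for human review"
        return "sent automatically"

Under such a specification, a notice classified as LIFE_CHANGING would never leave the building without a human having paused over it.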

If we seek a different outcome, then we must change the incentives, so as to encourage different behaviours. As software infiltrates every aspect of our lives, it embeds choices that impact our wellbeing. My suggestion is that it should no longer be acceptable to hide behind the automaton.

If software interacts with a human (and some would argue “any sentient being or aspect of the biosphere”) then it should not be possible to absolve oneself of responsibility for causing harm. That means there must have been a reasonable attempt to evaluate the risk of harm, and mitigate its impact.
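As an illustration, and under the same caveat that every name here is hypothetical, one could imagine that “reasonable attempt” being recorded as a first-class object, so that no life-impacting action can proceed without a named human having assessed the risk and its mitigation:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass(frozen=True)
    class HarmAssessment:
        assessor: str      # a named, accountable human being
        assessed_on: date
        risk_of_harm: str  # e.g. "recipient may be vulnerable"
        mitigation: str    # e.g. "welfare referral attached to notice"

    def authorise(action: str, assessment: Optional[HarmAssessment]) -> None:
        """Block any action that lacks a signed harm assessment."""
        if assessment is None:
            raise PermissionError(
                f"{action!r} blocked: no reasonable attempt to evaluate harm"
            )
        print(f"{action!r} authorised by {assessment.assessor} "
              f"on {assessment.assessed_on}")

The machine still does the drudgery; the responsibility stays attached to a person.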

Without true skin in the game for those deploying these systems, nothing will change. Software development is no longer a novelty, and needs an engineering mindset. Just as we do not tolerate amateur efforts at constructing buildings and bridges, the “hacker ethos” is inappropriate for software systems that automate life-impacting processes.

What is an isolated incident of the “automated suicide note” today is a potential harbinger of a wider societal cataclysm tomorrow. As sensors, machine learning and control systems become pervasive, the potential for “automated accidents” grows. Negligent design needs to have consequences for designers.

After all, even when the computer says no, the human sometimes has to cough up a yes.


Epilogue: I have been in contact with her local council, and action is being taken to remedy the situation. I haven’t heard from her for a few days, which is an unusual gap, but I do know that the local support services are in contact with her.

