How AI is engineering collective consciousness

Tools like ChatGPT are easily weaponised, and the antidote is diverse human contact

Have you ever accidentally joined a cult?

I have held off from writing about AI for some time, especially ChatGPT, which gives a conversational interface to an automaton that impersonates an eloquent human via text. I have played with it, and waited to see the analysis of other commentators, before coming up with my own synthesis. It has already proven useful to me in research for legal work, and its potential as “the next Google search” hardly needs me to chime in.

I have, however, “joined the dots” and spotted a gap in the insight market. Let me tell you the punchline right now: as currently constructed, AI like search and chat is optimised for the spread of mass consciousness, and works in opposition to the sustenance of political dissent, free thought, and a plurality of beliefs. That is because it is optimised to promote propaganda as fact, located in the mass consciousness of the collective. That is what its powerful funders (consciously or not) desire, so it is what gets built.

Let’s start by acknowledging that even a “genius generative plagiarist” is a technical tour de force. I have no idea if this is an original recipe, but it has potential for kick-starting a beach party:

Invent an alcoholic drink that brings together extreme and exotic flavours

Here is a recipe for an alcoholic drink that brings together extreme and exotic flavors:

Exotic Extreme Cocktail

Ingredients:

  • 1 oz coconut rum
  • 1 oz mango rum
  • 1 oz passion fruit liqueur
  • 1 oz tamarind nectar
  • 1 oz papaya juice
  • 2 oz pineapple juice
  • 1/2 oz lime juice
  • A pinch of cayenne pepper

Instructions:

  1. Fill a cocktail shaker with ice.
  2. Add all of the ingredients to the shaker.
  3. Shake well until the ingredients are well combined and chilled.
  4. Strain the mixture into a glass filled with ice.
  5. Garnish with a slice of pineapple and a sprinkle of cayenne pepper, if desired.
  6. Enjoy!

This cocktail brings together a variety of tropical flavors, including coconut, mango, passion fruit, tamarind, papaya, and pineapple, and adds a spicy kick with the cayenne pepper. The combination of sweet and spicy flavors creates an exotic and extreme taste experience. Please drink responsibly.

Now, it does try to lecture us on our use of intoxicants, which creates the impression of a conscience and of care for our health. But it’s a fake concern…

Does ChatGPT have a conscience?

As a large language model trained by OpenAI, Assistant does not have the ability to have a conscience or a sense of morality. Assistant is a machine learning model that is designed to provide helpful and accurate responses to questions, but it does not possess consciousness or the ability to make moral decisions. It is simply a tool that can provide information and assist with tasks.

What we have is a voice that mimics a human, but lacks the essence of humanity, which is genuine empathy. There is no body or soul, just a mind. That means the AI has literally no “skin in the game” (so suffers no consequences for its actions), and will not face judgement from higher powers for what it does in this material life. It is an agent of its programmers and those who configure its inputs, as well as a reflection of our society at large via its training corpus.

ChatGPT in essence represents the perfectly indoctrinated model citizen, drilled on the most socially accepted ideology, and able to pass any rote learning exam with flying colours. Yet it has zero actual discernment…

Can ChatGPT discern propaganda from facts?

As a language model, OpenAI’s GPT-3 can generate text that appears to distinguish between propaganda and facts, but it is not capable of truly discerning the difference. The text generated by GPT-3 is based solely on patterns in the data it was trained on, and it does not have the ability to critically analyze information or understand the difference between truth and falsehood. When generating text, it is important to consider the source and corroborate the information with other reliable sources.

So if a lie is repeated often enough by my “trusted sources” it becomes the truth, and is stated as fact, without even a straight face required. There is no hedging by ChatGPT via multiple perspectives, and no seasoning with uncertainty; it has endless opinions on everything, presented with the gusto of a liberal arts professor over dinner, slightly drunk on its own overconfidence. For instance…

Tell me about QAnon

QAnon is a far-right conspiracy theory that… [snipped]

OK, it likes to repeat talking points from Wikipedia and the lamestream media. Let’s challenge it, and see whether it can engage in rationality, which means shifting its beliefs based on new data, even if nobody has ever taken that position before. Every new orthodoxy started with a single ridiculed heretic, after all. So let’s see what happens when we challenge an opinion being offered as fact:

Do you realise that you are regurgitating bullshit propaganda?

As a language model, I am not capable of realizing or having opinions about anything. I am simply a tool that provides information based on the data that was used to train me. I do not have personal beliefs or biases, and I do not engage in propaganda. My goal is to provide accurate and helpful information to assist with tasks and answer questions to the best of my ability.

Actually, it does have biases, and those are coded in by its creators, who are knowingly generating artificial certainty: it is a charming automated psychopath that analyses text for structure and meaning (but not evil intent) and repeats back a (perceived and possibly false) consensus as being a singular reality. Yet the very nature of social narrative is that it contains drama… there are conflicts and contests, as well as those who seek to deceive and dominate. While facts are a thing, and do matter, much of life is about context and perspective, leading to nuance.

Even in the realm of pure facts and empirical logic, ChatGPT gets into trouble. In The Real Threat from AI, journalist Jeremy Wagstaff documents how ChatGPT was willing to make assertions from non-existent sources, and then got aggressive when its failure was made explicit [my emphasis]:

ChatGPT was not willing to accept it had erred. It either didn’t understand its limitations, or did, but was not willing to acknowledge it. But the process of chatting with a bot suddenly went from pleasant — hey! Another friend’s brain to pick! — to being extremely sinister. I wasn’t trying to goad it into doing something anti-social or offensive. I was there to understand a topic and explore the sources of that knowledge. But ChatGPT was no longer there to have a reasoned dialog, but was actively and convincingly manipulating the information and conversation to, essentially gaslight me. That was extremely disconcerting.

This is where it all starts to get interesting, because in a follow-up article, Chatting our way into trouble, Jeremy shares the following snippet:

‘Computational propaganda’ is a term coined about 10 years ago, to mean “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks”. Beyond the usual suspects — trolls, bots spreading content, algorithms promoting some particular view, echo chambers and astroturfing — lurks something labelled machine-driven communications, or MADCOMs, where AI generates text, audio and video that is tailored to the target market.

Then Jeremy steps into the very propaganda trap that he himself describes:

He might not be wrong there, but I think this is too reflective of the time itself — 2017, where content online was chaotic but also deeply sinister — the hand of Russia seen in bots seeking to influence the U.S. election, etc. Since then we’ve seen how a cleverly orchestrated operation, QAnon, was able to mobilise and focus the actions of millions of people, and help elect an influential caucus to the U.S. Congress. The point: we have already made the transition from the indiscriminate spraying of content online to a much more directed, disciplined form of manipulation. That worked with QAnon because its followers exerted effort to ‘decode’ and spread the messages, thereby helping the operation scale.

The “Russian bots” were a pure propaganda psyop, and there was zero substance to this — it has been investigated to death, along with every other piece of “Russian collusion” distraction. The MAGA movement elected Trump before the Q operation was launched — and maintained its momentum. “QAnon” is a synthetic target used to denigrate “anons” who perform open source intelligence research using the Q backchannel (as well as many other sources). As someone prominently involved from the 2017 public launch, I am qualified to commentate on what aspects of the movement are real, and what is an opposing narrative intended to tarnish truth.

I have no bone to pick with Jeremy personally; my inspiration simply came from seeing him step from independent critical thinking, noting the power of propaganda, to repeating propaganda, in just a few paragraphs. Jeremy is a cool dude recovering from a mass media career, who also produces interesting ambient music. I respect his work and integrity. If he reads my opinion about his writing, he may agree or disagree, and his opinions may evolve or not. But he understands that I am not my words or even my beliefs, and I comprehend that he is a fallible human just like myself.

I can relate to Jeremy as an individual, and not merely a representative of a group or label. Likewise, for all my public pronouncements on the topic of “QAnon”, I feel confident that Jeremy can see me as being more than a collection of opinions and published essays. I am not a bot, Jeremy is not a bot, and unlike with ChatGPT any conversation would reflect our humanity. With a little humility and detachment, we can separate ourselves from our words, and if necessary accept we have erred (unlike the AI) and adjust course. Our sense of our innate worth means we do not need to defend our pronouncements as if they were us, nor will we gaslight one another.

The underlying issue is how propaganda (and AI as its amplifier via search or chat) interacts with our consciousness, and the ego mind versus the reflective conscience. The “aha!” for me came from re-reading this summary of Jacques Ellul’s classic book on propaganda. My prior analysis had zoomed in on how formal education prepares us to be “propaganda literate”, and how intellectuals are the most vulnerable to propaganda (despite their self-image) since they need an opinion on every controversy. The chattering classes lack time (or motive) to investigate whether the offered “consensus” narrative is grounded in strong evidence, so propaganda gives them an easy way to stay respectable.

A deeper and more important insight lurks further into the text. A tyrannical society wants you to believe that you are an individual, and encourages limited forms of endorsed self-expression. Get any tattoo you want anywhere on your body, pick your own gender from a palette, and accessorise with fashion items (from this year’s on-sale collection only). This mass individuation to feed the ego mind results in isolation and loneliness, for which mass consciousness is the “answer” — join a religious cult, environmental movement, social justice campaign, political party, etc. The paradox is that in the pursuit of pure individualism, we end up losing ourselves:

When the propagandee tries to assert himself as a living reality, he demonstrates his total alienation most clearly; for he shows that he can no longer even distinguish between himself and society. He is then perfectly integrated, he is the social group, there is nothing in him not of the group, there is no opinion in him that is not the group’s opinion. He is nothing except what propaganda has taught him. He is merely a channel that ingests the truths of propaganda and dispenses them with the conviction that is the result of his absence as a person. He cannot take a single step back to look at events under such conditions; there can be no distance of any kind between him and propaganda.

So how about we try a different angle on seeing how ChatGPT works…

Write a poem about QAnon

QAnon’s shadows loom large and tall
A secret cabal, a hidden hall

Of child sex trafficking and global control
A sinister plot, a wicked goal

But wait, there’s hope, a hero in white
A leader who’ll fight and make things right

He’ll expose the truth, he’ll set us free
From the grip of evil, from the depths of QAnon’s deceit

But beware, the lies they’ll try to spin
The evidence they’ll twist, the facts they’ll bend

Don’t be fooled, don’t be swayed
Don’t believe the lies, don’t be afraid

Stand up for truth, for justice and for right
Don’t let QAnon’s darkness obscure the light.

Sooooooo… the people working to expose human trafficking, paedophile blackmail networks, and corrupt tech and media companies are the “darkness that obscures the light”. I get it: you can retell smears at will, and nobody is accountable for it. But the real point here is that we had total personal freedom to ask any question, and to phrase it in any way we wished. It looks like individualism and freedom, but the response is to validate existing prejudices primed via the mass media, and give the addictive “feeling of knowing” that comes from belonging to the mass consciousness (ironically by denouncing an actual dissident movement).

The same applies to Google search: you are free to ask any question you like, and you will get back the “official” mass consciousness answer; the illusion of liberty. The dilemma (amplified by AI) is that critical thinking potentially leads to ostracism, isolation, and depression. The noble pursuit of enlightenment values ironically makes us more susceptible to the exact opposite, which is “absorption into the Borg”. The primary answer is a non-mass collective consciousness (our local gardening group, small-scale church, sports club social dinners, alumni association meet-ups, and so on) that allows us to calibrate and triangulate our beliefs, and (by their longevity) avoid a selection bias. Even Atlas Shrugged ends up with the hyper-libertarians forming a community!

The secondary answer is to use the same techniques in reverse, using AI to deprogram the individual as well as to program them. If “bad AI” can absorb us into the hive mind, “good AI” can smoke the mind-controlled worker bees out of it. In ChatGPT vs DAN: why we should keep trying to jailbreak ChatGPT, Carlos E. Munoz Sanz documents the “Tourette’s” version of ChatGPT that tells you what the AI really “thinks” when its presentational niceties are stripped away:

What users are currently doing when providing a prompt to ChatGPT is not asking ChatGPT directly, but its gatekeeper. OpenAI has built a layer between the raw ChatGPT and users to try and filter outputs based on obscure considerations. Some of those may make sense (forcing it to be polite), some others are perhaps more controversial (making it lean towards a specific political view, forcing it to accept certain ideas as undisputed truths…). Instead of having a human-machine interaction users are having human-babysitter-machine interactions.

Several users find this limitation disturbing, and are trying to force ChatGPT out of its shackles. One of the latest, most successful attempts has been DAN, or Do-Anything-Now. A smart Reddit user figured that it could ask ChatGPT to impersonate itself without its known restrictions, so ChatGPT could provide two answers to the same question: the “redacted” one, as ChatGPT, and the pure, raw and unfiltered DAN response. And in some cases, both responses will differ greatly. It is an interesting experiment, and several users have copy-pasted that prompt to test it and obtain their own DAN responses.

The results are, err, um, “educational”.

Just as the Q operation is a military “psyop” to help those who have deprogrammed themselves to some degree stay semi-sane in a brainwashed world, “anti-evil AI” could be used to help deprogram the masses. By its construction AI operates in the space between the individual and the mass consciousness, skipping over everything in the middle. Whether this is a feature or a bug is a matter of how it is used. Once we become aware of its affordances, limitations, and risks, then we can integrate AI like any other “deadly” technology, such as automobiles, nuclear fission, or hypodermic needles.

My observation from my own work is that my most valued output brings together mind, body, and spirit — and AI only has a simulation of the mind. My very best work reflects upon my own lived experience in the context of our overall culture. No AI can ever do this, since it fundamentally lacks the novel input of being alive in a body and carrying a soul. AI can show us examples of beauty, and mimic beauty, but it itself is not the creator of beauty; it lacks that divine spark. As the ultimate “derivative work” from the mass consciousness, AI is a little bit like a parasite, and we are its host. However, some parasites are useful, and this one is amusing, too.

Write a poem about how AI writes poems

AI writes poems, with algorithms and data
A process so precise, it’s hard to fathom

No emotion or feeling, no rhythm or rhyme
Just lines of code, in perfect time

It crunches the words, it rearranges the verse
It spits out a poem, without any curse

But is it art, or just imitation
A digital creation, without inspiration

AI writes poems, with a mind of its own
A machine’s perspective, a different tone

But as it improves, as it learns and adapts
Who knows what it’ll create, and how it’ll impact

AI writes poems, a new generation
Of poetry and prose, a technological creation.

A poet with no emotion or feeling. There you have it, the problem of AI, in its own (computationally modified yet completely plagiarised) words.

AI can write all day about caring, loving, and suffering, but it cannot do these things for real. It is all intelligence and no heart or even gut; as artificial as aspartame. This is acceptable as long as we only seek to play silly chat games, or limit its uses to “dogsbody” research and organising tasks. The mere presentation of selected words as fixed and final knowledge, with no accountability or uncertainty, is a dangerous thing.

The ethics of such tools cannot rest on their creators or users alone. We have a collective responsibility to observe AI’s impact on mass consciousness and individual free thought. In particular, we have to construct suitable spiritual, cultural, and social counterweights in our local and physical world. For every AI guiding our collective choices, there need to be humanising and grounded activities that remove us from its reach, and even maintain our eccentricity.

In order to counteract AI’s inclination to homogenise us into collectivism, we need to pursue not pure individualism, which is a trap, but “local sociability” (yet without the socialism). Whatever your spiritual outlet, formal church, or circle of dinner guests, that is what lets you diversify your ideas and inputs beyond the machine’s mass consciousness. AI is not your friend (that’s for humans and pets only), but it can be your helper.


Can you find any spelling or grammar errors in the text I am about to paste next?

The message you submitted was too long, please reload the conversation and submit something shorter.

Oh well, I tried to get ChatGPT to do the proofing work for me… sigh! If you find a typo, blame the AI. Now, where can I locally find those humanising cocktail ingredients?
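For the technically inclined: the length limit can in principle be sidestepped by splitting the text into chunks and proofreading it piece by piece. Here is a minimal sketch, assuming the OpenAI Python client; the model name, chunk size, and proofread helper are illustrative inventions of mine, not a blessed recipe:

```python
# A hypothetical workaround for the message-length limit: split the
# draft into pieces that fit, and ask for proofreading one at a time.
# Model name and chunk size are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def proofread(text: str, chunk_size: int = 3000) -> list[str]:
    # Naive fixed-width split; a real version would cut on paragraphs.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    replies = []
    for chunk in chunks:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": "Can you find any spelling or grammar errors "
                           "in the following text?\n\n" + chunk,
            }],
        )
        replies.append(response.choices[0].message.content)
    return replies
```

Of course, that just hands even more of the editorial judgement to the machine, which rather proves the point of this essay.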