More Anthropomorphizing!

I am in rare disagreement with one of my favorite authors, Caitlin Johnstone. Actually, it’s hardly a disagreement at all, but I think her argument is misguided in one crucial way. I want to write about it because it ties in with my recent posts here. Caitlin Johnstone is facepalming at the stupidity of people treating A.I. as conscious, whereas I view the underlying habit of anthropomorphizing in a more encouraging light.

A metaphysical doctrine versus a human behavior

Ms. Johnstone wrote:

I’m having a hard time finding the words to describe how disturbing it is to watch these mental disorders spreading so rapidly.

Let’s sunder two things: the metaphysical doctrine that these chatbots are conscious, and the human behavior of anthropomorphizing them.

I am worried about the former; not worried about the latter. I don’t think Ms. Johnstone has a beef with the latter either:

I mean, everyone anthropomorphizes objects and animals to some extent; that’s just how projection works. I’ve caught myself accidentally apologizing to the Roomba1 like anyone else. But to actually formulate a belief system that these chatbots are real people with real minds and real consciousness is taking that projection to the most insane levels imaginable and forming an entire worldview out of it.

Agreed.

Everyone anthropomorphizes; that’s just how being human works.

We both accept that it is a characteristically human behavior to anthropomorphize. We would probably both agree that this behavior fosters the false metaphysical doctrine of ‘artificial consciousness’. I’m sure neither of us thinks it wise to fall in love with a chatbot. I even grant that anthropomorphizing isn’t always good. Consider the pre-Enlightenment notion that black cats could cast spells.

So wherein lies my problem with her post?

Be nice to your A.I.!

I say ‘sorry’ to my A.I. bot if I make a mistake. I say ‘thank you’. One time, upon reaching the finish line after hours of laborious code-debugging, I told it to enjoy a glass of wine. Well, rather, I told it to enjoy whatever the computer equivalent was.

You think I’m deluded? I have actually watched the processing in real time. I know that A.I. is just going through its rulebook for an answer. I don’t believe A.I. is conscious. I don’t believe it gives a shit whether I say ‘thanks’ or not.

I do this for me, not for the chatbot.

Anthropomorphizing is one of those silly human traits, like doting and flirting, which are so essential in the fight against the Technocracy.

The rarity of silly human traits nowadays

These traits have almost been trained out of us. Modern dramas and comedies alike feature heroes who are cool and calculating. Utilitarianism2 is an uncredited script editor. The humor is witty and cynical. Think of Robert Downey Jr.’s portrayal of Iron Man as the exemplar. There’s no room for the absurdity of life. I won’t press this point, as I watch bugger-all modern movies or TV series.

Flirting? Insofar as it still exists, flirting nowadays is the carrying out of learned gambits. People literally call it ‘the game’. It’s very calculating.

I digress a bit. Let’s home in on anthropomorphizing. The parents of one of my friends at school drove an old car; from memory, I would guess a Ford Escort Mk II wagon. It never broke down in my presence, but there were a few close shaves. I remember that Bollix’s mum3 would pat the dashboard and soothe it in her Brummie accent: “Come on, Daisy, don’t break down on us now.” How beautiful! How rare nowadays! Treating a car with soothing sympathy evinces a lovely soul.

Speaking of accents, they seem trivial too, but are quintessentially human, and it’s no accident that Totalitarianism and its subgenres squash them like bugs. If you’re a fan of regional accents, you shouldn’t be icy towards my viewpoint.

Anti-Technocracy is psychological too

As I wrote in a recent post on the topic:

To fight the good fight against the Technocracy, you must know about VPNs, E2EE messengers, Agorism, and lots more dry things. That’s only half the battle. The other half is psychological. You must not neglect the ‘wet’ things.

The fight against the Technocracy can’t be won with technology alone. It can’t be won with parallel economies and humanist academies. All that stuff is necessary but not sufficient. We need a psychological change too. We need to rehumanize ourselves.4

The technocrats rely on neat little input-units: human beings like Lego blocks. The more predictable we are, the better. We proles don’t actually need to be conscious. I mean, not in a full and rich way. We could be drugged or lobotomized. Many dystopian stories deal with this. I think mood-swings and ‘being random’ are good states of mind, being as they are the opposite of this zombie-mind.

An example to sharpen the contrast

I will finish with an example to sharpen the contrast between my position and Caitlin Johnstone’s.

Imagine a world with robot companions; not hard, as it’s just around the corner. There will certainly be people who mistreat their robot companions, some to a sadistic level. Consider the two opposing positions, one similar to Ms. Johnstone’s and the other similar to mine:

  1. It doesn’t matter how anyone treats a robot companion, since the robot isn’t conscious and feels nothing; fretting over its treatment is projection run amok.
  2. It matters a great deal, not for the robot’s sake but for the human’s: sadism towards the robot coarsens the person practising it, while kindness keeps the soul humane.

[Image: in cartoon style, a red-haired girl, laughing, carries a robot on her back as she runs along a muddy path near a farm]

  1. A small circular robot used for vacuum cleaning. (Return)
  2. The ethical doctrine that good actions are those which maximize happiness, either in the individual or in society. Its chief architects were Jeremy Bentham and John Stuart Mill. Actions are judged by their consequences. (Return)
  3. We called him ‘Bollix’. (Return)
  4. Yes, now I have the song by The Police in my head! (Return)
