A while back I wrote about memes and how they can ‘hack’ past our consciousness (here). I suggested that some concepts, like religion, are particularly effective at getting past our conscious defences because, as ‘sticky’ memes, they trigger a wide range of fundamental human needs, making them seem and feel ‘right’. Once ‘infected’, we try to spread the meme to others, meaning that memes can be thought of as both hacks and viruses (although, in effect, a virus is just a self-propagating hack). One of the most successful memes throughout human history has been religion – not only is it ‘sticky’, getting to us in just the right places, it also rewrites us as it infects, overwriting our existing beliefs to make them more consistent with the meme’s payload. The resulting behaviour varies (think mild C of E versus rampant fundamentalist) but, invariably, the damage is done – once accepted, the results are, for all intents and purposes, permanent.
This week, I’d like to build on the idea of the transmissible meme from the reverse standpoint – can we prevent ourselves from being infected by memes in the first place? Is there a way of shielding ourselves from their effects? How can we avoid the ZOMBIE APOCALYPSE?
The reason it’s taken me such a long time to write this particular article is that, after writing about how we can easily be infected by memes, I simply wasn’t sure what we could do to prevent (or even be aware of) this infection. I had some ideas about how our obsession with our ‘self’ can make us vulnerable to memes that activate areas complementary to those obsessions (more on this later), but I needed some time to let the ideas form. The last few posts on the illusion of reality and the self (here and here) helped me crystallise my ideas – hopefully they’ll make some sense today!
So let’s start with an extension of last week’s ideas. As I mentioned, one of the big problems with the notion of self that most of us carry around is that it’s illusory. Not only can we not be sure that what we see, feel and touch is real, we also can’t trust that the “I” that interprets these sensations is anything more than a construct to help us sort out the various streams of information we’re presented with. Nevertheless, we all cling to the notion of a consistent self, a stable personality, and a relatively consistent viewpoint of the world. When our actions contradict this illusion, we work hard to rationalise to ourselves that we ‘weren’t ourselves’, that we were ‘acting out of character’, or that it was the stress or the wine doing the talking. Combine this with our extremely poor and malleable memories and it becomes relatively easy to delude ourselves, assuring ourselves that things happened a certain way, even if we need to edit our memories to reinforce that version of events.
So, one of the main reasons we’re so vulnerable to the invasive effects of memes is this self-illusion. We struggle to maintain the appearance of a consistent self and, in doing so, deliberately allow rewrite access to important areas (like memory) so that, to ourselves at least, we feel consistent. This is important, so let me expand. To maintain the illusion of a self, we’re constantly arranging sensations, experiences and memories in a way that sustains it. We don’t like ‘acting out of character’, so we (for the most part) deliberately avoid attending to things that dispute our notion of consistency (hence rationales like “it was the wine talking”). From a security viewpoint, this is a major flaw. Imagine a security system that could be easily manipulated so that you only saw what you wanted to see, rather than what actually happened. It would be extremely easy to hack: almost anything could happen while you remained blissfully unaware, sure that nothing had been stolen or replaced. This crappy security system is, unfortunately, your self-concept. The trade-off for a sense of self is a system that’s blind to pretty much anything that contradicts the existing self-illusion.
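If it helps to make the analogy concrete, here’s a tiny toy sketch of what a ‘security system’ like that might look like in code. Everything in it (the class, the traits, the wine excuse) is invented purely for illustration; the point is simply that anything inconsistent with the stored self-image gets quietly rewritten before it ever reaches the log:

```python
# Toy sketch of the 'crappy security system' analogy above.
# All names and details are illustrative, not from the post itself.

class SelfConsistentLog:
    def __init__(self, self_image):
        self.self_image = self_image   # e.g. {"calm", "rational", "kind"}
        self.events = []               # what "we" remember happening

    def record(self, event, traits_implied):
        # Events that fit the existing self-image are kept as-is;
        # everything else is quietly rationalised away before storage.
        if traits_implied <= self.self_image:
            self.events.append(event)
        else:
            self.events.append(f"{event} (it was the wine talking)")

log = SelfConsistentLog({"calm", "rational", "kind"})
log.record("helped a friend move house", {"kind"})
log.record("shouted at a stranger in traffic", {"aggressive"})
print(log.events)  # the second entry has already been rewritten
```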
Now, to revisit the transmissible meme and the reasons such memes are so fecund. Memes take root extremely easily when they can bypass conscious thought and activate simple reward systems in the human brain. Religion, for example, makes many people feel safe and comfortable. To them, the notion of a god or religious structure feels ‘right’, so much so that they need to proselytise to others (in order to spread the meme). More broadly, the religion meme has been so successful because it activates our sense of belonging (the need to be part of a tribe), soothes our fear of the unknown (e.g., death), reduces our need to make uncomfortable decisions, and allows us to go along with everyone else. Few of these things happen at a conscious level, so the meme can get in and rewrite before most people are aware of what’s going on – because it feels good, they surrender and just let it happen.
Most importantly, because of our insistence that the self be consistent and trustworthy (even though this is an illusion), we will rationalise, post hoc, pretty much anything that’s out of character. If we find ourselves infected by a meme, most of us, rather than recognising the infection and attempting to remove it, will assume that we must have consciously decided to ‘change our minds’. The notion that we might have little control over how we think is untenable, so instead we construct a narrative that explains how we ‘deliberately decided’ to embrace the meme’s content. This is the scary part: we can be infected by alien memes and, because of a major design flaw in our information processing system, not only welcome the infection but convince ourselves that it was actually our idea in the first place. It’s as if we’re preprogrammed to become zombies at the drop of a hat.
OK – yup, we’re a bit crap. It’s not our fault though. As I’ve said throughout these posts, we’re the product of evolution, and evolution is a slow process. Unfortunately, our big brains have allowed us to develop far faster than evolution can compensate for, so we carry a whole load of left-over (redundant) systems that used to work really well and now simply screw us up (like our fear centres being over-activated in the modern world, resulting in two out of three of us suffering from some sort of debilitating anxiety). So too our sense of self (which probably evolved to help us make sense of the huge amount of competing information coming into our heads) leaves us wide open to memetic viruses with rewrite access – they can get in and change who we are, all the while making us think we chose to change…
At the start of this post, I mentioned that there might be a way of avoiding or, at least, being aware of memetic hacking. The danger of viral memes is clearly their ability to manipulate our sense of self, so that we think we were the ones who came up with the idea. To counter this sort of infection, we need to be less trusting of ourselves. In other words, it’s our trust in a consistent self that gets us into trouble, so learning to distrust the self when it decides to manipulate our behaviour is a great start.
Here’s what I propose: instead of assuming that you’re behaving consistently and with directed purpose, work on the assumption that you’ve already been hacked and will be again. Get into the habit of observing your behaviours (as objectively as you can) and noticing whether you behave rigidly in a given situation or can develop a more flexible repertoire of options. For example, instead of lashing out next time you feel angry, you could choose to ‘ground’ yourself (come back to reality) and act in an alternative manner, preferably in a way that’s congruent with your values (responding with compassion, for instance). By distrusting your urges, and observing yourself acting in a variety of contexts, you’ll be substantially better able to recognise when you’ve been exposed to a memetic hack/virus. Most importantly, you’ll be better able to choose your actions – admittedly “you” are still illusory, but at least there’s a better chance that your actions will be based on an element of choice, rather than just acting like a zombie (“man, I could really go some brains right now, grrrr arrrghh…”).
I like to think of this self-observation routine as a regular back-up. Comparing your current self to the back-up by evaluating actions (and distrusting your internal security footage) lets you determine whether you’ve been hacked. I’m not sure if there’s a ‘restore from backup’ feature in this analogy but, hey, analogies only get us so far…
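To push the back-up analogy one step further, you could imagine the comparison as a simple diff between two snapshots of yourself. The sketch below is purely hypothetical (the snapshots, keys and values are made up), but it shows the basic idea: record your stated beliefs and habits now, compare again later, and treat anything that changed without a memory of choosing it as suspicious:

```python
# Minimal sketch of the 'regular back-up' analogy.
# All snapshot contents are invented for illustration.

def diff_self(backup: dict, current: dict) -> dict:
    """Return the entries that differ between the back-up and the current self."""
    return {
        key: (backup.get(key), current.get(key))
        for key in backup.keys() | current.keys()
        if backup.get(key) != current.get(key)
    }

backup = {"politics": "undecided", "phone_checks_per_hour": 2, "tribe": "none"}
current = {"politics": "strongly partisan", "phone_checks_per_hour": 14, "tribe": "none"}

for belief, (was, now) in diff_self(backup, current).items():
    print(f"{belief}: was {was!r}, now {now!r}. Did I actually choose this?")
```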
One more thing. We can make it a lot easier to spot a likely memetic infection if we keep track of what we’ve been exposed to (this is sometimes very difficult, and requires a lot of conscious (mindful) attention to the world around us – something most of us simply don’t do, which only increases our chances of infection). To use another analogy, this is a bit like running a virus-checker with a checklist of the things that represent the highest risk. My virus-checker list includes: technology (especially the stuff that makes us feel connected or that reduces our need to process human relationships), religion and its variants, politics, opinions, relationships, work, body-image, social media, media, trends, and advertising (this is by no means a comprehensive list). Of course, we can’t avoid any of these things entirely; they’re all part of the socio-cultural blend that helps make us human. We can, however, observe what they do to us, and distrust the little internal voices that whisper seductive lies. Anything that makes you think it was your idea in the first place should be setting off klaxons in your head – run before the zombies get you!
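And, to round off the virus-checker analogy, here’s a rough sketch of what that scan might look like. The risk list is the one given above; the exposure log and the flagging are entirely invented for illustration, not a real tool:

```python
# Rough sketch of the 'virus-checker' analogy.
# The exposure log below is hypothetical; only the risk list comes from the post.

HIGH_RISK_VECTORS = {
    "technology", "religion", "politics", "opinions", "relationships",
    "work", "body-image", "social media", "media", "trends", "advertising",
}

def scan_exposures(exposure_log):
    """Return the exposures worth a second, more mindful look."""
    return [entry for entry in exposure_log if entry["vector"] in HIGH_RISK_VECTORS]

today = [
    {"vector": "social media", "note": "an hour of outrage scrolling"},
    {"vector": "gardening",    "note": "planted tomatoes"},
    {"vector": "advertising",  "note": "suddenly want a new phone"},
]

for hit in scan_exposures(today):
    print(f"Flagged: {hit['vector']}: {hit['note']} (was this really my idea?)")
```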