Over the last six months I’ve written a fair bit on how our brains get us in trouble. In particular, I’ve focused on what I like to call the ‘inner monkey’, our limbic system: the primitive part of our brain that tries to stop us from being eaten, and that makes us think that our emotions are important.
Today, I’m going to write about some of the other ways our brains get us in trouble – in particular, our neocortex, the relatively recently evolved part of the brain that allows us to be human. Why? Because it’s not just the primitive parts of our brain that have the potential to fuck us up – our abilities to think abstractly and to develop complex imaginings, amazing as they are, get us in trouble a lot of the time.
First, although I’ve said this before, we need to understand that we are an evolutionary side effect of complex patterning.
Sorry, that came across as a bit cryptic. What I mean is, our brains are astoundingly complex. By some estimates, each neuron has processing power comparable to a small computer in its own right, and we have tens of billions of them, all wired in parallel. With that much raw computing power, it’s almost inevitable that some sort of consciousness would emerge – as the system becomes more complex, it self-organises. The result is ‘us’ (or the software module that identifies itself as ‘you’).
So, at our essence, the ‘us’ that we think we are is just a manifestation of a large biological computer. We are not special (apart from the fact that we live in a beautifully evolved meat computer).
In fact, ‘we’ are composed of a large number of hardware and software modules that developed to enhance survival in an increasingly complex environment. Ironically, our increasing intelligence was itself the cause of that complexity: as we developed language and the ability to transmit complex information, we needed still more intelligence to parse our relationships with other, difficult-to-predict humans. What this means is that we use the information presented to us by our senses to build complex predictive models of the world – we attempt to determine what’s going to happen before it does. This makes awesome sense from a survival perspective: if we can predict what’s going to happen and get it right at least some of the time (let’s say more than 50%), we can modify our behaviour to increase our likelihood of survival.

This system worked really well when our worlds were simple and involved determining where the antelope were and whether Thag was into us or not. As our societies became more complex it probably continued to work, to a point. But as the number and complexity of the variables in our environment – and, consequently, in our mental models – increased, our models became less and less accurate. Here’s the real clincher: we still think our mental models are sound! We kid ourselves into thinking we’re good at predicting others’ behaviour, just like we kid ourselves that we’re in the driving seat. This self-delusion is a problem because these days, when we do attempt to predict behaviour in others (which is basically what we spend most of our time doing), we get it wrong. That’s right: most of the time our models of the world around us are broken, but instead of recognising that the model has failed, we blame the environment rather than the faulty system. We even rationalise the error and insist to ourselves that the model’s correct – a bit like a computer consistently fucking up but insisting that everything’s fine: no error messages here!
Because of this system flaw, we’re fundamentally irrational creatures who are convinced that we act rationally! Combined with our integrated limbic systems, we’re more like feeling creatures who can think than thinking creatures who feel. The real bummer is that, as evolved systems, we can’t remove the irrational stuff (especially the feelings) – bad things happen when these bits of our brains break or are removed. Being human comes with this (somewhat annoying) paradox. We need to remember that, at its base, our brain evolved to fight and fuck in a simple environment – we’re stuck with that, whether we like it or not. There is some good news though: we didn’t evolve to drive a car at 100 km/h or pilot a mountain bike downhill, yet we manage both – so clearly we can adapt, especially when it comes to programming our own brains. The best programming we can achieve (and our only real hope) is learning to become aware of our system limitations. (Rant begins) Oh, and don’t forget about the soup of neurotransmitters and hormones that we use to indicate our so-called emotions. We treat our ‘feelings’ as if they’re something special, a unique part of us that makes us different and that no one else really understands – but it’s just chemistry! Once we stop thinking of ourselves as more than complex machines, we have the chance to be more than complex machines. (Rant ends)
OK – we’re irrational creatures with primitive warning systems, faulty reality modelling, and a penchant for believing the warnings and predictions of our brains even though they’re wrong most of the time. Worse, we seldom recognise our mistakes and go to great lengths to rationalise our behaviour. Are we completely fucked?
I don’t think so. Like I’ve said, it is possible to recognise the various system errors and to modify our behaviour. In previous posts (here and here), I’ve talked about how to recognise our emotions and to use a values system to choose an alternate behaviour (even though it doesn’t ‘feel’ right at the time). But learning to recognise predictive modelling errors is a little harder. Luckily, I’ve come up with a hack.
Let’s accept (for a minute) that we have limited models for predicting the world around us. We’re just not very good at predicting outcomes in the modern world (the variables are too complex and, mostly, there’s a complete lack of antelopes). So how do we know when our model has failed? One word: frustration. What you and I call ‘frustration’ is actually an error message. That is, frustration equals violation of a model; it signifies that something has gone wrong. Now, what usually happens when we feel frustrated? We certainly don’t say, “Ooh, I’ve just experienced a model violation error. It appears that my ability to predict the outcome of this complex situation has failed; perhaps I should attempt an alternate behaviour?” (this sort of on-the-fly adaptation could be called an adaptive or dynamic heuristic). Instead, we stick with a static model (sometimes known as banging your head against a brick wall). We keep applying the same crappy model to the same situation, each time expecting it to work, and getting more and more frustrated when it doesn’t. Even worse, we do it repeatedly across situations. Again, no wonder we’re so fucked up…
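Since I’ve been leaning on the computing metaphor, here’s a toy sketch of the difference in code (Python, purely illustrative – the strategies, the situation and the try_strategy function are all made up for the metaphor, not a model of anything real):

```python
# Toy illustration of the 'banging your head against a brick wall' loop
# versus an adaptive heuristic. The strategies, the situation and the
# success test are invented purely for the metaphor.

def try_strategy(strategy, situation):
    """Pretend to act on the world; it only works if the strategy fits."""
    return strategy == situation["what_actually_works"]

def static_model(situation, strategy="the usual approach", attempts=5):
    """Keep applying the same model and just let the frustration pile up."""
    frustration = 0
    for _ in range(attempts):
        if try_strategy(strategy, situation):
            return "success", frustration
        frustration += 1               # the error message we usually ignore
    return "tantrum", frustration

def adaptive_heuristic(situation, repertoire):
    """Treat frustration as an error code: switch behaviour and try again."""
    frustration = 0
    for strategy in repertoire:        # an expanded repertoire of behaviours
        if try_strategy(strategy, situation):
            return "success", frustration
        frustration += 1               # noticed, interpreted, acted upon
    return "still stuck, but at least we adapted", frustration

situation = {"what_actually_works": "ask instead of assume"}
print(static_model(situation))         # ('tantrum', 5)
print(adaptive_heuristic(
    situation,
    ["the usual approach", "wait it out", "ask instead of assume"],
))                                     # ('success', 2)
```

The point of the second loop isn’t that it always wins; it’s that the error signal gets noticed and used to change the behaviour, rather than just accumulating.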
If you haven’t read it already, I thoroughly recommend Daniel Kahneman’s “Thinking, Fast and Slow”. Kahneman won a Nobel prize for his work on human judgement and decision-making – so he’s probably worth a bit of your time – and he has some great ideas that help to explain our modelling systems and their consistent failure. He describes two modes of thinking – he calls them System 1 and System 2; let’s call them Type I and Type II. Type I thinking is the stuff we do effortlessly, like calculating 2+2. Type II thinking requires a lot more effort (if you have to calculate 67×43, you’ll probably need to stop and think it through; you probably couldn’t do it while driving without having an accident). Through most of our evolution we favoured Type I thinking because: (i) it’s often right (or it used to be, when our environments were simpler); (ii) we can use it and still monitor the environment for danger; and (iii) it doesn’t consume a lot of resources – Type II thinking uses a lot of processing power, and the brain requires a lot of blood sugar to operate; because replacement blood sugar used to be hard to come by, we tend to reserve Type II for really important stuff. But Type II thinking is important, especially when things get really complicated. Here’s the real problem: a broken model forces us to try to engage Type II thinking, which is not preferable from a survival perspective when we feel under pressure, because it limits Type I’s ability to monitor our environment for danger and it burns precious resources (we didn’t always have supermarkets) in the form of blood sugar. So the brain will often substitute a Type I answer for a Type II problem, but we kid ourselves into thinking that this answer is, in fact, Type II. Problems ensue…
There is an up-side. With enough practice, we can turn most Type II processes into Type I ones. Kahneman uses the example of a chess master walking along the street and observing a chess game in progress. Without stopping, he says “White mates in three” – and he’s right. The temptation is to explain this with a Type I answer (he must have some amazing, almost magical ability). In reality (whatever his natural aptitude for pattern matching), he’s spent literally thousands of hours training his pattern recognition systems, so that when he encounters a pattern on a board he’s able to predict the outcome. He’s taken what started as complex Type II processing and automated it into Type I, so that it no longer takes large amounts of effort.
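If you want the ‘practice turns Type II into Type I’ idea in metaphorical code, memoisation is a decent analogue. A playful sketch (the slow type_two function and the chess position are my inventions, not anything from Kahneman’s book):

```python
from functools import lru_cache
import time

def type_two(position):
    """Effortful, deliberate analysis: slow and resource-hungry."""
    time.sleep(0.5)                    # stand-in for burning blood sugar
    return "white mates in three"      # pretend this took real work

@lru_cache(maxsize=None)
def practised(position):
    """After enough repetition, the same answer comes back instantly."""
    return type_two(position)

practised("familiar middlegame pattern")   # first time: slow and deliberate
practised("familiar middlegame pattern")   # thereafter: effectively Type I
```

The first call is slow and deliberate; every later call on the same pattern comes back instantly – the deliberate work has been ‘compiled’ into a reflex.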
So (up-side aside), put it all together and you get this: our instinct is to keep applying the same mental model whether it works or not, instead of adapting. That tends to increase our frustration (i.e., the error code keeps firing). And, in our oh-so-human way, we ignore the error code (or misinterpret it), and things keep getting worse.
Believe it or not, it actually gets worse. One of the most common Type I errors is a deep-seated egocentric bias (a close cousin of the fundamental attribution error): the assumption that we are at the centre of the universe and, therefore, that others will act the way we want them to, or that the environment will bend to our will. Bugger. This means that our models are virtually guaranteed to break on a very regular basis. In other words, we use inherently flawed models built largely around what we expect to happen, based on our evolutionary preparation (fighting and fucking, remember?). And instead of learning to adapt, we get frustrated about the frustration! Personally, I believe it’s this fixation on the frustration from constantly broken models that results in pretty much every other fuck up (insert human fallibility here). It’s Type I thinking all round…
Allow me to present an alternative. At the risk of sounding like a broken record, it’s (you guessed it!): Mindfulness.
Well, let’s call it something different in this context: maybe system monitoring, or meta-awareness. Basically, it’s the development of a system (or subroutine) to notice when modelling errors have occurred (i.e., to pay attention not only to frustration but to its precursors) and to consciously engage an alternate behaviour. There is a catch, however: this takes a lot of practice. But we know that, with enough practice, Type II becomes Type I. With enough practice, mindfulness becomes the ability to notice the error and to choose from an expanded repertoire of behaviours. The outcome is psychological flexibility: the recognition that, when a model is inaccurate, the most useful response is an adaptive heuristic (rather than a tantrum).
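To push the subroutine metaphor one last time, here’s a toy sketch of what such a monitor might look like (Python again, every name invented for illustration – a metaphor in code, not therapy software):

```python
# A toy 'system monitoring' subroutine: watch the error signal and, when it
# starts to climb (the precursors), deliberately engage a different behaviour
# instead of repeating the last one.

class MetaAwareness:
    def __init__(self, repertoire, threshold=2):
        self.repertoire = list(repertoire)  # expanded repertoire of behaviours
        self.threshold = threshold          # how early we notice the precursors
        self.error_signal = 0               # our stand-in for frustration

    def notice(self, outcome_was_expected):
        """Update the error signal instead of ignoring it."""
        self.error_signal = 0 if outcome_was_expected else self.error_signal + 1

    def choose(self, current_behaviour):
        """If the model keeps failing, consciously pick an alternative."""
        if self.error_signal < self.threshold:
            return current_behaviour        # the model still looks sound
        self.error_signal = 0               # give the new approach a fair go
        alternatives = [b for b in self.repertoire if b != current_behaviour]
        return alternatives[0] if alternatives else current_behaviour

monitor = MetaAwareness(["insist", "pause and breathe", "ask a question"])
behaviour = "insist"
for went_as_expected in [False, False, False]:  # the world keeps not cooperating
    monitor.notice(went_as_expected)
    behaviour = monitor.choose(behaviour)
print(behaviour)                                # "pause and breathe"
```

The design choice worth noticing: the monitor doesn’t suppress the error signal, it uses it – exactly the move we usually fail to make when we’re busy banging our heads against the wall.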
So next time you feel frustrated do me a favour. Recognise it for the error message that it is, and attempt an alternate behaviour. Your brain will thank you for it. Oh, and if you want a refresher on mindfulness and how to do it, have a look here.