Brexiters’ remorse and the complex logic of failure
Complex forces combined with our cognitive weaknesses often lead smart people to make very bad decisions
“The greatest griefs are those we cause ourselves.” — Sophocles
One of the more fascinating phenomena I have been following lately is the litany of woes from former Brexit supporters who now wish they could go back in time and vote to remain. As a recent New European article noted:
In Benidorm, retired former Brexiteer Joe told website GCTN that the new 180-day restriction on visits to the EU means “People are panicking to sell their properties purely because they’re going to be away from their properties for 90 days.” He added: “Brexit itself, when it was first started, was going to be brilliant because I honestly believe that Britain should stay Britain and we run by ourselves. There’s no way I would vote to go for it (now), because it’s harming me and my future, what I want to do with what time we have left, me and my wife together, in this sort of surroundings.” Meanwhile, former Brexit Party MEP June Mummery retweeted an Express article about fishermen in Brixham that included this verdict on Brexit from former Leave voter Ian Perkes: “Disappointed, very very disappointed - love to have my time again and vote to stay in.”
As the number of Brexigreters grows (and irrespective of what the right choice was in this case), I find myself thinking more and more about a topic that has intrigued me for decades: the process by which otherwise smart and capable people make terrible or even disastrous decisions.
Of course, the study of bad decisions is an ancient one, going back to Greek tragedy and the role of hamartia (i.e., the “tragic error”) in the downfall of great men. However, modern writers and thinkers have also tackled this question in interesting ways. Back in college, I read Barbara Tuchman’s excellent book, The March of Folly, which explores how governments are able to make clearly self-destructive decisions. Another memorable book in this field is The Logic of Failure by the German theoretical psychologist Dietrich Dorner, which is a fascinating analysis of “error in complex situations.”
Back in 2016, MIT Tech Review highlighted research by Ashton Anderson (Microsoft Research), Jon Kleinberg (Cornell), and Sendhil Mullainathan (Harvard). The researchers created a database of 200 million chess games and divided the games into two classes: those played by amateurs of all levels and those played by grandmasters. The database not only recorded the outcome but also the factors that surrounded any loss-causing mistakes, which the researchers analyzed to try to understand the factors that drive major errors. As the authors noted:
We have used chess as a model system to investigate the types of features that help in analyzing and predicting error in human decision-making. Chess provides us with a highly instrumented domain in which the time available to and skill of a decision-maker are often recorded, and, for positions with few pieces, the set of optimal decisions can be determined computationally.
The researchers drew three major conclusions from their work. First, they found that decision time is a factor in errors, but “only up to a point.” As expected, hasty decisions lead to many mistakes, but beyond a specific threshold (10 seconds in their model) the duration of decision making is no longer a factor. In other words, whether a player takes two minutes or ten to decide, the likelihood in both cases is not that she is conducting a complex analysis of the position and possible moves but that she simply does not know what to do.
The second finding, as one would expect, was that the complexity of the position is also an important factor: the more complex the position, the greater the likelihood of error.
The third finding was that the skill of the player did not affect results in the way most of us would expect, namely that the better a player gets, the fewer mistakes she makes. Instead, the authors described a model with three skill-related outcomes: skill-monotone positions, in which greater skill does improve outcomes; skill-neutral positions, in which skill level makes no difference; and, surprisingly, skill-anomalous positions, in which increasing skill actually increases the error rate. This last finding baffled the researchers:
The existence of skill-anomalous positions is surprising, since there is no a priori reason to believe that chess as a domain should contain common situations in which stronger players make more errors than weaker players. Moreover, the behavior of players in these particular positions does not seem explainable by a strategy in which they are deliberately making a one-move blunder for the sake of the overall game outcome.
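To make the three categories concrete, here is a minimal sketch (my own illustration, not the researchers' code; the error-rate figures are invented) that classifies a position by how its empirical error rate moves across skill brackets:

```python
# Toy classifier for the three skill-outcome categories described above.
# Input: error rates for one position, ordered from weakest to strongest
# skill bracket. All numbers below are hypothetical, for illustration only.

def classify_position(error_rate_by_skill):
    """Return the category a position falls into, given its error
    rates ordered from weakest to strongest skill bracket."""
    diffs = [b - a for a, b in zip(error_rate_by_skill, error_rate_by_skill[1:])]
    if all(d < 0 for d in diffs):
        return "skill-monotone"   # stronger players err less at every step
    if all(d > 0 for d in diffs):
        return "skill-anomalous"  # stronger players err MORE at every step
    return "skill-neutral"        # skill makes no consistent difference

print(classify_position([0.30, 0.22, 0.15, 0.09]))  # skill-monotone
print(classify_position([0.10, 0.14, 0.19, 0.25]))  # skill-anomalous
print(classify_position([0.18, 0.17, 0.19, 0.18]))  # skill-neutral
```

The surprise in the study was that the middle case here, error rates that rise with skill, shows up in real game data at all.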
The authors suggested that further research on skill-anomalous situations was warranted, and as I read their conclusion it reminded me of an analysis Dorner wrote about in his book. After analyzing errors in complex situations, Dorner found that two major factors influenced the outcome.
The first factor is what he called dynamics: the volatility of the decision factors that must be understood and analyzed correctly in any given problem. The more dynamic the situation, the easier it is to make a terrible mistake. The second factor is what he called intransparence: the degree to which the reality of a situation cannot be ascertained correctly. The more intransparent the position (i.e., the less its surrounding reality can be seen), the higher the chance of a major error. Using a chess analogy, Dorner noted:
If we want to capture this ... in a visual image, we could liken a decision maker in a complex situation to a chess player whose set has many more than the normal number of pieces, several dozen, say. Furthermore, these chessmen are all linked to each other by rubber bands, so that the player cannot move just one figure alone. Also, his men and his opponent’s men can move on their own and in accordance with rules the player does not fully understand or about which he has mistaken assumptions. And, to top things off, some of his own and his opponent’s men are surrounded by a fog that obscures their identity.
If we combine both sets of insights, it is interesting to apply them to executives and politicians who, though possessing great skill and experience, find themselves in new or newly complex positions, and then fail, sometimes spectacularly. We often see analysts pondering how so-and-so failed as CEO of Company X even though he had years of success as CEO of Company Y. An understanding of the dynamics of failure suggests that success or failure in these cases may have less to do with the skill of the executive than with the complexity and intransparence of the situation in which he finds himself. Put simply: the executive reaches a high enough level of skill that he cuts back on the situational analysis required to make the right choice and trusts mainly in his instincts, and that is what causes the failure.
I think of recent cases such as the pandemic or responses to the #BLM movement, where experts and skilled political leaders struggled to succeed not so much because of a lack of skill but, perhaps, because the dynamics and opaqueness of the situation were not analyzed sufficiently or correctly. Indeed, Dorner notes that in response to such intransparence we all build what he calls reality models, which are inherently flawed:
An individual’s reality model can be right or wrong, complete or incomplete. As a rule, it will be both incomplete and wrong, and one would do well to keep that probability in mind.
Dorner makes another point relevant to Brexit regret, and it is that humans are good at some kinds of analyses and bad at others. For example, we are generally good at dealing with spatial configurations. Even children can spot shapes that do not belong in certain groups, and adults are generally skilled at understanding visual patterns and anomalies. However, we are generally bad at what Dorner calls temporal configurations, i.e., understanding the sequence in which events have unfolded or will unfold in the future. As he writes:
Even when we think in terms of time configurations, our intuition is very limited. In particular, our ability to guess at missing pieces (in this case, future developments) is much less than for space configurations. In contrast to the rich set of spatial concepts we can use to understand patterns in space, we seem to rely on only a few mechanisms of prognostication to gain insight into the future.
The result of this cognitive weakness, Dorner notes, is that people tend to have two reactions to problems involving temporal configurations: “first, limited focus on a notable feature of the present and, second, an extension of the perceived trend in a more or less linear and ‘monotone’ fashion (that is, without allowing for any change in direction).” It does not take a lot of imagination to see Dorner’s description at work in the Brexit decision. A British voter did not like what was happening in the EU at the time of the referendum, so he concluded that (a) this present negative EU state outweighed all pre-EU realities and post-EU possibilities and that (b) the future of the EU would be a linear extrapolation of the then-current (disliked) state. Rather than consider with equal weight the negative aspects of life outside the EU (which was not his reality then) or the possibility that the EU’s flaws could be remedied, this voter behaved just as Dorner predicted and voted only on the basis of the “here and now.”
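Dorner's two-part pattern, fixating on a salient feature of the present and extending it in a straight line, can be made concrete with a toy sketch (my own illustration, not Dorner's; the logistic curve and its parameters are arbitrary assumptions):

```python
import math

# Toy illustration: a forecaster who extends the most recent trend in a
# straight line badly misjudges any process that changes direction,
# here, logistic growth that is about to saturate.

def logistic(t, cap=100.0, rate=0.5, midpoint=10.0):
    """An S-shaped process: fast growth early, then saturation."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

# Observe two early points, during the fast-growth phase, and extend
# the straight line through them, Dorner's "monotone" extrapolation.
t1, t2 = 8, 9
slope = logistic(t2) - logistic(t1)

def linear_forecast(t):
    return logistic(t2) + slope * (t - t2)

for t in (10, 15, 20):
    print(f"t={t}: forecast {linear_forecast(t):6.1f}, actual {logistic(t):6.1f}")
```

The gap between forecast and reality is small one step ahead and grows rapidly after the curve bends, which is exactly the failure mode Dorner attributes to our weak intuition for shapes in time.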
“We human beings are creatures of the present,” Dorner believes, and it is very likely that many people feeling Brexit regret now are doing so because they failed to recognize, in his fascinating phrasing, “shapes in time.” Of course, five years later, that same voter wakes up to see the endless (negative) aspects of his new post-EU reality and remorse arrives just as Dorner would have predicted. My guess is that now that more and more people see the “temporal shape” of life outside the EU, Brexit regret will only increase.
After considering all of the above, what I take away from the study of failure and the Brexit example is a reminder to focus just as much on situational analysis — and the models we build in response to reality — as on our own skills in a given situation. Those skills are sometimes no match for a devastatingly complex position. Of course, this is a difficult challenge. Indeed, with characteristic German understatement, Dorner notes that keeping this last point in mind is “easier said than done.” My guess is that there are a lot of people in the UK just now who have discovered just how true that statement really is.