By Professor Mark Fenton-O’Creevy
Professor Fenton-O’Creevy is a member of the Essentia Advisory Board and Associate Dean of Strategy at The Open University Business School. As part of his research into decision-making by investors and professional traders, he is currently working with David Tuckett on developing new approaches to decision support which build on Conviction Narrative Theory.
“The heart has its reasons which reason knows not of… We know the truth not only by reason, but by the heart.” – Blaise Pascal
“There have been great societies that did not use the wheel, but there have been no societies that did not tell stories.” – Ursula K. Le Guin
We’re still here
To listen to some scholars of decision-making, it is a great wonder that humans have survived as long as they have.
We are constantly prone to biases, rely on simplistic and flawed heuristics, and, even when trained in statistics, fail to apply the most basic rules of probability effectively.
Despite this, humans have been a remarkable evolutionary success story and have adapted to many different environments. One of the keys to that success is not so much our ability to adapt physically as our skill in adaptive reasoning.
To better understand this apparent paradox, it is worth exploring recent developments in psychology. They reveal that the human brain is capable of navigating highly complex and uncertain environments with a sophistication beyond that of current forms of artificial intelligence.
This raises interesting and timely questions about the optimal balance between man and machine.
The respective strengths of each may mean that, in complex fields such as investment management, superior decision-making performance is best achieved through combining human and machine intelligence, rather than choosing between them.
Is the brain really a computer?
Back in the mid-20th century, cognition re-emerged as a key field of study in psychology.
In a reaction against the then dominant behaviorist perspective, psychologists once again began to speculate about, and research, processes going on within the person, rather than simply relying on observation of the person’s external behavior.
This development in psychological thinking was strongly influenced by the simultaneously emerging fields of information and computer science. As a result, a dominant metaphor in cognitive science became the brain-as-computer.
This analogy left little room for the role of emotion, except as a disturbance of optimal cognitive function, or, at best, as a signalling system to indicate the gap between goals and outcomes.
Given this history, it is therefore unsurprising that many contemporary efforts to improve the quality of financial decision-making have focused on replacing humans with machine intelligence.
However, this approach may turn out to have significant limitations – especially given the growing evidence that humans and computers think in very different ways, and have very different strengths.
The idealized rational approach falls short
There are many accounts of what the ideal, rational approach to decision-making should look like. Many of them have a good deal in common with the six steps described by the psychologist Max Bazerman:
- Perfectly define the problem
- Identify all criteria
- Accurately weigh all of the criteria according to your preferences
- Know all relevant alternatives
- Accurately assess each alternative based on each criterion, and
- Accurately calculate and choose the alternative with the highest perceived value [1].
This seems a plausible, even sensible approach. After all, it is very close to what we were taught at school, where typically we worked with defined problems in contexts where there was a right answer and a known set of potential outcomes. However, it turns out to be an idealised approach, giving a poor account of much human decision-making, and is best suited to what have been described as ‘small world problems’.
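To see how neatly this idealised model maps onto computation, consider a minimal sketch in Python. The problem, criteria, weights, alternatives and scores below are all hypothetical; the point is that every step presumes its inputs are already perfectly known.

```python
# A toy rendering of the six steps as computation, with hypothetical
# numbers for a car purchase. Steps 1-2: problem and criteria are given;
# step 3: weights; step 4: alternatives; step 5: scores per criterion.
criteria_weights = {"price": 0.5, "reliability": 0.3, "comfort": 0.2}

alternatives = {
    "car_a": {"price": 8, "reliability": 6, "comfort": 5},
    "car_b": {"price": 5, "reliability": 9, "comfort": 7},
}

def weighted_value(scores: dict[str, float]) -> float:
    """Step 6: the weighted sum of criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

best = max(alternatives, key=lambda a: weighted_value(alternatives[a]))
print(best, weighted_value(alternatives[best]))  # car_a ~6.8
```

Nothing in this calculation is hard for a machine; the difficulty, as the next section shows, lies in problems where the inputs themselves are unknown.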
‘Small world’ vs. ‘large world’ problems
Small world problems can be characterised as having the following attributes:
- A well-defined task and goal
- A known set of choices and potential outcomes
- Highly replicable processes
- Known (or at the very least knowable) probability distributions associated with outcomes given any choice.
Because small world problems are very tractable to study in the laboratory, they have dominated judgement and decision-making research. However, other fields of study have been more interested in what can be described as ‘large world problems’. These are characterised by:
- Ill-structured problems
- Deep uncertainty (unknown and sometimes unknowable probability distributions)
- Complex and dynamic environments
- Little replicability.
In many cases, large world problems are also reflexive – ie they depend on estimating the likely actions of other people who are themselves basing their actions on the likely actions of others. They may also be characterised by time stress, shifting goals, and multiple stakeholders with differing perspectives.
Large world problems are much less tractable to laboratory study, and have been much more studied in the field, notably by naturalistic decision-making researchers [2].
As technology and algorithms improve, computers are becoming highly effective tools for tackling small world problems, as they are less subject to failures of probabilistic reasoning. However, computers are poorly suited to solving large world problems. This is a domain where humans seem to have a distinct advantage.
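The small world side of this division of labour is easy to illustrate. Below is a minimal Python sketch, with hypothetical numbers, of a classic problem where trained humans routinely neglect base rates but a machine simply applies Bayes’ rule:

```python
# The kind of probabilistic bookkeeping machines do reliably and people
# often get wrong (base-rate neglect). All numbers are hypothetical:
# a signal fires for 90% of genuine events, for 5% of non-events,
# and genuine events have a 1% base rate.
p_event = 0.01                    # prior: base rate of genuine events
p_signal_given_event = 0.90       # true positive rate
p_signal_given_no_event = 0.05    # false positive rate

# Bayes' rule: P(event | signal)
p_signal = (p_signal_given_event * p_event
            + p_signal_given_no_event * (1 - p_event))
p_event_given_signal = p_signal_given_event * p_event / p_signal

print(f"{p_event_given_signal:.1%}")  # ~15.4%, far below the intuitive 90%
```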
Storytelling as a solution to uncertainty
Humans are storytellers.
One of our key evolutionary adaptations to a complex, uncertain world has been the ability to weave stories, which we use as a device for making and sharing meaning. These narratives give us a basis for action in the face of uncertainty, as well as a tool for persuading others to work with us.
Stories are inextricably connected with our emotions; they enable us to draw on the past and to project ourselves forward into the future. Through stories we experience possibilities as if we were there, experiencing the emotions evoked by possible outcomes.
Without stories and emotions to guide us, and in a world in which available sensory information always massively exceeds our capacity to process it, we have no way to decide what to pay attention to. In this way, stories and emotions are an important tool of rational decision-making. (Though, of course, just as we can make mistakes of calculation, we may make mistakes of narrative and emotion).
Like the changing world in which we live, these stories have consistencies over time, but we also update them to reflect our perceptions of change around us.
Karl Weick [3] describes this process as ‘sensemaking’: the ongoing process through which we make meaning of our lived experience. Sensemaking is an emotional process because emotions are an inescapable (and often useful) element of human decision-making.
Using in-depth research into investment manager decision-making, David Tuckett and colleagues [4, 5] have developed a new theory of human decision-making which places what we know about the role of emotions and story-making at its heart: Conviction Narrative Theory.
Conviction Narrative Theory
Conviction Narrative Theory proposes that, faced with uncertainty, humans construct narratives about the future outcomes of their actions. They develop these to the point where they have a subjective sense of conviction about a course of action.
These narratives both invoke and manage emotions. These may be pleasurable emotions about future gain (‘approach emotions’) or anxious, fearful emotions about future loss (‘avoidance emotions’).
Narrative resources brought to bear in this process include the full panoply of statistical and probabilistic techniques that humans have devised. However, in this way of understanding human thinking, such techniques are just one of many narrative resources available. Others include subjective judgements about the trustworthiness of information, an unfolding sense of ways in which the world may be changing, and so on.
Placing narrative and meaning-making at the heart of understanding human action gives us a route to understanding the particular advantages that humans have over machines – especially in the conditions of deep uncertainty (ie ‘large world problems’) which humans have evolved to confront.
A particular insight of Conviction Narrative Theory is its distinction between the narrative and emotion configurations employed in resilient decision-making and those employed in decision-making driven by the avoidance of anxiety:
- Resilient decision-making is characterised by a mixture of ‘approach’ and ‘avoidance’ emotions and is open to new information and the possibility of being wrong.
- Avoidant decision-making is characterised by polarisation towards either ‘approach emotions’ (anxiety is avoided by discounting all information which does not support a preferred action) or ‘avoidance emotions’ (anxiety is avoided through the relief of avoiding an action, discounting information relevant to its benefits). Avoidant decision-making is thus insensitive to information that conflicts with a preferred course of action.
The efficacy of Conviction Narrative Theory is beginning to be demonstrated through studies in which the balance of ‘approach’ and ‘avoidance’ emotions in financial news sources is used to predict market crises [6].
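The flavour of this approach can be conveyed in a minimal Python sketch. The word lists and scoring below are illustrative assumptions, not the methodology of the cited study, which tracks shifts in this emotional balance across large news corpora over time:

```python
# Score a text by the relative balance of 'approach' and 'avoidance'
# emotion words. These word lists are illustrative stand-ins only.
APPROACH_WORDS = {"optimism", "confidence", "excitement", "opportunity"}
AVOIDANCE_WORDS = {"anxiety", "fear", "worry", "panic"}

def emotion_balance(text: str) -> float:
    """(approach - avoidance) word counts, normalised by text length."""
    tokens = [t.strip(".,;:!?") for t in text.lower().split()]
    approach = sum(t in APPROACH_WORDS for t in tokens)
    avoidance = sum(t in AVOIDANCE_WORDS for t in tokens)
    return (approach - avoidance) / max(len(tokens), 1)

print(emotion_balance("Growing confidence and excitement about recovery."))
print(emotion_balance("Panic and fear spread as worry grows."))  # negative
```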
Joining forces
So where does this leave the role of computers and big data in improving financial decision-making?
It suggests that the role of machine intelligence should not be to replace humans but to complement them, recognizing the particular strengths and weaknesses of each.
This would effectively be a decision-support approach, but not one in which technology is used to try to make people think like computers. Instead, the role of machine intelligence should be twofold:
- Computers should be used to ensure consistent, rapid, accurate and bias-free comparison of different action-options in domains which approximate to small world problems.
- For large world problems, computers can play a role in supporting and enhancing the human capacity to engage in resilient approaches to decision-making (which we know are well-suited to managing complexity, ambiguity, and rapidly changing conditions).
Those working with financial markets often use computers effectively to support decisions about small world problems, such as calculating the value of an asset given certain assumptions. However, these problems are embedded in a context of deep uncertainty and complexity.
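The valuation step itself is a textbook small world calculation, as the following minimal sketch with hypothetical inputs shows:

```python
# Present value of an asset, given assumed cash flows and a discount
# rate. The inputs here are hypothetical.
def present_value(cash_flows: list[float], rate: float) -> float:
    """Discount each year's cash flow back to today and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

# Five years of assumed cash flows, discounted at 8%.
print(round(present_value([100, 105, 110, 115, 120], 0.08), 2))
```

The arithmetic is trivially mechanised; the deep uncertainty lies in the assumed cash flows and discount rate fed into it.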
An important potential role of technology in this large world context, therefore, is to help monitor and support human conviction narratives. A very valuable part of this will be to signal when the human side of the partnership is falling prey to anxiety-driven avoidance of relevant information.
The question, then, is not whether human or machine intelligence is better, but rather how these very different forms of intelligence may best be used to complement each other.
Post citations:
- Bazerman, M.H., Judgment in Managerial Decision Making. 2002, New York: Wiley.
- Klein, G., Streetlights and Shadows: Searching for the Keys to Adaptive Decision-Making. 2009, Cambridge, MA: MIT Press.
- Weick, K.E., Sensemaking in Organizations (Foundations for Organizational Science). 1995, Thousand Oaks, CA: Sage Publications.
- Chong, K. and D. Tuckett, Constructing Conviction through Action and Narrative: How Money Managers Manage Uncertainty and the Consequences for Financial Market Functioning. Socio-Economic Review, 2014: p. 1-26.
- Tuckett, D., Minding the Markets: An Emotional Finance View of Financial Instability. 2011, London: Palgrave Macmillan.
- Nyman, R., D. Tuckett, et al., News and narratives in financial systems: Exploiting big data for systemic risk assessment. 2015, Bank of England Working Paper series.