The middle of the twentieth century was an optimistic time in the study of human rationality. The newly rigorized science of economics proposed a unified decision-theoretic story of how humans ought to, and actually do, think and act. For the first time, we had good scientific evidence that humans were by and large rational creatures.
In the 1970s and 1980s, the story changed. A new wave of social scientists, led by Amos Tversky and Daniel Kahneman, proposed that humans decide how to think and act by using a simple toolbox of heuristic strategies. Heuristics are fast, frugal, and reasonably reliable, but they also have biases: systematic ways in which their outputs come apart from the requirements of leading decision-theoretic ideals. For example, we might judge that Linda is more likely to be a feminist bank teller than a bank teller, though that is clearly impossible, since all feminist bank tellers are bank tellers. Soon, hundreds of systematic biases in human cognition had been discovered and cataloged. As a result, many scholars came to believe that humans are not very rational after all.
In the past few decades, a new wave of scholars has urged that the researchers of the 1970s and 1980s were too harsh in their judgment of humanity. Humans are, it is increasingly held, indeed fairly rational creatures. But this is not because we always do what mid-century decision theory says we should. It is instead because mid-century decision theory did not tell the full story about what it means to be rational, while also being human. Recent work has urged that traditional decision-theoretic standards of rationality leave out a number of relevant factors that rational humans should and do respond to, and so wrongly treat humans as irrational when in fact we are responding correctly to factors that those standards have traditionally neglected.
In particular, theories of bounded rationality urge that humans are bounded agents. We are bounded by our internal structure, as agents with limited cognitive abilities who must pay costs to exercise them. We are also bounded by our external environment, which shapes the problems we are likely to face and the consequences that our actions will have on the world. Theories of bounded rationality aim to show how factors such as limited cognitive abilities, cognitive costs, and the structure of the environment shape how it is rational for humans to think and act. Bounded rationality theorists propose that what looks like irrationally biased cognition is often a rational response to cognitive and environmental bounds.
For example, suppose I ask you to estimate the date on which George Washington was first elected president of the United States. If you are like many people, you will answer this question by beginning with a relevant anchor value, such as 1776, the year in which the Declaration of Independence was signed. You will adjust your estimate upwards a few times to account for the length of the Revolutionary War and the interval between the war's end and the election of the first president. You may also adjust downwards to account for factors such as the need to quickly form a stable government. Like many people, you will likely arrive at an estimate in the low- or mid-1780s.
This process is called anchoring and adjustment: we begin by taking a relevant fact as an anchor value, then iteratively adjust our estimate upwards or downwards from the anchor to incorporate new items of information. In this case, anchoring and adjustment performs remarkably well: Washington was first elected in 1788. But anchoring and adjustment shows an anchoring bias: estimates tend to be skewed towards the anchor. In this case, they tend to be several years too low, because the anchor value of 1776 is lower than the true value of 1788. Classic theories of rationality would treat this as an irrational type of cognitive bias, but bounded rationality theorists are not so sure.
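The anchor-and-adjust process can be sketched as a toy simulation. This is a minimal illustration rather than a serious cognitive model: the step size, noise, and number of adjustments are assumptions chosen for the example.

```python
import random

def anchor_and_adjust(anchor, true_value, n_adjustments=5, step=2, noise=1):
    """Start from an anchor and make a limited number of effortful
    adjustments toward the true value. Because adjustment stops after
    only a few steps, the estimate stays skewed toward the anchor."""
    estimate = anchor
    for _ in range(n_adjustments):
        # Adjust toward the truth, but only by a modest, noisy step.
        direction = 1 if true_value > estimate else -1
        estimate += direction * (step + random.uniform(-noise, noise))
    return estimate

# Estimating Washington's first election (true value 1788) from the
# anchor 1776: a handful of adjustments lands in the mid-1780s.
estimate = anchor_and_adjust(anchor=1776, true_value=1788)
```

Averaged over many runs, estimates from this toy process cluster a few years below 1788, reproducing the anchoring bias described above.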
Anchoring bias happens because agents do not make enough adjustments to wash out the effect of the initial anchor. There is strong evidence that this happens because adjustments are effortful: we could, if we wished, wash out the anchor by making thousands of tiny adjustments, but this may not be worth the effort. That is, we face an accuracy-effort tradeoff: each additional adjustment buys increased accuracy at the price of increased effort. Bounded rationality theorists propose that in cases such as this one, humans make an optimal compromise between competing goals such as accuracy and effort. We choose a number of adjustments sufficient to yield a highly accurate estimate while keeping the number of effortful adjustments low. In this way, what looks like an irrational cognitive bias may actually be a rational response to our bounded human condition.
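The accuracy-effort tradeoff can also be illustrated with a toy optimization. The numbers here (initial error, diminishing returns, effort cost) are assumptions for illustration, not empirical values: each adjustment shrinks the remaining error, but with diminishing returns, while costing a fixed amount of effort.

```python
def net_value(n_adjustments, initial_error=12, decay=0.7, effort_cost=0.5):
    """Hypothetical payoff of making n adjustments: adjustments shrink
    the error geometrically (diminishing returns), and each adjustment
    costs a fixed amount of effort."""
    remaining_error = initial_error * decay ** n_adjustments
    return -remaining_error - effort_cost * n_adjustments

# The value-maximizing number of adjustments, searched over 0..20.
best_n = max(range(21), key=net_value)
residual = 12 * 0.7 ** best_n  # error still remaining at the optimum
```

Under these assumed numbers, the optimum stops at six adjustments, leaving a residual error of about 1.4 years in the direction of the anchor: the bias emerges as the byproduct of a rational compromise between accuracy and effort, not as a failure of reasoning.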
Not all human cognition is rational. For example, there is not much to be said for someone who prefers to draw from an urn in which 7 of 100 balls win a prize over an urn in which 3 of 10 balls do, on the grounds that 7 is larger than 3; the second urn offers a 30% chance of winning against the first's 7%. But careful theorizing about bounded rationality aims to show that many cases which look as biased and irrational as this urn draw are in fact fully rational when viewed in the proper light.
Featured image by Sasha Freemind via Unsplash.