This post is about how we make decisions – in particular, it’s about how we make major life decisions, such as whether or not to start a family, whether to leave your home country and start a life elsewhere, or whether to join a revolution and fight for a cause. These are what philosophers Edna Ullmann-Margalit and L. A. Paul call “big decisions” or “transformative decisions.”
But let’s start small. How do I make the little decisions in life? I am about to walk to a friend’s house, and I am deciding whether or not to take an umbrella. How do I go about making this decision? Well, I think first about what I can expect things to be like if I take the umbrella. Either it will rain or it won’t. I think about what value I assign to the situation in which it rains and I have the umbrella (a low value because rain’s a drag, as is carrying an umbrella, but not very low because at least I’m mostly dry) and the situation in which it doesn’t rain and I have the umbrella (still a low value, but a bit higher, because now I’m only carrying the umbrella and I’m still dry); then I weigh the first value by how likely I think it is to rain and I weigh the second value by how likely I think it is to stay dry; and I add those two weighted values together. That gives me what economists call my “expected utility” for taking the umbrella. Then I do the same to figure out my expected utility for not taking the umbrella. And I compare the two. I should pick whichever of the options has the higher expected utility (or, if they’re equal, pick either). This is the main claim of “expected utility theory.”
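To make the arithmetic concrete, here is the umbrella decision as a short sketch. The probabilities and utility numbers are, of course, made up purely for illustration – expected utility theory says nothing about what the numbers should be, only how to combine them:

```python
# Expected utility of an option: sum over outcomes of
# P(outcome) * utility(option, outcome). All numbers are illustrative.

p_rain = 0.3  # my credence that it will rain

# Utilities on an arbitrary scale, for each option and outcome:
utility = {
    "take umbrella":  {"rain": 4, "no rain": 6},   # dry but encumbered either way
    "leave umbrella": {"rain": 0, "no rain": 10},  # soaked vs. unburdened
}

def expected_utility(option):
    """Probability-weighted average of the option's outcome utilities."""
    return p_rain * utility[option]["rain"] + (1 - p_rain) * utility[option]["no rain"]

# Expected utility theory says: pick the option with the higher score.
best = max(utility, key=expected_utility)

print(expected_utility("take umbrella"))   # 0.3*4 + 0.7*6  = 5.4
print(expected_utility("leave umbrella"))  # 0.3*0 + 0.7*10 = 7.0
print(best)                                # "leave umbrella"
```

With these particular numbers, leaving the umbrella behind wins; raise my credence in rain enough and the verdict flips.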
Expected utility theory looks very general, and indeed economists take it to be so, using it to predict how people will act in a wide variety of different situations, from buying fruit to choosing how to vote in an election. Surely, then, it will also work for those major life decisions that I mentioned at the beginning? When I decide whether or not to become a parent, I think about how likely various possible outcomes are and how much value I’d attach to those outcomes: for instance, I become a parent and my child is happy and I retain a close-knit group of friends; I become a parent and I don’t bond with my child and resent them for restricting what I can do; and so on. Surely, as in the case of the umbrella, I weigh the values I assign to these outcomes by how likely the outcomes would be should I choose to become a parent; and I calculate my expected utility as before? And then I do the same for the option of not becoming a parent; and I compare the expected utilities.
But there’s a problem. In the case of the umbrella, I could reasonably expect not to change the values I assign to the possible outcomes between the time I make the decision and the time it would have its effect, namely, when I am outside, walking to my friend’s house. But the distinctive feature of major life decisions is that choosing one of the options might very well change something about you as a person – it might change the values that you assign to things. Before becoming a parent, I might assign a rather low value to the outcome in which I become a parent who doesn’t bond with their child and resents them, and I might prefer not becoming a parent at all to this outcome; but, once a parent, I might still wish I bonded more with my child and wish I had more choice in what I could do with my time; but now I might nonetheless prefer this situation to one in which I am not a parent at all. That is, by becoming a parent, I might have raised the value I assign to that state so much that it outweighs other negative components of my experience, whereas it did not before. So we face a problem: when making major life decisions, whose values matter? The values I assign at the time when I’m making the decision to begin the adoption process? Or the values assigned by my possible future selves once the decision is made one way or the other? Or perhaps I should effect a compromise between these two conflicting assignments of values? But how? This is the problem that Ullmann-Margalit and Paul raise for the standard account of decision-making that we introduced above under the title of expected utility theory.
The problem is that every answer has troubling consequences. I’m offered a year’s pass to Disney World that will become valid only when I reach 80 years of age. If I pay attention only to the values of my current self, I’ll pay a pretty high price for this, because currently I love Disney World. But, given that I can reasonably expect my 80-year-old self to place a pretty low value on what I now view as the glories of Splash Mountain, this seems money poorly spent. Or suppose I must decide now whether to make an unchangeable will that bequeaths my entire estate to an effective health charity working in very poor areas of the world. Having watched so many people tread what the playwright Alan Bennett describes as that “dreary safari” from the left of the political spectrum to the right as they grow older, I might reasonably expect myself to do the same. But I should certainly not now give full weight in the decision about my will to that future right-wing self – I currently abhor his views! Perhaps, then, we should give full weight to neither self, but rather attempt to effect a compromise between them, taking account of both of their values, just as a committee might attempt to make a decision by weighing up the attitudes and judgments of all of its members. Perhaps – but, if that is right, how are we to weigh the two selves, current and future? The example above of choosing how to make my unchangeable will suggests that the values of my future self should be given less weight the further they lie from those of my current self. But this strategy results in parochialism – if I follow it, I will never make a choice that I can expect to lead me to assign value in a way that is very different from the way I currently do; I will be stuck in a small circle of my current values. And this seems most unwelcome. There are plenty of other ways of assigning value that I think are perfectly legitimate and rational and reasonable, but that differ greatly from mine.
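To see how distance-based weighting collapses into parochialism, here is one way the committee-style compromise might be sketched. The weighting rule and every number below are my own illustration, not anything Ullmann-Margalit or Paul propose; each self is given a utility for the option and a (stipulated) measure of how far its values lie from my current ones:

```python
# A committee-style compromise between selves: weight each self's utility
# for an option, with the weight shrinking as that self's values diverge
# from my current values. The rule and all numbers are illustrative.

def compromise_utility(selves, sharpness=1.0):
    """selves maps a name to (utility_for_option, distance_of_values_from_now)."""
    weights = {name: 1.0 / (1.0 + sharpness * dist)
               for name, (_, dist) in selves.items()}
    total = sum(weights[name] * u for name, (u, _) in selves.items())
    return total / sum(weights.values())

# My current self dislikes the transformative option; my transformed future
# self loves it -- but its values sit far from mine, so it gets a small vote:
selves = {
    "now":    (-2.0, 0.0),  # distance 0 from itself -> weight 1
    "future": (8.0, 9.0),   # distant values        -> weight 1/(1+9) = 0.1
}
print(compromise_utility(selves))  # (1*-2 + 0.1*8) / 1.1, approximately -1.09
```

Even though my future self would value the outcome highly, its distance from my current values all but silences it, and the compromise verdict stays negative – the small circle of my current values wins.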
Our theory of decision-making is impoverished if it prevents me from ever making decisions that will lead me to change my values in that direction. This, then, is the problem of making major life decisions.
Featured image: decision by Thomas Haase. CC BY 2.0 via Flickr.