Imagine that, while walking along a pier, you see two strangers drowning in the sea. Lo and behold, you can easily save them both by throwing them the two life preservers located immediately in front of you. Since you can’t swim and no one else is around, there is no other way these folks will survive.
To throw in just one life preserver would be to save one person while allowing the other to die pointlessly. That would be morally wrong. Though you’d have saved someone you could have allowed to die, that does not make it permissible to let the other person die. You would be morally blameworthy, not praiseworthy, for throwing in one rather than both life preservers.
Now imagine you are in a different situation. Again you are on a pier. There is one stranger drowning to your left, and two others drowning to your right. You have no life preservers to throw in, only a single life raft. You can easily push the raft into the water to your left, saving the one person, or to your right, saving the other two. There is no other way any of these folks will survive, and, tragically, there is no way all three will.
Suppose that, in full awareness of the situation, and without prejudice towards anyone in particular, you decide to help the one stranger over the two – you are an “innumerate altruist” in that you simply want to help someone, but without regard for who or how many. In saving the one over the other two you are not letting the two die pointlessly. You are saving someone who would have died if you had instead saved the two to your right.
What you do is nonetheless wrong. Since you cannot save all three strangers, you must balance their competing claims to your help. The left person’s claim to be helped is balanced by one claim on the right, leaving one other claim on the right unbalanced. You are thereby morally required to save those on the right side of the pier, and are morally blameworthy for failing to give due weight to each person’s claim. Even if your heart is in the right place, morality also requires you to use your head.
Similar claims hold in other cases. Suppose two strangers are stuck on a subway track, and the train is approaching. If you do nothing, one person will be killed and the other will lose a leg. You cannot pull both out of the way in time, but can easily rescue one or the other. The one person’s life must take precedence over the other’s leg. (This example may remind some of the well-known “Trolley Problem” in which you can decide whether to redirect a runaway trolley so that it kills one person instead of five – the present example is different both with respect to the harms at stake and your causal relation to them.)
In another scenario, two strangers would each face a significant chance of dying were they struck by the oncoming train. One would face a one in two chance of dying, the other only one in a hundred. You must help the former over the latter, other things being equal. These cases call not only for altruism, but numerate altruism.
Why should the numbers matter only when easily rescuing people who are nearby? It seems they should matter, in some shape or form, in many other contexts too: when rescue is costly to us in terms of money, time, or effort; when those we can rescue are distant and unidentifiable; when there are many potential rescuers; when the plights of those we can help are the result of social injustice rather than natural accidents; when those we can help are nonhuman animals; and when we can affect the quality of life of future individuals, whoever in particular they may be.
This is not to say there are no moral differences between these various contexts. But in all of them it is a moral mistake not to take account of the number of individuals we can help, the degree to which we can help them, and the probability our acts will actually help.
Taking account of all these numbers leads to increasingly challenging questions. Must we save two lives or prevent a thousand people from losing their sight? Can avoiding tiny chances of future catastrophes take precedence over helping folks right here and now? Because resources are limited, we cannot respond fully to every morally serious consideration. As with deciding whether to push the only raft left or right, we must face our situation of scarcity head-on.
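To see where each of those numbers enters such a comparison, here is a minimal sketch with invented figures. The benefit weights, the probabilities, and the relative "moral weight" given to blindness versus death are all assumptions for the sake of illustration, not answers to the questions above.

```python
# Illustrative only: all figures below are invented for the example.
# Combines the three factors named earlier: how many individuals are
# helped, the degree to which each is helped, and the probability
# that the act actually helps.

def expected_benefit(num_helped, benefit_per_person, prob_of_success):
    """Expected benefit = people helped x benefit per person x chance of success."""
    return num_helped * benefit_per_person * prob_of_success

# Saving two lives, nearly certain to succeed (benefit per person = 1.0,
# treating one life saved as the unit of benefit).
save_two_lives = expected_benefit(2, 1.0, 0.95)

# Preventing a thousand people from losing their sight, assuming
# (purely for illustration) that preventing blindness is worth 0.1
# of a life saved and that the intervention works half the time.
prevent_blindness = expected_benefit(1000, 0.1, 0.5)

print(f"Save two lives:    {save_two_lives:.1f}")     # 1.9
print(f"Prevent blindness: {prevent_blindness:.1f}")  # 50.0
```

Nothing in this toy calculation settles how a harm like blindness ought to be weighed against a death; it only shows where the number helped, the size of the help, and the probability of success each enter the comparison.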
The trillion-dollar question is how to balance all the morally relevant numbers that come into competition in the real world, be it when giving to charities, deciding between careers, or adopting public policies. This question is what drives effective altruism. It is unclear to what extent such a grand question is answerable, but we can make some progress.
Suppose that there were limited space on the internet, and we could save either one more presentation of the trolley problem (oh, but it is about health care this time!) or three articles about how being ponderous about simple problems (No, you can’t have a new heart, you are 70 years old. Obviously.) is a kind of system-justifying obfuscation. What should we do? Oh, you are saying that I’m suggesting there is a false scarcity here? Oh, good point.
Conclusion: Spread it thin
Should the above logic apply to immigration policy?
Of course there’s a false scarcity here. It’s a thought experiment, intended to illuminate the fundamental laws in a simple environment, training wheels for the real thing. It’s the same as starting physics students on infinite frictionless planes.
In the real world, you are of course able to find “third options,” and of course you are morally obligated to do so. However, this *doesn’t change anything important about the problem*. You now have more options to consider and weigh against each other. But not all scarcities are false.
With a finite amount of money, you can take a finite number of opportunities, and you almost certainly can’t save everyone. *Everyone* who desires to be altruistic is a triage nurse, choosing which lives to save with their finite funds. And though the real decisions are far more complicated, with far more variables and nonlinearities than these simple white-room thought experiments, the moral principles by which they must be made are the same.
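One way to make that triage point concrete is a small sketch of allocating a fixed budget under scarcity. The opportunities, costs, and benefit figures below are invented, and the greedy most-cost-effective-first rule is just one simple policy, not a claim about how real charities should be ranked.

```python
# Illustrative triage with a finite budget: fund the most
# cost-effective hypothetical opportunities first.
# All names, costs, and benefit figures are invented.

opportunities = [
    {"name": "A", "cost": 4000, "benefit": 2.0},
    {"name": "B", "cost": 1000, "benefit": 0.8},
    {"name": "C", "cost": 2500, "benefit": 1.5},
    {"name": "D", "cost": 500,  "benefit": 0.1},
]

budget = 5000
funded = []

# Greedy rule: sort by benefit per dollar spent, fund whatever still fits.
for opp in sorted(opportunities, key=lambda o: o["benefit"] / o["cost"], reverse=True):
    if opp["cost"] <= budget:
        funded.append(opp["name"])
        budget -= opp["cost"]

print("Funded:", funded, "| budget left:", budget)
# Funded: ['B', 'C', 'D'] | budget left: 1000
```

Real allocation problems add uncertainty, lumpy costs, and interactions between options, but the underlying exercise of weighing competing claims against a finite budget is the same one the pier cases isolate.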