Oxford University Press's
Academic Insights for the Thinking World

Is free will required for moral accountability?

By Joshua Knobe

Imagine that tomorrow’s newspaper comes with a surprising headline: ‘Scientists Discover that Human Behavior is Entirely Determined.’ Reading through the article, you learn more about precisely what this determinism entails. It turns out that everything you do – every behavior, thought and decision – is completely caused by prior events, which are in turn caused by earlier events… and so forth, stretching back in a long chain all the way to the beginning of the universe.

A discovery like this one would naturally bring up a difficult philosophical question. If your actions are completely determined, can you ever be morally responsible for anything you do? This question has been a perennial source of debate in philosophy, with some philosophers saying yes, others saying no, and millennia of discussion that leave us no closer to a resolution.

As a recent New York Times article explains, experimental philosophers have been seeking to locate the source of this conundrum in the nature of the human mind. The key suggestion is that the sense of puzzlement we feel in response to this issue arises from a conflict between two different psychological processes. Our capacity for abstract, theoretical reasoning tells us: ‘Well, if you think about it rationally, no one can be responsible for an act that is completely determined.’ But our capacity for immediate emotional responses gives us just the opposite answer: ‘Wait! No matter how determined people might be, they just have to be responsible for the terrible things they do…’

To put this hypothesis to the test, the philosopher Shaun Nichols and I conducted a simple experiment. All participants were asked to imagine a completely deterministic universe (‘Universe A’). Then different participants were given different questions that encouraged different modes of thought. Some were given a question that encouraged more abstract theoretical reasoning:

In Universe A, is it possible for a person to be fully morally responsible for their actions?

Meanwhile, other participants were given a question that encouraged a more emotional response:

In Universe A, a man named Bill has become attracted to his secretary, and he decides that the only way to be with her is to kill his wife and three children. He knows that it is impossible to escape from his house in the event of a fire. Before he leaves on a business trip, he sets up a device in his basement that burns down the house and kills his family.

Is Bill fully morally responsible for killing his wife and children?

The results showed a striking difference between the two conditions. Participants in the abstract reasoning condition overwhelmingly answered that no one could ever be morally responsible for anything in Universe A. But participants in the more emotional condition had a very different reaction. Even though Bill was described as living in Universe A, they said that he was fully morally responsible for what he had done. (Clearly, this involves a kind of contradiction: it can’t be that no one in Universe A is morally responsible for anything but, at the same time, this one man in Universe A actually is morally responsible for killing his family.)

Of course, it would be foolish to suggest that experiments like this one can somehow solve the problem of free will all by themselves. Still, it does appear that a close look at the empirical data can afford us a certain kind of insight. The results help us to get at the roots of our sense that there is a puzzle here and, thereby, to open up new avenues of inquiry that might not otherwise have been possible.

Joshua Knobe is an experimental philosopher affiliated both with the Program in Cognitive Science and the Department of Philosophy at Yale University. He is editor with Shaun Nichols of Experimental Philosophy. Watch a video introduction featuring the comedian Eugene Mirman here.

Recent Comments

  2. Lisbeth Jardine

    I gather, then, that it is also pre-determined that the only conclusion Bill can come to about how to be with the object of his desire is to murder his family, and that he has no possibility of arriving at any other solution to his dilemma, or even of choosing to forgo the object of his desire?

    I never felt quite so intellectually and morally compromised as I did the semester I took a graduate seminar in ethics. Why is it that professional ethicists are always finding scenarios to do away with the maximum number of people in the most gruesome ways?

    –lsj, port angeles, wa

  4. Brian Westley

    I’ve never really understood the idiotic idea that, if determinism is real, we can no longer punish people for what they do.

    For example, suppose a large rock is headed at the earth from space. The fact that this rock will hit the earth is deterministic, given its path, the sun and planets’ gravity, etc.

    Also say we have the means to send up a rocket to set off an atomic blast near the rock, to break it up and push the fragments off to miss the earth.

    Would anyone object to us “punishing” the rock in this way? The rock can’t help itself, so is it “fair” to destroy it?

    I don’t CARE if people commit crimes due to determinism, I’m still going to lock them in cages to remove them from the rest of us. And, of course, if they have no choice in committing crimes, I have no choice in putting them in cages, either.

  5. brkev

    I don’t think the point is whether or not we can punish people for what they do, though some do say that (Clarence Darrow famously argued against the death penalty for two of his clients because they had no choice but to murder).

    I think the point is whether or not people are responsible for their actions if determinism is true. We put to death rabid animals that maul human beings, but no one says that the rabid animal is evil or immoral or unjust, because the animal had no choice. So the question is whether you can call a human any of those things if the human had no choice.
