Reality check: the dangers of confirmation bias

A professional hazard that nearly all experts must learn to overcome is “confirmation bias,” the tendency to favor and recall information that confirms one’s existing hypotheses.

In this excerpt from The Death of Expertise, Tom Nichols discusses a 2014 study on public attitudes towards gay marriage to illustrate the dangers of confirmation bias.

“Confirmation bias” is the most common—and easily the most irritating—obstacle to productive conversation, and not just between experts and laypeople. The term refers to the tendency to look for information that only confirms what we believe, to accept facts that only strengthen our preferred explanations, and to dismiss data that challenge what we already accept as truth. We all do it, and you can be certain that you and I and everyone who’s ever had an argument with anyone about anything has infuriated someone else with it.

Scientists and researchers tussle with confirmation bias all the time as a professional hazard. They, too, have to make assumptions in order to set up experiments or explain puzzles, which in turn means they’re already bringing some baggage to their projects. They have to make guesses and use intuition, just like the rest of us, since it would waste a lot of time for every research program to begin from the assumption that no one knows anything and nothing ever happened before today. “Doing before knowing” is a common problem in setting up any kind of careful investigation: after all, how do we know what we’re looking for if we haven’t found it yet?

Even though every researcher is told that “a negative result is still a result,” no one really wants to discover that their initial assumptions went up in smoke.

Researchers learn to recognize this dilemma early in their training, and they don’t always succeed in defeating it. Confirmation bias can lead even the most experienced experts astray. Doctors, for example, will sometimes get attached to a diagnosis and then look for evidence of the symptoms they suspect already exist in a patient while ignoring markers of another disease or injury.

This is how, for example, a 2014 study of public attitudes about gay marriage went terribly wrong. A graduate student claimed he’d found statistically unassailable proof that if opponents of gay marriage talked about the issue with someone who was actually gay, they were likelier to change their minds. His findings were endorsed by a senior faculty member at Columbia University who had signed on as a coauthor of the study. It was a remarkable finding that basically amounted to proof that reasonable people can actually be talked out of homophobia.

The only problem was that the ambitious young researcher had falsified the data. The discussions he claimed he was analyzing never took place. When others outside the study reviewed it and raised alarms, the Columbia professor pulled the article.

Why didn’t the faculty and reviewers who should have been keeping tabs on the student find the fraud right at the start? Because of confirmation bias. As the journalist Maria Konnikova later reported in The New Yorker, the student’s supervisor admitted that he wanted to believe the study’s findings. He and other scholars wanted the results to be true, and so they were less likely to question the methods that produced their preferred answer. “In short, confirmation bias—which is especially powerful when we think about social issues—may have made the study’s shakiness easier to overlook,” Konnikova wrote in a review of the whole business. Indeed, it was “enthusiasm about the study that led to its exposure,” because other scholars, hoping to build on the results, found the fraud only when they delved into the details of research they thought had already reached the conclusion they preferred.

This is why scientists, when possible, run experiments over and over and then submit their results to other people in a process called “peer review.” This process—when it works—calls upon an expert’s colleagues (his or her peers) to act as well-intentioned but rigorous devil’s advocates. This usually takes place in a “double-blind” process, meaning that the researcher and the referees are not identified to each other, the better to prevent personal or institutional biases from influencing the review.

This is an invaluable process. Even the most honest and self-aware scholar or researcher needs a reality check from someone less personally invested in the outcome of a project.

Featured image credit: Untitled by Pixabay. CC0 via Pexels.
