Rarely do esoteric academic debates, especially those concerning methodology, make their way into the popular press. But, for the past two years, a major controversy on the replication of psychological research has spilled into public view and shows few signs of abating. However, the debate is silent about the far more problematic conceptual crisis that challenges the core principles of scientific psychology.
The controversy erupted with the publication of the Open Science Collaboration’s (OSC) findings that a mere 39 out of 100 experimental and correlational studies published in psychology’s most competitive and highly vetted journals demonstrated consistent results when the same design and procedures were repeated. Hundreds of other journals publish psychological research under less stringent review processes, and the percentage of replicable studies there is likely much lower.
The most common way of interpreting these results is that scientific psychology needs to become more rigorous. Additional checks and balances should be institutionalized in order to ensure the quality of published research. Nosek et al. argue that journals should require transparency, with data, code, and research materials posted to public-access archives and analysis plans pre-registered, and that they should valorize studies that replicate previous findings.
Although these ideas should be embraced, they neglect a larger point. Even if psychologists manage to produce more replicable research, we will continue to misinterpret our data and to avoid the most fundamental and pressing problems of psychology. Indeed, psychology’s conceptual problem is much more profound and entrenched than the current controversy imagines.
Although psychology is fragmented into an eclectic mix of theoretical perspectives, there is a near universal acceptance that it emulates the natural sciences in its ambition to discover the timeless and universal laws about human beings. Psychology is held together by methodological unity based upon the reduction of observations into quantifiable variables that can be submitted to statistical analysis. Psychologists might disagree about the nature of human nature or the explanation for basic psychological phenomena, but there is little disagreement about what the appropriate scientific methods should be. For the majority of psychologists, psychology is not the study of persons or of psychological processes but the study of variables and how these variables relate to one another. This is what makes psychology a science.
There is nothing inherently problematic about reductive statistical analysis. In fact, such methods might be the most appropriate choice for studying certain questions such as the distribution of attitudes or beliefs in a particular population. But, beyond demographic facts, variable-centered research is a poor method for understanding persons or how psychological processes work. This is the crux of the problem.
It is a strange paradox that the mainstream’s method of choice can tell us nothing about the workings of thought processes, interpretations, or meaning making. Of course, psychologists often use their research findings to speak about how psychological processes function. But such inferences represent a basic misinterpretation of our analyses. In both experimental and correlational research, observations are made on variables, and we determine the statistical relationships between those variables across the group. For example, perceptions of national groups as conscientious were correlated with observable demographic variables (such as walking speed and longevity) but not with aggregated self- or peer-reported personality scores, a result that was replicated by the OSC. Persons primed with the belief that free will is an illusion were more likely to take advantage of a flawed computer program to answer difficult math questions, a result that was not replicated by the OSC. These are statistical relationships, but they tell us nothing about how persons think or how these processes, now variables, relate on the human level. Yet researchers talk about them as if they do. Interjecting insights from their life experience, researchers tell a story about what must be happening on the psychological level of analysis.
James Lamiell has called this misinterpretation of the data the “Thorndike maneuver”: statistical results observed between persons are used as evidence about what must be happening at the level of the individual person. We might know that two variables go together, that the same person has or expresses both, but we don’t know anything about how or why they work together in the way that they do. We don’t understand the process that puts them into a dynamic relationship because we haven’t studied it. This basic error is endemic to the system and cannot be corrected with more complex models or the inclusion of mediating and moderating variables.
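The gap between statistics about variables and facts about persons can be made concrete with a short simulation. The scenario and all of its numbers are hypothetical, chosen only to illustrate the logical point: every simulated person below exhibits a *negative* within-person relation between two variables, yet pooling everyone’s observations yields a strongly *positive* between-person correlation. A researcher performing the Thorndike maneuver on the pooled data would conclude the opposite of what is actually happening inside each person.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
pooled_x, pooled_y, within_rs = [], [], []

for person in range(50):
    # Each person's overall level drives a positive trend BETWEEN persons.
    base = random.uniform(0, 10)
    xs = [base + random.gauss(0, 1) for _ in range(20)]
    # But WITHIN each person, y falls as x rises (a negative process).
    ys = [2 * base - x + random.gauss(0, 0.5) for x in xs]
    within_rs.append(corr(xs, ys))
    pooled_x.extend(xs)
    pooled_y.extend(ys)

print(f"pooled between-person r: {corr(pooled_x, pooled_y):+.2f}")  # positive
print(f"mean within-person r:    {statistics.mean(within_rs):+.2f}")  # negative
```

The pooled correlation is a perfectly real fact about the group, and it would replicate on every rerun; it is simply not evidence about the process operating within any person. Replicability and interpretive validity are different problems.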
The capacity of our research to produce reliable results is obviously a major concern. However, we should be even more concerned that our methods faithfully direct us to the phenomena themselves and allow us to make trustworthy interpretations of what the data mean. To get beyond the replication crisis, psychology needs a deep reflection on how the discipline operates to produce knowledge. From my perspective, this must include welcoming innovative strategies for understanding persons and processes into the canon of psychological science. Psychology needs to recognize that, even if they are replicable, variable-centered methods are ineffective tools for getting at the problems most fundamental to our understanding of psychology.
Featured image credit: Brain, skull, science, anatomy and teaching by Jesse Orrico. CC0 public domain via Unsplash.