Recent years have seen remarkable growth in the development of new tools for making causal claims about complex social phenomena. Social scientists have been at the forefront of developing many of these tools, in particular ones that give analysts the ability to make causal inferences in survey research.
An example of an important contribution in this area is the paper “Causal Inference in Conjoint Analysis: Understanding Multidimensional Choices via Stated Preference Experiments,” published in Political Analysis in 2014. This paper, co-authored by Jens Hainmueller, Daniel J. Hopkins, and Teppei Yamamoto, was recently awarded the Miller Prize, recognizing it as the best paper to appear in Political Analysis in the previous year. In recognition of this award, the paper is available for free online access.
I recently had an opportunity to ask Jens, Daniel, and Teppei a series of questions about their paper, their research project, scientific collaborations, and their strategies for publishing technical papers.
Your paper uses a survey experiment design, called conjoint analysis, to analyze causal effects of multiple treatments. What is conjoint analysis, and how does your paper develop a new way to use this technique in survey research?
Conjoint analysis is a survey method for measuring preferences about objects that are characterized by many different attributes of interest, such as products, policy packages, and candidates in elections. The method is commonly used in marketing research to assess consumers’ demand for particular types of products and services, but until recently, political scientists had rarely taken advantage of this powerful method despite the recent boom in survey experiments. In our paper, we analyze the statistical underpinnings of this method from a causal inference perspective. The framework is helpful in identifying what quantities conjoint designs actually allow us to estimate. In a nutshell, we show that conjoint analysis can be used to estimate valid causal effects of multiple attributes and compare them to one another, and that estimation can be done with methods that are both simple and familiar to political scientists, such as linear regression. We also provide statistical tools to implement what we propose: a GUI-based application that allows users to generate PHP scripts for web-based conjoint experiments, and an R package that implements all of the proposed estimation methods and more.
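The point that randomized attributes let a simple linear regression recover the causal effects of each attribute can be illustrated with a small simulation. The sketch below is not the authors' R package; it is a hypothetical two-attribute candidate experiment with made-up effect sizes, estimated as a linear probability model via ordinary least squares.

```python
import numpy as np

# Hypothetical conjoint simulation: two independently randomized binary
# attributes of a candidate profile, and a binary outcome indicating
# whether the respondent selected that profile. Attribute names and
# effect sizes are illustrative, not from the paper.
rng = np.random.default_rng(42)
n = 5000

party = rng.integers(0, 2, n)       # 0 = baseline party, 1 = other party
experience = rng.integers(0, 2, n)  # 0 = no prior office, 1 = prior office

# Assumed true average marginal component effects (AMCEs): +0.10 and +0.20
p_select = 0.35 + 0.10 * party + 0.20 * experience
y = rng.binomial(1, p_select)

# Because the attributes are randomized independently of each other,
# regressing the choice outcome on attribute dummies recovers the AMCEs.
X = np.column_stack([np.ones(n), party, experience])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"AMCE(party)      = {coef[1]:.3f}")   # should be near 0.10
print(f"AMCE(experience) = {coef[2]:.3f}")   # should be near 0.20
```

With thousands of randomized profiles, the regression coefficients land close to the assumed effects, which is the intuition behind estimating conjoint effects with familiar tools.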
In the conclusion of your paper, you note that the methodology you develop can be used for public policy research. What types of policy design problems do you think your methodology would be well-suited for?
When designing a public policy, one critical question is what the public prefers. In modern society, any policy involves multiple dimensions and trade-offs, which makes it crucial to assess how people would react to the proposed policy as a bundle. For example, consider a proposal to build a national stadium for hosting the next summer Olympic games (an ongoing, thorny issue in Japan). An ideal stadium would accommodate audiences of tens of thousands, have a fully retractable roof for multi-purpose use, feature a cutting-edge design, and so forth. But such a stadium would also cost billions of dollars of taxpayers’ money and potentially have negative impacts on the environment. How should the government balance those multiple considerations and come up with a proposal that will be backed by the public? If some corners have to be cut because of the budget constraint, where should those cuts be made, and which other aspects should be prioritized? Conjoint analysis is a natural and effective means to answer such questions. Similar issues arise when designing many other types of multidimensional policies: what should a health care reform look like, what should a welfare reform look like, what should a bailout package look like, and many more.
How are you and your colleagues working to apply or extend the results in your Political Analysis paper?
We recently published a study that examines the external validity of vignette and conjoint survey experiments against real-world choice behavior. We found that the paired conjoint design, in contrast to single profile and vignette designs, is remarkably effective at replicating this real-world benchmark. But there are still many open questions about how to optimize conjoint experiments for causal inference. How many attributes should you include in the conjoint table, and how many profiles? How many choice tasks? Is it better to use a forced-choice or a rating design? We are currently working on several projects that examine these key questions to better understand how one gets the most out of conjoint experiments. We are also updating our software package and survey design tool to make it easier for researchers to implement and analyze conjoints.
Your Political Analysis paper is the product of a collaboration that spans different time zones and involves methodologists at different universities. How do collaborations like this work? How did you and your co-authors come together to work on this problem? Do you have any advice for other scholars who are considering collaborations like these?
There is no secret sauce for success in collaborations that we know of, and if anybody has figured one out, we would love to hear about it. What helps for us is that we share a preference for working on methodological problems that we deem important for applied research. In fact, many of the methods problems we get excited about are the ones we encounter in our substantive work, when we or one of our colleagues hits a roadblock. In terms of workflow, we do regular conference calls and work jointly on our drafts and analysis, and we also benefit from our great colleagues who give us feedback on our work. But at least for us, progress in research is far from a linear or even a monotonic process. We generate ideas for new methods but often toss them out once we figure out that they only work under overly stringent assumptions or produce bogus results. Fortunately, every once in a while something turns out to work, and then we write it up and submit.
Publishing technical papers like yours is often difficult; it’s common to hear that authors spend months, even years, trying to get papers like yours published in a good journal. What advice do you have for other authors about finding the right journal, crafting your paper for submission, and dealing with the criticisms of anonymous reviewers and journal editors?
It’s very true that in political science, there are not a ton of outlets for technically oriented work. In addition to Political Analysis, one might target the American Journal of Political Science’s Workshop section or sometimes the American Political Science Review or Political Science Research and Methods, but the options can be limited. Given that, it does seem especially useful to put some effort into tailoring a manuscript for the specific journal and audience in question. By the time we submit a piece, we have probably written and discarded more text than is in the manuscript itself. A related challenge we’ve encountered is how to get the most out of reviewer feedback without being overwhelmed by it; pay particular attention to any comments that come up multiple times during the review process.
Featured image: Olympic stadium in Japan. CC0 via Pixabay.