
Improving the quality of surveys: a Q&A with Daniel Oberski

Empirical work in political science must rest on sound, scientifically grounded measurement. The problem of measurement error, however, has not been sufficiently addressed. Willem Saris and Daniel Oberski developed the Survey Quality Prediction (SQP) software to predict the reliability and method variance of survey questions, and the software is receiving the 2014 Warren J. Mitofsky Innovators Award from the American Association for Public Opinion Research. I sat down with Political Analysis contributor Daniel Oberski, a postdoctoral researcher in latent variable modeling and survey methodology at Tilburg University’s department of methodology and statistics, to discuss the software, surveys, latent variables, and interdisciplinary research.

Your “Survey Quality Prediction” (SQP) software (developed with Willem Saris of Pompeu Fabra University in Spain) is receiving the 2014 Warren J. Mitofsky Innovators Award from the American Association for Public Opinion Research. What motivated you and Willem to develop this software?

Survey questions are important measurement instruments, whose design and use we need to approach scientifically. After all, even though nowadays we have “big data” and neuroimaging, one of the most effective ways of finding things out about people remains to just ask them (see also this keynote lecture by Mick Couper). But survey questions are not perfect: we must recognize and account for measurement error. That is the main motivation for SQP.

Willem started working on estimating measurement error together with the University of Michigan’s Frank Andrews in the late 1980s and has made it his life’s work to gather as much information as possible about the quality of different types of survey questions. In physics, it is quite customary to dedicate one’s career to measurement; my father, for example, spent the better part of his career measuring how background radiation might interfere with CERN experiments – just so this interference could later be corrected for. In the social sciences, this kind of project is rare. Thanks to Willem’s decades of effort, however, we were able to perform a meta-analysis of the results of his experiments, linking questions’ characteristics to their reliability so that this could be accounted for.

We then created a web application that allows the user to predict reliability and method variance from a question’s characteristics, based on a meta-analysis of over 3000 questions. The goal is to allow researchers to recognize measurement error, choose the best measurement instruments for their purpose, and account for the effects of errors in their analyses of interest.


How can survey researchers use the SQP software package, and why is it an important tool for survey researchers?

People who use surveys are often (usually?) interested in relationships between different variables, and measurement error can wreak havoc on such estimates. There are two possible reactions to this problem.

The first is hope: maybe measurement error is not that bad, or perhaps bias will be in some known direction, for example towards zero. Unfortunately, we see in our experiments that this hope is, on average, unfounded.
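To make this concrete, here is a minimal simulation sketch (in Python, with invented numbers rather than results from those experiments): independent random error pulls an observed correlation toward zero, but a method effect shared by two questions, such as a common response scale, can push it upward instead, so even the direction of the bias cannot be taken for granted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two "true" traits correlated at 0.5.
t1 = rng.normal(size=n)
t2 = 0.5 * t1 + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

# Case 1: independent random error in each question attenuates the
# observed correlation toward zero.
x1 = t1 + rng.normal(scale=1.0, size=n)
x2 = t2 + rng.normal(scale=1.0, size=n)
print(np.corrcoef(x1, x2)[0, 1])  # about 0.25, half the true 0.5

# Case 2: a method effect shared by both questions (say, the same
# agree-disagree scale) inflates the observed correlation instead.
m = rng.normal(size=n)
y1 = t1 + m + rng.normal(scale=0.5, size=n)
y2 = t2 + m + rng.normal(scale=0.5, size=n)
print(np.corrcoef(y1, y2)[0, 1])  # about 0.67, well above the true 0.5
```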

The second possibility is to estimate measurement error so that it can be accounted for. Accounting for measurement error can be done using any of a number of well-known and easily available methods, such as structural equation modeling or Bayesian priors. The tricky part lies in estimating the amount of measurement error. Not every researcher will have the resources and opportunity to conduct a multitrait-multimethod experiment, for example. Another issue is how to properly account for the additional uncertainties involved with the measurement error correction itself.

This is where SQP comes in.

Anybody can code the survey question they wish to analyze on a range of characteristics and obtain an estimate of the reliability and amount of common method bias in that question. An estimate of the prediction uncertainty is also given. This information can then be used to correct estimates of relationships between variables for measurement error.
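As a rough illustration of that last step, here is the classical correction for attenuation with hypothetical numbers (a minimal sketch; SQP’s actual corrections also use the predicted method variance, which this formula ignores):

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Classical correction for attenuation: divide the observed
    correlation by the geometric mean of the two reliabilities."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical numbers: an observed correlation of 0.30 between two
# questions whose predicted reliabilities are 0.70 and 0.65.
print(disattenuate(0.30, 0.70, 0.65))  # about 0.44
```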

The SQP software package seems to be part of a general trend, where survey researchers are developing tools and technologies to automate and improve aspects of survey design and implementation. What other aspects of survey research do you see being improved by other tools in the near future?

SQP deals with measurement error, but there is also nonresponse error, coverage error, editing/processing error, etc. Official statistics agencies as well as commercial survey companies have been developing automated tools for survey implementation since the advent of computer-assisted data collection. More recent is the incorporation of survey experiments to aid decisions on survey design. This is at least partly driven by more easily accessible technology, by the growth of survey methodology as it is being discovered by other fields, and by some US institutions’ insistence on survey experiments. All of this means that we are gathering evidence at a fast rate on a wide range of quality issues in surveys, even if that evidence is not always as accessible as we would like it to be.

I can very well imagine that meta-analyses of this kind of information will eventually be encoded into expert systems. These would be like SQP, but for nonresponse, noncoverage, and so on. That would allow for the kind of quality evaluation and prediction, across all aspects of survey research, that social science needs. It would be a large project, but it is doable.

Political Analysis has published two of your papers that focus on “latent variables.” What is the connection between your survey methodology research and your work on latent variables?

I have heard it claimed that all variables of interest are observed and latent variables are useful as an intermediary tool at best. I think the opposite is true. Social science is so difficult because the variables we observe are almost never the variables of actual interest: there is always measurement error. That is why we need latent variable models to model relationships between the variables of actual interest.

On the one hand, latent variable models have mostly been developed within fields that traditionally were not overly concerned with representativeness, although that may be changing now. On the other hand, after some developments in the early 1960s at the US Census Bureau, the survey methodology field has some catching up to do on the advances in measurement error modeling made since then. Part of what I do involves “introducing” different fields’ methods to each other so that both are improved. Another part, which concerns the papers in PA, is about solving some of the unique new problems that come up when you do that.

For example, when correcting measurement error (misclassification) by fixing an estimate of the error rates in maximum likelihood analysis, how does one properly account for the fact that this error rate is itself only an estimate, as it would be when obtained from SQP? We dealt with this for linear models in a paper with Albert Satorra in the journal Structural Equation Modeling, while a PA paper, on which Jeroen Vermunt and I are co-authors with our PhD student Zsuzsa Bakk, deals with categorical latent and observed variables. Another problem in survey methodology is how to decide that groups are “comparable enough” for the purposes at hand (a.k.a. “comparability”, “equivalence”, or “invariance”). I introduced a tool for looking at this problem using latent variable models in this PA paper.
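To illustrate the first of these problems, here is a generic Monte Carlo sketch (not the method developed in the papers above, and with hypothetical inputs) that propagates the uncertainty of a reliability estimate through a simple correction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: an observed correlation between a survey item
# and an (assumed) error-free covariate, plus a predicted reliability
# and its prediction standard error.
r_observed = 0.30
rel_hat, rel_se = 0.70, 0.05

# Draw plausible reliability values, apply the correction under each
# draw, and summarize: the spread of the corrected estimates now
# reflects the fact that the correction factor is itself an estimate.
rel_draws = np.clip(rng.normal(rel_hat, rel_se, size=10_000), 0.05, 0.999)
corrected = r_observed / np.sqrt(rel_draws)

print(corrected.mean())                        # point estimate, about 0.36
print(corrected.std())                         # extra uncertainty from the correction
print(np.percentile(corrected, [2.5, 97.5]))   # a simple interval
```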

Your research is quite interdisciplinary. What advice would you give to graduate students who are interested in working on interdisciplinary topics, as you have? Any tips for how they might seek to publish their work?

I never really set out to be interdisciplinary. To some extent it is because I am interested in everything; to another, it is simply part of being a statistician. Tukey supposedly said, “you get to play in everyone’s backyard,” and whether or not he really said it, that is also what I love about it.

I am only at the beginning of my career (I hope!) and not sure I am in a position to hand out sage advice. But one tip might be: don’t assume something is unrelated to what you know just because it sounds different. Ask yourself, “how would what this person is saying translate into my terms?” It helps to master, as much as possible, some general tool for thinking about these things. Mine is structural equation and latent variable models. But any framework should work just as well: hierarchical Bayesian inference, counterfactuals, missing data, experimental design, randomization inference, graphical models, etc. Each of these frameworks, when understood thoroughly, can serve as a general language for understanding what is being said, and once you get something down in your own language, you usually have some immediate insights about the problem, or at least the tools to get them.

As for publishing, I imagine my experience is rather limited relative to a typical graduate student’s advisors’. From what I can tell so far it is mostly about putting yourself in the place of your audience and understanding what makes your work interesting or useful for them specifically. An “interdisciplinary” researcher is by definition a bit of an outsider. That makes it all the more important to familiarize yourself intimately with the journal and with the wider literature in that field on the topic and closely related topics, and show how your work connects with that literature. This way you do not just “barge in” but can make a real contribution to the discussion being held in that field. At Political Analysis I received some help from the reviewers and editor in doing that and I am grateful for it; this ability to welcome work that borrows from “outside” and see its potential for the field is a real strength of the journal.

Daniel Oberski is a postdoctoral researcher at the Department of Methodology and Statistics of Tilburg University in The Netherlands. His current research focuses on applying latent variable models to survey methodology and vice versa. He also works on evaluating and predicting measurement error in survey questions and on variance estimation and model fit evaluation for latent class models (LCM) and structural equation models (SEM), and he is interested in substantive applications of SEM and latent class modeling, for example to predicting the decision to vote in elections. In honor of his AAPOR award, his two Political Analysis papers will be available for free download from 12 to 19 May 2014: “Evaluating Sensitivity of Parameters of Interest to Measurement Invariance in Latent Variable Models” (2014) and “Relating Latent Class Assignments to External Variables: Standard Errors for Correct Inference” (2014).

R. Michael Alvarez is a professor of Political Science at Caltech. His research and teaching focuses on elections, voting behavior, and election technologies. He is editor-in-chief of Political Analysis with Jonathan N. Katz.

Political Analysis chronicles the exciting developments in the field of political methodology, with contributions to empirical and methodological scholarship inside and outside the diffuse borders of political science. It is published on behalf of The Society for Political Methodology and the Political Methodology Section of the American Political Science Association. Political Analysis is ranked #5 out of 157 journals in Political Science by 5-year impact factor, according to the 2012 ISI Journal Citation Reports. Like Political Analysis on Facebook and follow @PolAnalysis on Twitter.

