Today’s data scientist must know how to write good code. Whether they work with a commercial off-the-shelf statistical package, R, Python, or Perl, good coding practices are essential. Large and complex datasets require extensive manipulation to wrangle them into shape for analysis, statistical estimation is often complex, and presenting complicated results can require a substantial amount of code.
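As a minimal sketch of the kind of practice this means in everyday data wrangling — a small, documented, testable function rather than an ad hoc script — here is a hypothetical example (the function name, data values, and missing-data code are all invented for illustration):

```python
# Hypothetical illustration of good coding practice in data wrangling:
# a small, documented function that handles messy input explicitly.

def clean_ages(raw_values):
    """Convert raw survey age entries to integers, dropping invalid entries.

    Values outside a plausible human age range (0-120) are treated as
    missing-data codes (e.g., 999) and excluded.
    """
    cleaned = []
    for value in raw_values:
        try:
            age = int(value)
        except (TypeError, ValueError):
            continue  # skip non-numeric entries such as "N/A" or None
        if 0 <= age <= 120:
            cleaned.append(age)
    return cleaned

print(clean_ages(["34", "N/A", "999", "27", None]))  # [34, 27]
```

Writing the cleaning step as a named function with a docstring makes the analysis easier to test, reuse, and replicate than the same logic buried in a one-off script.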
At the American Political Science Association meetings earlier this year, Gary King (Albert J. Weatherhead III University Professor at Harvard University) gave a presentation on Dataverse (here are his slides). Dataverse is an important tool that many researchers use to archive and share their research materials; as many readers of this blog may already know, the journal that I co-edit, Political Analysis, uses Dataverse to archive and disseminate the replication materials for the articles we publish in our journal.
Throughout my career, there have been many times when advice, support, and criticism were critical for my own professional development. Sometimes that assistance came from people who were formally tasked with providing advice; a good example is a Ph.D. advisor (in my case, John Aldrich of Duke University, who has been a fantastic advisor and mentor to a long list of very successful students).
Improving the transparency of the research published in Political Analysis has been an important priority for Jonathan Katz and me as co-editors of the journal. We spent a great deal of time over the past two years developing and implementing policies and procedures to ensure that all studies published in Political Analysis have replication data available through the journal’s Dataverse.
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work. This is why we have launched a new type of research publication at Political Analysis: Letters.
Research transparency is a hot topic these days in academia, especially with respect to the replication or reproduction of published results. There are many initiatives that have recently sprung into operation to help improve transparency, and in this regard political scientists are taking the lead.
Despite what many of my colleagues think, being a journal editor is usually a pretty interesting job. The best part about being a journal editor is working with authors to help frame, shape, and improve their research. We also have many opportunities to honor specific authors whose work is of particular importance.
One of the most common questions that scholars confront is trying to find the right journal for their research papers. When I go to conferences, often I am asked, “how do I know if Political Analysis is the right journal for my work?” This is an important question, in particular for junior scholars who don’t have a lot of publishing experience — and for scholars who are nearing important milestones (like contract renewal, tenure, and promotion).
I recently had the opportunity to talk with Lonna Atkeson, Professor of Political Science and Regents’ Lecturer at the University of New Mexico. We discussed her opinions about improving survey methodology and her thoughts about how surveys are being used to study important applied questions.
Empirical work in political science must rest on strong, scientifically accurate measurement. However, the problem of measurement error hasn’t been sufficiently addressed. Recently, Willem Saris and Daniel Oberski’s Survey Quality Prediction software was developed to better predict reliability and method variance, and it is receiving the 2014 Warren J. Mitofsky Innovators Award from the American Association for Public Opinion Research.
In a matter of months, federal elections in the United States will enter full-swing. I recently asked Costas Panagopoulos, a professor at Fordham University and an expert on political campaigns, a few questions about the important elections recently conducted in the United States and what we might learn from those recent campaigns.
Polling data is ubiquitous in today’s world, but it is often difficult to assess the accuracy of polls. In a recent paper published in Political Analysis, Kai Arzheimer and Jocelyn Evans developed a new methodology for assessing the accuracy of polls in multiparty and multi-candidate elections.
Not a day passes when I don’t see something in the news about big data. Sometimes the stories will be about some interesting new big data application. For example, I recently read about the WeatherSignal app, which collects weather data from smartphones. And of course there has been a lot in the news lately about big data and privacy.
In anticipation of Argentina’s mid-term elections to be held on Sunday, 27 October 2013, Political Analysis co-editor R. Michael Alvarez (Caltech) discussed some of the most important things that we need to know about this contest with Francisco Cantu (University of Houston) and Sebastian Saiegh (UCSD).
After my co-editor, Jonathan N. Katz, and I took over editorship of Political Analysis in January 2010, one of our primary goals was to extend the readership and intellectual reach of our journal. We wished to grow our readership internationally, and to also deepen our reach outside of political science, into other social sciences.