Nigel Bradley is a senior lecturer in marketing at the University of Westminster in London. This year we published the second edition of his textbook, Marketing Research: Tools and Techniques, which explains the value of sample surveys, Audience Response Systems and Word Clouds – three techniques being used to monitor progress in the run-up to the election. In the original post below, Nigel explains how these techniques are being used.
In this pre-election period television plays a big role. On Sky News, at the bottom of the screen, you will see four colours: red, blue, yellow and grey. These represent Labour, the Conservatives, the Liberal Democrats and others. The numbers in each colour show the latest survey results for voting intention. Watch carefully and the percentages change: the numbers cycle through results from Mori, then YouGov, then ICM, then ComRes, then Populus. The results are not “true” because they come from samples – the only true result would be a full count. They also differ by agency because the methods differ: some interview face to face, some by phone, and the Internet is also used. Moreover, “secret” formulae are used to arrive at a result. Take just three results for the Lib Dems: 27% for ComRes, 30% for ICM and 32% for Mori. Not a big difference, but Mori also had 32% for the Conservatives.
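Gaps of this size sit comfortably within ordinary sampling error. As an illustrative sketch (the sample size of 1,000 is an assumption for the example, not a figure from any of these polls), the 95% margin of error for a reported proportion can be computed like this:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents reporting 30% support:
moe = margin_of_error(0.30, 1000)
print(f"30% +/- {moe * 100:.1f} points")  # roughly +/- 2.8 points
```

With a margin of roughly three points either way, the 27% and 32% figures above could easily describe the same underlying electorate.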
Audience Response Systems
Switch over to ITV News at Ten to find the “Worm”. With tongue in cheek, in my book I call this the ARS, an acronym for Audience Response System. ARS, or Forum Voting, is becoming more common at workshops, seminars and meetings where respondents are gathered in one room. This restriction in location is important because radio keypads are given to each person attending. The keypads may have buttons to press or a dial to turn. The audience is asked to “vote”, and the responses are sent to a receiver so that results can be collated immediately for display to the researcher, or indeed to the audience as a whole. The problem with ARS is that if the sampling is done badly it makes a poor attempt at quantifying what is really a qualitative group discussion. This matters for this election because ARS has been applied to the three debates featuring Gordon Brown, Nick Clegg and David Cameron.
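The collation step itself is simple tallying. A minimal sketch in Python, using hypothetical keypad responses (the names and votes are invented for illustration), shows how a receiver might aggregate the room's answers the moment they arrive:

```python
from collections import Counter

# Hypothetical responses received from the radio keypads for one question
votes = ["Clegg", "Brown", "Clegg", "Cameron", "Clegg", "Brown"]

# Tally them for immediate display to the researcher or the audience
tally = Counter(votes)
print(tally.most_common())  # [('Clegg', 3), ('Brown', 2), ('Cameron', 1)]
```

Note that the speed of the tally does nothing to fix a badly drawn sample; it only makes the numbers arrive sooner.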
The term Worm is used because the opinion summary twists and turns across a visual display depicting the passage of 90 minutes. We have three worms in three colours: red, yellow and blue. ITV uses a firm called ComRes, who recruited a panel of 20 undecided voters in two key marginal constituencies in Bolton. This was planned for all three live debates.
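Under the hood, a worm line is just the panel's average dial reading at each moment. A sketch with made-up numbers (the 0–100 dial scale, the panel size and the readings are all assumptions for illustration):

```python
# Each row: one panellist's dial reading (0-100), sampled each second
panel = [
    [50, 55, 60, 58],   # panellist 1
    [40, 45, 52, 61],   # panellist 2
    [45, 50, 49, 55],   # panellist 3
]

# The worm is the panel mean at each moment in time
worm = [sum(col) / len(col) for col in zip(*panel)]
print(worm)  # starts at 45.0 and rises as opinions warm
```

With only a handful of panellists, one enthusiastic dial-turner can visibly drag the whole line, which is exactly the sampling concern raised above.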
Not to be outdone, BBC News at Ten used Ipsos Mori to recruit 36 undecided voters to give their reactions as they watched each debate unfold live.
Word Clouds
Switch over to BBC2 and Newsnight to find Word Clouds. Word clouds are visual representations of the vocabulary used by people in response to some stimulus. The display gives greater prominence to the words that occur most frequently in the text provided. They offer a viable alternative to qualitative analysis and to the coding of open-ended questions. Newsnight's word clouds were divided by leader to show the emphasis of each man.
The advantage is that precise quantification is avoided, yet dominant positions can still be identified; the disadvantage is that word-cloud interpretation is extremely subjective.
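The mechanics behind a word cloud are plain word-frequency counts; the renderer then scales each word's font size by its count. A minimal sketch (the sample text is invented, not taken from any debate transcript):

```python
import re
from collections import Counter

text = ("fair future fair taxes change change change "
        "economy jobs jobs fairness")

# Tokenise, lower-case, and count; a word-cloud renderer scales
# each word's font size in proportion to its count
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)
print(counts.most_common(3))  # [('change', 3), ('fair', 2), ('jobs', 2)]
```

Real tools also strip common stop words ("the", "and", "of") before counting, otherwise those dominate every cloud.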
In my opinion, many of the words spoken in such debates will be misattributed by the television audience: viewers may think Gordon Brown said something when it was really Nick Clegg. I have therefore created a word cloud of all the vocabulary used in the first debate by all speakers and questioners. This is surely the full picture received by the voting audience.
Last week I listened to presentations from undergraduates at my university. Despite researching different topics, three groups coincidentally all asked whether students had a part-time job. This is an important issue for lecturers because work experience can enhance studies but can also leave less time for them. I noted the results: one group said 56 per cent, another 61 per cent, and the third 73 per cent. Perhaps unfairly, I challenged the last group to defend their figure. Their response deserved praise; the spokesman said, “all three methods were different; the question posed was slightly different and the sampling was not the same.” This was the answer I wanted. The mantra I chant to my students is that “results are only as good as the sample chosen.”
The above Word Cloud was made on Wordle.