Defective and false opinion polls need scientific rigour

Skewed sampling, leading questions and no psychology — it’s no wonder pollsters get it wrong so often, writes ED GRIFFITHS

WHEN we talk about politics, we often find ourselves talking about opinion polling.
 
Which party is ahead? How have the various leaders’ ratings changed since last week, last month, last year? What do people think about renationalising the railways, cutting benefits, bombing Syria or bringing back hanging? The market research industry thinks it knows.
 
Unfortunately, its answers have proved to be highly unreliable.
 
Most polls during the general election campaign showed the two main parties roughly neck-and-neck; in fact the Conservatives took 37 per cent of the vote, against 30 per cent for Labour. Ahead of the Greek bailout referendum, the polls again said the result was too close to call: a crude average of the last 10 polls put Yes on 45 per cent and No on 46 per cent. The actual vote was 39 per cent to 61 per cent.
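To put the scale of that failure in figures, here is a minimal back-of-the-envelope sketch (in Python, using only the numbers quoted above) comparing the polled margin with the actual one:

```python
# Uses only the figures quoted above: poll average Yes 45 / No 46,
# actual referendum result Yes 39 / No 61.
polled = {"yes": 45.0, "no": 46.0}   # crude average of the last 10 polls (%)
actual = {"yes": 39.0, "no": 61.0}   # the actual vote (%)

polled_margin = polled["no"] - polled["yes"]   # 1 point: "too close to call"
actual_margin = actual["no"] - actual["yes"]   # 22 points: a landslide

print(f"polled margin: {polled_margin:+.0f} points")                       # +1
print(f"actual margin: {actual_margin:+.0f} points")                       # +22
print(f"error on the margin: {actual_margin - polled_margin:.0f} points")  # 21
```

Yes was overstated by six points and No understated by 15: errors several times larger than the sampling error of roughly three points that pollsters typically quote.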
 
A research method that cannot detect a 60-40 landslide on a simple yes-or-no question in surveys conducted within days or hours of the event is a method that cannot reliably detect anything.
 
And this is a problem. Parties and voters use polls as a guide to their own behaviour; if the polls are inaccurate, the political process may be distorted. But with elections and referenda we do at least eventually get an actual result. With other polls — on particular issues, for instance — we never do. We never find out whether the poll was accurate or not.
 
How much, in fact, of what we think we know about public opinion and social attitudes rests on the same methodology that told us the Greek referendum was on a knife-edge?
 
Sometimes, of course, polling falls down because of specific mistakes in sampling or in how the question is worded. Lord Ashcroft’s recent Project Red Dawn purports to investigate Labour’s election defeat. Two groups of voters were polled: “loyalists” who voted Labour in 2010 and again in 2015, and “defectors” who voted Labour in 2010 but switched to another party in 2015.
 
In fact, the defectors are a pretty small group: Labour’s vote actually rose in 2015, from 8.6 million to 9.3 million, so whatever defections there were must have been outweighed by newly won voters.
 
And Ashcroft takes no account of people who voted Labour in 1997 (when 13.5 million people did so), but had already abandoned the party by 2010 — although these millions, not the handful of 2015 defectors, were the voters Labour needed to win back.
 
However scrupulously Ashcroft analyses his data, the skewed sample means no worthwhile conclusions can be drawn.
 
Badly worded questions can be even more destructive. After the general election, the TUC commissioned a poll from Greenberg Quinlan Rosner Research. Here is the key question: “Which two of the following most put you off voting Labour?”
 
- They would spend too much and can’t be trusted with the economy
 
- They would make it too easy for people to live on benefits
 
- They would be bossed around by Nicola Sturgeon and the Scottish Nationalists
 
- They would not hold a referendum on Europe
 
- They would raise taxes
 
- They are hostile to aspiration, success and people who want to get on
 
- I prefer David Cameron to Ed Miliband
 
- Other
 
- Don’t know
 
It really makes no difference which answers people chose, because every single option is a variant of “they aren’t Conservative enough.” Polls like these can tell us nothing of value. The polling agencies were not necessarily being consciously dishonest; indeed, the fact that Ashcroft published details of his sample and GQR published their question suggests they did not realise how disastrously these flaws would compromise their conclusions.
 
But, even when mistakes like these are avoided, polling can still produce misleading results.
 
I expect most readers of this newspaper have relatively firm views on major political questions.
 
Many people, however, do not: and, if they are polled, they will partly be making up their opinions as they go along (and partly trying to be consistent with answers they have already given). Their responses need not express any underlying conviction; they are, to a great extent, an artefact of the poll itself. At best, they indicate not what other people believe — but how they might answer the same poll.
 
Take the ongoing campaign for the Republican presidential nomination in the United States.
 
The success enjoyed by Donald Trump, the fascist television personality, has been driven to a great extent by his strong poll ratings. The candidate himself refers to them incessantly.
 
But at the outset he was a well-known celebrity in a field of nearly 20 potential nominees — many of them quite obscure. If you were asked to pick your favourite breakfast cereal from a list of 15 or 20, and if you were not much of a connoisseur of breakfast cereals anyway, you might well just choose something you recognised.
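The mechanism can be made concrete with a toy simulation. In the sketch below, every number (the field of 17 candidates, the recognition rates) is an illustrative assumption, not data; no simulated respondent has any genuine preference at all, each simply picking at random among the names they happen to recognise. The famous name still tops the poll comfortably:

```python
import random

random.seed(0)
N_CANDIDATES = 17        # one celebrity plus 16 obscure rivals (assumed field size)
P_KNOW_CELEB = 0.95      # assumed recognition rate for the celebrity
P_KNOW_OTHER = 0.15      # assumed recognition rate for each obscure rival
N_RESPONDENTS = 10_000

votes = [0] * N_CANDIDATES
for _ in range(N_RESPONDENTS):
    # Candidate 0 is the celebrity; build the list of names this respondent knows.
    recognised = [0] if random.random() < P_KNOW_CELEB else []
    recognised += [i for i in range(1, N_CANDIDATES) if random.random() < P_KNOW_OTHER]
    # No underlying conviction: pick at random among recognised names,
    # or from the whole list if none of them ring a bell.
    votes[random.choice(recognised or range(N_CANDIDATES))] += 1

print(f"celebrity's share: {100 * votes[0] / N_RESPONDENTS:.1f}%")
print(f"best obscure rival: {100 * max(votes[1:]) / N_RESPONDENTS:.1f}%")
```

On these assumptions the celebrity takes roughly a third of the “vote” while no rival clears five per cent, even though not a single simulated voter actually prefers him.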
 
An opinion poll is really an experiment, but it is an experiment with none of the safeguards that make other experiments’ findings more or less dependable.
 
In most experiments, there is a control group of people who are given, say, a placebo instead of the real drug that is being tested.
 
And experiments on human subjects are usually “blind,” meaning the subjects do not know whether they are in the test group or the control group, and experimenters frequently mislead them about what the experiment is designed to test.
 
It is not always easy to see how these devices could work in the context of opinion polling. 
 
Perhaps, in the Trump example, the control group could have been given a partly fictitious list of candidates including a different reality star.
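Such a design is at least easy to sketch. The following toy version (hypothetical names throughout, and the same illustrative recognition model as above) randomly splits the sample, shows one half the real list and the other half a placebo list with a different famous name substituted, and compares the two celebrities’ shares; if fame alone is doing the work, they should poll about the same:

```python
import random

random.seed(1)
REAL    = ["Celebrity A"] + [f"Obscure {i}" for i in range(16)]
PLACEBO = ["Celebrity B"] + [f"Obscure {i}" for i in range(16)]  # control list

def answer(candidates):
    """One respondent's pick: recognition, not conviction, drives the choice."""
    recognised = [c for c in candidates
                  if random.random() < (0.95 if c.startswith("Celebrity") else 0.15)]
    return random.choice(recognised or candidates)

# Random assignment: half the sample is shown the real list, half the placebo,
# with no respondent knowing which group they are in.
test    = [answer(REAL)    for _ in range(5_000)]
control = [answer(PLACEBO) for _ in range(5_000)]

for name, group in (("Celebrity A", test), ("Celebrity B", control)):
    print(f"{name}: {100 * group.count(name) / len(group):.1f}%")
```

In this model the placebo celebrity polls just as strongly as the real one, which is exactly the signature of a poll measuring recognition rather than support.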
 
But designing controlled, blind experiments is one of the main difficulties in psychology and other fields too. The fact that it is difficult, that it takes ingenuity and skill, does not mean it can be dispensed with.
 
Opinion polling does not even try to meet the standards of rigour that are accepted for other kinds of experimental work. Until it does, it will remain prescientific and unreliable, even when it is done with perfect honesty, and even when obvious mistakes are avoided.
 
  • Ed Griffiths is the author of Towards a Science of Belief Systems (Palgrave Macmillan, 2014).
