

When media don’t add up

Volume 33, Number 1, March 2022

Paul Foster

Despite the rise in data journalists, the industry is terrible with numbers, particularly on poll results

Every year I like to carry out a snap survey of my journalism students. I ask them one simple question: who’s good at maths? As you might expect, only a smattering of hands goes up. Journalists and maths have never been good bedfellows. Over the years, some have been heard to claim they only went into reporting because they “don’t do numbers”.

Those who love words tend not to take an interest in numbers. So it’s not surprising journalists have long had a problem with the reporting of polls. Aside from basic journalistic innumeracy, poor reporting can often be blamed on simple ignorance of polling practices, a lack of training, time pressures, or limited editorial oversight.

But there are times when the news media’s ability to deliberately frame a poll to suit its favoured angle stands out. “98 per cent SAY NO TO EU DEAL,” shouts the Daily Express splash. The strapline goes on: “Forget talks with Brussels and quit now, urges new poll.” As poll reporting goes, it was a gold standard example of bad practice. The figures were accurate – yes, 98 per cent of respondents didn’t want a deal with the EU. But the Express failed to make clear to the public at first glance that this was, in fact, a poll of its own readers – it was not a representative sample of the population. It was highly misleading. The press regulator Ipso agreed, saying the Express had given the impression it was reporting the results of a representative poll and not of a premium phoneline survey. It was later forced to publish a full adjudication.

While there is nothing wrong with reporting cheap and cheerful reader surveys or Twitter polls – sometimes referred to as straw polls – the findings cannot be presented as being representative of the wider population. That Express story was from 2016, but it is still a trap journalists can fall into, either accidentally or as a result of a deliberate editorial decision.

Does it really matter? Well, yes. Nothing is known about the people who have responded to such a straw poll. They may well not be representative of voters in general, even if tens of thousands of people have answered. Presenting such a poll as representative is highly misleading – and would breach the accuracy clause of the Editors’ Code of Practice.

The average journalist must get a frightening number of emails a day from PRs, and I have no doubt that a high percentage of the press releases they receive contain some kind of survey or poll data. The sheer deluge of these polls means it’s vital reporters are on the ball when turning them into stories. They need to adopt the same level of critical scrutiny as they would when writing about a local council meeting, court hearing or government report.

This is not about forcing a maths-evasive reporter to become a polling expert; it’s about doing basic journalistic checks as to whether the poll stands up. How have people been selected? How many were polled? Who has paid for it, and is there bias in the wording of the question or questions? These are just some of the questions reporters should ask themselves.

Sometimes, it’s difficult for journalists to get their heads around what exactly is a valid sample. Clearly, it’s not practical to poll every person in the country – that’s why pollsters choose a sample of the public that represents the whole population, usually between 1,000 and 2,000 people. The analogy used by some is that you don’t have to eat an entire bowl of soup to know if it tastes good: if it’s properly stirred, one spoonful is enough.

This means a big survey is not necessarily better than a small one. If you send out a million questionnaires and get 10,000 back, this poll will almost certainly be less reliable than a more scientifically conducted survey of 1,000-2,000 people. While a poll of 1,000 people can be representative, journalists sometimes place too much importance on a minority within that sample – trying to claim, for instance, that the views of a small number of Asian respondents are representative of that entire population.
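One way to see why: for a properly drawn random sample, the margin of error depends on the size of the sample, not on the size of the population, and no volume of self-selected responses can fix a biased one. A rough sketch in Python, using the standard formula for a 95 per cent confidence interval (the sample sizes below are purely illustrative):

    import math

    def margin_of_error(p, n, z=1.96):
        # Approximate 95 per cent margin of error for an estimated share p
        # drawn from a simple random sample of n people
        return z * math.sqrt(p * (1 - p) / n)

    # A well-drawn sample of 1,000 people, half of whom give a particular answer:
    print(f"{margin_of_error(0.5, 1000):.1%}")   # about 3.1 percentage points

    # A subgroup of, say, 50 respondents within that same poll:
    print(f"{margin_of_error(0.5, 50):.1%}")     # about 13.9 percentage points

The uncertainty on the 50-person subgroup is more than four times that on the full sample, which is why carving out small minorities within a poll and treating them as representative is so unreliable.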

Likewise, local journalists have to be on their guard over poll results in regional press releases. They might be tempted to view these stories as “easy wins” which can drive traffic to their websites, but often the sample sizes involved are too small to justify separate analysis and reporting. In a nutshell, they are just not reliable.

And just like they do for other stories, journalists need to interrogate the source of the information, which means making clear whether the commissioner of that poll has got a particular interest in the outcome. Even if they have, the poll can still be reported, as long as it has been carried out correctly – and it’s made clear who has paid for it.

It’s the same basic editorial judgment required when examining any press release. The concern is that these checks are not always performed and, importantly, that the full facts about how a poll has been carried out are not routinely made clear to the reader.

It’s not all bad, though. There has been a steady rise in data journalists embedded in newsrooms who are experienced at interrogating and analysing polls and surveys. Journalism training centres have developed data journalism modules and courses in response to industry demand. And, in response to these concerns, the National Council for the Training of Journalists has set up a short online course for working journalists and students on how to report polls accurately.

Never ignore the margin of error in polls

It has never been more important to get this right: there has been an explosion in political polling in recent years, driven largely by online polls, which are cheaper and easier to produce. There were about 3,500 political opinion polls over the 65-year period between 1945 and 2010. In the five years from 2010 to 2015, there were nearly 2,000. And in 2019, 80 voting intention polls were carried out between October 30 – when the House of Commons voted for an early general election – and December 12, the date of the election.

Just like those on Twitter who jump on every poll lead, journalists can get carried away with political poll results. While many polls are broadly accurate, they are never 100 per cent correct. Even the best-conducted poll is subject to a margin of error, but journalists routinely fail to make this clear in their reporting – an issue the House of Lords has raised as a particular “cause for concern”. That small print should not be ignored.

A typical voter intention poll has a margin of error of +/- 3 per cent. This would mean a poll showing Conservatives on 36 per cent and Labour on 33 per cent could also read Conservatives 39 per cent and Labour 30 per cent, or that they are neck and neck – Labour 33 per cent and Conservatives 33 per cent. Next time you see a report of a voter intention poll, check to see if the margin of error has been included in the copy or graphic. I’d be surprised if it was.
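The arithmetic behind that example is worth spelling out. Treating the +/- 3 points as applying to each party’s share, a quick check might look like this:

    # Reading a +/- 3-point margin of error on the figures above
    con, lab, moe = 36, 33, 3

    print(f"Conservatives: {con - moe}% to {con + moe}%")   # 33% to 39%
    print(f"Labour:        {lab - moe}% to {lab + moe}%")   # 30% to 36%

    # The reported three-point lead is therefore consistent with anything
    # from a dead heat to a much larger Conservative advantage.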

Journalists are sometimes accused of over-simplifying important issues for the sake of an angle. They want – no, need – people to have views. It’s what makes the story. But not everyone has a view on a subject and the public are often given the chance to say they “don’t know” to a poll question.

For reporters, it can sometimes seem more straightforward, simple even, to ignore them and just focus on those who express a view. Failure to include the don’t knows – or make clear they have been excluded – skews results and ends up misleading the reader. If the don’t knows have been left out, it’s a journalist’s job to make this clear.

Doing this, of course, could destroy the potential hook of their story and journalists are well aware of this. The Daily Telegraph was guilty of misleading readers over a Brexit-related poll after repeating a press release’s claim that 54 per cent of the public thought Parliament might have to be prorogued to deliver Brexit. But this only reflected those who had answered yes or no and excluded a significant proportion who said don’t know.

In fact, 44 per cent said yes, 37 per cent no, and 19 per cent didn’t know. This obviously weakens the angle, but it should have been made clear. Even if the press release contains the headline-grabbing figure, the journalist needs to examine the full poll results to ensure they are not misleading their readers. One thing journalists should also bear in mind when analysing polls is that if a lot of people answer don’t know, then maybe the question should not have been asked in the first place.
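For anyone who wants to see where the press release’s 54 per cent came from, the arithmetic is simple enough (a short sketch using the figures above):

    # How excluding the don't knows turns 44 per cent into 54 per cent
    yes, no, dont_know = 44, 37, 19

    share_of_everyone = yes / (yes + no + dont_know)
    share_of_decided = yes / (yes + no)        # don't knows quietly dropped

    print(f"{share_of_everyone:.0%} of all respondents said yes")        # 44%
    print(f"{share_of_decided:.0%} of those who gave a view said yes")   # 54%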

It’s not as if newspapers are strangers to polls. For decades, nationals – particularly the Sundays – have commissioned polls themselves or in collaboration with others. They can be costly, but often lead to front page news and drive online traffic. Polls handled carefully can provide a valuable insight into what people think on an issue. Journalists would do well to ignore the temptation to manipulate them – or be manipulated by others.

Paul Foster is a journalism lecturer at the University of Portsmouth and a senior examiner with the NCTJ. (The NCTJ online course is supported by the British Polling Council and Market Research Society.)

@Paul_A_Foster
