When Writers Take On Numbers

Science writers, don’t be afraid to question the statistical interpretations you read in scientific journal articles—researchers are as intimidated by them as you are.

AsianScientist (Dec. 18, 2017) – When I left the lab bench for science writing at the beginning of the year, I knew I would have a lot to learn. I wasn’t wrong—this year I dealt with all sorts of new challenges, including writing about fields far removed from my own training, practising the art of conducting a good interview, and learning to write faster than I ever have before.

One of the biggest challenges I think science journalists face, however, is this: We live in an era of short attention spans, where clickbait, sound bites, listicles and fake news now make up a significant portion of the average reader’s diet. In this environment, how can journalists communicate science in a manner that appeals to readers, while still conveying information accurately and with its subtleties intact?

Striking this balance involves a good command of something that many journalists (and indeed, many scientists) dread: statistics. To separate fact from fiction and hype from reality, journalists should be able to turn a critical eye on the statistical analyses presented in scientific journal articles; as writers, we need to venture out of our comfort zone of words and into the realm of numbers. But this is easier said than done—as Mark Twain wrote: “Facts are stubborn things, but statistics are pliable.”

Taking on an impossible task

Demystifying statistics was the subject of a panel discussion at the 2017 World Conference of Science Journalists in San Francisco, which I had the privilege of attending.

Because of the many subtleties involved in statistical analyses, journalists have an uphill task, said Andrew Gelman, a professor of statistics and political science at Columbia University. They are, in some cases, being asked to evaluate claims that even experienced scientists and statisticians themselves have trouble with, he added.

“A lot of the research that in recent years has been discredited or laughed at is stuff that ten years ago I would have said was fine,” he said. “We’re asking you to catch the mistakes that the field was making ten years ago, and that a lot of leading scientists and statisticians are still making today. It’s like giving you an impossible task.”

Don’t be intimidated

But this shouldn’t stop journalists from critically examining the claims put forth in scientific papers and press releases, said Kristin Sainani, an associate professor of health research and policy at Stanford University. Many statistical misinterpretations, she said, can be caught by reporters just by using common sense.

For example, a 2010 JAMA paper looking at the effect of exercise on weight gain in middle-aged women found a tiny yet statistically significant difference between the women who exercised the most and those who exercised the least—a finding that applied only to women of normal weight, not to overweight women. Yet, because the authors emphasized this finding in their conclusions, the media followed suit—resulting in numerous headlines asserting that women should be exercising for at least an hour a day.

In this case, the minute difference in weight gain was statistically significant because of the large sample size of the study, which involved more than 30,000 women. However, the more important question (which both the scientists and the reporters should have focused on) was whether the difference had any real clinical significance, said Sainani.
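Sainani’s point—that a tiny difference can reach statistical significance purely on the strength of a large sample—is easy to see with a back-of-the-envelope calculation. The sketch below uses a simple two-sample z-test with entirely hypothetical numbers (a 0.12 kg difference in weight gain and a standard deviation of 5 kg, not figures from the JAMA study) to show the same effect crossing the conventional p < 0.05 threshold once the groups are large enough:

```python
import math

def two_sample_p(diff, sd, n_per_group):
    """Two-sided p-value from a z-test comparing two group means.
    Assumes equal standard deviations and equal group sizes;
    for illustration only."""
    se = sd * math.sqrt(2.0 / n_per_group)   # standard error of the difference
    z = diff / se
    return math.erfc(abs(z) / math.sqrt(2.0))

# Hypothetical numbers, not taken from the study:
# a 0.12 kg difference in weight gain, with an SD of 5 kg.
small = two_sample_p(0.12, 5.0, 100)     # a modest study
large = two_sample_p(0.12, 5.0, 15000)   # roughly 30,000 women in total

print(f"n = 100 per group:    p = {small:.3f}")   # well above 0.05
print(f"n = 15,000 per group: p = {large:.3f}")   # below 0.05
```

The difference itself never changes—only the sample size does—which is exactly why “statistically significant” says nothing by itself about whether a finding matters clinically.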

It is the duty of the journalist, therefore, not to blindly accept scientists’ take on their data.

“Researchers are, for the most part, just as intimidated by statistics as all of you,” said Sainani. “Don’t be afraid to ask basic questions if the numbers don’t match the conclusions of the paper.”

An incremental process

One aspect of the scientific process that the general public may not fully appreciate is that it is based on the sum total of a body of evidence, not on the statistically significant findings of a single study.

As Christie Aschwanden, lead science writer at data journalism site FiveThirtyEight, put it: “Science is an incremental process of uncertainty reduction.”

Journalists, therefore, need to convey the uncertainty inherent in the process, instead of reinforcing the notion that science is a magic wand which turns everything it touches into truth, she said.

Here at Asian Scientist, our editors make the effort every day to read and evaluate the claims made in journal articles and press releases, in a bid to bring our readers balanced, non-sensational coverage.

I will freely admit that statistics was never my strong suit as a scientist. But in the current media climate, I think that science writers in particular have a duty to take a harder, more critical look at what we are asked to believe—even if it has gone through extensive peer review and comes from researchers at esteemed institutions. If this comes at the expense of clicks, views and likes on social media, well then, we can live with being a little, or a lot, less popular.

This article is from a monthly column called The Bug Report.


Copyright: Asian Scientist Magazine; Photo: Shutterstock.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Shuzhen received a PhD degree from the Johns Hopkins Bloomberg School of Public Health, USA, where she studied the immune response of mosquito vectors to dengue virus.
