Scientists Unknowingly Tweak Experiments

Although fairly common, p-hacking nonetheless probably does not drastically alter scientific consensus, scientists say.

AsianScientist (Mar. 25, 2015) – A study published in PLOS Biology has found that some scientists are unknowingly tweaking experiments and analysis methods to increase their chances of getting results that are easily published.

The study, conducted by Australian National University (ANU) scientists, is the most comprehensive investigation into a type of publication bias called p-hacking.

P-hacking happens when researchers either consciously or unconsciously analyse their data multiple times or in multiple ways until they get a desired result. If p-hacking is common, the exaggerated results could lead to misleading conclusions, even when evidence comes from multiple studies.
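The mechanism is easy to demonstrate in a simulation. The short Python sketch below is illustrative only and is not taken from the study: it mimics a researcher who re-tests a growing dataset after every new batch of data, even though the data contain no real effect, and shows how the chance of finding at least one "significant" result climbs well above the nominal five percent.

    # Minimal simulation of one form of p-hacking: "peeking" at the data and
    # re-testing after every new batch, stopping as soon as p < 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def peeking_experiment(n_checks=10, batch_size=10):
        data = np.empty(0)
        for _ in range(n_checks):
            # Add more null data (the true mean is zero, so there is nothing to find).
            data = np.concatenate([data, rng.normal(0.0, 1.0, batch_size)])
            p = stats.ttest_1samp(data, 0.0).pvalue
            if p < 0.05:
                return True  # stop early and report a "significant" result
        return False

    n_sim = 2000
    false_positives = sum(peeking_experiment() for _ in range(n_sim))
    print(f"False-positive rate with repeated peeking: {false_positives / n_sim:.2f}")
    # Typically prints roughly 0.15-0.20, far above the nominal 0.05.

Each individual test is valid on its own; it is the freedom to keep testing and to stop at a convenient moment that inflates the error rate.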

“We found evidence that p-hacking is happening throughout the life sciences,” said lead author Dr. Megan Head from the ANU Research School of Biology.

The study used text mining to extract p-values (the probability of obtaining a result at least as extreme as the one observed if chance alone were at work) from more than 100,000 research papers published around the world, spanning many scientific disciplines, including medicine, biology and psychology.

“Many researchers are not aware that certain methods could make some results seem more important than they are. They are just genuinely excited about finding something new and interesting,” Head said. “I think that pressure to publish is one factor driving this bias. As scientists we are judged by how many publications we have and the quality of the scientific journals they go in.”

“Journals, especially the top journals, are more likely to publish experiments with new, interesting results, creating incentive to produce results on demand.”

Head said the study found an unusually large number of p-values clustered just under 0.05, the traditional cut-off at which most scientists call a result statistically significant.

“This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold,” she said.

“They might look at their results before an experiment is finished, or explore their data with lots of different statistical methods, without realizing that this can lead to bias.”
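The clustering Head describes can be checked directly. The sketch below is a rough illustration of the idea rather than the paper's actual code; the input file name, bin edges and choice of test are assumptions made for the example. It asks whether the bin of p-values sitting just under 0.05 is over-represented relative to the neighbouring bin.

    # Rough sketch of a test for p-hacking: an excess of p-values just under
    # 0.05 compared with the adjacent bin.
    import numpy as np
    from scipy import stats

    # Hypothetical input: one text-mined p-value per line.
    p_values = np.loadtxt("mined_p_values.txt")

    near = int(np.sum((p_values >= 0.045) & (p_values < 0.05)))   # just under the threshold
    far = int(np.sum((p_values >= 0.04) & (p_values < 0.045)))    # slightly further from it

    # Without p-hacking, the bin nearer 0.05 should not hold a surplus of values;
    # a one-sided binomial test asks how surprising the observed split is.
    result = stats.binomtest(near, near + far, p=0.5, alternative="greater")
    print(f"[0.045, 0.05): {near}   [0.04, 0.045): {far}   binomial p = {result.pvalue:.3g}")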

The concern with p-hacking is that it could get in the way of forming accurate scientific conclusions, even when scientists review the evidence by combining results from multiple studies.

For example, if some studies show a particular drug is effective in treating hypertension, but other studies find it is not, scientists would combine all the data in a meta-analysis to reach an overall conclusion. But if enough of the individual results have been p-hacked, the drug would look more effective than it really is.
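A toy calculation makes the point; the effect sizes below are invented for illustration and do not come from the study. A handful of inflated, p-hacked estimates is enough to drag a simple pooled estimate well away from the value suggested by the unbiased studies.

    # Toy meta-analysis with made-up effect sizes: a few p-hacked (inflated)
    # estimates pull the pooled result away from the honest studies' value.
    import numpy as np

    honest_effects = np.array([0.10, 0.00, 0.20, -0.10, 0.10])  # scattered around a small true effect
    hacked_effects = np.array([0.60, 0.70, 0.50])               # inflated results that "just" reached significance

    effects = np.concatenate([honest_effects, hacked_effects])
    variances = np.full(effects.size, 0.04)                     # equal precision for simplicity
    weights = 1.0 / variances                                   # inverse-variance (fixed-effect) weighting

    pooled_all = np.sum(weights * effects) / np.sum(weights)
    pooled_honest = honest_effects.mean()

    print(f"Pooled estimate, all studies:    {pooled_all:.2f}")    # 0.26 with these numbers
    print(f"Pooled estimate, honest studies: {pooled_honest:.2f}") # 0.06 with these numbers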

“We looked at the likelihood of this bias occurring in our own specialty, evolutionary biology, and although p-hacking was happening it wasn’t common enough to drastically alter general conclusions that could be made from the research,” she said.

“But greater awareness of p-hacking and its dangers is important because the implications of p-hacking may be different depending on the question you are asking.”

The article can be found at: Head et al. (2015) The Extent and Consequences of P-Hacking in Science.

-----

Source: Australian National University.
Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

Asian Scientist Magazine is an award-winning science and technology magazine that highlights R&D news stories from Asia to a global audience. The magazine is published by Singapore-headquartered Wildtype Media Group.
