NIH Record
Vol. LXIV, No. 10
May 11, 2012


‘Pressure To Get It Right’
Biases Rife in Research, Ioannidis Says


Dr. John Ioannidis

Don’t believe everything you read—whether the too-good-to-be-true promises of an ad in the back of a magazine or the research published in a highly respected journal.

“Most statistically significant findings are not real at all,” said Dr. John Ioannidis, director of the Stanford Prevention Research Center. “They’re just false positives.”

In a recent seminar sponsored by the Office of Disease Prevention, NHLBI, NIAAA and NCI, Ioannidis discussed the biases in published biomedical research and suggested several potential solutions to the problem.

Many of these false positives are revealed when larger-scale studies attempt to replicate the findings of smaller studies. This is true even for the gold standard of biomedical research, the randomized, controlled trial, said Ioannidis. One of every four such trials is refuted when a larger trial is conducted, he has found.
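The arithmetic behind that claim is worth spelling out. The sketch below is not from the talk; the prior, power and significance threshold are assumptions chosen only for illustration. It simulates a field in which only a small fraction of tested hypotheses describe real effects and shows how, even at a conventional 5 percent significance threshold, false positives can make up a large share of the “significant” results.

    # Illustrative sketch (not from the article): a quick simulation of why
    # "statistically significant" findings can be mostly false positives when
    # true effects are rare. The prior, power and alpha values below are
    # assumptions chosen only for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    n_hypotheses = 100_000  # hypotheses tested across a field
    prior_true = 0.10       # assume only 10% of tested hypotheses are truly non-null
    alpha = 0.05            # conventional significance threshold
    power = 0.50            # assume modest-sized studies detect a true effect half the time

    is_true = rng.random(n_hypotheses) < prior_true
    # A true effect reaches significance with probability = power;
    # a null effect reaches significance with probability = alpha (a false positive).
    significant = np.where(is_true,
                           rng.random(n_hypotheses) < power,
                           rng.random(n_hypotheses) < alpha)

    false_positive_share = np.mean(~is_true[significant])
    print(f"Share of significant findings that are false positives: {false_positive_share:.0%}")
    # With these assumed numbers, roughly half of the significant results are
    # false positives; the share rises as the prior or the power shrinks.

Under these assumed numbers, the expected share of false positives among significant results is (0.05 × 0.9) / (0.5 × 0.1 + 0.05 × 0.9), roughly 47 percent; if only 1 percent of tested hypotheses are true, it climbs above 90 percent.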


Ioannidis said that journal editorial policies are responsible for much of this trend. Editors want to publish research that is novel and will have a large impact on the field, which generally means papers that report very large, statistically significant effects.


Despite this overall trend, Ioannidis and colleagues have found a window of opportunity for publishing papers whose results don’t fit this mold. He coined the term “Proteus phenomenon,” after the ever-changing deity of Greek mythology, to describe the pattern: the first study published on a given topic typically shows a very large, significant effect; within the next year or so, a second paper appears showing the opposite. Usually, neither the large positive effect nor the neutral or negative effect is seen again in subsequent publications on the topic.

Journals are also influenced by the newsworthiness of results, which is linked to timeliness. Only a quarter of the registered trials on the 2009 H1N1 flu outbreak have been published, Ioannidis found. Those published in 2009 appeared in high-impact journals; as time passed, studies of equal quality on the outbreak appeared in increasingly obscure journals.

In fact, Ioannidis and colleagues themselves had difficulty publishing their own paper describing this phenomenon. Ioannidis recalled what reviewers told them: “There is no need to publish that research. The sponsors know about it; they will tell the experts and they will make the right recommendations.” He said, “I have a serious problem with that.”

Ioannidis made nine recommendations for ways to overcome this and other problems of bias in biomedical research. Registration, which is already mandated for clinical trials such as those on H1N1, is a good first step. Registration ensures that trials don’t become “lost” if they are never published. In this way, their results are potentially available to anyone interested in reviewing a more complete set of data on a topic, beyond the mostly positive published reports.

Among those attending Ioannidis’ lecture were Dr. Susan Shurin (at microphone), acting NHLBI director, and behind her, Dr. Alan Schechter, chief of the Molecular Medicine Branch, NIDDK.

Photos: Bill Branson

“Registration of clinical trials is one of the best ideas ever to appear in clinical research,” said Ioannidis. He suggested expanding registration requirements to other types of biomedical research, as well.

Ioannidis also emphasized repeatability. “We have to find ways to reward repeatability,” he said. Although most high-impact journals have data-availability policies, those policies generally are not followed. Working with Nature Genetics, one of the highest-quality and most transparent biomedical journals, Ioannidis and several independent teams of researchers attempted to reproduce the results of 18 published microarray studies. Despite months of effort, only two of the papers were fully reproducible. Most of the problem was due to incomplete data availability.

Ioannidis noted that researchers are under pressure to produce earth-shaking results before they even begin a project, starting with their grant applications. To get funding, researchers have to play up the novelty and importance of the work they plan to do. Under the weight of the great promises they make, investigators often abandon studies whose results seem uninteresting or selectively report only statistically significant portions of their results.

“We need to move away from the requirement to make big promises,” he said. “Very little of what we do will be so lucky as to break new ground.” He suggested that instead of funding specific research projects, NIH and other funding bodies should support individual researchers with a track record of excellence. “Maybe we should promise instead just to do our best,” he said.

