How to Tell a Good Scientific Study From a Bad One

July 21, 2017

There are a lot of studies out there.

According to a 2015 report from the International Association of Scientific, Technical and Medical Publishers, there are as many as 28,000 English-language journals, publishing 2.5 million articles every year, covering new advances in every field from astronomy to zoology.

It's easy to assume that all of these journals are top-quality publishing outfits, working with only the best scientists to disseminate the best research. But the quality of scientific studies varies greatly. Some are excellent, but countless others have grievous design flaws or reach unsupported conclusions.

And unfortunately, when the media gets hold of a supposedly earthshaking new study, the findings tend to be reported with little skepticism or investigation. Studies are complex by design, and reporting on them by non-scientists is often simplistic and sensational. Because of that, it can be exceedingly difficult for the average reader to differentiate between a sound finding and pure hype.

"You should always assume that any surprising new study results you read about are exaggerated or spun to sound sensational and newsworthy," Brian Dunning, science writer and host of the podcast Skeptoid, told ATTN:. "Sometimes a charlatan is doing it deliberately to sell something, [and] sometimes a reporter is doing it for a good headline. But almost never does good, careful science turn up something revolutionary that's suddenly sprung upon the public."

How often do bad studies get published? Far more often than you might think.

"There is increasing concern that most current published research findings are false," Stanford researcher Dr. John Ioannidis wrote in a widely-cited 2005 paper. He claimed that because of bias, poor design, and financial relationships held by scientists, false findings "may be the majority or even the vast majority of published research claims."

So how does a non-scientist find the truth of what a study is really saying, divorced from the hype? Here are some questions you can ask yourself to determine if a study and its findings are reliable.

What is the study saying?

Any study that trumpets a conclusion that seems designed to get attention or clicks is something to be looked at carefully. Does the conclusion pass the smell test? Does it contradict everything we know about a subject? "The most common warning sign of bad science is a sensational and shocking finding," Dunning said. "Good science is meticulous, gradual, and careful."

Was it properly designed?

Most high-quality scientific studies are indexed by the National Center for Biotechnology Information on its clearinghouse website, PubMed, an invaluable tool for laypeople who want details on a research finding. The full text often isn't available, but you can usually find a summary of the findings, called an abstract. You'll also be able to find the methodology, which will give you a clue as to whether the study was well designed.
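
For readers comfortable with a little code, PubMed can also be searched programmatically through NCBI's public E-utilities service. The sketch below is illustrative only: it's written in Python, assumes the third-party requests package, and uses a made-up example query. It fetches the PubMed IDs matching a search term and prints each matching article's title.

```python
# Sketch: search PubMed through NCBI's public E-utilities endpoints
# (esearch to find matching IDs, esummary to fetch basic metadata).
# Assumes the third-party `requests` package; the query is a made-up example.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search(query, max_results=5):
    # esearch returns the PubMed IDs that match the query.
    search = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed", "term": query, "retmode": "json", "retmax": max_results,
    }).json()
    ids = search["esearchresult"]["idlist"]
    if not ids:
        return []

    # esummary returns basic metadata (title, journal, date) for those IDs.
    summary = requests.get(f"{EUTILS}/esummary.fcgi", params={
        "db": "pubmed", "id": ",".join(ids), "retmode": "json",
    }).json()
    return [(uid, summary["result"][uid]["title"]) for uid in ids]

if __name__ == "__main__":
    for uid, title in pubmed_search("randomized controlled trial vitamin D"):
        print(uid, title)
```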

The gold standard of medical research design is the randomized controlled trial. It's a way to study a drug or treatment that eliminates as much bias on the part of the researchers as possible.

Clinical trials should have a control group of people who receive a placebo rather than the drug. The assignment of drug and placebo should be random, and the trial should be double-blinded, with neither the researchers nor the participants knowing who's getting what. You should also confirm that the sample size is large enough to support the conclusion the researchers say they've reached.
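
To make those design features concrete, here is a minimal sketch, with invented participant IDs and group codes, of what randomized, double-blind assignment looks like: participants are split at random between two coded arms, and the key that says which code is the drug and which is the placebo stays sealed until the trial is unblinded.

```python
# Minimal sketch of randomized, double-blind assignment.
# Participant IDs, group codes, and the 50/50 split are invented for illustration.
import random

participants = [f"participant_{i:03d}" for i in range(1, 101)]  # hypothetical sample of 100

# The blinding key maps coded labels to treatments; it stays sealed until unblinding.
blinding_key = {"A": "drug", "B": "placebo"}

# Randomly assign half of the participants to each coded arm.
random.shuffle(participants)
assignments = {p: ("A" if i < len(participants) // 2 else "B")
               for i, p in enumerate(participants)}

# During the trial, researchers and participants see only the codes "A" and "B"...
print(list(assignments.items())[:3])

# ...and only at unblinding is the key applied so the results can be analyzed.
unblinded = {p: blinding_key[code] for p, code in assignments.items()}
```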

Was it peer-reviewed?

Another important trait to look for in a study is whether it's been read and critiqued by other researchers in that field, a process known as peer review. On his blog, Science-Based Medicine, neurologist Steven Novella writes that "peer-review is a critical part of the functioning of the scientific community, of quality control, and the self corrective nature of science."

At minimum, the peer review process should act as a quality control for a journal. Reading and critique by peers can validate the study's findings, point out errors or bias, and determine whether a study is worth publishing or should be rejected. Most of the studies published on PubMed will have been peer-reviewed by other scientists.

But what if you can't access a study, can't find details on it, or can't make sense of what you do find? In that case, you'll want to go one level higher, to learn about the journal that published it.

Is it in a quality journal?

As Dunning discussed in a Skeptoid episode about journal hoaxing—"submitting a deliberately bad paper to journals, hoping to get it approved and published"—"the idea of having scientific papers published in respected journals is a good one. Not only does it provide published access to the research, but association with a respected journal tells readers that this research paper is high-quality science."

But not all journals are created equal. To get a sense of how respected a journal is in that particular field, look for the "impact factor." This is a number that measures the frequency with which the average article in a journal has been cited. There's no consensus on what a "good" impact factor is, but the vast majority of journals have one under 10, and an extremely low one can be a red flag.
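
The arithmetic behind the impact factor is straightforward: citations received in a given year to a journal's articles from the previous two years, divided by the number of articles the journal published in those two years. Here's a toy calculation with made-up numbers.

```python
# Toy impact-factor calculation; both counts are invented for illustration.
# 2016 impact factor = citations in 2016 to items published in 2014-2015,
# divided by the number of citable items published in 2014-2015.
citations_2016_to_2014_2015 = 1200  # hypothetical citation count
items_published_2014_2015 = 400     # hypothetical article count

impact_factor_2016 = citations_2016_to_2014_2015 / items_published_2014_2015
print(impact_factor_2016)  # 3.0
```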

Did the researcher pay to have it published?

Because the journal business has exploded, a number of publications don't charge readers at all; instead, they accept submissions from researchers who pay a fee to publish. This model is called "open access."

While the idea of paying to be published seems like it would lead to poor studies getting wide exposure, open-access can be a valuable tool to help researchers just starting their careers get published. And many of the negative connotations of open-access, such as high fees and inherent low quality, are myths. As Dunning writes, "good open-access journals still employ top standards and have thorough peer review."

Did the researchers disclose their conflicts of interest?

In March 2016, a group of scientists wrote to the National Library of Medicine asking to have researchers' conflicts of interest and funding sources added to the publicly available portion of a study's abstract on PubMed. The request came in response to research finding a persistent pattern: studies funded by drug companies, medical device manufacturers, and food giants almost invariably reached results favorable to those companies.

PubMed started adding researcher conflicts to abstracts a year later, and they should be readily found for any recent study on the site. If a researcher won't disclose whether they've been paid by a party with a stake in the outcome, that's a warning sign about the study's potential bias.

Did anyone else reach the same conclusion?

"One study is not enough to claim enduring evidence of its finding," chief medical officer of the New York State Office of Mental Health Lloyd Sederer wrote in U.S. News and World Report. "It must be replicated by other investigators to demonstrate that the original study’s results were not just an accident or, worse, the product of poor design or inflated findings."

Sederer writes that while replicating the finding of a study can be a difficult process, and doesn't get the headlines that an initial finding does, it helps bring depth and clarity to the research. If you can't find another study that replicates what one group of researchers found, it's a sign that the initial study might have had problems.

Dunning describes the current system of study publishing as "imperfect, but still pretty good."

"Peer review, despite occasional failings, still weeds out most of the bad articles," he said. "And the best work still rarely has any choice but to go to [traditional] journals."

Unfortunately, the imperfections in the system mean that even if all the signs of a good study are there, the study still might not be good. Impact factor numbers can vary wildly depending on the discipline. Peer review can be undermined by reviewers' bias or lack of expertise. And open-access publishing is plagued by predatory journals that will take money to publish almost anything.

Even theoretically good studies can be used to justify bad conclusions. An example of this is the recent controversy involving Gwyneth Paltrow's lifestyle website goop, which published several letters from scientists attacking a critic of the site, Toronto OB/GYN Dr. Jennifer Gunter.

One of these scientists was Steven Gundry, a pioneer in the "lectin-free diet" craze. Gundry bragged that he had "published over 300 papers, chapters, and abstracts ... in peer-reviewed journals," singling out one study of 57 patients that he claimed definitively proved that lectins (a class of proteins found in leafy greens and beans) "cause human disease" through inflammation of the gut.

Without context, these claims would seem to cast Gundry as a giant of medical research. But oncologist and skeptic David Gorski pointed out a number of problems with them, including that Gundry hadn't published a study indexed on PubMed in 13 years, and that his lectin paper likely read too much into preliminary findings.

The damage caused by bad studies, especially when the media overstates their findings, can be incalculable.

Probably the most infamous example was Andrew Wakefield's 1998 paper in the influential British journal The Lancet, which linked autism with vaccination, using a tiny sample size to justify a conclusion that other scientists couldn't replicate.

The Lancet had an impact factor of 47.831 in 2016, and it performed its standard peer review on the paper. Both of those should have pointed to a durable research finding. But over the following decade, massive flaws emerged in Wakefield's methodology, design, and ethics. The Lancet finally retracted the paper in 2010 after evidence surfaced that Wakefield had faked his conclusions, and he was struck off the U.K. medical register for misconduct.

Sadly, the damage was done. Wakefield's research ignited an anti-vaccine movement driven by celebrities, paranoia, and bad science. Vaccination rates for measles, mumps, rubella, whooping cough, and other preventable diseases all dropped, leading to outbreaks in the U.S., Europe, and the developing world.

So even if all the signs of a study point toward it being high-quality, it might still turn out to be bad. And unfortunately, few people have the time, interest, or training to dig into the background of a study or expert.

"The best tool for the general public," Dunning said, "is to simply get in the habit of being gravely skeptical of anything that seems to make an improbably-big wave."
