Reproducing Scientific Results: Challenges & Opportunities in Biology
In May 2016, Nature published the results of a survey of 1,576 professional researchers who completed a short online questionnaire about reproducibility and their research habits. More than 70 percent of respondents had tried and failed to reproduce another scientist’s experiments, and more than half had failed to reproduce their own published experiments.
But what exactly is reproducibility? Some assume it refers to reproducing profitable results; others picture experiments simply being duplicated, using up more capital in the process. In reality, reproducibility is critical to science because it means the conditions under which an experiment was conducted are documented clearly enough that others can recreate them and verify the results. This does not require that each experiment be redone by a team of equivalent skill. Rather, it is about ethical transparency of methodology, which makes it possible to check the accuracy of reported results.
Reproducibility matters because it lets members of the scientific community confirm the findings of other studies, thereby contributing to science’s collective knowledge. An experiment tests a hypothesis, and its conclusions carry weight only if they can be independently verified, which can only be done by replicating the experiment. If the conclusions that Team A reached cannot be replicated by Team B, then Team A’s results are called into question and their methodology is examined.
Of course, many factors influence how easily an experiment can actually be replicated.
Pressure has been growing steadily over the past few years to hold researchers more accountable for showing clearly how their results can be reproduced. Since at least 2016, several major science publications have expressed concern that the academic and research sciences are experiencing a reproducibility crisis. Organizations around the world have openly supported revising reproducibility standards in biological research, and the influential National Institutes of Health (NIH) has called for dramatic reforms to combat the growing crisis.
Featured Interviewee: Anna Crist of the Pasteur Institute
Research Technician at the Pasteur Institute Department of Genomes and Genetics
Anna Crist has been working as a laboratory and research assistant in biology research labs for the past six years. She is currently employed by the Pasteur Institute in Paris, France in the Department of Genomes and Genetics. Under Dr. Louis Lambrechts, Crist investigates the interactions between viruses and the mosquito Aedes aegypti.
Previously, Crist was a research engineer with the École Normale Supérieure, Institute of Biology. Prior to this, she was a laboratory manager and research assistant working with the nematode C. elegans in evolutionary genetics research labs at the University of Oregon’s Institute of Ecology and Evolution.
Scientific Results Under New Scrutiny?
After its initial foray into the public dialogue about reproducibility, Nature published a piece calling for an “authoritative forum to hash out collective problems” on the matter. It also released a collection of references and primary sources so that the lay media and the public could make sense of the discussion. Still, confusion persists about what reproducibility means, among professionals and savvy consumers alike. This is more than whistle-blowing; it goes to the core of scientific research.
“Reproducibility is an integral component of science,” says Anna Crist. “If you cannot reproduce results, there is a flaw in the research. It could be a wide variety of factors. An overlooked part of the method, an unreliable material (contamination, bad batch of a chemical, etc.), a low replicate number, a misleading statistical analysis, and a great many more.”
Claims of reproducible methodology are being challenged outside the biological sciences, too. Psychology and sociology have recently become more open about the growing difficulty of verifying and validating research results. Biological research is approaching a watershed moment, and its effects are already visible across many other disciplines.
“If the reproducibility crisis continues, I think we risk incurring more costs, losing credibility as scientists, and reducing public scientific funding,” warns Crist. “When a lab publishes unreliable results, basing future studies on those results is equivalent to not only building a house on a bad foundation but also putting in a bad foundation for your neighbors’ houses.”
Crist has built her career as a research engineer with an eye for precision, dedicated to sound experimental workflow and to publishing verifiable results, whether positive or negative.
“This concept of a reproducibility crisis is the phenomenon that, too often, researchers in one lab are unable to obtain the same results of a published study from another lab and sometimes even from their own studies,” says Crist. One of her chief concerns is that scientists waste time and funding by basing future studies on such published yet flawed results. Still, her years as a researcher have shown her that, in general, scientists remain fiercely loyal to experimental rigor.
“I was involved in a large study at the University of Oregon trying to identify compounds that would increase the health- and lifespan of C. elegans and other nematode species,” says Crist. “This was part of an NIH-funded collaboration with Rutgers University and The Buck Institute for Research on Aging.”
Perhaps the most salient fact about this collaboration was that if Crist’s teams could not obtain reproducible results, their grants would not be renewed. That condition reflects a shift in the wider research community’s focus from ‘results’ to ‘reproducible results.’ The distinction might be lost on a lay reader, but for Crist it meant the stakes were always high.
“We hoped to find compounds that would have a positive effect across diverse genetic backgrounds (multiple species and strains within those species)—the worm-equivalent of general instead of personalized medicine,” she says. To adhere to her department’s conditions of continued research, it was critical that Crist’s team establish standard operating procedures (SOPs) for three laboratories that were geographically separated.
“We went to extreme measures to ensure everything was as identical as possible. Each lab bought the same brand and model of incubators just for these experiments,” explains Crist. “We had individual temperature monitors in each box of Petri-dishes that the nematodes grew on to track environmental changes over their entire lives. We ordered the same reagents that were from the same brand and lot number.”
It was this attention to rigor and control over the experiments’ environments that made it possible for the team to bolster their reputation and secure funding for the coming year. They managed to almost entirely eliminate variation in results between the three labs. In fact, there was more variation from experiment to experiment within a given lab than there was between the three labs.
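One way to make a claim like this concrete is to partition measured variance into a between-lab and a within-lab component. The Python sketch below does so with invented numbers that merely mimic the qualitative result; the lab labels, means, and sample counts are all hypothetical, not the collaboration’s data, and the comparison is a rough descriptive one rather than a formal nested ANOVA.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lifespan-assay measurements: 3 labs x 6 experiments each.
# Values are invented to mirror the qualitative finding described above.
labs = {
    "lab_A": rng.normal(20.0, 1.5, 6),
    "lab_B": rng.normal(20.3, 1.5, 6),
    "lab_C": rng.normal(19.8, 1.5, 6),
}

grand_mean = np.mean(np.concatenate(list(labs.values())))

# Between-lab spread: how far each lab's mean sits from the grand mean.
between = np.mean([(v.mean() - grand_mean) ** 2 for v in labs.values()])
# Within-lab spread: experiment-to-experiment variance inside each lab.
within = np.mean([v.var(ddof=1) for v in labs.values()])

print(f"between-lab variance ~ {between:.2f}")
print(f"within-lab variance  ~ {within:.2f}")  # larger here, as in the study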
A study by Freedman, Cockburn, and Simcoe, “The Economics of Reproducibility in Preclinical Research,” estimated the cost of flawed or irreproducible research at $28 billion per year in the United States alone, with 65 percent of that cost incurred by the pharmaceutical industry. That does not mean the pharmaceutical industry alone should improve its standards, however; all scientific disciplines should heed these figures.
Crist identifies competition as the main obstacle to increasing the transparency of research results. “High competition is perpetuated by low funding,” she explains. “There is an enormous amount of pressure to publish amazing findings in a high-profile journal and if you don’t, the grant money soon dries up.”
It is this combination of pressure and a lack of resources that can cause scientists to consciously—or subconsciously—perform less rigorous experiments. It can even mean the purposeful adjustment of experiment or study results to support a more favorable or profitable outcome. “This could also mean rushing through assays, taking inaccurate measurements, using fewer replicates, censoring data points here and there to tailor results, or performing a different statistical analysis that gives you that precious low p-value,” says Crist. “These all add up to irreproducible results.”
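The statistical mechanics behind this are straightforward to demonstrate. The minimal Python simulation below, with hypothetical parameters, isolates one practice from Crist’s list: adding samples and re-testing until the p-value dips below 0.05, a tactic known as optional stopping. Even when both groups are drawn from the same distribution, so that every “significant” result is false, the procedure produces far more than the nominal 5 percent of false positives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_sims=2000, n_start=10, n_max=50, step=5, alpha=0.05):
    """Simulate optional stopping: keep adding samples and re-testing
    until p < alpha or the sample budget runs out. Both groups come
    from the SAME distribution, so every 'significant' hit is false."""
    hits = 0
    for _ in range(n_sims):
        a = list(rng.normal(0.0, 1.0, n_start))
        b = list(rng.normal(0.0, 1.0, n_start))
        while True:
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
                break
            if len(a) >= n_max:
                break
            a.extend(rng.normal(0.0, 1.0, step))
            b.extend(rng.normal(0.0, 1.0, step))
    return hits / n_sims

# Prints a rate well above the nominal 0.05 (roughly 2-3x in runs like this).
print(f"realized false-positive rate: {false_positive_rate():.3f}")
```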
How Researchers Can Improve Scientific Reproducibility
To address the reproducibility crisis, the American Society for Cell Biology (ASCB) published a whitepaper outlining the history, symptoms, and possible solutions for making researchers’ results easier to confirm. An additional benefit: greater transparency could increase the public’s trust in research science. The value of the many innovations in business, medicine, and consumer culture afforded by rigorous research standards cannot be overstated.
So what are some of the opportunities that researchers can seize to ensure that the reproducibility of results is possible? Crist has a few ideas—all of which are standard procedure at the Pasteur Institute.
Larger Sample Size & More Detailed Descriptions of Methods
“One thing that would help is a larger sample size and a higher number of replicates of experiments,” says Crist. “A stricter or higher standard for detail in the ‘Materials and Methods’ sections of papers is critical, as well. SOPs can often lack detail or be hard to follow, requiring additional time-consuming correspondence with authors in order to accurately reproduce the published methods.”
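A quick power simulation shows why sample size and replicate count matter so much. This sketch uses illustrative parameters, not anything from Crist’s experiments: it estimates how often a true effect of half a standard deviation is detected at different per-group sample sizes. Underpowered studies usually miss real effects, and a study that cannot reliably detect an effect cannot be reliably reproduced either.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(n, effect=0.5, n_sims=2000, alpha=0.05):
    """Fraction of simulated experiments that detect a true effect of
    `effect` standard deviations using n samples per group."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)     # control group
        b = rng.normal(effect, 1.0, n)  # treated group, real effect present
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (10, 30, 100):
    print(f"n per group = {n:3d} -> power ~ {power(n):.2f}")
# Typical output: ~0.18 at n=10, ~0.48 at n=30, ~0.94 at n=100.
```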
Collaboration (Rather Than Competition) Among Researchers Studying Similar Phenomena
Correspondence with authors is just one tactic for improvement; collaboration is another. “If there’s joint funding or agreed-upon joint publishing between two or more labs that are asking the same question in their research, collaboration reduces competition and also allows for experiments to be repeated in different hands and settings to see if the same conclusion can be reached,” says Crist. Research laboratories that collaborate with one another embody the best of the scientific method: a system of checks and balances that keeps everyone in the field honest and ethical.
In Conclusion: Strengthening Scientific Results
“The honest representation of data is a major concern, as well,” says Crist. “Results can be biased by censoring and ‘averaging away’ a lot of noise. For example, a box and whisker plot can hide noise and outliers that a dot-plot cannot. Though I know that the standards for figures in journals are increasing, so this is being addressed.”
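Her point about figures is easy to demonstrate in code. The sketch below generates invented bimodal data with a couple of extreme points and draws it twice: the box plot compresses the two clusters into one tidy summary, while the jittered dot plot shows every observation.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Invented data: two distinct clusters plus two extreme points.
data = np.concatenate([rng.normal(2.0, 0.3, 40),
                       rng.normal(5.0, 0.3, 40),
                       [9.0, 9.5]])

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(7, 4))

ax1.boxplot(data)                   # median/quartiles only: the modes vanish
ax1.set_title("Box plot: two modes hidden")

x = np.full(data.size, 1.0) + rng.uniform(-0.05, 0.05, data.size)
ax2.plot(x, data, "o", alpha=0.5)   # jittered dot plot: every point shown
ax2.set_title("Dot plot: structure visible")

plt.tight_layout()
plt.show()
```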
One thing is certain about the future of biological research: standards must not only be raised but also maintained. The ASCB’s task force on reproducibility is leading the charge in establishing a more rigorous internal methodology that researchers can use as a template for their own experiments, and the society offers many further suggestions for increasing the transparency of research science in the American context. Fortunately, researchers like Anna Crist, who hold experiments to strict methodological and ethical standards, drive up the usability of data and strengthen the intellectual weight of published research.