
Reproducibility

The agreement of research results when a study is repeated. Topics include reproducibility, replicability, repeatability, robustness, generalizability, organization, documentation, automation, dissemination, guidance, definitions, and more.

185 affiliated resources

The natural selection of bad science
Unrestricted Use
CC BY

Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modelling. We first present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more ‘progeny,’ such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
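The selection dynamic sketched in this abstract can be illustrated with a toy simulation. This is not the authors' actual model — all parameter values and the false-positive function below are illustrative assumptions: labs with lower methodological "effort" report more positives (true and false), publish more, and are therefore copied more often, so mean effort declines even though no lab cheats deliberately.

```python
import random

random.seed(0)

N_LABS, GENERATIONS, HYPOTHESES = 100, 50, 20
BASE_RATE = 0.1   # assumed fraction of tested hypotheses that are true
POWER = 0.8       # assumed chance of detecting a true effect

def false_positive_rate(effort):
    # Illustrative assumption: low effort inflates false positives
    # from 0.05 (rigorous) up to 0.50 (sloppy).
    return 0.05 + 0.45 * (1.0 - effort)

def run():
    # Each lab starts with a random effort level in [0, 1].
    efforts = [random.random() for _ in range(N_LABS)]
    for _ in range(GENERATIONS):
        payoffs = []
        for e in efforts:
            pubs = 0
            for _ in range(HYPOTHESES):
                if random.random() < BASE_RATE:
                    pubs += random.random() < POWER               # true positive
                else:
                    pubs += random.random() < false_positive_rate(e)  # false positive
            payoffs.append(pubs)
        # Selection: the next generation copies labs in proportion to
        # publication output, with a small mutation on effort.
        efforts = [
            min(1.0, max(0.0,
                random.choices(efforts, weights=payoffs)[0]
                + random.gauss(0, 0.02)))
            for _ in range(N_LABS)
        ]
    return sum(efforts) / N_LABS

print(f"mean effort after selection: {run():.2f}")
```

Because publication count, not discovery, drives copying, mean effort drifts downward from its initial average of about 0.5 — the "natural selection of bad science" in miniature.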

Subject:
Mathematics
Statistics and Probability
Material Type:
Reading
Provider:
Royal Society Open Science
Author:
Paul E. Smaldino
Richard McElreath
Date Added:
08/07/2020
An open investigation of the reproducibility of cancer biology research
Unrestricted Use
CC BY

It is widely believed that research that builds upon previously published findings has reproduced the original work. However, it is rare for researchers to perform or publish direct replications of existing results. The Reproducibility Project: Cancer Biology is an open investigation of reproducibility in preclinical cancer biology research. We have identified 50 high impact cancer biology articles published in the period 2010-2012, and plan to replicate a subset of experimental results from each article. A Registered Report detailing the proposed experimental designs and protocols for each subset of experiments will be peer reviewed and published prior to data collection. The results of these experiments will then be published in a Replication Study. The resulting open methodology and dataset will provide evidence about the reproducibility of high-impact results, and an opportunity to identify predictors of reproducibility.

Subject:
Applied Science
Biology
Health, Medicine and Nursing
Life Science
Material Type:
Reading
Provider:
eLife
Author:
Brian A Nosek
Elizabeth Iorns
Fraser Elisabeth Tan
Joelle Lomax
Timothy M Errington
William Gunn
Date Added:
08/07/2020
The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture
Unrestricted Use
CC BY

Like many other areas of science, experimental psychology is affected by a “replication crisis” that is causing concern in many fields of research. Approaches t...

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Quarterly Journal of Experimental Psychology
Author:
Dorothy VM Bishop
Date Added:
08/07/2020
A reputation economy: how individual reward considerations trump systemic arguments for open access to data
Unrestricted Use
CC BY

Open access to research data has been described as a driver of innovation and a potential cure for the reproducibility crisis in many academic fields. Against this backdrop, policy makers are increasingly advocating for making research data and supporting material openly available online. Despite its potential to further scientific progress, widespread data sharing in small science is still an ideal practised in moderation. In this article, we explore the question of what drives open access to research data using a survey among 1564 mainly German researchers across all disciplines. We show that, regardless of their disciplinary background, researchers recognize the benefits of open access to research data for both their own research and scientific progress as a whole. Nonetheless, most researchers share their data only selectively. We show that individual reward considerations conflict with widespread data sharing. Based on our results, we present policy implications that are in line with both individual reward considerations and scientific progress.

Subject:
Applied Science
Information Science
Material Type:
Reading
Provider:
Palgrave Communications
Author:
Benedikt Fecher
Marcel Hebing
Sascha Friesike
Stephanie Linek
Date Added:
08/07/2020
A test of the diffusion model explanation for the worst performance rule using preregistration and blinding
Unrestricted Use
CC BY

People with higher IQ scores also tend to perform better on elementary cognitive-perceptual tasks, such as deciding quickly whether an arrow points to the left or the right (Jensen, 2006). The worst performance rule (WPR) finesses this relation by stating that the association between IQ and elementary-task performance is most pronounced when this performance is summarized by people’s slowest responses. Previous research has shown that the WPR can be accounted for in the Ratcliff diffusion model by assuming that the same ability parameter—drift rate—mediates performance in both elementary tasks and higher-level cognitive tasks. Here we aim to test four qualitative predictions concerning the WPR and its diffusion model explanation in terms of drift rate. In the first stage, the diffusion model was fit to data from 916 participants completing a perceptual two-choice task; crucially, the fitting happened after randomly shuffling the key variable, i.e., each participant’s score on a working memory capacity test. In the second stage, after all modeling decisions were made, the key variable was unshuffled and the adequacy of the predictions was evaluated by means of confirmatory Bayesian hypothesis tests. By temporarily withholding the mapping of the key predictor, we retain flexibility for proper modeling of the data (e.g., outlier exclusion) while preventing biases from unduly influencing the results. Our results provide evidence against the WPR and suggest that it may be less robust and less ubiquitous than is commonly believed.
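The two-stage blinding procedure this abstract describes can be sketched in a few lines. The data and names below are hypothetical, not the study's actual pipeline: analysts receive the real score values but with the participant mapping randomly permuted, make all modeling decisions on that blinded copy, and only then restore the true mapping for the confirmatory test.

```python
import random

random.seed(1)

# Hypothetical data: participant IDs mapped to working-memory scores.
ids = list(range(8))
true_scores = {p: round(random.gauss(100, 15), 1) for p in ids}

def blind(scores):
    """Stage 1: return the same score values with the participant
    mapping randomly permuted, so modeling decisions (outlier rules,
    model fits) cannot be tuned to the real predictor."""
    values = list(scores.values())
    random.shuffle(values)
    return dict(zip(scores, values))

blinded = blind(true_scores)
# ... all modeling decisions are made here, using `blinded` only ...

# Stage 2: unblind by restoring the true mapping for the
# preregistered confirmatory analysis.
unblinded = dict(true_scores)

# The blinded copy carries identical values, just scrambled across IDs.
assert sorted(blinded.values()) == sorted(true_scores.values())
```

The key design property is that the distributional shape of the data is preserved under shuffling, so legitimate preprocessing choices remain valid, while any association between the predictor and the outcome is destroyed until unblinding.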

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Attention, Perception, & Psychophysics
Author:
Alexander Ly
Andreas Pedroni
Dora Matzke
Eric-Jan Wagenmakers
Gilles Dutilh
Joachim Vandekerckhove
Jörg Rieskamp
Renato Frey
Date Added:
08/07/2020