
Search Resources

215 Results

Selected filters:
  • reproducibility
Workflow for Awarding Badges
Unrestricted Use
CC BY
Rating
0.0 stars

Badges are a great way to signal that a journal values transparent research practices. Readers see which papers have underlying data or methods available, colleagues see that norms are changing within a community and have ample opportunities to emulate better practices, and authors get recognition for taking a step into new techniques. In this webinar, Professor Stephen Lindsay of the University of Victoria discusses the workflow of a badging program, eligibility for badge issuance, and the pitfalls to avoid in launching a badging program. Visit cos.io/badges to learn more.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Writing reproducible geoscience papers using R Markdown, Docker, and GitLab
Unrestricted Use
CC BY
Rating
0.0 stars

Reproducibility is unquestionably at the heart of science. Scientists face numerous challenges in this context, not least the lack of concepts, tools, and workflows for reproducible research in today's curricula. This short course introduces established and powerful tools that enable reproducibility of computational geoscientific research, statistical analyses, and visualisation of results using R (http://www.r-project.org/) in two lessons:

1. Reproducible Research with R Markdown. Open Data, Open Source, Open Reviews and Open Science are important aspects of science today. In the first lesson, basic motivations and concepts for reproducible research touching on these topics are briefly introduced. During a hands-on session the course participants write R Markdown (http://rmarkdown.rstudio.com/) documents, which include text and code and can be compiled to static documents (e.g. HTML, PDF). R Markdown is equally well suited for day-to-day digital notebooks as it is for scientific publications when using publisher templates.

2. GitLab and Docker. In the second lesson, the R Markdown files are published and enriched on an online collaboration platform. Participants learn how to save and version documents using GitLab (http://gitlab.com/) and compile them using Docker containers (https://docker.com/). These containers capture the full computational environment and can be transported, executed, examined, shared and archived. Furthermore, GitLab's collaboration features are explored as an environment for Open Science.

Prerequisites: Participants should install required software (R, RStudio, a current browser) and register on GitLab (https://gitlab.com) before the course.

This short course is especially relevant for early career scientists (ECS). Participants are welcome to bring their own data and R scripts to work with during the course. All material by the conveners will be shared publicly via OSF (https://osf.io/qd9nf/).

Subject:
Physical Science
Material Type:
Activity/Lab
Provider:
New York University
Author:
Daniel Nüst
Edzer Pebesma
Markus Konkol
Rémi Rampin
Vicky Steeves
Date Added:
05/11/2018
The case for formal methodology in scientific reform
Only Sharing Permitted
CC BY-NC-ND
Rating
0.0 stars

Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, some of these reform attempts suffer from the same mistakes and over-generalizations they purport to address. Considering the costs of allowing false claims to become canonized, we argue for more rigor and nuance in methodological reform. By way of example, we present a formal analysis of three common claims in the metascientific literature: (a) that reproducibility is the cornerstone of science; (b) that data must not be used twice in any analysis; and (c) that exploratory projects are characterized by poor statistical practice. We show that none of these three claims are correct in general and we explore when they do and do not hold.

Subject:
Social Science
Material Type:
Primary Source
Author:
Danielle J. Navarro
Erkan Ozge Buzbas
Joachim Vandekerckhove
Berna Devezer
Date Added:
11/13/2020
A checklist is associated with increased quality of reporting preclinical biomedical research: A systematic review
Unrestricted Use
CC BY
Rating
0.0 stars

Irreproducibility of preclinical biomedical research has gained recent attention. It has been suggested that requiring authors to complete a checklist at the time of manuscript submission would improve the quality and transparency of scientific reporting, and ultimately enhance reproducibility. Whether a checklist enhances quality and transparency in reporting preclinical animal studies, however, has not been empirically studied. Here we searched two highly cited life science journals, one that requires a checklist at submission (Nature) and one that does not (Cell), to identify in vivo animal studies. After screening 943 articles, a total of 80 articles were identified in 2013 (pre-checklist) and 2015 (post-checklist) and included for detailed evaluation of the reporting of methodological and analytical information. We compared the quality of reporting of preclinical animal studies between the two journals, accounting for differences between journals and changes over time in reporting. We find that reporting of randomization, blinding, and sample-size estimation significantly improved when comparing Nature to Cell from 2013 to 2015, likely due to the implementation of a checklist. Specifically, the improvement in reporting of these three types of methodological information was at least three times greater when a mandatory checklist was implemented than when it was not. Reporting of the sex of animals and of the number of independent experiments performed also improved from 2013 to 2015, likely from factors not related to a checklist. Our study demonstrates that completing a checklist at manuscript submission is associated with improved reporting of key methodological information in preclinical animal studies.

Subject:
Applied Science
Biology
Health, Medicine and Nursing
Life Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
Doris M. Rubio
Janet S. Lee
Jill Zupetic
John P. Pribis
Joo Heung Yoon
Kwonho Jeong
Kyle M. Holleran
Nader Shaikh
SeungHye Han
Tolani F. Olonisakin
Date Added:
08/07/2020
The citation advantage of linking publications to research data
Unrestricted Use
CC BY
Rating
0.0 stars

Efforts to make research results open and reproducible are increasingly reflected by journal policies encouraging or mandating authors to provide data availability statements. As a consequence, there has been a strong uptake of data availability statements in recent literature. Nevertheless, it is still unclear what proportion of these statements actually contain well-formed links to data, for example via a URL or permanent identifier, and whether there is added value in providing them. We consider 531,889 journal articles published by PLOS and BMC, which are part of the PubMed Open Access collection, categorize their data availability statements according to their content, and analyze the citation advantage of different statement categories via regression. We find that, following mandated publisher policies, data availability statements have become common by now, yet statements containing a link to a repository are still just a fraction of the total. We also find that articles whose statements include such a repository link can have up to 25.36% higher citation impact on average: an encouraging result for all publishers and authors who make the effort of sharing their data. All our data and code are made available in order to reproduce and extend our results.

Subject:
Life Science
Social Science
Material Type:
Reading
Provider:
arXiv
Author:
Barbara McGillivray
Giovanni Colavizza
Iain Hrynaszkiewicz
Isla Staden
Kirstie Whitaker
Date Added:
08/07/2020
The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research
Unrestricted Use
CC BY
Rating
0.0 stars

The widespread use of ‘statistical significance’ as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (according to the American Statistical Association). We review why degrading p-values into ‘significant’ and ‘nonsignificant’ contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value, but mistrust results with larger p-values. In either case, p-values tell little about reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Also significance (p ≤ 0.05) is hardly replicable: at a good statistical power of 80%, two studies will be ‘conflicting’, meaning that one is significant and the other is not, in one third of the cases if there is a true effect. A replication can therefore not be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgment based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to selective reporting and to publication bias against nonsignificant findings. Data dredging, p-hacking, and publication bias should be addressed by removing fixed significance thresholds. Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis. Also larger p-values offer some evidence against the null hypothesis, and they cannot be interpreted as supporting the null hypothesis, falsely concluding that ‘there is no effect’. Information on possible true effect sizes that are compatible with the data must be obtained from the point estimate, e.g., from a sample average, and from the interval estimate, such as a confidence interval. We review how confusion about interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, for example that decision rules should rather be more stringent, that sample sizes could decrease, or that p-values should better be completely abandoned. We conclude that whatever method of statistical inference we use, dichotomous threshold thinking must give way to non-automated informed judgment.
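The "one third of the cases" figure above follows from a simple calculation: if a true effect exists and each of two independent studies has 80% power, the chance that exactly one of them turns out significant is 2 x 0.8 x 0.2 = 0.32. The short simulation below is purely illustrative and is not taken from the paper; the per-group sample size and effect size are assumptions chosen to give roughly 80% power.

```python
# Illustrative sketch only (not from the paper): two independent studies of the
# same true effect, each run at roughly 80% power, and we count how often they
# "conflict" (one significant at p <= 0.05, the other not).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims = 0.05, 20_000
n, d = 25, 0.81  # n per group and effect size chosen (assumption) to give ~80% power

conflicting = 0
for _ in range(n_sims):
    significant = []
    for _ in range(2):  # two independent replications
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(d, 1.0, n)
        significant.append(stats.ttest_ind(a, b).pvalue <= alpha)
    conflicting += (sum(significant) == 1)

print(f"share of conflicting study pairs: {conflicting / n_sims:.2f}")
# Analytically: 2 * 0.80 * 0.20 = 0.32, i.e. about one third of the cases.
```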

Subject:
Mathematics
Statistics and Probability
Material Type:
Reading
Provider:
PeerJ
Author:
Fränzi Korner-Nievergelt
Tobias Roth
Valentin Amrhein
Date Added:
08/07/2020
An empirical analysis of journal policy effectiveness for computational reproducibility
Only Sharing Permitted
CC BY-NC-ND
Rating
0.0 stars

A key component of scientific communication is sufficient information for other researchers in the field to reproduce published findings. For computational and data-enabled research, this has often been interpreted to mean making available the raw data from which results were generated, the computer code that generated the findings, and any additional information needed such as workflows and input parameters. Many journals are revising author guidelines to include data and code availability. This work evaluates the effectiveness of journal policy that requires the data and code necessary for reproducibility be made available postpublication by the authors upon request. We assess the effectiveness of such a policy by (i) requesting data and code from authors and (ii) attempting replication of the published findings. We chose a random sample of 204 scientific papers published in the journal Science after the implementation of their policy in February 2011. We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%. We find this policy—author remission of data and code postpublication upon request—an improvement over no policy, but currently insufficient for reproducibility.

Subject:
Social Science
Material Type:
Reading
Author:
Jennifer Seiler
Victoria Stodden
Zhaokun Ma
Date Added:
11/13/2020
A manifesto for reproducible science
Unrestricted Use
CC BY
Rating
0.0 stars

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

Subject:
Social Science
Material Type:
Reading
Provider:
Nature Human Behaviour
Author:
Brian A. Nosek
Christopher D. Chambers
Dorothy V. M. Bishop
Eric-Jan Wagenmakers
Jennifer J. Ware
John P. A. Ioannidis
Katherine S. Button
Marcus R. Munafò
Nathalie Percie du Sert
Uri Simonsohn
Date Added:
08/07/2020
The natural selection of bad science
Unrestricted Use
CC BY
Rating
0.0 stars

Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modelling. We first present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more ‘progeny,’ such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
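The kind of dynamic the abstract describes can be sketched in a few lines of code. The toy simulation below is not the authors' model; every rule and parameter in it is an illustrative assumption. It only shows the qualitative mechanism: if sloppier methods yield more publishable "positive" results and productive labs are copied, average rigor drifts down and the share of false discoveries rises.

```python
# Toy sketch of selection for high output (not the authors' actual model).
# Low "effort" means more tests per cycle but a higher false-positive rate.
import random

random.seed(0)
N_LABS, GENERATIONS, BASE_RATE = 100, 200, 0.1  # 10% of tested hypotheses are true

labs = [{"effort": random.uniform(0.2, 0.9)} for _ in range(N_LABS)]

def run_cycle(lab):
    """Return (publications, false_positives) for one research cycle."""
    tests = int(20 * (1.0 - lab["effort"])) + 1          # low effort -> more tests
    false_pos_rate = 0.05 + 0.4 * (1.0 - lab["effort"])  # low effort -> sloppier tests
    pubs = fps = 0
    for _ in range(tests):
        if random.random() < BASE_RATE:                  # true effect, assumed detected
            pubs += 1
        elif random.random() < false_pos_rate:           # false positive, also published
            pubs += 1
            fps += 1
    return pubs, fps

for gen in range(GENERATIONS):
    results = [run_cycle(lab) for lab in labs]
    order = sorted(range(N_LABS), key=lambda i: results[i][0])
    # Selection: the 5 least productive labs are replaced by noisy copies of the 5 most productive.
    for loser, winner in zip(order[:5], order[-5:]):
        effort = labs[winner]["effort"] + random.gauss(0, 0.02)
        labs[loser] = {"effort": min(0.95, max(0.05, effort))}
    if gen % 40 == 0 or gen == GENERATIONS - 1:
        pubs = sum(r[0] for r in results) or 1
        fdr = sum(r[1] for r in results) / pubs
        mean_effort = sum(l["effort"] for l in labs) / N_LABS
        print(f"gen {gen:3d}  mean effort {mean_effort:.2f}  false-discovery share {fdr:.2f}")
```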

Subject:
Mathematics
Statistics and Probability
Material Type:
Reading
Provider:
Royal Society Open Science
Author:
Paul E. Smaldino
Richard McElreath
Date Added:
08/07/2020
A new tool for microbiome differential abundance analysis – ZicoSeq
Unrestricted Use
CC BY
Rating
0.0 stars

This resource is a video abstract of a research paper, created by Research Square on behalf of its authors. It provides an easy-to-understand synopsis that can be used to introduce the topics the paper covers to students, researchers, and the general public. The video's transcript is also provided in full, with a portion included below as a preview:

"Differential abundance analysis (DAA) is a key statistical method for comparing microbiome compositions under different conditions, such as health vs. disease. However, DAA is complicated by the use of relative, rather than absolute, abundance values and by a high risk of false positives, or detection of significant effects when there aren’t any. In addition, the existing DAA tools can produce very divergent results from the same data, making it difficult to select the best tool. To provide guidance, a new study comprehensively evaluated the currently available tools with simulations based on real data. The researchers found that none of the tools were simultaneously robust, powerful, and flexible. Therefore, they concluded that none were suitable for blind application to real microbiome datasets. To build a better path forward, the researchers designed a new tool, ZicoSeq that drew on the strengths of the other available DAA methods while addressing their major limitations..."

The rest of the transcript, along with a link to the research itself, is available on the resource page.
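The point about relative versus absolute abundance can be made with a tiny worked example. The snippet below is purely illustrative and is not part of ZicoSeq or the study's analysis: it just shows how a genuine increase in one taxon makes every other taxon look depleted once counts are normalized to proportions, which is one route to the false positives mentioned in the transcript.

```python
# Illustrative only: why relative abundances complicate differential abundance analysis.
# A real increase in a single taxon makes every other taxon *appear* to decrease
# once counts are normalized to proportions.
import numpy as np

taxa = ["A", "B", "C", "D"]
healthy = np.array([100, 200, 300, 400], dtype=float)  # absolute abundances
disease = healthy.copy()
disease[0] *= 5                                        # only taxon A truly changes

rel_healthy = healthy / healthy.sum()
rel_disease = disease / disease.sum()

for name, rh, rd in zip(taxa, rel_healthy, rel_disease):
    print(f"taxon {name}: relative abundance {rh:.3f} -> {rd:.3f}")
# Taxa B, C, and D are unchanged in absolute terms but look depleted in relative terms.
```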

Subject:
Biology
Life Science
Material Type:
Diagram/Illustration
Reading
Provider:
Research Square
Provider Set:
Video Bytes
Date Added:
04/14/2023
An open investigation of the reproducibility of cancer biology research
Unrestricted Use
CC BY
Rating
0.0 stars

It is widely believed that research that builds upon previously published findings has reproduced the original work. However, it is rare for researchers to perform or publish direct replications of existing results. The Reproducibility Project: Cancer Biology is an open investigation of reproducibility in preclinical cancer biology research. We have identified 50 high impact cancer biology articles published in the period 2010-2012, and plan to replicate a subset of experimental results from each article. A Registered Report detailing the proposed experimental designs and protocols for each subset of experiments will be peer reviewed and published prior to data collection. The results of these experiments will then be published in a Replication Study. The resulting open methodology and dataset will provide evidence about the reproducibility of high-impact results, and an opportunity to identify predictors of reproducibility.

Subject:
Applied Science
Biology
Health, Medicine and Nursing
Life Science
Material Type:
Reading
Provider:
eLife
Author:
Brian A Nosek
Elizabeth Iorns
Fraser Elisabeth Tan
Joelle Lomax
Timothy M Errington
William Gunn
Date Added:
08/07/2020
The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture
Unrestricted Use
CC BY
Rating
0.0 stars

Like many other areas of science, experimental psychology is affected by a “replication crisis” that is causing concern in many fields of research. Approaches t...

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Quarterly Journal of Experimental Psychology
Author:
Dorothy VM Bishop
Date Added:
08/07/2020
A reputation economy: how individual reward considerations trump systemic arguments for open access to data
Unrestricted Use
CC BY
Rating
0.0 stars

Open access to research data has been described as a driver of innovation and a potential cure for the reproducibility crisis in many academic fields. Against this backdrop, policy makers are increasingly advocating for making research data and supporting material openly available online. Despite its potential to further scientific progress, widespread data sharing in small science is still an ideal practised in moderation. In this article, we explore the question of what drives open access to research data using a survey among 1564 mainly German researchers across all disciplines. We show that, regardless of their disciplinary background, researchers recognize the benefits of open access to research data for both their own research and scientific progress as a whole. Nonetheless, most researchers share their data only selectively. We show that individual reward considerations conflict with widespread data sharing. Based on our results, we present policy implications that are in line with both individual reward considerations and scientific progress.

Subject:
Applied Science
Information Science
Material Type:
Reading
Provider:
Palgrave Communications
Author:
Benedikt Fecher
Marcel Hebing
Sascha Friesike
Stephanie Linek
Date Added:
08/07/2020
A test of the diffusion model explanation for the worst performance rule using preregistration and blinding
Unrestricted Use
CC BY
Rating
0.0 stars

People with higher IQ scores also tend to perform better on elementary cognitive-perceptual tasks, such as deciding quickly whether an arrow points to the left or the right (Jensen, 2006). The worst performance rule (WPR) finesses this relation by stating that the association between IQ and elementary-task performance is most pronounced when this performance is summarized by people’s slowest responses. Previous research has shown that the WPR can be accounted for in the Ratcliff diffusion model by assuming that the same ability parameter—drift rate—mediates performance in both elementary tasks and higher-level cognitive tasks. Here we aim to test four qualitative predictions concerning the WPR and its diffusion model explanation in terms of drift rate. In the first stage, the diffusion model was fit to data from 916 participants completing a perceptual two-choice task; crucially, the fitting happened after randomly shuffling the key variable, i.e., each participant’s score on a working memory capacity test. In the second stage, after all modeling decisions were made, the key variable was unshuffled and the adequacy of the predictions was evaluated by means of confirmatory Bayesian hypothesis tests. By temporarily withholding the mapping of the key predictor, we retain flexibility for proper modeling of the data (e.g., outlier exclusion) while preventing biases from unduly influencing the results. Our results provide evidence against the WPR and suggest that it may be less robust and less ubiquitous than is commonly believed.
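The blinding procedure described above (freeze all modeling decisions on data in which the key predictor has been shuffled, then unshuffle it for the confirmatory test) is easy to mock up. The sketch below is not the authors' analysis code; the data, variable names, and exclusion rule are all assumptions, and a simple correlation stands in for the diffusion model and Bayesian tests.

```python
# Minimal sketch of blinding by shuffling the key predictor (assumed toy data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy data standing in for the real dataset: per-participant response times
# and a working-memory-capacity (WMC) score as the key predictor.
df = pd.DataFrame({
    "participant": range(200),
    "mean_rt": rng.normal(0.6, 0.1, 200),
    "wmc": rng.normal(100, 15, 200),
})

# Stage 1 (blinded): shuffle the key predictor, then make all modeling decisions
# (outlier exclusion, model choices, etc.) on the shuffled data.
blinded = df.copy()
blinded["wmc"] = rng.permutation(blinded["wmc"].to_numpy())
keep = blinded["mean_rt"].between(0.3, 1.0)  # example exclusion rule, fixed while blinded

# Stage 2 (unblinded): apply the frozen decisions to the real mapping and run the
# preregistered confirmatory analysis (here just a correlation as a stand-in).
confirm = df[keep]
r = np.corrcoef(confirm["wmc"], confirm["mean_rt"])[0, 1]
print(f"confirmatory correlation between WMC and mean RT: {r:.3f}")
```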

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Attention, Perception, & Psychophysics
Author:
Alexander Ly
Andreas Pedroni
Dora Matzke
Eric-Jan Wagenmakers
Gilles Dutilh
Joachim Vandekerckhove
Jörg Rieskamp
Renato Frey
Date Added:
08/07/2020