
Reproducibility

The degree to which research results agree when the research is repeated. Topics include reproducibility, replicability, repeatability, robustness, generalizability, organization, documentation, automation, dissemination, guidance, definitions, and more.

185 affiliated resources

Materials for the Webinar "Helping Science Succeed: The Librarian’s Role in Addressing the Reproducibility Crisis"
Conditional Remix & Share Permitted
CC BY-NC

Headlines and scholarly publications portray a crisis in biomedical and health sciences. In this webinar, you will learn what the crisis is and the vital role of librarians in addressing it. You will see how you can directly and immediately support reproducible and rigorous research using your expertise and your library services. You will explore reproducibility guidelines and recommendations and develop an action plan for engaging researchers and stakeholders at your institution. #MLAReproducibility

Learning Outcomes
By the end of this webinar, participants will be able to:
describe the basic history of the “reproducibility crisis” and define reproducibility and replicability
explain why librarians have a key role in addressing concerns about reproducibility, specifically in terms of the packaging of science
explain 3-4 areas where librarians can immediately and directly support reproducible research through existing expertise and services
start developing an action plan to engage researchers and stakeholders at their institution about how they will help address research reproducibility and rigor

Audience
Librarians who work with researchers; librarians who teach, conduct, or assist with evidence synthesis or critical appraisal; and managers and directors who are interested in allocating resources toward supporting research rigor. No prior knowledge or skills required. Basic knowledge of scholarly research and publishing is helpful.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Lesson
Provider:
UMN
Author:
Amy Riegelman
Frank Sayre
Date Added:
02/13/2020
Programming with R
Unrestricted Use
CC BY

The best way to learn how to program is to do something useful, so this introduction to R is built around a common scientific task: data analysis. Our real goal isn’t to teach you R, but to teach you the basic concepts that all programming depends on. We use R in our lessons because: we have to use something for examples; it’s free, well-documented, and runs almost everywhere; it has a large (and growing) user base among scientists; and it has a large library of external packages available for performing diverse tasks. But the two most important things are to use whatever language your colleagues are using, so you can share your work with them easily, and to use that language well.

We are studying inflammation in patients who have been given a new treatment for arthritis, and need to analyze the first dozen data sets of their daily inflammation. The data sets are stored in CSV format (comma-separated values): each row holds information for a single patient, and the columns represent successive days. The first few rows of our first file look like this:

0,0,1,3,1,2,4,7,8,3,3,3,10,5,7,4,7,7,12,18,6,13,11,11,7,7,4,6,8,8,4,4,5,7,3,4,2,3,0,0
0,1,2,1,2,1,3,2,2,6,10,11,5,9,4,4,7,16,8,6,18,4,12,5,12,7,11,5,11,3,3,5,4,4,5,5,1,1,0,1
0,1,1,3,3,2,6,2,5,9,5,7,4,5,4,15,5,11,9,10,19,14,12,17,7,12,11,7,4,2,10,5,4,2,2,3,2,2,1,1
0,0,2,0,4,2,2,1,6,7,10,7,9,13,8,8,15,10,10,7,17,4,4,7,6,15,6,4,9,11,3,5,6,3,3,4,2,3,2,1
0,1,1,3,3,1,3,5,2,4,4,7,6,5,3,10,8,10,6,17,9,14,9,7,13,9,12,6,7,7,9,6,3,2,2,4,2,0,1,1

We want to: load that data into memory, calculate the average inflammation per day across all patients, and plot the result. To do all that, we’ll have to learn a little bit about programming.
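
The lesson develops this analysis in R; purely as an illustration of the task it describes, here is a minimal Python sketch that inlines the five sample rows quoted above and computes the average inflammation per day across patients:

```python
# Sketch in Python of the analysis the lesson builds in R: average the
# inflammation values per day (column) across patients (rows), using the
# five sample rows quoted in the lesson description.
import csv
import io

SAMPLE = """\
0,0,1,3,1,2,4,7,8,3,3,3,10,5,7,4,7,7,12,18,6,13,11,11,7,7,4,6,8,8,4,4,5,7,3,4,2,3,0,0
0,1,2,1,2,1,3,2,2,6,10,11,5,9,4,4,7,16,8,6,18,4,12,5,12,7,11,5,11,3,3,5,4,4,5,5,1,1,0,1
0,1,1,3,3,2,6,2,5,9,5,7,4,5,4,15,5,11,9,10,19,14,12,17,7,12,11,7,4,2,10,5,4,2,2,3,2,2,1,1
0,0,2,0,4,2,2,1,6,7,10,7,9,13,8,8,15,10,10,7,17,4,4,7,6,15,6,4,9,11,3,5,6,3,3,4,2,3,2,1
0,1,1,3,3,1,3,5,2,4,4,7,6,5,3,10,8,10,6,17,9,14,9,7,13,9,12,6,7,7,9,6,3,2,2,4,2,0,1,1
"""

# Parse each CSV row into a list of ints: one list per patient.
patients = [[int(v) for v in row] for row in csv.reader(io.StringIO(SAMPLE))]

# zip(*patients) transposes rows (patients) into columns (days).
daily_mean = [sum(day) / len(day) for day in zip(*patients)]
print(daily_mean[:5])  # → [0.0, 0.6, 1.4, 2.0, 2.6]
```

In the lesson itself the data are read from a file rather than an inline string, and the result is then plotted; this sketch only covers the "load and average" steps.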

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Diya Das
Katrin Leinweber
Rohit Goswami
Date Added:
03/20/2017
Data Analysis and Visualization in R for Ecologists
Unrestricted Use
CC BY

Data Carpentry lesson from the Ecology curriculum on how to analyse and visualise ecological data in R. Data Carpentry’s aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. The lessons below were designed for those interested in working with ecology data in R. This is an introduction to R designed for participants with no programming experience. These lessons can be taught in a day (~6 hours). They start with some basic information about R syntax and the RStudio interface, then move through how to import CSV files, the structure of data frames, how to deal with factors, how to add/remove rows and columns, how to calculate summary statistics from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from R.

Subject:
Applied Science
Computer Science
Ecology
Information Science
Life Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Ankenbrand, Markus
Arindam Basu
Ashander, Jaime
Bahlai, Christie
Bailey, Alistair
Becker, Erin Alison
Bledsoe, Ellen
Boehm, Fred
Bolker, Ben
Bouquin, Daina
Burge, Olivia Rata
Burle, Marie-Helene
Carchedi, Nick
Chatzidimitriou, Kyriakos
Chiapello, Marco
Conrado, Ana Costa
Cortijo, Sandra
Cranston, Karen
Cuesta, Sergio Martínez
Culshaw-Maurer, Michael
Czapanskiy, Max
Daijiang Li
Dashnow, Harriet
Daskalova, Gergana
Deer, Lachlan
Direk, Kenan
Dunic, Jillian
Elahi, Robin
Fishman, Dmytro
Fouilloux, Anne
Fournier, Auriel
Gan, Emilia
Goswami, Shubhang
Guillou, Stéphane
Hancock, Stacey
Hardenberg, Achaz Von
Harrison, Paul
Hart, Ted
Herr, Joshua R.
Hertweck, Kate
Hodges, Toby
Hulshof, Catherine
Humburg, Peter
Jean, Martin
Johnson, Carolina
Johnson, Kayla
Johnston, Myfanwy
Jordan, Kari L
K. A. S. Mislan
Kaupp, Jake
Keane, Jonathan
Kerchner, Dan
Klinges, David
Koontz, Michael
Leinweber, Katrin
Lepore, Mauro Luciano
Li, Ye
Lijnzaad, Philip
Lotterhos, Katie
Mannheimer, Sara
Marwick, Ben
Michonneau, François
Millar, Justin
Moreno, Melissa
Najko Jahn
Obeng, Adam
Odom, Gabriel J.
Pauloo, Richard
Pawlik, Aleksandra Natalia
Pearse, Will
Peck, Kayla
Pederson, Steve
Peek, Ryan
Pletzer, Alex
Quinn, Danielle
Rajeg, Gede Primahadi Wijaya
Reiter, Taylor
Rodriguez-Sanchez, Francisco
Sandmann, Thomas
Seok, Brian
Sfn_brt
Shiklomanov, Alexey
Shivshankar Umashankar
Stachelek, Joseph
Strauss, Eli
Sumedh
Switzer, Callin
Tarkowski, Leszek
Tavares, Hugo
Teal, Tracy
Theobold, Allison
Tirok, Katrin
Tylén, Kristian
Vanichkina, Darya
Voter, Carolyn
Webster, Tara
Weisner, Michael
White, Ethan P
Wilson, Earle
Woo, Kara
Wright, April
Yanco, Scott
Ye, Hao
Date Added:
03/20/2017
Data Analysis and Visualization in Python for Ecologists
Unrestricted Use
CC BY

Python is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in one and a half days (~10 hours). They start with some basic information about Python syntax and the Jupyter notebook interface, then move through how to import CSV files, using the pandas package to work with data frames, how to calculate summary information from a data frame, and a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from Python.
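
A minimal sketch of the data-frame step described above; the column names and values here are invented for the example, not taken from the lesson's survey dataset:

```python
# Hypothetical pandas sketch of the lesson's data-frame workflow:
# put records in a DataFrame, then compute summary information per group.
import pandas as pd

surveys = pd.DataFrame({
    "species": ["DM", "DM", "DO", "DO", "DM"],
    "weight":  [40, 48, 52, 50, 42],
})

# Summary statistics for one column, then a per-species mean.
print(surveys["weight"].describe())
mean_weight = surveys.groupby("species")["weight"].mean()
print(mean_weight)
```

In the lesson itself the data come from a CSV file (read with `pd.read_csv`); the inline frame here just keeps the sketch self-contained.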

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Maxim Belkin
Tania Allard
Date Added:
03/20/2017
Programming with MATLAB
Unrestricted Use
CC BY

The best way to learn how to program is to do something useful, so this introduction to MATLAB is built around a common scientific task: data analysis. Our real goal isn’t to teach you MATLAB, but to teach you the basic concepts that all programming depends on. We use MATLAB in our lessons because: we have to use something for examples; it’s well-documented; it has a large (and growing) user base among scientists in academia and industry; and it has a large library of packages available for performing diverse tasks. But the two most important things are to use whatever language your colleagues are using, so that you can share your work with them easily, and to use that language well.

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Gerard Capes
Date Added:
03/20/2017
Data Organization in Spreadsheets for Ecologists
Unrestricted Use
CC BY

Good data organization is the foundation of any research project. Most researchers have data in spreadsheets, so it’s the place that many research projects start. We organize data in spreadsheets in the ways that we as humans want to work with the data, but computers require that data be organized in particular ways. In order to use tools that make computation more efficient, such as programming languages like R or Python, we need to structure our data the way that computers need the data. Since this is where most research projects start, this is where we want to start too!

In this lesson, you will learn:
Good data entry practices - formatting data tables in spreadsheets
How to avoid common formatting mistakes
Approaches for handling dates in spreadsheets
Basic quality control and data manipulation in spreadsheets
Exporting data from spreadsheets

In this lesson, however, you will not learn about data analysis with spreadsheets. Much of your time as a researcher will be spent in the initial ‘data wrangling’ stage, where you need to organize the data to perform a proper analysis later. It’s not the most fun, but it is necessary. In this lesson you will learn how to think about data organization and some practices for more effective data wrangling. With this approach you can better format current data and plan new data collection so less data wrangling is needed.
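
The lesson's layout advice can be sketched as follows — column names and values are hypothetical, chosen only to show one observation per row, one variable per column, and dates stored as unambiguous text that survives a CSV round trip:

```python
# Hypothetical sketch of tidy spreadsheet layout: each row is one observation,
# each column one variable, and dates are ISO 8601 text so no spreadsheet
# program has to guess day-vs-month. Column names and values are invented.
import csv
import datetime
import io

tidy_rows = [
    {"plot": 1, "species": "DM", "date": "2017-03-20", "weight_g": 41},
    {"plot": 2, "species": "DO", "date": "2017-03-21", "weight_g": 52},
]

# Export: write the tidy table to CSV (a file in practice; a buffer here).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["plot", "species", "date", "weight_g"])
writer.writeheader()
writer.writerows(tidy_rows)

# Re-import: ISO dates parse back without any ambiguity.
parsed = [datetime.date.fromisoformat(row["date"])
          for row in csv.DictReader(io.StringIO(buf.getvalue()))]
print(parsed[0].isoformat())  # → 2017-03-20
```

The same structure is what makes the later R or Python analysis steps (filtering, grouping, plotting) straightforward.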

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Christie Bahlai
Peter R. Hoyt
Tracy Teal
Date Added:
03/20/2017
Metascience Forum 2020 - YouTube
Unrestricted Use
CC BY

In his talk, Professor Nosek defines replication as gathering evidence that tests an empirical claim made in an original paper. This intent influences the design and interpretation of a replication study and addresses confusion between conceptual and direct replications.
---
Are you a funder interested in supporting research on the scientific process? Learn more about the communities mobilizing around the emerging field of metascience by visiting metascience.com. Funders are encouraged to review and adopt the practices overviewed at cos.io/top-funders as part of the solution to issues discussed during the Funders Forum.

Subject:
Education
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Brian Nosek
Date Added:
03/21/2021
The Open Science Training Handbook
Read the Fine Print
Some Rights Reserved

Open Science, the movement to make scientific products and processes accessible to and reusable by all, is about culture and knowledge as much as it is about technologies and services. Convincing researchers of the benefits of changing their practices, and equipping them with the skills and knowledge needed to do so, is hence an important task. This book offers guidance and resources for Open Science instructors and trainers, as well as anyone interested in improving levels of transparency and participation in research practices. Supporting and connecting an emerging Open Science community that wishes to pass on its knowledge, the handbook suggests training activities that can be adapted to various settings and target audiences. The book equips trainers with methods, instructions, exemplary training outlines and inspiration for their own Open Science trainings. It provides Open Science advocates across the globe with practical know-how to deliver Open Science principles to researchers and support staff. What works, what doesn’t? How can you make the most of limited resources? Here you will find a wealth of resources to help you build your own training events.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Provider:
FOSTER Open Science
Author:
FOSTER Open Science
Date Added:
06/18/2020
Open Science Manual
Conditional Remix & Share Permitted
CC BY-NC

About This Document: This manual was assembled and is being updated by Professor Benjamin Le (@benjaminle), who is on the faculty in the Department of Psychology at Haverford College. The primary goal of this text is to provide guidance to his senior thesis students on how to conduct research in his lab by working within general principles that promote research transparency using the specific open science practices described here. While it is aimed at undergraduate psychology students, hopefully it will be of use to other faculty/researchers/students who are interested in adopting open science practices in their labs.

Subject:
Psychology
Social Science
Material Type:
Reading
Author:
Benjamin Le
Date Added:
05/01/2018
Reproducibility Immersive Course
Conditional Remix & Share Permitted
CC BY-SA

Various fields in the natural and social sciences face a ‘crisis of confidence’. Broadly, this crisis amounts to a pervasiveness of non-reproducible results in the published literature. For example, in the field of biomedicine, Amgen published findings that out of 53 landmark published results of pre-clinical studies, only 11% could be replicated successfully. This crisis is not confined to biomedicine. Areas that have recently received attention for non-reproducibility include biomedicine, economics, political science, psychology, as well as philosophy. Some scholars anticipate the expansion of this crisis to other disciplines. This course explores the state of reproducibility. After giving a brief historical perspective, case studies from different disciplines (biomedicine, psychology, and philosophy) are examined to understand the issues concretely. Subsequently, problems that lead to non-reproducibility are discussed as well as possible solutions and paths forward.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Activity/Lab
Provider:
New York University
Author:
Vicky Steeves
Date Added:
06/01/2018
Level up the reproducibility of your data and code! A 2-hour, hands-on workshop
Unrestricted Use
CC BY

Purpose: To introduce methods and tools in organization, documentation, automation, and dissemination of research that nudge it further along the reproducibility spectrum.

Outcome: Participants feel more confident applying reproducibility methods and tools to their own research projects.

Process: Participants practice new methods and tools with code and data during the workshop to explore what they do and how they might work in a research workflow. Participants can compare benefits of new practices and ask questions to help clarify which would provide them the most value to adopt.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Activity/Lab
Author:
April Clyburne-Sherin
Date Added:
10/29/2019
A Bayesian Perspective on the Reproducibility Project: Psychology
Unrestricted Use
CC BY

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
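
For readers unfamiliar with Bayes factors, here is a toy illustration of the underlying quantity — the ratio of the probability of the data under each hypothesis — for a binomial experiment with a point null versus a uniform prior. This is an illustration only, not the replication Bayes factor computation used in the paper:

```python
# Toy Bayes factor for k successes in n binomial trials:
# H0: theta = 0.5 (point null) vs H1: theta ~ Uniform(0, 1).
# Under H1 the marginal likelihood of any k is 1 / (n + 1) (beta-binomial
# with a uniform prior). Numbers below are invented for the example.
from math import comb

def bayes_factor_10(k: int, n: int) -> float:
    p_data_h0 = comb(n, k) * 0.5 ** n  # binomial pmf at theta = 0.5
    p_data_h1 = 1.0 / (n + 1)          # pmf integrated over the uniform prior
    return p_data_h1 / p_data_h0

bf = bayes_factor_10(15, 20)
print(round(bf, 2))  # → 3.22: some evidence for H1, but weak by the BF > 10 bar
```

This mirrors the paper's point that an experiment can "favor" the alternative while still falling well short of strong evidence.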

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
Alexander Etz
Joachim Vandekerckhove
Date Added:
08/07/2020
Ten Simple Rules for Reproducible Computational Research
Unrestricted Use
CC BY

Replication is the cornerstone of a cumulative science. However, new tools and technologies, massive amounts of data, interdisciplinary approaches, and the complexity of the questions being asked are complicating replication efforts, as are increased pressures on scientists to advance their research. As full replication of studies on independently collected data is often not feasible, there has recently been a call for reproducible research as an attainable minimum standard for assessing the value of scientific claims. This requires that papers in experimental science describe the results and provide a sufficiently clear protocol to allow successful repetition and extension of analyses based on original data. The importance of replication and reproducibility has recently been exemplified through studies showing that scientific papers commonly leave out experimental details essential for reproduction, studies showing difficulties with replicating published experimental results, an increase in retracted papers, and through a high number of failing clinical trials. This has led to discussions on how individual researchers, institutions, funding bodies, and journals can establish routines that increase transparency and reproducibility. In order to foster such aspects, it has been suggested that the scientific community needs to develop a “culture of reproducibility” for computational science, and to require it for published claims. We want to emphasize that reproducibility is not only a moral responsibility with respect to the scientific field, but that a lack of reproducibility can also be a burden for you as an individual researcher. As an example, a good practice of reproducibility is necessary in order to allow previously developed methodology to be effectively applied on new data, or to allow reuse of code and results for new projects. In other words, good habits of reproducibility may actually turn out to be a time-saver in the longer run. 
We further note that reproducibility is just as much about the habits that ensure reproducible research as the technologies that can make these processes efficient and realistic. Each of the following ten rules captures a specific aspect of reproducibility, and discusses what is needed in terms of information handling and tracking of procedures. If you are taking a bare-bones approach to bioinformatics analysis, i.e., running various custom scripts from the command line, you will probably need to handle each rule explicitly. If you are instead performing your analyses through an integrated framework (such as GenePattern, Galaxy, LONI pipeline, or Taverna), the system may already provide full or partial support for most of the rules. What is needed on your part is then merely the knowledge of how to exploit these existing possibilities.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Reading
Provider:
PLOS Computational Biology
Author:
Anton Nekrutenko
Eivind Hovig
Geir Kjetil Sandve
James Taylor
Date Added:
08/07/2020
Public Data Archiving in Ecology and Evolution: How Well Are We Doing?
Unrestricted Use
CC BY

Policies that mandate public data archiving (PDA) successfully increase accessibility to data underlying scientific publications. However, is the data quality sufficient to allow reuse and reanalysis? We surveyed 100 datasets associated with nonmolecular studies in journals that commonly publish ecological and evolutionary research and have a strong PDA policy. Out of these datasets, 56% were incomplete, and 64% were archived in a way that partially or entirely prevented reuse. We suggest that cultural shifts facilitating clearer benefits to authors are necessary to achieve high-quality PDA and highlight key guidelines to help authors increase their data’s reuse potential and compliance with journal data policies.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Dominique G. Roche
Loeske E. B. Kruuk
Robert Lanfear
Sandra A. Binning
Date Added:
08/07/2020
Four simple recommendations to encourage best practices in research software
Unrestricted Use
CC BY

Scientific research relies on computer software, yet software is not always developed following practices that ensure its quality and sustainability. This manuscript does not aim to propose new software development best practices, but rather to provide simple recommendations that encourage the adoption of existing best practices. Software development best practices promote better quality software, and better quality software improves the reproducibility and reusability of research. These recommendations are designed around Open Source values, and provide practical suggestions that contribute to making research software and its source code more discoverable, reusable and transparent. This manuscript is aimed at developers, but also at organisations, projects, journals and funders that can increase the quality and sustainability of research software by encouraging the adoption of these recommendations.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Reading
Provider:
F1000Research
Author:
Alejandra Gonzalez-Beltran
Allegra Via
Andrew Treloar
Bernard Pope
Björn Grüning
Jonas Hagberg
Brane Leskošek
Bérénice Batut
Carole Goble
Daniel S. Katz
Daniel Vaughan
David Mellor
Federico López Gómez
Ferran Sanz
Harry-Anton Talvik
Horst Pichler
Ilian Todorov
Jon Ison
Josep Ll. Gelpí
Leyla Garcia
Luis J. Oliveira
Maarten van Gompel
Madison Flannery
Manuel Corpas
Maria V. Schneider
Martin Cook
Mateusz Kuzak
Michelle Barker
Mikael Borg
Monther Alhamdoosh
Montserrat González Ferreiro
Nathan S. Watson-Haigh
Neil Chue Hong
Nicola Mulder
Petr Holub
Philippa C. Griffin
Radka Svobodová Vařeková
Radosław Suchecki
Rafael C. Jiménez
Rob Hooft
Robert Pergl
Rowland Mosbergen
Salvador Capella-Gutierrez
Simon Gladman
Sonika Tyagi
Steve Crouch
Victoria Stodden
Xiaochuan Wang
Yasset Perez-Riverol
Date Added:
08/07/2020
A test of the diffusion model explanation for the worst performance rule using preregistration and blinding
Unrestricted Use
CC BY

People with higher IQ scores also tend to perform better on elementary cognitive-perceptual tasks, such as deciding quickly whether an arrow points to the left or the right (Jensen, 2006). The worst performance rule (WPR) finesses this relation by stating that the association between IQ and elementary-task performance is most pronounced when this performance is summarized by people’s slowest responses. Previous research has shown that the WPR can be accounted for in the Ratcliff diffusion model by assuming that the same ability parameter—drift rate—mediates performance in both elementary tasks and higher-level cognitive tasks. Here we aim to test four qualitative predictions concerning the WPR and its diffusion model explanation in terms of drift rate. In the first stage, the diffusion model was fit to data from 916 participants completing a perceptual two-choice task; crucially, the fitting happened after randomly shuffling the key variable, i.e., each participant’s score on a working memory capacity test. In the second stage, after all modeling decisions were made, the key variable was unshuffled and the adequacy of the predictions was evaluated by means of confirmatory Bayesian hypothesis tests. By temporarily withholding the mapping of the key predictor, we retain flexibility for proper modeling of the data (e.g., outlier exclusion) while preventing biases from unduly influencing the results. Our results provide evidence against the WPR and suggest that it may be less robust and less ubiquitous than is commonly believed.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Attention, Perception, & Psychophysics
Author:
Alexander Ly
Andreas Pedroni
Dora Matzke
Eric-Jan Wagenmakers
Gilles Dutilh
Joachim Vandekerckhove
Jörg Rieskamp
Renato Frey
Date Added:
08/07/2020
Reporting in Experimental Philosophy: Current Standards and Recommendations for Future Practice
Unrestricted Use
CC BY

Recent replication crises in psychology and other fields have led to intense reflection about the validity of common research practices. Much of this reflection has focussed on reporting standards, and how they may be related to the questionable research practices that could underlie a high proportion of irreproducible findings in the published record. As a developing field, it is particularly important for Experimental Philosophy to avoid some of the pitfalls that have beset other disciplines. To this end, here we provide a detailed, comprehensive assessment of current reporting practices in Experimental Philosophy. We focus on the quality of statistical reporting and the disclosure of information about study methodology. We assess all the articles using quantitative methods (n = 134) that were published over the years 2013–2016 in 29 leading philosophy journals. We find that null hypothesis significance testing is the prevalent statistical practice in Experimental Philosophy, although relying solely on this approach has been criticised in the psychological literature. To augment this approach, various additional measures have become commonplace in other fields, but we find that Experimental Philosophy has adopted these only partially: 53% of the papers report an effect size, 28% confidence intervals, 1% examined prospective statistical power and 5% report observed statistical power. Importantly, we find no direct relation between an article’s reporting quality and its impact (numbers of citations). We conclude with recommendations for authors, reviewers and editors in Experimental Philosophy, to facilitate making research statistically-transparent and reproducible.
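
As a reminder of what one of the counted reporting measures involves, here is a minimal sketch of an effect-size calculation (Cohen's d for two independent groups); the data are invented for illustration and have nothing to do with the surveyed articles:

```python
# Cohen's d for two independent groups: the standardized difference between
# group means, using the pooled sample standard deviation. Data are made up
# purely to illustrate the reporting measure the survey counts.
import math
import statistics

g1 = [5.1, 4.9, 5.4, 5.0, 5.2]
g2 = [4.6, 4.4, 4.8, 4.5, 4.7]

n1, n2 = len(g1), len(g2)
# Pooled variance weights each group's sample variance by its degrees of freedom.
pooled_var = ((n1 - 1) * statistics.variance(g1) +
              (n2 - 1) * statistics.variance(g2)) / (n1 + n2 - 2)
d = (statistics.mean(g1) - statistics.mean(g2)) / math.sqrt(pooled_var)
print(round(d, 2))  # → 2.95
```

Reporting d (or a comparable effect size) alongside the significance test is exactly the practice the study found in only 53% of the surveyed papers.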

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Review of Philosophy and Psychology
Author:
Andrea Polonioli
Brittany Blankinship
David Carmel
Mariana Vega-Mendoza
Date Added:
08/07/2020
Discrepancies in the Registries of Diet vs Drug Trials
Unrestricted Use
CC BY

This cross-sectional study examines discrepancies between registered protocols and subsequent publications for drug and diet trials whose findings were published in prominent clinical journals in the last decade. ClinicalTrials.gov was established in 2000 in response to the Food and Drug Administration Modernization Act of 1997, which called for registration of trials of investigational new drugs for serious diseases. Subsequently, the scope of ClinicalTrials.gov expanded to all interventional studies, including diet trials. Presently, prospective trial registration is required by the National Institutes of Health for grant funding and many clinical journals for publication.1 Registration may reduce risk of bias from selective reporting and post hoc changes in design and analysis.1,2 Although a study3 of trials with ethics approval in Finland in 2007 identified numerous discrepancies between registered protocols and subsequent publications, the consistency of diet trial registration and reporting has not been well explored.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
JAMA Network Open
Author:
Cara B. Ebbeling
David S. Ludwig
Steven B. Heymsfield
Date Added:
08/07/2020
The citation advantage of linking publications to research data
Unrestricted Use
CC BY

Efforts to make research results open and reproducible are increasingly reflected by journal policies encouraging or mandating authors to provide data availability statements. As a consequence of this, there has been a strong uptake of data availability statements in recent literature. Nevertheless, it is still unclear what proportion of these statements actually contain well-formed links to data, for example via a URL or permanent identifier, and if there is an added value in providing them. We consider 531,889 journal articles published by PLOS and BMC which are part of the PubMed Open Access collection, categorize their data availability statements according to their content and analyze the citation advantage of different statement categories via regression. We find that, following mandated publisher policies, data availability statements have become common by now, yet statements containing a link to a repository are still just a fraction of the total. We also find that articles with these statements, in particular, can have up to 25.36% higher citation impact on average: an encouraging result for all publishers and authors who make the effort of sharing their data. All our data and code are made available in order to reproduce and extend our results.

Subject:
Life Science
Social Science
Material Type:
Reading
Provider:
arXiv
Author:
Barbara McGillivray
Giovanni Colavizza
Iain Hrynaszkiewicz
Isla Staden
Kirstie Whitaker
Date Added:
08/07/2020
The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture
Unrestricted Use
CC BY

Like many other areas of science, experimental psychology is affected by a “replication crisis” that is causing concern in many fields of research. Approaches t...

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Quarterly Journal of Experimental Psychology
Author:
Dorothy VM Bishop
Date Added:
08/07/2020