
Search Resources

100 Results

Selected filters:
  • publishing
Toward Reproducible Computational Research: An Empirical Analysis of Data and Code Policy Adoption by Journals
Unrestricted Use
CC BY
Rating
0.0 stars

Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher and find higher impact journals more likely to have open data and code policies and scientific societies more likely to have open data and code policies than commercial publishers. We also find open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals.
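The predictive model described above is, in essence, a regression of policy adoption on journal characteristics. As a purely illustrative, hedged sketch (not the authors' code, and using simulated data rather than the paper's dataset), a logistic regression of data-policy adoption on impact factor and publisher type might look like this in Python with statsmodels:

```python
# Hedged sketch of the kind of predictive model the abstract describes:
# logistic regression of open-data-policy adoption on journal impact factor
# and publisher type. All data below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 170  # the study's referent set size

impact_factor = rng.gamma(shape=2.0, scale=2.0, size=n)
society_publisher = rng.integers(0, 2, n)   # 1 = scientific society, 0 = commercial
logit = -2.0 + 0.3 * impact_factor + 0.8 * society_publisher
has_data_policy = rng.binomial(1, 1 / (1 + np.exp(-logit)))

journals = pd.DataFrame({
    "has_data_policy": has_data_policy,
    "impact_factor": impact_factor,
    "society_publisher": society_publisher,
})

model = smf.logit("has_data_policy ~ impact_factor + society_publisher",
                  data=journals).fit()
print(model.summary())
```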

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
Peixuan Guo
Victoria Stodden
Zhaokun Ma
Date Added:
08/07/2020
Two Years Later: Journals Are Not Yet Enforcing the ARRIVE Guidelines on Reporting Standards for Pre-Clinical Animal Studies
Unrestricted Use
CC BY
Rating
0.0 stars

A study by David Baker and colleagues reveals poor quality of reporting in pre-clinical animal research and a failure of journals to implement the ARRIVE guidelines. There is growing concern that poor experimental design and lack of transparent reporting contribute to the frequent failure of pre-clinical animal studies to translate into treatments for human disease. In 2010, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines were introduced to help improve reporting standards. They were published in PLOS Biology and endorsed by funding agencies and publishers and their journals, including PLOS, Nature research journals, and other top-tier journals. Yet our analysis of papers published in PLOS and Nature journals indicates that there has been very little improvement in reporting standards since then. This suggests that authors, referees, and editors generally are ignoring guidelines, and the editorial endorsement is yet to be effectively implemented.

Subject:
Applied Science
Health, Medicine and Nursing
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Ana Sottomayor
David Baker
Katie Lidster
Sandra Amor
Date Added:
08/07/2020
Update on the endorsement of CONSORT by high impact factor journals: a survey of journal “Instructions to Authors” in 2014
Unrestricted Use
CC BY
Rating
0.0 stars

The CONsolidated Standards Of Reporting Trials (CONSORT) Statement provides a minimum standard set of items to be reported in published clinical trials; it has received widespread recognition within the biomedical publishing community. This research aims to provide an update on the endorsement of CONSORT by high impact medical journals. Methods: We performed a cross-sectional examination of the online “Instructions to Authors” of 168 high impact factor (2012) biomedical journals between July and December 2014. We assessed whether the text of the “Instructions to Authors” mentioned the CONSORT Statement and any CONSORT extensions, and we quantified the extent and nature of the journals’ endorsements of these. These data were described by frequencies. We also determined whether journals mentioned trial registration and the International Committee of Medical Journal Editors (ICMJE; other than in regards to trial registration) and whether either of these was associated with CONSORT endorsement (relative risk and 95% confidence interval). We compared our findings to the two previous iterations of this survey (in 2003 and 2007). We also identified the publishers of the included journals. Results: Sixty-three percent (106/168) of the included journals mentioned CONSORT in their “Instructions to Authors.” Forty-four endorsers (42%) explicitly stated that authors “must” use CONSORT to prepare their trial manuscript, 38% required an accompanying completed CONSORT checklist as a condition of submission, and 39% explicitly requested the inclusion of a flow diagram with the submission. CONSORT extensions were endorsed by very few journals. One hundred and thirty journals (77%) mentioned ICMJE, and 106 (63%) mentioned trial registration. Conclusions: The endorsement of CONSORT by high impact journals has increased over time; however, specific instructions on how CONSORT should be used by authors are inconsistent across journals and publishers. Publishers and journals should encourage authors to use CONSORT and set clear expectations for authors about compliance with CONSORT.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Trials
Author:
David Moher
Douglas G. Altman
Kenneth F. Schulz
Larissa Shamseer
Sally Hopewell
Date Added:
08/07/2020
Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations
Unrestricted Use
CC BY
Rating
0.0 stars

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied-upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes to avoid the potential misuse of metrics like the JIF.

Subject:
Applied Science
Health, Medicine and Nursing
Information Science
Life Science
Social Science
Material Type:
Reading
Author:
Carol Muñoz Nieves
Erin C. McKiernan
Juan Pablo Alperin
Lesley A. Schimanski
Lisa Matthias
Meredith T. Niles
Date Added:
08/07/2020
Version control with the OSF
Unrestricted Use
CC BY
Rating
0.0 stars

This webinar will introduce the concept of version control and the version control features that are built into the Open Science Framework (OSF; https://osf.io). The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, to streamline workflows and increase efficiency. This webinar will discuss how keeping track of different file versions is important for efficient reproducible research practices, how version control works on the OSF, and how researchers can view and download previous versions of files.
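As a supplement to the webinar description, here is a minimal, hedged sketch of how a researcher might inspect a file's version history programmatically through the public OSF v2 REST API; the file ID and the exact attribute names printed are placeholders and assumptions, not details taken from the webinar:

```python
# Illustrative sketch: listing a file's version history via the OSF v2 REST API.
# The file ID below is a placeholder; attribute names are assumptions.
import requests

OSF_API = "https://api.osf.io/v2"
file_id = "abcde"  # placeholder GUID of a file stored on OSF Storage

# Each OSF Storage file exposes a versions collection in the v2 API.
resp = requests.get(f"{OSF_API}/files/{file_id}/versions/", timeout=30)
resp.raise_for_status()

for version in resp.json()["data"]:
    attrs = version["attributes"]
    print(version["id"], attrs.get("size"), attrs.get("date_created"))
```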

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
The What, Why, and How of Preregistration
Unrestricted Use
CC BY
Rating
0.0 stars

More researchers are preregistering their studies as a way to combat publication bias and improve the credibility of research findings. Preregistration is at its core designed to distinguish between confirmatory and exploratory results. Both are important to the progress of science, but when they are conflated, problems arise. In this webinar, we discuss the What, Why, and How of preregistration and what it means for the future of science. Visit cos.io/prereg for additional resources.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
What is statistical power
Unrestricted Use
CC BY
Rating
0.0 stars

This video is the first in a series of videos related to the basics of power analyses. All materials shown in the video, as well as content from the other videos in the power analysis series, can be found here: https://osf.io/a4xhr/
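To make the topic concrete, here is a small worked example of the kind of calculation a power analysis involves, sketched in Python with statsmodels; the effect size, alpha, and target power are illustrative choices, not values taken from the video:

```python
# Minimal power-analysis example for a two-sided independent-samples t-test.
# The specific numbers are illustrative, not from the video.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
# with alpha = 0.05 and 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                    alternative="two-sided")
print(round(n_per_group))  # roughly 64 per group

# Conversely, the power achieved with only 30 participants per group.
achieved = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05,
                                alternative="two-sided")
print(achieved)  # roughly 0.48
```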

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Workflow for Awarding Badges
Unrestricted Use
CC BY
Rating
0.0 stars

Badges are a great way to signal that a journal values transparent research practices. Readers see the papers that have underlying data or methods available, colleagues see that norms are changing within a community and have ample opportunities to emulate better practices, and authors get recognition for taking a step into new techniques. In this webinar, Professor Stephen Lindsay of the University of Victoria discusses the workflow of a badging program, eligibility for badge issuance, and the pitfalls to avoid in launching a badging program. Visit cos.io/badges to learn more.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
The Writing Process
Conditional Remix & Share Permitted
CC BY-SA
Rating
0.0 stars

This is a writing resource for the primary grades (K-5). It is a simple layout of how to go through the writing process, with brief descriptions off to the side. It is a visual aid and guide to help writers remember the order of the steps and to remind them that they will apply this process whenever they write, whether the piece is narrative, informational, or persuasive.

Subject:
Education
Elementary Education
Material Type:
Teaching/Learning Strategy
Author:
Christina Wallwork
Date Added:
10/31/2024
Writing reproducible geoscience papers using R Markdown, Docker, and GitLab
Unrestricted Use
CC BY
Rating
0.0 stars

Reproducibility is unquestionably at the heart of science. Scientists face numerous challenges in this context, not least the lack of concepts, tools, and workflows for reproducible research in today's curricula. This short course introduces established and powerful tools that enable reproducibility of computational geoscientific research, statistical analyses, and visualisation of results using R (http://www.r-project.org/) in two lessons:
1. Reproducible Research with R Markdown. Open Data, Open Source, Open Reviews and Open Science are important aspects of science today. In the first lesson, basic motivations and concepts for reproducible research touching on these topics are briefly introduced. During a hands-on session the course participants write R Markdown (http://rmarkdown.rstudio.com/) documents, which include text and code and can be compiled to static documents (e.g. HTML, PDF). R Markdown is equally well suited for day-to-day digital notebooks as it is for scientific publications when using publisher templates.
2. GitLab and Docker. In the second lesson, the R Markdown files are published and enriched on an online collaboration platform. Participants learn how to save and version documents using GitLab (http://gitlab.com/) and compile them using Docker containers (https://docker.com/). These containers capture the full computational environment and can be transported, executed, examined, shared and archived. Furthermore, GitLab's collaboration features are explored as an environment for Open Science.
Prerequisites: Participants should install required software (R, RStudio, a current browser) and register on GitLab (https://gitlab.com) before the course. This short course is especially relevant for early career scientists (ECS). Participants are welcome to bring their own data and R scripts to work with during the course. All material by the conveners will be shared publicly via OSF (https://osf.io/qd9nf/).
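As an illustration of the Docker portion of this workflow (not part of the course material itself), the following Python sketch shells out to Docker to render an R Markdown document inside a container; it assumes Docker is installed locally, uses the rocker/verse image (which bundles R, rmarkdown, and LaTeX), and "paper.Rmd" is a placeholder filename:

```python
# Illustrative sketch: render an R Markdown document inside a Docker container
# so the computational environment travels with the paper.
# Assumes Docker is installed; image and filename are placeholders/assumptions.
import pathlib
import subprocess

workdir = pathlib.Path.cwd()

subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{workdir}:/work",   # mount the current directory into the container
        "-w", "/work",              # render from the mounted directory
        "rocker/verse",
        "Rscript", "-e", 'rmarkdown::render("paper.Rmd")',
    ],
    check=True,
)
```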

Subject:
Physical Science
Material Type:
Activity/Lab
Provider:
New York University
Author:
Daniel Nüst
Edzer Pebesma
Markus Konkol
Rémi Rampin
Vicky Steeves
Date Added:
05/11/2018
Your Questions Answered: How to Retain Copyright While Others Distribute and Build Upon Your Work
Unrestricted Use
CC BY
Rating
0.0 stars

In this webinar, a panel discusses licensing options and the fundamentals of choosing a license for your research, and answers questions about licensing scholarship. The panel is moderated by Joanna Schimizzi, Professional Learning Specialist at the Institute for the Study of Knowledge Management in Education, and includes panelists Brandon Butler, Director of Information Policy at the University of Virginia Library, and Becca Neel, Assistant Director for Resource Management & User Experience at the University of Southern Indiana. Accessible and further resources for this event are available on OSF: https://osf.io/s4wdf/

Subject:
Education
Material Type:
Lesson
Provider:
Center for Open Science
Date Added:
11/30/2021
The citation advantage of linking publications to research data
Unrestricted Use
CC BY
Rating
0.0 stars

Efforts to make research results open and reproducible are increasingly reflected by journal policies encouraging or mandating authors to provide data availability statements. As a consequence of this, there has been a strong uptake of data availability statements in recent literature. Nevertheless, it is still unclear what proportion of these statements actually contain well-formed links to data, for example via a URL or permanent identifier, and if there is an added value in providing them. We consider 531,889 journal articles published by PLOS and BMC which are part of the PubMed Open Access collection, categorize their data availability statements according to their content and analyze the citation advantage of different statement categories via regression. We find that, following mandated publisher policies, data availability statements have become common by now, yet statements containing a link to a repository are still just a fraction of the total. We also find that articles with these statements, in particular, can have up to 25.36% higher citation impact on average: an encouraging result for all publishers and authors who make the effort of sharing their data. All our data and code are made available in order to reproduce and extend our results.

Subject:
Life Science
Social Science
Material Type:
Reading
Provider:
arXiv
Author:
Barbara McGillivray
Giovanni Colavizza
Iain Hrynaszkiewicz
Isla Staden
Kirstie Whitaker
Date Added:
08/07/2020
A consensus-based transparency checklist
Unrestricted Use
CC BY
Rating
0.0 stars

We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Nature Human Behaviour
Author:
Agneta Fisher
Alexandra M. Freund
Alexandra Sarafoglou
Alice S. Carter
Andrew A. Bennett
Andrew Gelman
Balazs Aczel
Barnabas Szaszi
Benjamin R. Newell
Brendan Nyhan
Candice C. Morey
Charles Clifton
Christopher Beevers
Christopher D. Chambers
Christopher Sullivan
Cristina Cacciari
D. Stephen Lindsay
Daniel Benjamin
Daniel J. Simons
David R. Shanks
Debra Lieberman
Derek Isaacowitz
Dolores Albarracin
Don P. Green
Eric Johnson
Eric-Jan Wagenmakers
Eveline A. Crone
Fernando Hoces de la Guardia
Fiammetta Cosci
George C. Banks
Gordon D. Logan
Hal R. Arkes
Harold Pashler
Janet Kolodner
Jarret Crawford
Jeffrey Pollack
Jelte M. Wicherts
John Antonakis
John Curtin
John P. Ioannidis
Joseph Cesario
Kai Jonas
Lea Moersdorf
Lisa L. Harlow
M. Gareth Gaskell
Marcus Munafò
Mark Fichman
Mike Cortese
Mitja D. Back
Morton A. Gernsbacher
Nelson Cowan
Nicole D. Anderson
Pasco Fearon
Randall Engle
Robert L. Greene
Roger Giner-Sorolla
Ronán M. Conroy
Scott O. Lilienfeld
Simine Vazire
Simon Farrell
Stavroula Kousta
Ty W. Boyer
Wendy B. Mendes
Wiebke Bleidorn
Willem Frankenhuis
Zoltan Kekecs
Šimon Kucharský
Date Added:
08/07/2020
The effect of publishing peer review reports on referee behavior in five scholarly journals
Unrestricted Use
CC BY
Rating
0.0 stars

To increase transparency in science, some scholarly journals are publishing peer review reports. But it is unclear how this practice affects the peer review process. Here, we examine the effect of publishing peer review reports on referee behavior in five scholarly journals involved in a pilot study at Elsevier. By considering 9,220 submissions and 18,525 reviews from 2010 to 2017, we measured changes both before and during the pilot and found that publishing reports did not significantly compromise referees’ willingness to review, recommendations, or turn-around times. Younger and non-academic scholars were more willing to agree to review and provided more positive and objective recommendations. Male referees tended to write more constructive reports during the pilot. Only 8.1% of referees agreed to reveal their identity in the published report. These findings suggest that open peer review does not compromise the process, at least when referees are able to protect their anonymity.

Subject:
Applied Science
Information Science
Material Type:
Reading
Provider:
Nature Communications
Author:
Bahar Mehmani
Emilia López-Iñesta
Flaminio Squazzoni
Francisco Grimaldo
Giangiacomo Bravo
Date Added:
08/07/2020
An excess of positive results: Comparing the standard Psychology literature with Registered Reports
Unrestricted Use
CC BY
Rating
0.0 stars

When studies with positive results that support the tested hypotheses have a higher probability of being published than studies with negative results, the literature will give a distorted view of the evidence for scientific claims. Psychological scientists have been concerned about the degree of distortion in their literature due to publication bias and inflated Type-1 error rates. Registered Reports were developed with the goal to minimise such biases: In this new publication format, peer review and the decision to publish take place before the study results are known. We compared the results in the full population of published Registered Reports in Psychology (N = 71 as of November 2018) with a random sample of hypothesis-testing studies from the standard literature (N = 152) by searching 633 journals for the phrase ‘test* the hypothes*’ (replicating a method by Fanelli, 2010). Analysing the first hypothesis reported in each paper, we found 96% positive results in standard reports, but only 44% positive results in Registered Reports. The difference remained nearly as large when direct replications were excluded from the analysis (96% vs 50% positive results). This large gap suggests that psychologists underreport negative results to an extent that threatens cumulative science. Although our study did not directly test the effectiveness of Registered Reports at reducing bias, these results show that the introduction of Registered Reports has led to a much larger proportion of negative results appearing in the published literature compared to standard reports.

Subject:
Psychology
Social Science
Material Type:
Reading
Author:
Anne M. Scheel
Daniel Lakens
Mitchell Schijen
Date Added:
08/07/2020
The influence of journal submission guidelines on authors' reporting of statistics and use of open research practices
Unrestricted Use
CC BY
Rating
0.0 stars

From January 2014, Psychological Science introduced new submission guidelines that encouraged the use of effect sizes, estimation, and meta-analysis (the “new statistics”), required extra detail of methods, and offered badges for use of open science practices. We investigated the use of these practices in empirical articles published by Psychological Science and, for comparison, by the Journal of Experimental Psychology: General, during the period of January 2013 to December 2015. The use of null hypothesis significance testing (NHST) was extremely high at all times and in both journals. In Psychological Science, the use of confidence intervals increased markedly overall, from 28% of articles in 2013 to 70% in 2015, as did the availability of open data (3 to 39%) and open materials (7 to 31%). The other journal showed smaller or much smaller changes. Our findings suggest that journal-specific submission guidelines may encourage desirable changes in authors’ practices.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
David Giofrè
Geoff Cumming
Ingrid Boedker
Luca Fresc
Patrizio Tressoldi
Date Added:
08/07/2020
A proposal for the future of scientific publishing in the life sciences
Unrestricted Use
CC BY
Rating
0.0 stars

Science advances through rich, scholarly discussion. More than ever before, digital tools allow us to take that dialogue online. To chart a new future for open publishing, we must consider alternatives to the core features of the legacy print publishing system, such as an access paywall and editorial selection before publication. Although journals have their strengths, the traditional approach of selecting articles before publication (“curate first, publish second”) forces a focus on “getting into the right journals,” which can delay dissemination of scientific work, create opportunity costs for pushing science forward, and promote undesirable behaviors among scientists and the institutions that evaluate them. We believe that a “publish first, curate second” approach with the following features would be a strong alternative: authors decide when and what to publish; peer review reports are published, either anonymously or with attribution; and curation occurs after publication, incorporating community feedback and expert judgment to select articles for target audiences and to evaluate whether scientific work has stood the test of time. These proposed changes could optimize publishing practices for the digital age, emphasizing transparency, peer-mediated improvement, and post-publication appraisal of scientific articles.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Bodo M. Stern
Erin K. O’Shea
Date Added:
08/07/2020
A reputation economy: how individual reward considerations trump systemic arguments for open access to data
Unrestricted Use
CC BY
Rating
0.0 stars

Open access to research data has been described as a driver of innovation and a potential cure for the reproducibility crisis in many academic fields. Against this backdrop, policy makers are increasingly advocating for making research data and supporting material openly available online. Despite its potential to further scientific progress, widespread data sharing in small science is still an ideal practised in moderation. In this article, we explore the question of what drives open access to research data using a survey among 1564 mainly German researchers across all disciplines. We show that, regardless of their disciplinary background, researchers recognize the benefits of open access to research data for both their own research and scientific progress as a whole. Nonetheless, most researchers share their data only selectively. We show that individual reward considerations conflict with widespread data sharing. Based on our results, we present policy implications that are in line with both individual reward considerations and scientific progress.

Subject:
Applied Science
Information Science
Material Type:
Reading
Provider:
Palgrave Communications
Author:
Benedikt Fecher
Marcel Hebing
Sascha Friesike
Stephanie Linek
Date Added:
08/07/2020
A study of the impact of data sharing on article citations using journal policies as a natural experiment
Unrestricted Use
CC BY
Rating
0.0 stars

This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before their change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (estimated standard error of 34). We conclude that: a) authors who share data may be rewarded eventually with additional scholarly citations, and b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.
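The instrumental-variable design described above can be made concrete with a small, hedged two-stage least squares sketch; this is not the authors' code or data, and the simulated numbers exist only to show how a policy change can serve as an instrument for data sharing when estimating its effect on citations (using the linearmodels package):

```python
# Hedged illustration of the IV design: a journal's data-sharing policy change
# instruments whether an article actually shares data; citations are the outcome.
# All data are simulated purely to show the 2SLS mechanics.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 2000

policy = rng.integers(0, 2, n)            # published after a mandatory data policy?
quality = rng.normal(size=n)              # unobserved confounder: drives sharing and citations
shares_data = (0.5 * policy + 0.3 * quality + rng.normal(size=n) > 0).astype(int)
citations = 10 + 5.0 * shares_data + 3.0 * quality + rng.normal(scale=2, size=n)

df = pd.DataFrame({"citations": citations, "shares_data": shares_data, "policy": policy})
df["const"] = 1.0

# Second stage: citations on data sharing, instrumenting sharing with the policy change.
model = IV2SLS(dependent=df["citations"], exog=df[["const"]],
               endog=df[["shares_data"]], instruments=df[["policy"]])
print(model.fit(cov_type="robust").summary)
```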

Subject:
Economics
Social Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
Allan Dafoe
Andrew K. Rose
Don A. Moore
Edward Miguel
Garret Christensen
Date Added:
08/07/2020