The Open Science movement is rapidly changing the scientific landscape. Because exact definitions are often lacking and reforms are constantly evolving, accessible guides to open science are needed. This paper provides an introduction to open science and related reforms in the form of an annotated reading list of seven peer-reviewed articles, following the format of Etz et al. (2018). Written for researchers and students, particularly in psychological science, it highlights and introduces seven topics: understanding open science; open access; open data, materials, and code; reproducible analyses; preregistration and registered reports; replication research; and teaching open science. For each topic, we provide a detailed summary of one particularly informative and actionable article and suggest several further resources. Supporting a broader understanding of open science issues, this overview should enable researchers to engage with, improve, and implement current open, transparent, reproducible, replicable, and cumulative scientific practices.
Poor research reporting is a major contributing factor to low study reproducibility and to financial and animal waste. The ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were developed to improve reporting quality, and many journals support these guidelines. The influence of this support is unknown. We hypothesized that papers published in journals supporting the ARRIVE guidelines would show improved reporting compared with those in non-supporting journals. In a retrospective, observational cohort study, papers from 5 ARRIVE-supporting (SUPP) and 2 non-supporting (nonSUPP) journals, published before (2009) and 5 years after (2015) the ARRIVE guidelines, were selected. Adherence to the ARRIVE checklist of 20 items was independently evaluated by two reviewers, and items were assessed as fully, partially, or not reported. Mean percentages of items reported were compared between journal types and years with an unequal variance t-test. Individual items and sub-items were compared with a chi-square test. From an initial cohort of 956 papers, 236 were included: 120 from 2009 (SUPP: n = 52; nonSUPP: n = 68) and 116 from 2015 (SUPP: n = 61; nonSUPP: n = 55). The percentage of fully reported items was similar between journal types in 2009 (SUPP: 55.3 ± 11.5% [SD]; nonSUPP: 51.8 ± 9.0%; p = 0.07; 95% CI of mean difference, −0.3 to 7.3%) and in 2015 (SUPP: 60.5 ± 11.2%; nonSUPP: 60.2 ± 10.0%; p = 0.89; 95% CI, −3.6 to 4.2%). The small increase in fully reported items between years was similar for both journal types (p = 0.09; 95% CI, −0.5 to 4.3%). No paper fully reported 100% of items on the ARRIVE checklist, and measures associated with bias were poorly reported. These results suggest that journal support for the ARRIVE guidelines has not resulted in a meaningful improvement in reporting quality, contributing to ongoing waste in animal research.
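The journal-type comparison above uses an unequal variance (Welch's) t-test on summary percentages. As a rough illustration, the 2009 summary statistics quoted in the abstract can be plugged into a standard implementation. This is a minimal sketch using SciPy, not the authors' analysis code.

```python
# Minimal sketch of the unequal-variance (Welch's) t-test described above,
# using the 2009 summary statistics quoted in the abstract; an illustration,
# not the authors' actual analysis code.
from scipy.stats import ttest_ind_from_stats

# 2009 cohort: mean +/- SD of percentage of fully reported items, and paper counts
result = ttest_ind_from_stats(
    mean1=55.3, std1=11.5, nobs1=52,   # SUPP journals
    mean2=51.8, std2=9.0,  nobs2=68,   # nonSUPP journals
    equal_var=False,                   # Welch's correction for unequal variances
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2f}")  # p ~ 0.07, as reported
```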
The aim of this toolkit is to support early career researchers in finding a journal that will publish their paper and optimally promote the visibility of their research. How can they find a journal with a good ranking score that is well regarded in the respective research community? How can they find a journal that perfectly matches their topic? Should they consider publishing open access? What are predatory journals, and how can they be detected?
This course offers analysis and practice of various forms of scientific and technical writing, from memos to journal articles, in addition to strategies for conveying technical information to specialist and non-specialist audiences. It is comparable to 21W.780 Communicating in Technical Organizations, but its methods are designed to address the particular challenges of advanced ESL or bilingual students. The goal of the workshop is to develop effective writing skills for academic and professional contexts. Models, materials, topics, and assignments vary from term to term.
Ongoing technological developments have made it easier than ever before for scientists to share their data, materials, and analysis code. Sharing data and analysis code makes it easier for other researchers to reuse or check published research. These benefits emerge only if researchers can reproduce the analyses reported in published articles and if data are annotated well enough that the meaning of every variable is clear. Because most researchers have not been trained in computational reproducibility, it is important to evaluate current practices and identify those that can be improved. We examined data and code sharing, as well as computational reproducibility of the main results, without contacting the original authors, for Registered Reports published in the psychological literature between 2014 and 2018. Of the 62 articles that met our inclusion criteria, data were available for 40 and analysis scripts for 37. For the 35 articles that shared both data and code and performed analyses in SPSS, R, Python, MATLAB, or JASP, we could run the scripts for 31 articles and reproduce the main results for 20. Although the proportion of articles that shared both data and code (35 of 62, or 56%) and the proportion that could be computationally reproduced (20 of 35, or 57%) were relatively high compared with other studies, there is clear room for improvement. We provide practical recommendations based on our observations and link to examples of good research practices in the papers we reproduced.
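A reproducibility check of the kind described above can be as simple as rerunning a shared analysis script and comparing its key output to the value reported in the paper. The sketch below assumes a hypothetical R script and output file; the file names, reported value, and tolerance are placeholders, not taken from any of the audited articles.

```python
# Minimal sketch of a computational reproducibility check: re-run a shared
# analysis script and compare its key output against the value reported in
# the paper. All names and values below are hypothetical placeholders.
import subprocess

REPORTED_T = 2.31          # test statistic as printed in the paper (hypothetical)
TOLERANCE = 0.01           # allow small rounding differences

# Re-run the authors' shared script non-interactively (here: an R script).
subprocess.run(["Rscript", "analysis/main_analysis.R"], check=True)

# The script is assumed to write its key statistic to a plain-text file.
with open("analysis/output/t_statistic.txt") as f:
    reproduced_t = float(f.read().strip())

if abs(reproduced_t - REPORTED_T) <= TOLERANCE:
    print("Main result reproduced within rounding tolerance.")
else:
    print(f"Discrepancy: reported {REPORTED_T}, reproduced {reproduced_t}")
```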
To increase transparency in research, the International Committee of Medical Journal Editors required, in 2005, prospective registration of clinical trials as a condition of publication. However, many trials remain unregistered or retrospectively registered. We aimed to assess the association between prospective trial registration and treatment effect estimates. Methods: This is a meta-epidemiological study based on all Cochrane reviews published between March 2011 and September 2014 with meta-analyses of a binary outcome including three or more randomised controlled trials published after 2006. We extracted trial general characteristics and results from the Cochrane reviews. For each trial, we searched for registration in the report's full text, contacted the corresponding author if it was not reported, and searched ClinicalTrials.gov and the International Clinical Trials Registry Platform in case of no response. We classified each trial as prospectively registered (i.e. registered before the start date); retrospectively registered, distinguishing trials registered before and after the primary completion date; or not registered. Treatment effect estimates of prospectively registered and other trials were compared by the ratio of odds ratios (ROR); an ROR < 1 indicates larger effects in trials not prospectively registered. Results: We identified 67 meta-analyses (322 trials). Overall, 225/322 trials (70%) were registered, 74 (33%) prospectively and 142 (63%) retrospectively; 88 were registered before the primary completion date and 54 after. Unregistered or retrospectively registered trials tended to show larger treatment effect estimates than prospectively registered trials (combined ROR = 0.81, 95% CI 0.65–1.02, based on 32 contributing meta-analyses). Trials unregistered or registered after the primary completion date tended to show larger treatment effect estimates than those registered before this date (combined ROR = 0.84, 95% CI 0.71–1.01, based on 43 contributing meta-analyses). Conclusions: Lack of prospective trial registration may be associated with larger treatment effect estimates.
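To make the ROR metric concrete: within one meta-analysis, the summary odds ratio from trials that were not prospectively registered is divided by the summary odds ratio from prospectively registered trials, and the per-meta-analysis RORs are then combined on the log scale. The sketch below uses invented numbers and an unweighted average for simplicity; it is an illustration of the logic, not the authors' analysis, which used formal meta-epidemiological weighting.

```python
# Sketch of the ratio-of-odds-ratios (ROR) logic described above. With benefit
# expressed as OR < 1, an ROR < 1 means the not-prospectively-registered trials
# show larger (more favourable) effects. All numbers are made up.
import math

def ratio_of_odds_ratios(or_not_prospective: float, or_prospective: float) -> float:
    """ROR: not-prospectively-registered vs prospectively registered trials."""
    return or_not_prospective / or_prospective

# Example for one meta-analysis.
ror = ratio_of_odds_ratios(or_not_prospective=0.60, or_prospective=0.75)
print(f"ROR = {ror:.2f}")  # 0.80 < 1: larger effect in not-prospectively-registered trials

# RORs from several meta-analyses are combined on the log scale; a weighted
# average would be used in practice, an unweighted one suffices to illustrate.
log_rors = [math.log(r) for r in (0.80, 0.95, 0.70)]
combined = math.exp(sum(log_rors) / len(log_rors))
print(f"Combined ROR = {combined:.2f}")
```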
Objectives: Prospective registration of animal studies has been suggested as a new measure to increase value and reduce waste in biomedical research. We sought to further explore and quantify animal researchers' attitudes and preferences regarding animal study registries (ASRs). Design: Cross-sectional online survey. Setting and participants: We conducted a survey with three different samples representing animal researchers: i) corresponding authors from journals with a high Eigenfactor, ii) a random PubMed sample, and iii) members of the CAMARADES network. Main outcome measures: Perceived level of importance of different aspects of publication bias, the effect of ASRs on different aspects of research, and the importance of different research types for being registered. Results: The survey yielded responses from 413 animal researchers (response rate 7%). The respondents indicated that some aspects of ASRs can increase administrative burden, but that this could be outweighed by other aspects that decrease it. Animal researchers found it more important to register studies involving animal species with higher levels of cognitive capability. Preferences for the time frame for making registry entries publicly available were strongly heterogeneous, with the largest proportion voting for "access only after consent by the principal investigator" and the second largest for "access immediately after registration". Conclusions: The fact that the more senior and experienced animal researchers participating in this survey clearly indicated the practical importance of publication bias and the importance of ASRs underscores both the awareness of the problem among animal researchers and their willingness to actively engage in study registration if effective safeguards for the potential weaknesses of ASRs are put into place. To overcome the first-mover dilemma, international consensus statements on how to deal with prospective registration of animal studies might be necessary for all relevant stakeholder groups, including animal researchers, academic institutions, private companies, funders, regulatory agencies, and journals.
Scientific data and tools should, as much as possible, be free as in beer and free as in freedom. The vast majority of science today is paid for by taxpayer-funded grants; at the same time, the incredible successes of science are strong evidence for the benefit of collaboration in knowledgeable pursuits. Within the scientific academy, sharing of expertise, data, tools, and more is prolific, but only recently, with the rise of the Open Access movement, has this sharing come to embrace the public. Even though most research data are never shared, both the public and even scientists within their own fields are often unaware of just how much data, tooling, and other resources are made freely available for analysis! This list is a small attempt to shed light on data repositories and computational science tools that are often siloed by scientific discipline, in the hope of spurring both public and professional contributions to science.
Experienced Registered Reports editors and reviewers come together to discuss the format and best practices for handling submissions. The panelists also share insights into what editors are looking for from reviewers, as well as practical guidelines for writing a Registered Report. ABOUT THE PANELISTS: Chris Chambers | Chris is a professor of cognitive neuroscience at Cardiff University, Chair of the Registered Reports Committee supported by the Center for Open Science, and one of the founders of Registered Reports. He has helped establish the Registered Reports format at over a dozen journals. Anastasia Kiyonaga | Anastasia is a cognitive neuroscientist who uses converging behavioral, brain stimulation, and neuroimaging methods to probe memory and attention processes. She is currently a postdoctoral researcher with Mark D'Esposito in the Helen Wills Neuroscience Institute at the University of California, Berkeley. Before coming to Berkeley, she received her Ph.D. with Tobias Egner at the Duke Center for Cognitive Neuroscience. She will be an Assistant Professor in the Department of Cognitive Science at UC San Diego starting January 2020. Jason Scimeca | Jason is a cognitive neuroscientist at UC Berkeley. His research investigates the neural systems that support high-level cognitive processes such as executive function, working memory, and the flexible control of behavior. He completed his Ph.D. at Brown University with David Badre and is currently a postdoctoral researcher in Mark D'Esposito's Cognitive Neuroscience Lab. Moderated by David Mellor, Director of Policy Initiatives for the Center for Open Science.
Beyond the Horizon: Broadening Our Understanding of OER Efficacy is a concise yet comprehensive resource designed to provide insight into the current state of research and reporting on Open Educational Resources (OER) efficacy. This guide explores existing frameworks, delves into key themes and gaps, and highlights emerging opportunities in the realm of OER efficacy.
These slides were used for a CAUL OER Collective Library Staff Community of Practice discussion on facilitating peer review in the publication of open textbooks.
Many clinical trials conducted by academic organizations are not published, or are not published completely. Following the US Food and Drug Administration Amendments Act of 2007, "The Final Rule" (compliance date April 18, 2017) and a National Institutes of Health policy clarified and expanded trial registration and results reporting requirements. We sought to identify policies, procedures, and resources to support trial registration and reporting at academic organizations. Methods: We conducted an online survey from November 21, 2016 to March 1, 2017, before organizations were expected to comply with The Final Rule. We included active Protocol Registration and Results System (PRS) accounts classified by ClinicalTrials.gov as a "University/Organization" in the USA. PRS administrators manage information on ClinicalTrials.gov. We invited one PRS administrator to complete the survey for each organization account, which was the unit of analysis. Results: Eligible organization accounts (N = 783) included 47,701 records (e.g., studies) in August 2016. Participating organizations (366/783; 47%) accounted for 40,351/47,701 (85%) records. Compared with other organizations, Clinical and Translational Science Award (CTSA) holders, cancer centers, and large organizations were more likely to participate. A minority of accounts have a registration policy (156/366; 43%) or a results reporting policy (129/366; 35%). Of those with policies, 15/156 (11%) and 49/156 (35%) reported that trials must be registered before institutional review board approval is granted or before beginning enrollment, respectively. Few organizations use computer software to monitor compliance (68/366; 19%). One organization had penalized an investigator for non-compliance. Among the 287/366 (78%) accounts reporting that they allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time-equivalent staff is 0.08 (interquartile range = 0.02–0.25). Because of non-response and social desirability bias, this could be a "best case" scenario. Conclusions: Before the compliance date for The Final Rule, some academic organizations had policies and resources that facilitate clinical trial registration and reporting. Most organizations appear to be unprepared to meet the new requirements. Organizations could enact the following: adopt policies that require trial registration and reporting, allocate resources (e.g., staff, software) to support registration and reporting, and ensure there are consequences for investigators who do not follow standards for clinical research.
Clinical trial registries can improve the validity of trial results by facilitating comparisons between prospectively planned and reported outcomes. Previous reports on the frequency of planned and reported outcome inconsistencies have yielded widely discrepant results. It is unknown whether these discrepancies are due to differences between the included trials or to methodological differences between studies. We aimed to systematically review the prevalence and nature of discrepancies between registered and published outcomes among clinical trials. Methods: We searched MEDLINE via PubMed, EMBASE, and CINAHL, and checked references of included publications to identify studies that compared trial outcomes as documented in a publicly accessible clinical trials registry with published trial outcomes. Two authors independently selected eligible studies and performed data extraction. We present summary data rather than pooled analyses owing to methodological heterogeneity among the included studies. Results: Twenty-seven studies were eligible for inclusion. The overall risk of bias among included studies was moderate to high. These studies assessed outcome agreement for a median of 65 individual trials (interquartile range [IQR] 25–110). The median proportion of trials with an identified discrepancy between the registered and published primary outcome was 31%; substantial variability in the prevalence of these primary outcome discrepancies was observed among the included studies (range 0% (0/66) to 100% (1/1), IQR 17–45%). We found less variability within the subset of studies that assessed the agreement between prospectively registered outcomes and published outcomes, among which the median observed discrepancy rate was 41% (range 30% (13/43) to 100% (1/1), IQR 33–48%). The nature of observed primary outcome discrepancies also varied substantially between included studies. Among the studies providing detailed descriptions of these outcome discrepancies, a median of 13% of trials introduced a new, unregistered outcome in the published manuscript (IQR 5–16%). Conclusions: Discrepancies between registered and published outcomes of clinical trials are common, regardless of funding mechanism or the journals in which they are published. Consistent reporting of prospectively defined outcomes and consistent utilization of registry data during the peer review process may improve the validity of clinical trial publications.
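Because the included studies were too heterogeneous to pool, the review summarises per-study discrepancy rates with a median and interquartile range. The sketch below shows that computation on invented proportions; the values are not the review's data.

```python
# Sketch of the summary approach described above: per-study discrepancy
# proportions are summarised by a median and interquartile range rather than
# pooled. The proportions below are invented for illustration.
import numpy as np

# Proportion of trials with a registered-vs-published primary outcome
# discrepancy, one value per included study (hypothetical data).
discrepancy_rates = np.array([0.00, 0.17, 0.25, 0.31, 0.38, 0.45, 0.52, 1.00])

median = np.median(discrepancy_rates)
q1, q3 = np.percentile(discrepancy_rates, [25, 75])
print(f"median = {median:.0%}, IQR = {q1:.0%} to {q3:.0%}")
```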
This webinar (recorded Sept. 27, 2017) introduces how to connect other services as add-ons to projects on the Open Science Framework (OSF; https://osf.io). Connecting services to your OSF projects via add-ons lets you pull together the different parts of your research without having to switch away from the tools and workflows you wish to continue using. The OSF is a free, open-source web application built to help researchers manage their workflows. It is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, to streamline workflows and increase efficiency.
This video will go over three issues that can arise when scientific studies have low statistical power. All materials shown in the video, as well as the content from our other videos, can be found here: https://osf.io/7gqsi/
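As a companion to the video, the sketch below uses statsmodels to show one way low power arises: a modest effect studied with small groups is detected far less often than the conventional 80% target. The effect size and sample sizes are illustrative assumptions, not taken from the video.

```python
# Minimal power-analysis sketch related to the low-power issues covered in
# the video. Numbers are illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sample t-test with n = 20 per group and Cohen's d = 0.4
power = analysis.solve_power(effect_size=0.4, nobs1=20, alpha=0.05)
print(f"Power with n = 20 per group: {power:.2f}")   # well below 0.8

# Sample size per group needed to reach 80% power for the same effect
n_needed = analysis.solve_power(effect_size=0.4, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")  # roughly 100
```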
Background: All clinical research benefits from transparency and validity. The transparency and validity of studies may be increased by prospective registration of protocols and by publication of statistical analysis plans (SAPs) before data have been accessed, to distinguish pre-planned from data-driven analyses. Main message: As for clinical trials, recommendations for SAPs for observational studies increase the transparency and validity of findings. We appraised the applicability of recently developed guidelines for the content of SAPs for clinical trials to SAPs for observational studies. Of the 32 items recommended for a SAP for a clinical trial, 30 (94%) were identically applicable to a SAP for our observational study. Power estimations and adjustments for multiplicity are equally important in observational studies and clinical trials, as both types of studies usually address multiple hypotheses. Only two clinical trial items (6%), concerning randomisation and the definition of adherence to the intervention, did not seem applicable to observational studies. We suggest including one new item, specifically applicable to observational studies, describing how adjustment for possible confounders will be handled in the analyses. Conclusion: With only a few amendments, the guidelines for the SAP of a clinical trial can be applied to the SAP of an observational study. We suggest that SAPs be equally required for observational studies and clinical trials to increase their transparency and validity.
A number of publishers and funders, including PLOS, have recently adopted policies requiring researchers to share the data underlying their results and publications. Such policies help increase the reproducibility of the published literature, as well as make a larger body of data available for reuse and re-analysis. In this study, we evaluate the extent to which authors have complied with this policy by analyzing Data Availability Statements from 47,593 papers published in PLOS ONE between March 2014 (when the policy went into effect) and May 2016. Our analysis shows that compliance with the policy has increased, with a significant decline over time in papers that did not include a Data Availability Statement. However, only about 20% of statements indicate that data are deposited in a repository, which the PLOS policy states is the preferred method. More commonly, authors state that their data are in the paper itself or in the supplemental information, though it is unclear whether these data meet the level of sharing required in the PLOS policy. These findings suggest that additional review of Data Availability Statements or more stringent policies may be needed to increase data sharing.
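To make the classification task concrete, here is a deliberately simplified, keyword-based sketch of sorting Data Availability Statements into broad categories. The categories and keywords are rough assumptions for illustration and are not the coding scheme used in the study.

```python
# Simplified sketch of classifying Data Availability Statements by keyword;
# the categories and keywords are illustrative assumptions, not the study's
# actual coding scheme.
def classify_statement(statement: str) -> str:
    s = statement.lower()
    if any(k in s for k in ("repository", "dryad", "figshare", "osf.io", "zenodo")):
        return "deposited in a repository"      # PLOS's preferred method
    if "supporting information" in s or "within the paper" in s:
        return "in the paper or supplement"
    if "upon request" in s:
        return "available on request"
    return "other / unclear"

examples = [
    "All data files are available from the Dryad database.",
    "All relevant data are within the paper and its Supporting Information files.",
    "Data are available from the corresponding author upon request.",
]
for ex in examples:
    print(f"{classify_statement(ex):30s} <- {ex}")
```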
In this deep dive session, we discuss the current model of scholarly publishing, and highlight the challenges and limitations of this model of research dissemination. We then focus on the value of open access and elaborate on different open access levels (Gold, Bronze, and Green), before discussing how preprints/postprints may be leveraged to promote open access.
No restrictions on your remixing, redistributing, or making derivative works. Give credit to the author, as required.
Your remixing, redistributing, or making derivative works comes with some restrictions, including how it is shared.
Your redistributing comes with some restrictions. Do not remix or make derivative works.
Most restrictive license type. Prohibits most uses, sharing, and any changes.
Copyrighted materials, available under Fair Use and the TEACH Act for US-based educators, or other custom arrangements. Go to the resource provider to see their individual restrictions.