Reporting along the research lifecycle. Registrations, preprints, publication models, copyright, peer review, metrics, open access publishing, and more.
Note: This webinar was presented in Spanish. The slides presented during this webinar can be found here: https://osf.io/6qnse/ This webinar will focus on the state of open science in Latin America, from the efforts of individual researchers to open their workflows, to tools that help researchers work openly, to promising new networks and initiatives in open science. Ricardo Hartley (@ametodico) is a professor of research methodology at the Universidad Central de Chile and a researcher in reproductive biology and in the communication and valuation of knowledge. He organized OpenCon Santiago 2016 and 2017 and is a COS ambassador. Erin McKiernan is a professor in the Department of Physics, Biomedical Physics Program, at the Universidad Nacional Autónoma de México. She is also the founder of the Why Open Research? project, an educational site where researchers learn how to share their work, funded in part by the Shuttleworth Foundation. Fernan Federici Noe is an assistant professor and researcher at the Universidad Católica de Chile and an international fellow of the OpenPlant Synthetic Biology Center, University of Cambridge. Fernan is a member of the Gathering for Open Science Hardware (GOSH) and TECNOx (www.tecnox.org).
In this webinar, we demonstrate the OSF tools available for contributors, labs, centers, and institutions that support stronger collaborations. The demo covers useful practices such as managing contributors, using the OSF wiki as an electronic lab notebook, managing online courses and syllabi with the OSF, and more. Finally, we look at how OSF Institutions can provide discovery and intelligence-gathering infrastructure so that you can focus on conducting and supporting exceptional research. The Center for Open Science’s ongoing mission is to provide community and technical resources to support your commitments to rigorous, transparent research practices. Visit cos.io/institutions to learn more.
Recent literature hints that outcomes of clinical trials in medicine are selectively reported. If applicable to psychotic disorders, such bias would jeopardize the reliability of randomized clinical trials (RCTs) investigating antipsychotics and thus their extrapolation to clinical practice. We therefore comprehensively examined outcome reporting bias through a systematic review of prespecified outcomes in ClinicalTrials.gov records of RCTs investigating antipsychotic drugs in schizophrenia and schizoaffective disorder between 1 January 2006 and 31 December 2013. These outcomes were compared with the outcomes published in scientific journals. Our primary outcome measure was concordance between prespecified and published outcomes; secondary outcome measures included outcome modifications on ClinicalTrials.gov after trial inception and the effects of funding source and directionality of results on record adherence. Of the 48 RCTs, 85% did not fully adhere to the prespecified outcomes. Discrepancies between prespecified and published primary outcomes were found in 23% of RCTs, whereas 81% of RCTs had at least one secondary outcome non-reported, newly introduced, or changed to a primary outcome in the respective publication. In total, 14% of primary and 44% of secondary prespecified outcomes were modified after trial initiation. Neither funding source (P=0.60) nor directionality of the RCT results (P=0.10) affected ClinicalTrials.gov record adherence. Finally, the number of published safety endpoints (N=335) exceeded the number of prespecified safety outcomes 5.5-fold. We conclude that RCTs investigating antipsychotic drugs suffer from substantial outcome reporting bias, and we offer suggestions to both monitor and limit such bias in the future.
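As an illustration of the record-versus-publication comparison described above, here is a minimal sketch in Python; the single-trial framing and all outcome names are invented for illustration, not drawn from the study:

# Minimal sketch: checking one hypothetical trial's published outcomes
# against its prespecified ClinicalTrials.gov record. Outcome names invented.
prespecified = {"PANSS total score change", "CGI-S change", "weight gain"}
published = {"PANSS total score change", "response rate", "weight gain"}

non_reported = prespecified - published      # prespecified, never published
newly_introduced = published - prespecified  # published, never prespecified
fully_adherent = prespecified == published

print("Non-reported outcomes:", non_reported)
print("Newly introduced outcomes:", newly_introduced)
print("Fully adherent record:", fully_adherent)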
Journals are exploring new approaches to peer review in order to reduce bias, increase transparency and respond to author preferences. Funders are also getting involved. If you start reading about the subject of peer review, it won't be long before you encounter articles with titles like 'Can we trust peer review?', 'Is peer review just a crapshoot?' and 'It's time to overhaul the secretive peer review process'. Read some more and you will learn that despite its many shortcomings – it is slow, it is biased, and it lets flawed papers get published while rejecting work that goes on to win Nobel Prizes – the practice of having your work reviewed by your peers before it is published is still regarded as the 'gold standard' of scientific research. Carry on reading and you will discover that peer review as currently practiced is a relatively new phenomenon and that, ironically, there have been remarkably few peer-reviewed studies of peer review.
Objective To investigate the replication validity of biomedical association studies covered by newspapers. Methods We used a database of 4723 primary studies included in 306 meta-analysis articles. These studies associated a risk factor with a disease in three biomedical domains: psychiatry, neurology, and four somatic diseases. They were classified into a lifestyle category (e.g. smoking) and a non-lifestyle category (e.g. genetic risk). Using the Dow Jones Factiva database, we investigated the newspaper coverage of each study. Replication validity was assessed by comparison with the corresponding meta-analyses. Results Among the 5029 articles in our database, 156 primary studies (of which 63 were lifestyle studies) and 5 meta-analysis articles were reported in 1561 newspaper articles. The percentage of covered studies and the number of newspaper articles per study increased strongly with the impact factor of the journal that published each scientific study. Newspapers covered initial (5/39, 12.8%) and subsequent (58/600, 9.7%) lifestyle studies almost equally. In contrast, initial non-lifestyle studies were covered more often (48/366, 13.1%) than subsequent ones (45/3718, 1.2%). Newspapers never covered initial studies reporting null findings and rarely reported subsequent null observations. Only 48.7% of the 156 studies reported by newspapers were confirmed by the corresponding meta-analyses. Initial non-lifestyle studies were confirmed less often (16/48) than subsequent ones (29/45) or than lifestyle studies (31/63). Psychiatric studies covered by newspapers were confirmed less often (10/38) than neurological (26/41) or somatic (40/77) ones, which relates to an even larger coverage of initial studies in psychiatry. Whereas 234 newspaper articles covered the 35 initial studies that were later disconfirmed, only four press articles covered a subsequent null finding and mentioned the refutation of an initial claim. Conclusion Journalists preferentially cover initial findings, although these are often contradicted by meta-analyses, and rarely inform the public when they are disconfirmed.
In this webinar, Professor Brian Nosek, Executive Director of the Center for Open Science (https://cos.io), outlines the practice of preregistration and how it can aid in increasing the rigor and reproducibility of research. The webinar is co-hosted by the Health Research Alliance, a collaborative member organization of nonprofit research funders. Slides available at: https://osf.io/9m6tx/
In recent years, open science practices have become increasingly popular in psychology and related sciences. These practices aim to increase rigour and transparency in science as a potential response to the challenges posed by the replication crisis. Many of these reforms, including the highly influential preregistration, have been designed for experimental work that tests simple hypotheses with standard statistical analyses, such as assessing whether an experimental manipulation has an effect on a variable of interest. However, psychology is a diverse field of research, and the somewhat narrow focus of the prevalent discussions of, and templates for, preregistration has led to debates about how appropriate these reforms are for areas of research with more diverse hypotheses and more complex methods of analysis, such as cognitive modelling research within mathematical psychology. Our article attempts to bridge the gap between open science and mathematical psychology, focusing on the type of cognitive modelling that Crüwell, Stefan, & Evans (2019) labelled model application, where researchers apply a cognitive model as a measurement tool to test hypotheses about the parameters of the cognitive model. Specifically, we (1) discuss several potential researcher degrees of freedom within model application, (2) provide the first preregistration template for model application, and (3) provide an example of a preregistered model application using our preregistration template. More broadly, we hope that our discussions and proposals constructively advance the debate surrounding preregistration in cognitive modelling and provide a guide for how preregistration templates may be developed in other diverse or complex research contexts.
Background There is increasing interest in making primary data from published research publicly available. We aimed to assess the current status of making research data available in highly cited journals across the scientific literature. Methods and Results We reviewed the first 10 original research papers of 2009 published in the 50 original research journals with the highest impact factor. For each journal we documented the policies related to public availability and sharing of data. Of the 50 journals, 44 (88%) had a statement in their instructions to authors related to public availability and sharing of data. However, journal requirements varied widely, ranging from requiring the sharing of all primary data related to the research to merely including a statement in the published manuscript that data can be made available on request. Of the 500 assessed papers, 149 (30%) were not subject to any data availability policy. Of the remaining 351 papers that were covered by some data availability policy, 208 (59%) did not fully adhere to the data availability instructions of the journals they were published in, most commonly (73%) by not publicly depositing microarray data. The other 143 papers adhered to the data availability instructions by publicly depositing only the specific data type required, making a statement of willingness to share, or actually sharing all the primary data. Overall, only 47 papers (9%) deposited full primary raw data online. None of the 149 papers not subject to data availability policies made their full primary data publicly available. Conclusion A substantial proportion of original research papers published in high-impact journals are either not subject to any data availability policy or do not adhere to the data availability instructions of their journals. This empirical evaluation highlights opportunities for improvement.
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigated whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes from all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
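A minimal sketch of the core analysis in Python, assuming a hypothetical CSV of coded papers with columns effect_size, sample_size, and p_value (the file name and column names are invented for illustration):

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file: one row per coded paper.
df = pd.read_csv("coded_papers.csv")

# Independence would predict r near 0; publication bias predicts r < 0,
# because small samples only reach significance with large effects.
r, p = pearsonr(df["effect_size"], df["sample_size"])
print(f"r = {r:.2f} (p = {p:.3g})")

# A pile-up of p values just below .05 is another signature of
# selective reporting.
just_below = ((df["p_value"] > 0.04) & (df["p_value"] <= 0.05)).mean()
print(f"Share of p values in (.04, .05]: {just_below:.1%}")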
P values represent a widely used but pervasively misunderstood and fiercely contested method of scientific inference. Display items, such as figures and tables, often contain the main results and are an important source of P values. We conducted a survey comparing the overall use of P values and the occurrence of significant P values in display items of a sample of articles in the three top multidisciplinary journals (Nature, Science, PNAS) in 2017 and in 1997. We also examined the reporting of multiplicity corrections and its potential influence on the proportion of statistically significant P values. Our findings demonstrate substantial and growing reliance on P values in display items, with increases of 2.5 to 14.5 times in 2017 compared with 1997. The overwhelming majority of P values (94%, 95% confidence interval [CI] 92% to 96%) were statistically significant. Methods to adjust for multiplicity were almost non-existent in 1997 but were reported in many articles relying on P values in 2017 (Nature 68%, Science 48%, PNAS 38%). In their absence, almost all reported P values were statistically significant (98%, 95% CI 96% to 99%). Conversely, when any multiplicity corrections were described, 88% (95% CI 82% to 93%) of reported P values were statistically significant. Use of Bayesian methods was scant (2.5%), and articles rarely (0.7%) relied exclusively on Bayesian statistics. Overall, wider appreciation of the need for multiplicity corrections is a welcome evolution, but the rapid growth of reliance on P values and the implausibly high rates of reported statistical significance are worrisome.
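To illustrate what a multiplicity correction does, here is a minimal sketch using the Holm method from statsmodels; the P values are invented for illustration:

from statsmodels.stats.multitest import multipletests

# Invented P values, e.g. from one article's display items.
p_values = [0.001, 0.012, 0.033, 0.048, 0.049]

# Holm's step-down procedure controls the family-wise error rate.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for p, p_adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:.3f} -> adjusted p = {p_adj:.3f}, significant: {sig}")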
This webinar addresses questions related to writing, reviewing, editing, or funding a study using the Registered Report format, featuring Chris Chambers and ...
The recent ‘replication crisis’ in psychology has focused attention on ways of increasing methodological rigor within the behavioral sciences. Part of this work has involved promoting ‘Registered Reports’, wherein journals peer review papers prior to data collection and publication. Although this approach is usually seen as a relatively recent development, we note that a prototype of this publishing model was initiated in the mid-1970s by parapsychologist Martin Johnson in the European Journal of Parapsychology (EJP). A retrospective and observational comparison of Registered and non-Registered Reports published in the EJP during a seventeen-year period provides circumstantial evidence to suggest that the approach helped to reduce questionable research practices. This paper aims both to bring Johnson’s pioneering work to a wider audience, and to investigate the positive role that Registered Reports may play in helping to promote higher methodological and statistical standards.
Preprints in biology are becoming more popular, but only a small fraction of the articles published in peer-reviewed journals have previously been released as preprints. To examine whether releasing a preprint on bioRxiv was associated with the attention and citations received by the corresponding peer-reviewed article, we assembled a dataset of 74,239 articles, 5,405 of which had a preprint, published in 39 journals. Using log-linear regression and random-effects meta-analysis, we found that articles with a preprint had, on average, a 49% higher Altmetric Attention Score and 36% more citations than articles without a preprint. These associations were independent of several other article- and author-level variables (such as scientific subfield and number of authors), and were unrelated to journal-level variables such as access model and Impact Factor. This observational study can help researchers and publishers make informed decisions about how to incorporate preprints into their work.
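A minimal sketch of a log-linear regression of this kind in Python, assuming a hypothetical data frame with columns citations, has_preprint (0/1), and n_authors; the file and column names are invented, and the study's actual analysis included further covariates and a random-effects meta-analysis across journals:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per peer-reviewed article.
df = pd.read_csv("articles.csv")

# Log-linear model: the exponentiated coefficient on has_preprint
# approximates the multiplicative citation difference.
model = smf.ols("np.log1p(citations) ~ has_preprint + n_authors", data=df).fit()
advantage = np.exp(model.params["has_preprint"]) - 1
print(f"Estimated citation advantage for preprinted articles: {advantage:.0%}")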
The reliability of experimental findings depends on the rigour of experimental design. Here we show limited reporting of measures to reduce the risk of bias in a random sample of life sciences publications, significantly lower reporting of randomisation in work published in journals of high impact, and very limited reporting of measures to reduce the risk of bias in publications from leading United Kingdom institutions. Ascertainment of differences between institutions might serve both as a measure of research quality and as a tool for institutional efforts to improve research quality.
SPARC is a global coalition committed to making Open the default for research and education. SPARC empowers people to solve big problems and make new discoveries through the adoption of policies and practices that advance Open Access, Open Data, and Open Education.
The Journal of European Psychology Students (JEPS) is an open-access, double-blind, peer-reviewed journal for psychology students worldwide. JEPS is run by highly motivated European psychology students and has been publishing since 2009. By ensuring that authors are always provided with extensive feedback, JEPS gives psychology students the chance to gain experience in publishing and to improve their scientific skills. Furthermore, JEPS provides students with the opportunity to share their research and to take a first step toward a scientific career.
This webinar will introduce the integration of JASP Statistical Software (https://jasp-stats.org/) with the Open Science Framework (OSF; https://osf.io). The OSF is a free, open-source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. It connects to popular tools researchers already use, such as Dropbox, Box, GitHub, and Mendeley, and is now integrated with JASP to streamline workflows and increase efficiency.
Background The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Methodology/Principal Findings In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way from protocol approval to information about publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis because of the differences between studies. Conclusions This update does not change the conclusions of the review, in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
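For readers unfamiliar with odds ratios, here is a minimal sketch of the comparison behind figures like "odds ratios 2.2 to 4.7"; the 2x2 counts are invented for illustration:

# Invented 2x2 table: rows = outcome statistically significant or not,
# columns = fully reported or not.
sig_reported, sig_unreported = 80, 20              # significant outcomes
nonsig_reported, nonsig_unreported = 50, 50        # non-significant outcomes

odds_sig = sig_reported / sig_unreported           # 4.0
odds_nonsig = nonsig_reported / nonsig_unreported  # 1.0
print(f"Odds ratio: {odds_sig / odds_nonsig:.1f}")  # 4.0, within the reported range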
The Transparency and Openness Promotion guidelines include eight modular standards, each with three levels of increasing stringency. Journals select which of the eight transparency standards they wish to implement and select a level of implementation for each. These features provide flexibility for adoption depending on disciplinary variation, but simultaneously establish community standards.
No restrictions on your remixing, redistributing, or making derivative works. Give credit to the author, as required.
Your remixing, redistributing, or making derivative works comes with some restrictions, including how it is shared.
Your redistributing comes with some restrictions. Do not remix or make derivative works.
Most restrictive license type. Prohibits most uses, sharing, and any changes.
Copyrighted materials, available under Fair Use and the TEACH Act for US-based educators, or other custom arrangements. Go to the resource provider to see their individual restrictions.