
Search Resources

215 Results

Selected filters:
  • reproducibility
7 Easy Steps to Open Science: An Annotated Reading List
Unrestricted Use
CC BY

The Open Science movement is rapidly changing the scientific landscape. Because exact definitions are often lacking and reforms are constantly evolving, accessible guides to open science are needed. This paper provides an introduction to open science and related reforms in the form of an annotated reading list of seven peer-reviewed articles, following the format of Etz et al. (2018). Written for researchers and students, particularly in psychological science, it highlights and introduces seven topics: understanding open science; open access; open data, materials, and code; reproducible analyses; preregistration and registered reports; replication research; and teaching open science. For each topic, we provide a detailed summary of one particularly informative and actionable article and suggest several further resources. Supporting a broader understanding of open science issues, this overview should enable researchers to engage with, improve, and implement current open, transparent, reproducible, replicable, and cumulative scientific practices.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Alexander Etz
Amy Orben
Hannah Moshontz
Jesse Niebaum
Johnny van Doorn
Matthew Makel
Michael Schulte-Mecklenbeck
Sam Parsons
Sophia Crüwell
Date Added:
08/12/2019
Aging Research and Open Science Supplemental Reading List
Unrestricted Use
CC BY

Open science practices are broadly applicable within the field of aging research. Across study types, these practices can improve the integrity and reproducibility of studies in aging science. Resources on open science practices in aging research can, however, be challenging to discover due to the breadth of aging research and the range of resources available on the subject. By accumulating resources on open science and aging research and compiling them in a centralized location, we hope to facilitate the discoverability and use of these resources among researchers who study aging, and among any other interested parties. Unfortunately, not all resources are openly available. The following list of resources, while not open access, provides valuable perspectives, information, and insight into the open science movement and its place in aging research.

Subject:
Applied Science
Material Type:
Reading
Author:
Olivia Lowrey
Date Added:
08/27/2021
Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology
Unrestricted Use
Public Domain

Ongoing technological developments have made it easier than ever before for scientists to share their data, materials, and analysis code. Sharing data and analysis code makes it easier for other researchers to re-use or check published research. These benefits will only emerge if researchers can reproduce the analysis reported in published articles, and if data is annotated well enough so that it is clear what all variables mean. Because most researchers have not been trained in computational reproducibility, it is important to evaluate current practices to identify practices that can be improved. We examined data and code sharing, as well as computational reproducibility of the main results, without contacting the original authors, for Registered Reports published in the psychological literature between 2014 and 2018. Of the 62 articles that met our inclusion criteria, data was available for 40 articles, and analysis scripts for 37 articles. For the 35 articles that shared both data and code and performed analyses in SPSS, R, Python, MATLAB, or JASP, we could run the scripts for 31 articles, and reproduce the main results for 20 articles. Although the proportions of articles that shared both data and code (35 out of 62, or 56%) and of articles that could be computationally reproduced (20 out of 35, or 57%) were relatively high compared to other studies, there is clear room for improvement. We provide practical recommendations based on our observations, and link to examples of good research practices in the papers we reproduced.
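The core check the study describes, rerunning a shared analysis and seeing whether it yields the reported numbers, can be sketched in a few lines. The function name and rounding tolerance here are hypothetical illustrations, not the authors' actual protocol, which involved manually executing the shared SPSS, R, Python, MATLAB, or JASP scripts:

```python
import math

# Hypothetical sketch of a reproducibility check: re-run the shared analysis,
# then compare the recomputed statistic to the value reported in the article,
# allowing for the rounding applied in the published text.
def reproduces(reported: float, recomputed: float, decimals: int = 2) -> bool:
    """True if the recomputed value matches the reported one after rounding."""
    return math.isclose(round(recomputed, decimals), round(reported, decimals),
                        abs_tol=10 ** -decimals / 2)

# e.g. a paper reports t = 2.31 and re-running the shared script gives 2.3091
print(reproduces(2.31, 2.3091))  # prints True
```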

Subject:
Psychology
Social Science
Material Type:
Reading
Author:
Daniel Lakens
Jaroslav Gottfried
Nicholas Alvaro Coles
Pepijn Obels
Seth Ariel Green
Date Added:
08/07/2020
Análisis y visualización de datos usando Python
Unrestricted Use
CC BY

Python is a general-purpose programming language that is useful for writing scripts to work with data effectively and reproducibly. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in one day (~6 hours). The lessons begin with basic information about Python syntax and the Jupyter Notebook interface, then continue with how to import CSV files, using the Pandas package to work with DataFrames, how to calculate summary information from a DataFrame, and a brief introduction to creating visualizations. The final lesson demonstrates how to work with databases directly from Python. Note: the data have not been translated from the original English version, so variable names remain in English and the numbers in each observation use English-language conventions (comma as thousands separator and period as decimal separator).
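The lesson's central workflow, loading tabular data into a pandas DataFrame and summarizing it by group, might look like the sketch below. The column names, species codes, and values are invented for illustration; the lesson uses its own survey dataset loaded with `pd.read_csv`:

```python
import pandas as pd

# Build a small DataFrame in memory; the lesson loads one from a CSV file
# with pd.read_csv(...) instead. Columns and values here are made up.
df = pd.DataFrame({
    "species": ["DM", "DM", "PF", "PF"],
    "weight":  [40.0, 44.0, 7.0, 9.0],
})

# Per-group summary statistics, as covered in the DataFrame material.
mean_weight = df.groupby("species")["weight"].mean()
print(mean_weight["DM"], mean_weight["PF"])  # 42.0 8.0
```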

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Alejandra Gonzalez-Beltran
April Wright
Christopher Erdmann
Enric Escorsa O'Callaghan
Erin Becker
Fernando Garcia
Hely Salgado
Juan M. Barrios
Juan Martín Barrios
Katrin Leinweber
LUS24
Laura Angelone
Leonardo Ulises Spairani
Maxim Belkin
Miguel González
Nicolás Palopoli
Nohemi Huanca Nunez
Paula Andrea Martinez
Raniere Silva
Rayna Harris
Sarah Brown
Silvana Pereyra
Spencer Harris
Stephan Druskat
Trevor Keller
Wilson Lozano
chekos
monialo2000
rzayas
Date Added:
08/07/2020
Assessing data availability and research reproducibility in hydrology and water resources
Unrestricted Use
CC BY

There is broad interest in improving the reproducibility of published research. We developed a survey tool to assess the availability of digital research artifacts published alongside peer-reviewed journal articles (e.g. data, models, code, directions for use) and reproducibility of article results. We used the tool to assess 360 of the 1,989 articles published by six hydrology and water resources journals in 2017. Like studies from other fields, we reproduced results for only a small fraction of articles (1.6% of tested articles) using their available artifacts. We estimated, with 95% confidence, that results might be reproduced for only 0.6% to 6.8% of all 1,989 articles. Unlike prior studies, the survey tool identified key bottlenecks to making work more reproducible. Bottlenecks include: only some digital artifacts available (44% of articles), no directions (89%), or all artifacts available but results not reproducible (5%). The tool (or extensions) can help authors, journals, funders, and institutions to self-assess manuscripts, provide feedback to improve reproducibility, and recognize and reward reproducible articles as examples for others.
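As a rough illustration of how a confidence interval for a reproducibility rate is estimated from a sample, here is a standard Wilson score interval in stdlib Python. This simple interval does not account for the paper's journal-stratified sampling design, so it will not reproduce the authors' exact 0.6% to 6.8% bounds:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# 1.6% of the 360 tested articles is roughly 6 reproduced articles.
low, high = wilson_interval(6, 360)
print(f"{low:.1%} to {high:.1%}")
```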

Subject:
Applied Science
Hydrology
Information Science
Physical Science
Material Type:
Reading
Provider:
Scientific Data
Author:
Adel M. Abdallah
David E. Rosenberg
Hadia Akbar
James H. Stagge
Nour A. Attallah
Ryan James
Date Added:
08/07/2020
Association between trial registration and treatment effect estimates: a meta-epidemiological study
Unrestricted Use
CC BY

To increase transparency in research, the International Committee of Medical Journal Editors required, in 2005, prospective registration of clinical trials as a condition of publication. However, many trials remain unregistered or retrospectively registered. We aimed to assess the association between prospective trial registration and treatment effect estimates.

Methods: This is a meta-epidemiological study based on all Cochrane reviews published between March 2011 and September 2014 with meta-analyses of a binary outcome including three or more randomised controlled trials published after 2006. We extracted trial general characteristics and results from the Cochrane reviews. For each trial, we searched for registration in the report's full text, contacted the corresponding author if not reported, and searched ClinicalTrials.gov and the International Clinical Trials Registry Platform in case of no response. We classified each trial as prospectively registered (i.e. registered before the start date); retrospectively registered, distinguishing trials registered before and after the primary completion date; and not registered. Treatment effect estimates of prospectively registered and other trials were compared by the ratio of odds ratios (ROR) (ROR < 1 indicates larger effects in trials not prospectively registered).

Results: We identified 67 meta-analyses (322 trials). Overall, 225/322 trials (70%) were registered, 74 (33%) prospectively and 142 (63%) retrospectively; 88 were registered before the primary completion date and 54 after. Unregistered or retrospectively registered trials tended to show larger treatment effect estimates than prospectively registered trials (combined ROR = 0.81, 95% CI 0.65–1.02, based on 32 contributing meta-analyses). Trials unregistered or registered after the primary completion date tended to show larger treatment effect estimates than those registered before this date (combined ROR = 0.84, 95% CI 0.71–1.01, based on 43 contributing meta-analyses).

Conclusions: Lack of prospective trial registration may be associated with larger treatment effect estimates.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
BMC Medicine
Author:
Agnès Dechartres
Carolina Riveros
Ignacio Atal
Isabelle Boutron
Philippe Ravaud
Date Added:
08/07/2020
Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor
Unrestricted Use
CC BY

Accumulating evidence indicates high risk of bias in preclinical animal research, questioning the scientific validity and reproducibility of published research findings. Systematic reviews found low rates of reporting of measures against risks of bias in the published literature (e.g., randomization, blinding, sample size calculation) and a correlation between low reporting rates and inflated treatment effects. That most animal research undergoes peer review or ethical review would offer the possibility to detect risks of bias at an earlier stage, before the research has been conducted. For example, in Switzerland, animal experiments are licensed based on a detailed description of the study protocol and a harm–benefit analysis. We therefore screened applications for animal experiments submitted to Swiss authorities (n = 1,277) for the rates at which the use of seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described and compared them with the reporting rates of the same measures in a representative sub-sample of publications (n = 50) resulting from studies described in these applications. Measures against bias were described at very low rates, ranging on average from 2.4% for statistical analysis plan to 19% for primary outcome variable in applications for animal experiments, and from 0.0% for sample size calculation to 34% for statistical analysis plan in publications from these experiments. Calculating an internal validity score (IVS) based on the proportion of the seven measures against bias, we found a weak positive correlation between the IVS of applications and that of publications (Spearman’s rho = 0.34, p = 0.014), indicating that the rates of description of these measures in applications partly predict their rates of reporting in publications. 
These results indicate that the authorities licensing animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm–benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Christina Nathues
Hanno Würbel
Lucile Vogt
Thomas S. Reichlin
Date Added:
08/07/2020
Automation and Make
Unrestricted Use
CC BY

A Software Carpentry lesson on how to use Make. Make is a tool which can run commands to read files, process these files in some way, and write out the processed files. For example, in software development, Make is used to compile source code into executable programs or libraries, but Make can also be used to: run analysis scripts on raw data files to get data files that summarize the raw data; run visualization scripts on data files to produce plots; and parse and combine text files and plots to create papers. Make is called a build tool: it builds data files, plots, papers, programs, or libraries. It can also update existing files if desired. Make tracks the dependencies between the files it creates and the files used to create these. If one of the original files (e.g. a data file) is changed, then Make knows to recreate, or update, the files that depend upon it (e.g. a plot). There are now many build tools available, all of which are based on the same concepts as Make.
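Make's core decision rule, rebuild a target when it is missing or older than any of its dependencies, can be sketched in a few lines of Python. This is a conceptual illustration of the idea, not how Make itself is implemented, and the file names in the usage comment are invented:

```python
import os

def needs_rebuild(target: str, dependencies: list[str]) -> bool:
    """Make's core rule: rebuild if the target is missing, or if any
    dependency was modified more recently than the target."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in dependencies)

# e.g. regenerate plot.png only when the data or the script has changed:
# if needs_rebuild("plot.png", ["data.csv", "plot.py"]):
#     run_plot_script()  # hypothetical build step
```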

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Adam Richie-Halford
Ana Costa Conrado
Andrew Boughton
Andrew Fraser
Andy Kleinhesselink
Andy Teucher
Anna Krystalli
Bill Mills
Brandon Curtis
David E. Bernholdt
Deborah Gertrude Digges
François Michonneau
Gerard Capes
Greg Wilson
Jake Lever
Jason Sherman
John Blischak
Jonah Duckles
Juan F Fung
Kate Hertweck
Lex Nederbragt
Luiz Irber
Matthew Thomas
Michael Culshaw-Maurer
Mike Jackson
Pete Bachant
Piotr Banaszkiewicz
Radovan Bast
Raniere Silva
Rémi Emonet
Samuel Lelièvre
Satya Mishra
Trevor Bekolay
Date Added:
03/20/2017
Awesome Open Science Resources
Unrestricted Use
CC BY

Scientific data and tools should, as much as possible, be free as in beer and free as in freedom. The vast majority of science today is paid for by taxpayer-funded grants; at the same time, the incredible successes of science are strong evidence for the benefit of collaboration in knowledgeable pursuits. Within the scientific academy, sharing of expertise, data, tools, etc. is prolific, but only recently with the rise of the Open Access movement has this sharing come to embrace the public. Even though most research data is never shared, both the public and even scientists in their own fields are often unaware of just how much data, tools, and other resources are made freely available for analysis! This list is a small attempt at bringing light to data repositories and computational science tools that are often siloed according to each scientific discipline, in the hopes of spurring along both public and professional contributions to science.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Austin Soplata
Date Added:
09/23/2018
A Bayesian Perspective on the Reproducibility Project: Psychology
Unrestricted Use
CC BY

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
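The Bayes factor the authors compute has a standard definition, not specific to this paper: it is the ratio of how well each hypothesis predicts the observed data,

```latex
\mathrm{BF}_{10} = \frac{p(D \mid H_1)}{p(D \mid H_0)}
```

By the usual convention, values above 10 are taken as strong evidence for the alternative (and below 1/10 as strong evidence for the null), which matches the "Bayes factor < 10" weak-evidence threshold used in the abstract.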

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
Alexander Etz
Joachim Vandekerckhove
Date Added:
08/07/2020
Being a Reviewer or Editor for Registered Reports
Unrestricted Use
CC BY

Experienced Registered Reports editors and reviewers come together to discuss the format and best practices for handling submissions. The panelists also share insights into what editors are looking for from reviewers as well as practical guidelines for writing a Registered Report. ABOUT THE PANELISTS: Chris Chambers | Chris is a professor of cognitive neuroscience at Cardiff University, Chair of the Registered Reports Committee supported by the Center for Open Science, and one of the founders of Registered Reports. He has helped establish the Registered Reports format for over a dozen journals. Anastasia Kiyonaga | Anastasia is a cognitive neuroscientist who uses converging behavioral, brain stimulation, and neuroimaging methods to probe memory and attention processes. She is currently a postdoctoral researcher with Mark D'Esposito in the Helen Wills Neuroscience Institute at the University of California, Berkeley. Before coming to Berkeley, she received her Ph.D. with Tobias Egner in the Duke Center for Cognitive Neuroscience. She will be an Assistant Professor in the Department of Cognitive Science at UC San Diego starting January, 2020. Jason Scimeca | Jason is a cognitive neuroscientist at UC Berkeley. His research investigates the neural systems that support high-level cognitive processes such as executive function, working memory, and the flexible control of behavior. He completed his Ph.D. at Brown University with David Badre and is currently a postdoctoral researcher in Mark D'Esposito's Cognitive Neuroscience Lab. Moderated by David Mellor, Director of Policy Initiatives for the Center for Open Science.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Best-practice evaluation and guidance for human metagenomic studies
Unrestricted Use
CC BY

This resource is a video abstract of a research paper created by Research Square on behalf of its authors. It provides a synopsis that's easy to understand, and can be used to introduce the topics it covers to students, researchers, and the general public. The video's transcript is also provided in full, with a portion provided below for preview:

"Metagenomic analysis frequently plays an important role in development pipelines for human fecal microbiome-related products, but validation and standardization of the methods used to extract DNA and assemble sequence libraries for these studies is currently lacking. To close this gap, researchers recently characterized existing protocols for accuracy and precision. First, they tested the quantification accuracy by using a defined mock community of bacteria. Then, the protocols that performed as expected were evaluated for both within- and inter-laboratory precision metrics. The protocols were also tested against the MOSAIC Standards Challenge samples. Lastly, they defined performance metrics for the recommended protocols to provide best-practice guidance. The uptake of the recommendations generated here should improve reproducibility in human metagenomic research and therefore facilitate development and commercialization of human microbiome-related products..."

The rest of the transcript, along with a link to the research itself, is available on the resource itself.

Subject:
Biology
Life Science
Material Type:
Diagram/Illustration
Reading
Provider:
Research Square
Provider Set:
Video Bytes
Date Added:
10/14/2021
Building a collaborative Psychological Science: Lessons learned from ManyBabies 1
Only Sharing Permitted
CC BY-NC-ND

The field of infancy research faces a difficult challenge: some questions require samples that are simply too large for any one lab to recruit and test. ManyBabies aims to address this problem by forming large-scale collaborations on key theoretical questions in developmental science, while promoting the uptake of Open Science practices. Here, we look back on the first project completed under the ManyBabies umbrella – ManyBabies 1 – which tested the development of infant-directed speech preference. Our goal is to share the lessons learned over the course of the project and to articulate our vision for the role of large-scale collaborations in the field. First, we consider the decisions made in scaling up experimental research for a collaboration involving 100+ researchers and 70+ labs. Next, we discuss successes and challenges over the course of the project, including: protocol design and implementation, data analysis, organizational structures and collaborative workflows, securing funding, and encouraging broad participation in the project. Finally, we discuss the benefits we see both in ongoing ManyBabies projects and in future large-scale collaborations in general, with a particular eye towards developing best practices and increasing growth and diversity in infancy research and psychological science in general. Throughout the paper, we include first-hand narrative experiences, in order to illustrate the perspectives of researchers playing different roles within the project. While this project focused on the unique challenges of infant research, many of the insights we gained can be applied to large-scale collaborations across the broader field of psychology.

Subject:
Social Science
Material Type:
Reading
Author:
Casey Lew-Williams
Catherine Davies
Christina Bergmann
Connor P. G. Waddell
J. Kiley Hamlin
Jessica E. Kosie
Jonathan F. Kominsky
Leher Singh
Liquan Liu
Martin Zettersten
Meghan Mastroberardino
Melanie Soderstrom
Melissa Kline
Michael C. Frank
Krista Byers-Heinlein
Date Added:
11/13/2020
COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time
Unrestricted Use
CC BY

Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.

Methods: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.

Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.

Conclusions: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals' willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT's mechanisms for enforcement, and novel strategies for research on methods and reporting.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Trials
Author:
Aaron Dale
Anna Powell-Smith
Ben Goldacre
Carl Heneghan
Cicely Marston
Eirion Slade
Henry Drysdale
Ioan Milosevic
Kamal R. Mahtani
Philip Hartley
Date Added:
08/07/2020
COS Registered Reports Portal
Unrestricted Use
CC BY

Registered Reports: Peer review before results are known to align scientific values and practices.

Registered Reports is a publishing format used by over 250 journals that emphasizes the importance of the research question and the quality of methodology by conducting peer review prior to data collection. High quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology.

This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings.

This page includes information on Registered Reports including readings on Registered Reports, Participating Journals, Details & Workflow, Resources for Editors, Resources For Funders, FAQs, and Allied Initiatives.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Student Guide
Provider:
Center for Open Science
Author:
Center for Open Science
David Mellor
Date Added:
08/07/2020
Carpentries Instructor Training
Unrestricted Use
CC BY

A two-day introduction to modern evidence-based teaching practices, built and maintained by the Carpentry community.

Subject:
Applied Science
Computer Science
Education
Higher Education
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Aleksandra Nenadic
Alexander Konovalov
Alistair John Walsh
Allison Weber
Amy E. Hodge
Andrew B. Collier
Anita Schürch
AnnaWilliford
Ariel Rokem
Brian Ballsun-Stanton
Callin Switzer
Christian Brueffer
Christina Koch
Christopher Erdmann
Colin Morris
Dan Allan
DanielBrett
Danielle Quinn
Darya Vanichkina
David Jennings
Eric Jankowski
Erin Alison Becker
Evan Peter Williamson
François Michonneau
Gerard Capes
Greg Wilson
Ian Lee
Jason M Gates
Jason Williams
Jeffrey Oliver
Joe Atzberger
John Bradley
John Pellman
Jonah Duckles
Jonathan Bradley
Karen Cranston
Karen Word
Kari L Jordan
Katherine Koziar
Katrin Leinweber
Kees den Heijer
Laurence
Lex Nederbragt
Maneesha Sane
Marie-Helene Burle
Mik Black
Mike Henry
Murray Cadzow
Neal Davis
Neil Kindlon
Nicholas Tierney
Nicolás Palopoli
Noah Spies
Paula Andrea Martinez
Petraea
Rayna Michelle Harris
Rémi Emonet
Rémi Rampin
Sarah Brown
Sarah M Brown
Sarah Stevens
Sean
Serah Anne Njambi Kiburu
Stefan Helfrich
Steve Moss
Stéphane Guillou
Ted Laderas
Tiago M. D. Pereira
Toby Hodges
Tracy Teal
Yo Yehudi
amoskane
davidbenncsiro
naught101
satya-vinay
Date Added:
08/07/2020
Connecting Research Tools to the Open Science Framework (OSF)
Unrestricted Use
CC BY

This webinar (recorded Sept. 27, 2017) introduces how to connect other services as add-ons to projects on the Open Science Framework (OSF; https://osf.io). Connecting services to your OSF projects via add-ons enables you to pull together the different parts of your research efforts without having to switch away from tools and workflows you wish to continue using. The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, GitHub and Mendeley, to streamline workflows and increase efficiency.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
Consequences of Low Statistical Power
Unrestricted Use
CC BY

This video will go over three issues that can arise when scientific studies have low statistical power. All materials shown in the video, as well as the content from our other videos, can be found here: https://osf.io/7gqsi/
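One well-documented consequence of low statistical power, plausibly among the issues a video like this covers (the linked materials are the authority), is that the studies which do reach significance systematically overestimate the true effect, sometimes called the winner's curse. A stdlib-only simulation with invented numbers illustrates it:

```python
import math
import random
import statistics

random.seed(1)

# Simulate many small two-group studies of a modest true effect (values are
# invented for illustration). With n = 10 per group and a true effect of
# 0.3 SD, power is low; |t| > 2.1 approximates p < .05 at 18 df.
TRUE_EFFECT, N, SIMULATIONS = 0.3, 10, 5000
significant = []
for _ in range(SIMULATIONS):
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / N
                   + statistics.variance(treated) / N)
    if abs(diff / se) > 2.1:
        significant.append(diff)

# Few studies reach significance, and those that do overestimate the
# true effect of 0.3 on average.
print(len(significant), statistics.mean(significant))
```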

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020