This Library Carpentry lesson introduces people with library- and information-related roles to working with data using regular expressions. The lesson provides background on the regular expression language and how it can be used to match and extract text and to clean data.
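As a quick taste of the matching and extraction the lesson covers, here is a minimal sketch using Python's standard re module; the sample records and the pattern are invented for illustration.

```python
import re

# Sample bibliographic records (invented for illustration).
records = [
    "Smith, J. (2019) Data cleaning basics.",
    "Doe, A. (2021) Regular expressions in practice.",
]

# Pattern: a four-digit year enclosed in parentheses.
year_pattern = re.compile(r"\((\d{4})\)")

for record in records:
    match = year_pattern.search(record)
    if match:
        print(match.group(1))  # prints 2019, then 2021
```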
Library Carpentry lesson: an introduction to OpenRefine for Librarians. This Library Carpentry lesson introduces people working in library- and information-related roles to working with data in OpenRefine. At the conclusion of the lesson you will understand what OpenRefine does and how to use it to work with data files.
Library Carpentry, an introduction to SQL for Librarians. This Library Carpentry lesson introduces librarians to relational database management systems using SQLite. At the conclusion of the lesson you will: understand what SQLite does; use SQLite to summarise and link data.
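The lesson itself works directly in SQLite; as a rough sketch of the kind of summarising and linking it teaches, here is a small example using Python's built-in sqlite3 module. The table names and rows are invented for illustration.

```python
import sqlite3

# In-memory database; schema and sample rows are invented for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE journals (id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE articles (journal_id INTEGER, citations INTEGER)")
cur.executemany("INSERT INTO journals VALUES (?, ?)",
                [(1, "Journal A"), (2, "Journal B")])
cur.executemany("INSERT INTO articles VALUES (?, ?)",
                [(1, 10), (1, 4), (2, 7)])

# Link the two tables with a JOIN and summarise with GROUP BY.
for row in cur.execute("""
    SELECT j.title, COUNT(*) AS n_articles, SUM(a.citations) AS total_citations
    FROM articles a JOIN journals j ON a.journal_id = j.id
    GROUP BY j.title
"""):
    print(row)  # ('Journal A', 2, 14), ('Journal B', 1, 7)

conn.close()
```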
Library Carpentry lesson to learn how to use the shell. This Library Carpentry lesson introduces librarians to the Unix shell. At the conclusion of the lesson you will: understand the basics of the Unix shell; understand why and how to use the command line; use shell commands to work with directories and files; use shell commands to find and manipulate data.
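The lesson teaches the shell commands themselves (mkdir, ls, find, grep); purely as an illustration of the same directory-and-file tasks, here is a Python sketch with the equivalent shell command noted in each comment. The directory name and search term are placeholders.

```python
from pathlib import Path

data_dir = Path("data")            # placeholder directory name
data_dir.mkdir(exist_ok=True)      # shell: mkdir data

# List the directory's contents (shell: ls data).
for entry in data_dir.iterdir():
    print(entry.name)

# Find all CSV files, recursively (shell: find data -name '*.csv').
csv_files = list(data_dir.rglob("*.csv"))

# Print lines containing a term (shell: grep 2021 data/*.csv).
for path in csv_files:
    for line in path.read_text().splitlines():
        if "2021" in line:
            print(path, line)
```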
Join us for a 30-minute guest webinar by Brandon Butler, Director of Information Policy at the University of Virginia. This webinar will introduce questions to think about when picking a license for your research. You can signal which license you pick using the License Picker on the Open Science Framework (OSF; https://osf.io). The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, and is now integrated with JASP, to streamline workflows and increase efficiency.
This recorded webinar features insights from international panelists currently nurturing culture change in research among their local communities. Representat...
Addressing issues with the reproducibility of results is critical for scientific progress, but conflicting ideas about the sources of and solutions to irreproducibility are a barrier to change. Prior work has attempted to address this problem by creating analytical definitions of reproducibility. We take a novel empirical, mixed methods approach to understanding variation in reproducibility conversations, which yields a map of the discursive dimensions of these conversations. This analysis demonstrates that concerns about the incentive structure of science, the transparency of methods and data, and the need to reform academic publishing form the core of reproducibility discussions. We also identify three clusters of discussion that are distinct from the main group: one focused on reagents, another on statistical methods, and a final cluster focused on the heterogeneity of the natural world. Although there are discursive differences between scientific and popular articles, there are no strong differences in how scientists and journalists write about the reproducibility crisis. Our findings show that conversations about reproducibility have a clear underlying structure, despite the broad scope and scale of the crisis. Our map demonstrates the value of using qualitative methods to identify the bounds and features of reproducibility discourse, and identifies distinct vocabularies and constituencies that reformers should engage with to promote change.
Headlines and scholarly publications portray a crisis in biomedical and health sciences. In this webinar, you will learn what the crisis is and the vital role of librarians in addressing it. You will see how you can directly and immediately support reproducible and rigorous research using your expertise and your library services. You will explore reproducibility guidelines and recommendations and develop an action plan for engaging researchers and stakeholders at your institution. #MLAReproducibility Learning outcomes: by the end of this webinar, participants will be able to describe the basic history of the “reproducibility crisis” and define reproducibility and replicability; explain why librarians have a key role in addressing concerns about reproducibility, specifically in terms of the packaging of science; explain 3-4 areas where librarians can immediately and directly support reproducible research through existing expertise and services; and start developing an action plan to engage researchers and stakeholders at their institution about how they will help address research reproducibility and rigor. Audience: librarians who work with researchers; librarians who teach, conduct, or assist with evidence synthesis or critical appraisal; and managers and directors who are interested in allocating resources toward supporting research rigor. No prior knowledge or skills required. Basic knowledge of scholarly research and publishing is helpful.
Expectations by funders for transparent and reproducible methods are on the rise. This session covers the preregistration, data-sharing, and open-access expectations of three key funders of education research: the Institute of Education Sciences, the National Science Foundation, and Arnold Ventures. Presenters cover practical resources for meeting these requirements, such as the Registry for Efficacy and Effectiveness Studies (REES), the Open Science Framework (OSF), and EdArXiv. Presenters: Jessaca Spybrook, Western Michigan University; Bryan Cook, University of Virginia; David Mellor, Center for Open Science
Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk that small, early, and highly cited studies overestimate effects and that studies not published in peer-reviewed journals underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides routinely treating published small, highly cited, and early studies with caution because they may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis.
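The paper's own analysis is far more elaborate, but the small-study pattern it reports can be illustrated with a toy simulation: regressing published effect sizes on their standard errors (an Egger-style check) picks up the inflation that appears when statistical significance filters which small studies get published. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 studies of a true effect of 0.2; small studies have large
# standard errors. A crude significance filter keeps only "significant"
# results, which inflates the effects that survive from small studies.
true_effect = 0.2
se = rng.uniform(0.05, 0.5, size=200)
effects = rng.normal(true_effect, se)
published = effects / se > 1.96  # crude publication filter

# Egger-style check: regress published effect sizes on their standard
# errors; a positive slope signals small-study (publication) bias.
slope, intercept = np.polyfit(se[published], effects[published], 1)
print(f"slope = {slope:.2f} (about 0 expected without bias)")
```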
In his talk, Professor Nosek defines replication as gathering evidence that tests an empirical claim made in an original paper. This intent influences the design and interpretation of a replication study and addresses confusion between conceptual and direct replications. --- Are you a funder interested in supporting research on the scientific process? Learn more about the communities mobilizing around the emerging field of metascience by visiting metascience.com. Funders are encouraged to review and adopt the practices overviewed at cos.io/top-funders as part of the solution to issues discussed during the Funders Forum.
This essay introduces a new analytical category of scientific actors: the methodologists. These actors are distinguished by their tendency to continue probing scientific objects that their peers consider to be settled. The methodologists are a useful category of actors for science and technology studies (STS) scholars to follow because they reveal contingencies and uncertainties in taken-for-granted science. Identifying methodologists is useful for STS analysts seeking a way into science in moments when it is no longer “in the making” or there is little active controversy. Studying methodologists is also useful for scholars seeking to understand the genesis of scientific controversies, particularly controversies about long-established methods, facts, or premises.
In January 2014, NIH launched a series of initiatives to enhance rigor and reproducibility in research. As part of this initiative, NIGMS, along with nine other NIH institutes and centers, issued a funding opportunity announcement (FOA), RFA-GM-15-006, to develop, pilot, and disseminate training modules to enhance data reproducibility. This FOA was reissued in 2018 (RFA-GM-18-002). For the benefit of the scientific community, we will post the products of grants funded by these FOAs on this website as they become available. In addition, we are sharing here other relevant training modules, including courses developed from administrative supplements to NIGMS predoctoral T32 grants.
This webinar walks you through the basics of creating an OSF project, structuring it to fit your research needs, adding collaborators, and tying your favorite online tools into your project structure. OSF is a free, open source web application built by the Center for Open Science, a non-profit dedicated to improving the alignment between scientific values and scientific practices. OSF is part collaboration tool, part version control software, and part data archive. It is designed to connect to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, to streamline workflows and increase efficiency.
The OSF Collections repository platform supports the discoverability and reuse of research by enabling the aggregation of related projects across OSF. With OSF Collections, any funder, journal, society, or research community can show their commitment to scientific integrity by aggregating the open outputs from their disciplines, grantees, journal articles, and more. Learn how research collections can foster new norms for sharing, collaboration, and reproducibility.
We also provide a demo of how OSF Collections aggregates and hosts your research by discipline, funded outcomes, project type, journal issue, and more.
Files for this webinar are available at: https://osf.io/ewhvq/ This webinar focuses on how to use the Open Science Framework (OSF) to tie together and organize multiple projects. We look at example structures appropriate for organizing classroom projects, a line of research, or a whole lab's activity. We discuss the OSF's capabilities for using projects as templates, linking projects, and forking projects, as well as some considerations for using each of those capabilities when designing a structure for your own project. The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, to streamline workflows and increase efficiency.
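The webinar works in the OSF web interface; for readers who prefer to script the same organizing steps, here is a hedged sketch against the OSF v2 REST API. The token and project title are placeholders, and the endpoint and payload shapes should be verified against the API documentation at developer.osf.io.

```python
import requests

OSF_API = "https://api.osf.io/v2"
TOKEN = "your-personal-access-token"  # placeholder: create one in OSF settings
headers = {"Authorization": f"Bearer {TOKEN}"}

# List the titles of your projects (OSF "nodes").
resp = requests.get(f"{OSF_API}/users/me/nodes/", headers=headers)
resp.raise_for_status()
for node in resp.json()["data"]:
    print(node["attributes"]["title"])

# Create a new project to serve as an umbrella for linked sub-projects
# (JSON:API payload shape assumed; check the docs before relying on it).
payload = {"data": {"type": "nodes",
                    "attributes": {"title": "Lab overview",
                                   "category": "project"}}}
resp = requests.post(f"{OSF_API}/nodes/", json=payload, headers=headers)
resp.raise_for_status()
print("created:", resp.json()["data"]["id"])
```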
This webinar will introduce how to use the Open Science Framework (OSF; https://osf.io) in the classroom. The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, GitHub, and Mendeley, to streamline workflows and increase efficiency. This webinar will discuss how to introduce reproducible research practices to students, show ways of tracking student activity, and introduce the use of Templates and Forks on the OSF to allow students to easily make new class projects. The OSF is the flagship product of the Center for Open Science, a non-profit technology start-up dedicated to improving the alignment between scientific values and scientific practices. Learn more at cos.io and osf.io, or email contact@cos.io.
Scientific reproducibility has been at the forefront of many news stories, and numerous initiatives exist to help address this problem. We posit that one contributor is simply a lack of the specificity required to enable adequate research reproducibility. In particular, the inability to uniquely identify research resources, such as antibodies and model organisms, makes it difficult or impossible to reproduce experiments even where the science is otherwise sound. In order to better understand the magnitude of this problem, we designed an experiment to ascertain the “identifiability” of research resources in the biomedical literature. We evaluated recent journal articles in the fields of Neuroscience, Developmental Biology, Immunology, Cell and Molecular Biology, and General Biology, selected randomly based on a diversity of impact factors for the journals, publishers, and experimental method reporting guidelines. We attempted to uniquely identify model organisms (mouse, rat, zebrafish, worm, fly, and yeast), antibodies, knockdown reagents (morpholinos or RNAi), constructs, and cell lines. Specific criteria were developed to determine if a resource was uniquely identifiable, and included examining relevant repositories (such as model organism databases and the Antibody Registry), as well as vendor sites. The results of this experiment show that 54% of resources are not uniquely identifiable in publications, regardless of domain, journal impact factor, or reporting requirements. For example, in many cases the organism strain in which the experiment was performed or the antibody that was used could not be identified. Our results show that identifiability is a serious problem for reproducibility. Based on these results, we provide recommendations to authors, reviewers, journal editors, vendors, and publishers. Scientific efficiency and reproducibility depend upon a research-wide improvement of this substantial problem in science today.
OpenML is an online machine learning platform where researchers can easily share data, machine learning tasks, and experiments, as well as organize them online to work and collaborate more efficiently. In this paper, we present an R package to interface with the OpenML platform and illustrate its usage in combination with the machine learning R package mlr (Bischl et al., 2016). We show how the OpenML package allows R users to easily search, download, and upload data sets and machine learning tasks. Furthermore, we also show how to upload the results of experiments, share them with others, and download results from other users. Beyond ensuring reproducibility of results, the OpenML platform automates much of the drudge work, speeds up research, facilitates collaboration, and increases the users' visibility online.
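The paper documents the R interface; purely for orientation, the same download workflow looks roughly like this in OpenML's companion Python client (assuming the openml package; the dataset and task IDs shown should be checked against the OpenML site).

```python
import openml  # assumption: the OpenML Python client, a sibling of the R package

# Download a dataset by its OpenML ID (61 is the classic iris dataset).
dataset = openml.datasets.get_dataset(61)
X, y, categorical, names = dataset.get_data(
    target=dataset.default_target_attribute)
print(dataset.name, X.shape)

# Tasks couple a dataset with an evaluation procedure (e.g. 10-fold
# cross-validation) so that results from different users stay comparable.
task = openml.tasks.get_task(59)  # assumption: a classification task on iris
print(task.task_type)
```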
No restrictions on your remixing, redistributing, or making derivative works. Give credit to the author, as required.
Your remixing, redistributing, or making derivative works comes with some restrictions, including how it is shared.
Your redistributing comes with some restrictions. Do not remix or make derivative works.
Most restrictive license type. Prohibits most uses, sharing, and any changes.
Copyrighted materials, available under Fair Use and the TEACH Act for US-based educators, or other custom arrangements. Go to the resource provider to see their individual restrictions.