All resources in Researchers

A checklist is associated with increased quality of reporting preclinical biomedical research: A systematic review

Irreproducibility of preclinical biomedical research has gained recent attention. It is suggested that requiring authors to complete a checklist at the time of manuscript submission would improve the quality and transparency of scientific reporting, and ultimately enhance reproducibility. Whether a checklist enhances quality and transparency in reporting preclinical animal studies, however, has not been empirically studied. Here we searched two highly cited life science journals, one that requires a checklist at submission (Nature) and one that does not (Cell), to identify in vivo animal studies. After screening 943 articles, a total of 80 articles published in 2013 (pre-checklist) and 2015 (post-checklist) were identified and included in a detailed evaluation of reported methodological and analytical information. We compared the quality of reporting of preclinical animal studies between the two journals, accounting for differences between journals and changes over time in reporting. We find that reporting of randomization, blinding, and sample-size estimation improved significantly more in Nature than in Cell from 2013 to 2015, likely due to implementation of a checklist. Specifically, improvement in reporting of these three methodological items was at least three times greater when a mandatory checklist was implemented than when it was not. Reporting of the sex of animals and the number of independent experiments performed also improved from 2013 to 2015, likely from factors not related to a checklist. Our study demonstrates that completing a checklist at manuscript submission is associated with improved reporting of key methodological information in preclinical animal studies.
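
The comparison described here — the change over time at the checklist journal measured against the change at the non-checklist journal — is essentially a difference-in-differences contrast. A minimal Python sketch with purely hypothetical reporting rates (not figures from the study) shows how such a contrast is computed:

```python
# Hypothetical fractions of articles reporting randomization.
# These numbers are illustrative only and are NOT taken from the study.
rates = {
    ("Nature", 2013): 0.20,  # checklist journal, pre-checklist
    ("Nature", 2015): 0.60,  # checklist journal, post-checklist
    ("Cell", 2013): 0.20,    # comparison journal
    ("Cell", 2015): 0.30,    # comparison journal
}

# Change over time within each journal.
change_checklist = rates[("Nature", 2015)] - rates[("Nature", 2013)]
change_comparison = rates[("Cell", 2015)] - rates[("Cell", 2013)]

# Difference-in-differences: improvement associated with the checklist,
# over and above secular changes affecting both journals.
did = change_checklist - change_comparison

print(f"Improvement with checklist:    {change_checklist:+.2f}")
print(f"Improvement without checklist: {change_comparison:+.2f}")
print(f"Difference-in-differences:     {did:+.2f}")
```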

Material Type: Reading

Authors: Doris M. Rubio, Janet S. Lee, Jill Zupetic, John P. Pribis, Joo Heung Yoon, Kwonho Jeong, Kyle M. Holleran, Nader Shaikh, SeungHye Han, Tolani F. Olonisakin

Reporting in Experimental Philosophy: Current Standards and Recommendations for Future Practice

Recent replication crises in psychology and other fields have led to intense reflection about the validity of common research practices. Much of this reflection has focussed on reporting standards, and how they may be related to the questionable research practices that could underlie a high proportion of irreproducible findings in the published record. As a developing field, Experimental Philosophy should be particularly careful to avoid some of the pitfalls that have beset other disciplines. To this end, here we provide a detailed, comprehensive assessment of current reporting practices in Experimental Philosophy. We focus on the quality of statistical reporting and the disclosure of information about study methodology. We assess all articles that used quantitative methods (n = 134) and were published between 2013 and 2016 in 29 leading philosophy journals. We find that null hypothesis significance testing is the prevalent statistical practice in Experimental Philosophy, although relying solely on this approach has been criticised in the psychological literature. To augment this approach, various additional measures have become commonplace in other fields, but we find that Experimental Philosophy has adopted these only partially: 53% of the papers report an effect size, 28% report confidence intervals, 1% report a prospective statistical power analysis, and 5% report observed statistical power. Importantly, we find no direct relation between an article’s reporting quality and its impact (number of citations). We conclude with recommendations for authors, reviewers and editors in Experimental Philosophy, to facilitate making research statistically transparent and reproducible.

Material Type: Reading

Authors: Andrea Polonioli, Brittany Blankinship, David Carmel, Mariana Vega-Mendoza

Poor statistical reporting, inadequate data presentation and spin persist despite editorial advice

The Journal of Physiology and British Journal of Pharmacology jointly published an editorial series in 2011 to improve standards in statistical reporting and data analysis. It is not known whether reporting practices changed in response to the editorial advice. We conducted a cross-sectional analysis of reporting practices in a random sample of research papers published in these journals before (n = 202) and after (n = 199) publication of the editorial advice. Descriptive data are presented. There was no evidence that reporting practices improved following publication of the editorial advice. Overall, 76-84% of papers with written measures that summarized data variability used standard errors of the mean, and 90-96% of papers did not report exact p-values for primary analyses and post-hoc tests. 76-84% of papers that plotted measures to summarize data variability used standard errors of the mean, and only 2-4% of papers plotted raw data used to calculate variability. Of papers that reported p-values between 0.05 and 0.1, 56-63% interpreted these as trends or statistically significant. Implied or gross spin was noted incidentally in papers before (n = 10) and after (n = 9) the editorial advice was published. Overall, poor statistical reporting, inadequate data presentation and spin were present before and after the editorial advice was published. While the scientific community continues to implement strategies for improving reporting practices, our results indicate stronger incentives or enforcements are needed.
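
A recurring issue flagged in this analysis is summarising variability with the standard error of the mean (SEM), which quantifies the precision of the mean rather than the spread among observations and shrinks as sample size grows. A small Python sketch with invented values illustrates the difference between SD and SEM:

```python
import math

# Hypothetical measurements (illustrative only, not from the surveyed papers).
values = [4.1, 5.3, 6.0, 4.8, 5.5, 6.2, 4.9, 5.1]
n = len(values)

mean = sum(values) / n
# Sample standard deviation: describes spread among the observations.
sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
# Standard error of the mean: describes precision of the mean estimate;
# it shrinks with sqrt(n), so it can make data look less variable than they are.
sem = sd / math.sqrt(n)

print(f"mean = {mean:.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
```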

Material Type: Reading

Authors: Annie A. Butler, Joanna Diong, Martin E. Héroux, Simon C. Gandevia

COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time

Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.

Methods: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.

Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.

Conclusions: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals’ willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT’s mechanisms for enforcement, and novel strategies for research on methods and reporting.

Material Type: Reading

Authors: Aaron Dale, Anna Powell-Smith, Ben Goldacre, Carl Heneghan, Cicely Marston, Eirion Slade, Henry Drysdale, Ioan Milosevic, Kamal R. Mahtani, Philip Hartley

Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017

Currently, there is a growing interest in ensuring the transparency and reproducibility of the published scientific literature. According to a previous evaluation of 441 biomedical journal articles published in 2000–2014, the biomedical literature largely lacked transparency in important dimensions. Here, we surveyed a random sample of 149 biomedical articles published between 2015 and 2017 and determined the proportion reporting sources of public and/or private funding and conflicts of interest, sharing protocols and raw data, and undergoing rigorous independent replication and reproducibility checks. We also investigated what can be learned about reproducibility and transparency indicators from open access data provided on PubMed. The majority of the 149 studies disclosed some information regarding funding (103, 69.1% [95% confidence interval, 61.0% to 76.3%]) or conflicts of interest (97, 65.1% [56.8% to 72.6%]). Among the 104 articles with empirical data in which protocols or data sharing would be pertinent, 19 (18.3% [11.6% to 27.3%]) discussed publicly available data; only one (1.0% [0.1% to 6.0%]) included a link to a full study protocol. Among the 97 articles in which replication in studies with different data would be pertinent, there were five replication efforts (5.2% [1.9% to 12.2%]). Although clinical trial identification numbers and funding details were often provided on PubMed, only two of the articles without a full text article in PubMed Central that discussed publicly available data at the full text level also contained information related to data sharing on PubMed; none had a conflict of interest statement on PubMed. Our evaluation suggests that although there have been improvements over the last few years in certain key indicators of reproducibility and transparency, opportunities exist to improve reproducible research practices across the biomedical literature and to make features related to reproducibility more readily visible in PubMed.
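
The bracketed ranges are 95% confidence intervals on proportions (for example, 103 of 149 articles, 69.1%). The article does not state which interval method was used; as an illustration, the commonly used Wilson score interval can be computed with statsmodels and gives a similar, though not necessarily identical, range:

```python
from statsmodels.stats.proportion import proportion_confint

count, nobs = 103, 149  # articles disclosing funding, from the abstract above

# Wilson score interval; the original article may have used a different method,
# so the bounds will be close to, but not necessarily exactly, those reported.
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"{count}/{nobs} = {count / nobs:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```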

Material Type: Reading

Authors: John P. A. Ioannidis, Joshua D. Wallach, Kevin W. Boyack

The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases

Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or large?—has been guided by the recommendations Jacob Cohen gave in his pioneering writings starting in 1962: Either compare an effect with the effects found in past research or use certain conventional benchmarks. The present analysis shows that neither of these recommendations is currently applicable. From past publications without pre-registration, 900 effects were randomly drawn and compared with 93 effects from publications with pre-registration, revealing a large difference: Effects from the former (median r = .36) were much larger than effects from the latter (median r = .16). That is, certain biases, such as publication bias or questionable research practices, have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown). In addition, there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks. Many more pre-registered studies are needed in the future to derive a reliable picture of real population effects.

Material Type: Reading

Authors: Marcus A. Schwarz, Thomas Schäfer

The Economics of Reproducibility in Preclinical Research

Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible—in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures.

Material Type: Reading

Authors: Iain M. Cockburn, Leonard P. Freedman, Timothy S. Simcoe

Discrepancies in the Registries of Diet vs Drug Trials

This cross-sectional study examines discrepancies between registered protocols and subsequent publications for drug and diet trials whose findings were published in prominent clinical journals in the last decade. ClinicalTrials.gov was established in 2000 in response to the Food and Drug Administration Modernization Act of 1997, which called for registration of trials of investigational new drugs for serious diseases. Subsequently, the scope of ClinicalTrials.gov expanded to all interventional studies, including diet trials. Presently, prospective trial registration is required by the National Institutes of Health for grant funding and by many clinical journals for publication [1]. Registration may reduce risk of bias from selective reporting and post hoc changes in design and analysis [1,2]. Although a study [3] of trials with ethics approval in Finland in 2007 identified numerous discrepancies between registered protocols and subsequent publications, the consistency of diet trial registration and reporting has not been well explored.

Material Type: Reading

Authors: Cara B. Ebbeling, David S. Ludwig, Steven B. Heymsfield

A study of the impact of data sharing on article citations using journal policies as a natural experiment

This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before their change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (estimated standard error of 34). We conclude that: a) authors who share data may be rewarded eventually with additional scholarly citations, and b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.
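
The instrumental-variables step can be pictured as two-stage least squares: the journal's policy change instruments whether an article shares data, and citations are then regressed on the instrumented sharing indicator. The sketch below uses synthetic data and plain NumPy — it is not the authors' dataset, model specification, or code — purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic setup (illustrative only): a journal policy change (the instrument)
# raises the probability that an article shares data; sharing then adds citations.
policy = rng.integers(0, 2, size=n)              # 1 = published under a data policy
shared = rng.binomial(1, 0.2 + 0.4 * policy)     # 1 = article actually shares data
citations = 20 + 50 * shared + rng.normal(0, 30, size=n)

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# Stage 1: predict data sharing from the policy instrument.
X1 = np.column_stack([ones, policy])
shared_hat = X1 @ ols(X1, shared)

# Stage 2: regress citations on the predicted (instrumented) sharing indicator.
X2 = np.column_stack([ones, shared_hat])
beta = ols(X2, citations)
print(f"2SLS estimate of the citation effect of sharing data: {beta[1]:.1f}")
```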

Material Type: Reading

Authors: Allan Dafoe, Andrew K. Rose, Don A. Moore, Edward Miguel, Garret Christensen

Questionable research practices among Italian research psychologists

A survey in the United States revealed that an alarmingly large percentage of university psychologists admitted having used questionable research practices that can contaminate the research literature with false positive and biased findings. We conducted a replication of this study among Italian research psychologists to investigate whether these findings generalize to other countries. All the original materials were translated into Italian, and members of the Italian Association of Psychology were invited to participate via an online survey. The percentages of Italian psychologists who admitted to having used ten questionable research practices were similar to the results obtained in the United States, although there were small but significant differences in self-admission rates for some QRPs. Nearly all researchers (88%) admitted using at least one of the practices, and researchers generally considered a practice possibly defensible if they admitted using it, but Italian researchers were much less likely than US researchers to consider a practice defensible. Participants’ estimates of the percentage of researchers who have used these practices were greater than the self-admission rates, and participants estimated that researchers would be unlikely to admit it. In written responses, participants argued that some of these practices are not questionable and that they have used some practices because reviewers and journals demand it. The similarity of results obtained in the United States, this study, and a related study conducted in Germany suggests that adoption of these practices is an international phenomenon and is likely due to systemic features of the international research and publication processes.

Material Type: Reading

Authors: Coosje L. S. Veldkamp, Franca Agnoli, Jelte M. Wicherts, Paolo Albiero, Roberto Cubelli

Rate and success of study replication in ecology and evolution

The recent replication crisis has caused several scientific disciplines to self-reflect on the frequency with which they replicate previously published studies and to assess their success in such endeavours. The rate of replication, however, has yet to be assessed for ecology and evolution. Here, I survey the open-access ecology and evolution literature to determine how often ecologists and evolutionary biologists replicate, or at least claim to replicate, previously published studies. I found that approximately 0.023% of ecology and evolution studies are described by their authors as replications. Two of the 11 original-replication study pairs provided sufficient statistical detail for three effects so as to permit a formal analysis of replication success. Replicating authors correctly concluded that they replicated an original effect in two cases; in the third case, my analysis suggests that the finding by the replicating authors was consistent with the original finding, contrary to the authors' conclusion of “replication failure”.

Material Type: Reading

Author: Clint D. Kelly

On the reproducibility of science: unique identification of research resources in the biomedical literature

Scientific reproducibility has been at the forefront of many news stories and there exist numerous initiatives to help address this problem. We posit that one contributor is simply a lack of the specificity required to enable adequate research reproducibility. In particular, the inability to uniquely identify research resources, such as antibodies and model organisms, makes it difficult or impossible to reproduce experiments even where the science is otherwise sound. In order to better understand the magnitude of this problem, we designed an experiment to ascertain the “identifiability” of research resources in the biomedical literature. We evaluated recent journal articles in the fields of Neuroscience, Developmental Biology, Immunology, Cell and Molecular Biology and General Biology, selected randomly based on a diversity of impact factors for the journals, publishers, and experimental method reporting guidelines. We attempted to uniquely identify model organisms (mouse, rat, zebrafish, worm, fly and yeast), antibodies, knockdown reagents (morpholinos or RNAi), constructs, and cell lines. Specific criteria were developed to determine if a resource was uniquely identifiable, and included examining relevant repositories (such as model organism databases, and the Antibody Registry), as well as vendor sites. The results of this experiment show that 54% of resources are not uniquely identifiable in publications, regardless of domain, journal impact factor, or reporting requirements. For example, in many cases the organism strain in which the experiment was performed or antibody that was used could not be identified. Our results show that identifiability is a serious problem for reproducibility. Based on these results, we provide recommendations to authors, reviewers, journal editors, vendors, and publishers. Scientific efficiency and reproducibility depend upon a research-wide improvement of this substantial problem in science today.

Material Type: Reading

Authors: Gregory M. LaRocca, Holly Paddock, Laura Ponting, Matthew H. Brush, Melissa A. Haendel, Nicole A. Vasilevsky, Shreejoy J. Tripathy

Attitudes towards animal study registries and their characteristics: An online survey of three cohorts of animal researchers

Objectives: Prospective registration of animal studies has been suggested as a new measure to increase value and reduce waste in biomedical research. We sought to further explore and quantify animal researchers’ attitudes and preferences regarding animal study registries (ASRs).

Design: Cross-sectional online survey.

Setting and participants: We conducted a survey with three different samples representing animal researchers: i) corresponding authors from journals with high Eigenfactor, ii) a random PubMed sample and iii) members of the CAMARADES network.

Main outcome measures: Perceived level of importance of different aspects of publication bias, the effect of ASRs on different aspects of research, as well as the importance of different research types for being registered.

Results: The survey yielded responses from 413 animal researchers (response rate 7%). The respondents indicated that some aspects of ASRs can increase administrative burden, but that this could be outweighed by other aspects that decrease it. Animal researchers found it more important to register studies that involved animal species with higher levels of cognitive capabilities. The time frame for making registry entries publicly available revealed strong heterogeneity among respondents, with the largest proportion voting for “access only after consent by the principal investigator” and the second largest proportion voting for “access immediately after registration”.

Conclusions: The fact that the more senior and experienced animal researchers participating in this survey clearly indicated the practical importance of publication bias and the importance of ASRs underscores the problem awareness among animal researchers and their willingness to actively engage in study registration if effective safeguards for the potential weaknesses of ASRs are put into place. To overcome the first-mover dilemma, international consensus statements on how to deal with prospective registration of animal studies might be necessary for all relevant stakeholder groups, including animal researchers, academic institutions, private companies, funders, regulatory agencies, and journals.

Material Type: Reading

Authors: André Bleich, Daniel Strech, Emily S. Sena, Hans Laser, René Tolba, Susanne Wieschowski

DEBATE-statistical analysis plans for observational studies

Background: All clinical research benefits from transparency and validity. The transparency and validity of studies may be increased by prospective registration of protocols and by publication of statistical analysis plans (SAPs) before data have been accessed, to discern data-driven analyses from pre-planned analyses.

Main message: As for clinical trials, recommendations for SAPs for observational studies would increase the transparency and validity of findings. We appraised the applicability of recently developed guidelines for the content of SAPs for clinical trials to SAPs for observational studies. Of the 32 items recommended for a SAP for a clinical trial, 30 items (94%) were identically applicable to a SAP for our observational study. Power estimations and adjustments for multiplicity are equally important in observational studies and clinical trials, as both types of studies usually address multiple hypotheses. Only two clinical trial items (6%), regarding issues of randomisation and definition of adherence to the intervention, did not seem applicable to observational studies. We suggest including one new item specifically applicable to observational studies to be addressed in a SAP, describing how adjustment for possible confounders will be handled in the analyses.

Conclusion: With only a few amendments, the guidelines for the SAP of a clinical trial can be applied to a SAP for an observational study. We suggest that SAPs should be equally required for observational studies and clinical trials to increase their transparency and validity.

Material Type: Reading

Authors: Bart Hiemstra, Christian Gluud, Frederik Keus, Iwan C. C. van der Horst, Jørn Wetterslev

Research practices and statistical reporting quality in 250 economic psychology master's theses: a meta-research investigation

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
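
Statcheck-style consistency checks recompute the p-value from the reported test statistic and degrees of freedom and compare it with the reported p. A minimal sketch with SciPy, using an invented (deliberately inconsistent) result rather than one of the analysed theses:

```python
from scipy import stats

# Invented reported result for illustration: t(28) = 2.10, p = .03 (two-tailed).
t_value, df, reported_p = 2.10, 28, 0.03

# Recompute the two-tailed p-value from the test statistic and df.
recomputed_p = 2 * stats.t.sf(abs(t_value), df)

print(f"reported p = {reported_p:.3f}, recomputed p = {recomputed_p:.3f}")
# Rough check only; statcheck's actual rules handle rounding and one-tailed
# tests more carefully than this simple comparison.
if round(recomputed_p, 2) != round(reported_p, 2):
    print("Inconsistent: reported p does not match the test statistic.")
```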

Material Type: Reading

Authors: Erich Kirchler, Jerome Olsen, Johanna Mosen, Martin Voracek

Library Carpentry: Tidy data for Librarians

Tidy data for librarians: Library Carpentry’s aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. These lessons were designed for those interested in working with library data in spreadsheets.
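
Here "tidy" means one observation per row and one variable per column. The lesson itself works in spreadsheet software, but as a rough illustration of the kind of reshaping it covers, a hypothetical per-branch loans table with one column per year can be melted into tidy form with pandas:

```python
import pandas as pd

# Hypothetical spreadsheet-style data: one column per year (not tidy).
wide = pd.DataFrame({
    "branch": ["Central", "East", "West"],
    "2019": [1200, 340, 560],
    "2020": [980, 310, 620],
})

# Tidy form: one row per branch-year observation, one column per variable.
tidy = wide.melt(id_vars="branch", var_name="year", value_name="loans")
print(tidy)
```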

Material Type: Module

Authors: Alex Volkov, Annelise Sklar, Belinda Weaver, Christopher Erdmann, erikamias, Erin Alison Becker, Francois Michonneau, Jacqueline Frisina, James Baker, Jeffrey Oliver, Jez Cope, Ken Lacey, Niamh Wallace, Phil Reed, Scott Carl Peterson, Serah Anne Njambi Kiburu, Sherry Lake, Thea Atwood, Tim Dennis, yvonnemery

The Unix Shell

Software Carpentry lesson on how to use the shell to navigate the filesystem and write simple loops and scripts. The Unix shell has been around longer than most of its users have been alive. It has survived so long because it’s a power tool that allows people to do complex things with just a few keystrokes. More importantly, it helps them combine existing programs in new ways and automate repetitive tasks so they aren’t typing the same things over and over again. Use of the shell is fundamental to using a wide range of other powerful tools and computing resources (including “high-performance computing” supercomputers). These lessons will start you on a path towards using these resources effectively.

Material Type: Module

Authors: Adam Huffman, Adam James Orr, Adam Richie-Halford, AidaMirsalehi, Alexander Konovalov, Alexander Morley, Alex Kassil, Alex Mac, Alix Keener, Amy Brown, Andrea Bedini, Andrew Boughton, Andrew Reid, Andrew T. T. McRae, Andrew Walker, Ariel Rokem, Armin Sobhani, Ashwin Srinath, Bagus Tris Atmaja, Bartosz Telenczuk, Ben Bolker, Benjamin Gabriel, Bertie Seyffert, Bill Mills, Brian Ballsun-Stanton, BrianBill, Camille Marini, Chris Mentzel, Christina Koch, Colin Morris, Colin Sauze, csqrs, Damien Irving, Dana Brunson, Daniel Baird, Danielle M. Nielsen, Daniel McCloy, Daniel Standage, Dan Jones, Dave Bridges, David Eyers, David McKain, David Vollmer, Dean Attali, Devinsuit, Dmytro Lituiev, Donny Winston, Doug Latornell, Dustin Lang, earkpr, ekaterinailin, Elena Denisenko, Emily Dolson, Emily Jane McTavish, Eric Jankowski, Erin Alison Becker, Ethan P White, Evgenij Belikov, Farah Shamma, Fatma Deniz, Filipe Fernandes, Francis Gacenga, François Michonneau, Gabriel A. Devenyi, Gerard Capes, Giuseppe Profiti, Greg Wilson, Halle Burns, Hannah Burkhardt, Harriet Alexander, Hugues Fontenelle, Ian van der Linde, Inigo Aldazabal Mensa, Jackie Milhans, Jake Cowper Szamosi, James Guelfi, Jan T. Kim, Jarek Bryk, Jarno Rantaharju, Jason Macklin, Jay van Schyndel, Jens vdL, John Blischak, John Pellman, John Simpson, Jonah Duckles, Jonny Williams, Joshua Madin, Kai Blin, Kathy Chung, Katrin Leinweber, Kevin M. Buckley, Kirill Palamartchouk, Klemens Noga, Kristopher Keipert, Kunal Marwaha, Laurence, Lee Zamparo, Lex Nederbragt, Mahdi Sadjadi, Marcel Stimberg, Marc Rajeev Gouw, Maria Doyle, Marie-Helene Burle, Marisa Lim, Mark Mandel, Martha Robinson, Martin Feller, Matthew Gidden, Matthew Peterson, M Carlise, Megan Fritz, Michael Zingale, Mike Henry, Mike Jackson, Morgan Oneka, Murray Hoggett, Nicolas Barral, Nicola Soranzo, Noah D Brenowitz, Noam Ross, Norman Gray, nther, Orion Buske, Owen Kaluza, Patrick McCann, Paul Gardner, Pauline Barmby, Peter R. Hoyt, Peter Steinbach, Philip Lijnzaad, Phillip Doehle, Piotr Banaszkiewicz, Rafi Ullah, Raniere Silva, Rémi Emonet, reshama shaikh, Robert A Beagrie, Ruud Steltenpool, Ry4an Brase, Sarah Mount, Sarah Simpkin, s-boardman, Scott Ritchie, sjnair, Stéphane Guillou, Stephan Schmeing, Stephen Jones, Stephen Turner, Steve Leak, Susan Miller, Thomas Mellan, Tim Keighley, Tobin Magle, Tom Dowrick, Trevor Bekolay, Varda F. Hagh, Victor Koppejan, Vikram Chhatre, Yee Mey

Library Carpentry: OpenRefine

Library Carpentry lesson: an introduction to OpenRefine for librarians. This lesson introduces people working in library- and information-related roles to working with data in OpenRefine. At the conclusion of the lesson you will understand what OpenRefine does and how to use it to work with data files.

Material Type: Module

Authors: Alexander Mendes, andreamcastillo, Anna Neatrour, Antonin Delpeuch, Betty Rozum, Christina Koch, Christopher Erdmann, Daniel Bangert, dnesdill, Elizabeth Lisa McAulay, Evan Williamson, hauschke, Jamene Brooks-Kieffer, James Baker, Jamie Jamison, Jeffrey Oliver, Katherine Koziar, mhidas, Naupaka Zimmerman, Paul R. Pival, Rémi Emonet, Tim Dennis, Tom Honeyman, Tracy Teal

Library Carpentry: SQL

Library Carpentry: an introduction to SQL for librarians. This lesson introduces librarians to relational database management systems using SQLite. At the conclusion of the lesson you will: understand what SQLite does; use SQLite to summarise and link data.
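
To give a flavour of "summarise and link data", here is a small self-contained sketch using Python's built-in sqlite3 module with invented library tables (the lesson itself is taught directly in SQLite): a JOIN links loans to items and GROUP BY summarises loan counts per title.

```python
import sqlite3

# In-memory database with two invented tables: items and loans.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE loans (item_id INTEGER, loan_date TEXT);
    INSERT INTO items VALUES (1, 'Data cleaning handbook'), (2, 'Intro to SQL');
    INSERT INTO loans VALUES (1, '2024-01-03'), (1, '2024-02-11'), (2, '2024-03-05');
""")

# Link the tables with a JOIN and summarise loans per title with GROUP BY.
query = """
    SELECT items.title, COUNT(loans.item_id) AS n_loans
    FROM items
    LEFT JOIN loans ON loans.item_id = items.id
    GROUP BY items.title
    ORDER BY n_loans DESC;
"""
for title, n_loans in con.execute(query):
    print(f"{title}: {n_loans} loans")
```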

Material Type: Module

Authors: Anna-Maria Sichani, Belinda Weaver, Christopher Erdmann, Dan Michael Heggø, David Kane, Elaine Wong, Emanuele Lanzani, Fernando Rios, Jamene Brooks-Kieffer, James Baker, Janice Chan, Jeffrey Oliver, Katrin Leinweber, Kunal Marwaha, mdschleu, orobecca, Reid Otsuji, Ruud Steltenpool, thegsi, Tim Dennis

Library Carpentry: The UNIX Shell

Library Carpentry lesson on how to use the Unix shell. This lesson introduces librarians to the Unix shell. At the conclusion of the lesson you will: understand the basics of the Unix shell; understand why and how to use the command line; use shell commands to work with directories and files; use shell commands to find and manipulate data.

Material Type: Module

Authors: Adam Huffman, Alexander Konovalov, Alexander Morley, Alex Kassil, Alex Mendes, Ana Costa Conrado, Andrew Reid, Andrew T. T. McRae, Ariel Rokem, Ashwin Srinath, Bagus Tris Atmaja, Belinda Weaver, Benjamin Bolker, Benjamin Gabriel, BertrandCaron, Brian Ballsun-Stanton, Christopher Erdmann, Christopher Mentzel, colinmorris, Colin Sauze, csqrs, Dan Michael Heggø, Dave Bridges, David McKain, Dmytro Lituiev, earkpr, ekaterinailin, Elena Denisenko, Eric Jankowski, Erin Alison Becker, Evan Williamson, Farah Shamma, Gabriel Devenyi, Gerard Capes, Giuseppe Profiti, Halle Burns, Hannah Burkhardt, hugolio, Ian Lessing, Ian van der Linde, Jake Cowper Szamosi, James Baker, James Guelfi, Jarno Rantaharju, Jarosław Bryk, Jason Macklin, Jeffrey Oliver, jenniferleeucalgary, John Pellman, Jonah Duckles, Jonny Williams, Katrin Leinweber, Kevin M. Buckley, Kunal Marwaha, Laurence, Marc Gouw, Marie-Helene Burle, Marisa Lim, Martha Robinson, Martin Feller, Megan Fritz, Michael Lascarides, Michael Zingale, Michele Hayslett, Mike Henry, Morgan Oneka, Murray Hoggett, Nicolas Barral, Nicola Soranzo, Noah D Brenowitz, Owen Kaluza, Patrick McCann, Peter Hoyt, Rafi Ullah, Raniere Silva, Rémi Emonet, reshama shaikh, Ruud Steltenpool, sjnair, Stéphane Guillou, Stephan Schmeing, Stephen Jones, Stephen Leak, Susan J Miller, Thomas Mellan, Tim Dennis, Tom Dowrick, Travis Lilleberg, Victor Koppejan, Vikram Chhatre, Yee Mey