
Search Resources

2 Results

Building a collaborative Psychological Science: Lessons learned from ManyBabies 1
Only Sharing Permitted
CC BY-NC-ND

The field of infancy research faces a difficult challenge: some questions require samples that are simply too large for any one lab to recruit and test. ManyBabies aims to address this problem by forming large-scale collaborations on key theoretical questions in developmental science, while promoting the uptake of Open Science practices. Here, we look back on the first project completed under the ManyBabies umbrella – ManyBabies 1 – which tested the development of infant-directed speech preference. Our goal is to share the lessons learned over the course of the project and to articulate our vision for the role of large-scale collaborations in the field. First, we consider the decisions made in scaling up experimental research for a collaboration involving 100+ researchers and 70+ labs. Next, we discuss successes and challenges over the course of the project, including: protocol design and implementation, data analysis, organizational structures and collaborative workflows, securing funding, and encouraging broad participation in the project. Finally, we discuss the benefits we see both in ongoing ManyBabies projects and in future large-scale collaborations in general, with a particular eye towards developing best practices and increasing growth and diversity in infancy research and psychological science in general. Throughout the paper, we include first-hand narrative experiences, in order to illustrate the perspectives of researchers playing different roles within the project. While this project focused on the unique challenges of infant research, many of the insights we gained can be applied to large-scale collaborations across the broader field of psychology.

Subject:
Social Science
Material Type:
Reading
Author:
Casey Lew-Williams
Catherine Davies
Christina Bergmann
Connor P. G. Waddell
J. Kiley Hamlin
Jessica E. Kosie
Jonathan F. Kominsky
Leher Singh
Liquan Liu
Martin Zettersten
Meghan Mastroberardino
Melanie Soderstrom
Melissa Kline
Michael C. Frank
Krista Byers-Heinlein
Date Added:
11/13/2020
Should I test more babies? Solutions for transparent data peeking
Only Sharing Permitted
CC BY-NC-ND

Research with infants is often slow and time-consuming, so infant researchers face great pressure to use the available participants in an efficient way. One strategy that researchers sometimes use to optimize efficiency is data peeking (or “optional stopping”), that is, doing a preliminary analysis (whether a formal significance test or informal eyeballing) of collected data. Data peeking helps researchers decide whether to abandon or tweak a study, decide that a sample is complete, or decide to continue adding data points. Unfortunately, data peeking can have negative consequences such as increased rates of false positives (wrongly concluding that an effect is present when it is not). We argue that, with simple corrections, the benefits of data peeking can be harnessed to use participants more efficiently. We review two corrections that can be transparently reported: one can be applied at the beginning of a study to lay out a plan for data peeking, and a second can be applied after data collection has already started. These corrections are easy to implement in the current framework of infancy research. The use of these corrections, together with transparent reporting, can increase the replicability of infant research.
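The abstract above describes the corrections only conceptually; the details are in the paper itself. As a minimal illustrative sketch (not the authors' exact procedure), the simulation below shows how peeking at accumulating data with a nominal .05 threshold at each look inflates the false-positive rate under the null, and how a stricter constant per-look threshold (a Pocock-style boundary of roughly .022 for three looks, assumed here purely for illustration) brings it back near .05. The sample sizes and number of looks are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_sims=5000, looks=(20, 30, 40), alpha_per_look=0.05):
    """Simulate one-sample t-tests under the null (true effect = 0),
    peeking after each planned sample size in `looks`. Returns the
    proportion of simulations that reach significance at any look,
    i.e. the effective false-positive rate."""
    false_positives = 0
    for _ in range(n_sims):
        data = rng.normal(loc=0.0, scale=1.0, size=max(looks))
        for n in looks:
            p = stats.ttest_1samp(data[:n], popmean=0.0).pvalue
            if p < alpha_per_look:
                false_positives += 1
                break  # stop at the first "significant" peek
    return false_positives / n_sims

# Uncorrected peeking: nominal .05 threshold at every look.
print("uncorrected:", false_positive_rate(alpha_per_look=0.05))
# Corrected peeking: stricter constant threshold at each look
# (Pocock-style boundary of roughly .022 for three looks).
print("corrected:  ", false_positive_rate(alpha_per_look=0.022))
```

Running the sketch typically gives an uncorrected rate noticeably above .05 and a corrected rate close to it, which illustrates the paper's core point: data peeking is only safe when the stopping plan and correction are chosen and reported transparently.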

Subject:
Social Science
Material Type:
Reading
Author:
Krista Byers-Heinlein
Mijke Rhemtulla
Esther Schott
Date Added:
11/13/2020