It’s getting increasingly difficult for psychology’s replication-crisis sceptics to explain away failed replications

The Many Labs 2 project successfully replicated only half of 28 previously published significant effects.

By guest blogger Jesse Singal

Replicating a study isn’t easy. Just knowing how the original was conducted isn’t enough. Just having access to a sample of experimental participants isn’t enough. As psychological researchers have long known, all sorts of subtle cues can affect how individuals respond in experimental settings. A failure to replicate, then, doesn’t always mean that the effect being studied isn’t there – it can simply mean the new study was conducted a bit differently.

Many Labs 2, a project of the Center for Open Science at the University of Virginia, embarked on one of the most ambitious replication efforts in psychology yet – and did so in a way designed to address these sorts of critiques, which have in some cases hampered past efforts. The resulting paper, a preprint of which can be viewed here, is lead-authored by Richard A. Klein of the Université Grenoble Alpes. Klein and his very, very large team – it takes almost four pages of the preprint just to list all the contributors – “conducted preregistered replications of 28 classic and contemporary published findings with protocols that were peer-reviewed in advance to examine variation in effect magnitudes across sample and setting.” Many Labs 2 included 79 samples of participants tested in-person and 46 samples tested online; of these, 39 were from the U.S. ...
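That phrase “variation in effect magnitudes across sample and setting” is the statistical heart of the project: each effect was estimated in many different samples, and the question is how much the estimates disagree beyond what sampling error alone would produce. As a rough, hypothetical illustration (not the analysis reported in the preprint), the short Python sketch below summarises this kind of variation with a pooled effect, Cochran’s Q and the I² statistic; the per-sample effect sizes and standard errors are invented for the example.

# Illustrative sketch only: a minimal fixed-effect heterogeneity summary
# (Cochran's Q and I^2) across hypothetical per-sample effect estimates.
# This is NOT the Many Labs 2 analysis; the numbers below are invented.

def heterogeneity(effects, standard_errors):
    """Return (pooled_effect, Q, I2) for a set of per-sample estimates."""
    weights = [1.0 / se ** 2 for se in standard_errors]   # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # % of variation beyond chance
    return pooled, q, i2

# Hypothetical effect sizes (e.g. Cohen's d) and standard errors from five samples
effects = [0.40, 0.10, 0.35, 0.05, 0.22]
ses = [0.12, 0.10, 0.15, 0.11, 0.09]

pooled, q, i2 = heterogeneity(effects, ses)
print(f"pooled d = {pooled:.2f}, Q = {q:.2f}, I^2 = {i2:.1f}%")

A low I² would suggest the samples are all estimating roughly the same effect; a high I² would suggest the effect genuinely varies with sample or setting rather than merely bouncing around a single true value.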
Source: BPS Research Digest