Law and Psychology Grows Up, Goes Online, and Replicates
Author(s) -
Krin Irvine,
David A. Hoffman,
Tess Wilkinson-Ryan
Publication year - 2018
Publication title -
Journal of Empirical Legal Studies
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.529
H-Index - 24
eISSN - 1740-1461
pISSN - 1740-1453
DOI - 10.1111/jels.12180
Subject(s) - law , psychology , social psychology , experimental psychology , replication (statistics) , skepticism , the internet , data science , epistemology
Over the last 30 years, legal scholars have increasingly deployed experimental studies, particularly hypothetical scenarios, to test intuitions about legal reasoning and behavior. That movement has accelerated in the last decade, facilitated in large part by cheap and convenient Internet participant-recruiting platforms like Amazon Mechanical Turk. The widespread use of online subjects, a practice that dramatically lowers the barriers to entry for experimental research, has been controversial. At the same time, the field of experimental psychology is experiencing a public crisis of confidence, widely discussed in terms of the “replication crisis.” Law and psychology research is thus arguably in a new era, in which it is both an accepted feature of the legal landscape and a target of fresh skepticism. The moment is ripe for taking stock. In this article, we bring an empirical approach to these problems. Using three canonical law and psychology findings, we document both the challenges and the feasibility of reproducing results across platforms. Our first aim is to evaluate the extent to which we can reproduce the original findings with contemporary subject pools (Amazon Mechanical Turk, other national online platforms, and in-person labs). We partially replicate all three results and show marked similarities in subject responses across platforms. In the context of the experiments here, we conclude that meaningful replication requires active intervention to keep the materials relevant and sensible. Our second aim is to compare Amazon Mechanical Turk subjects to the original samples and to the replication samples. We find, consistent with the weight of recent evidence, that the Amazon Mechanical Turk samples are reasonably appropriate for these kinds of scenario studies. These subjects closely resemble subjects on other online platforms and in in-person samples, though they stand out for their high level of attentiveness.
Finally, we review the growing replication literature across disciplines, as well as our firsthand experience, to propose a set of standard practices for the publication of results in law and psychology.