On getting it right by being wrong: A case study of how flawed research may become self-fulfilling at last.
Proc Natl Acad Sci U S A 2022;119:e2122274119. [PMID: 35394869; PMCID: PMC9169707; DOI: 10.1073/pnas.2122274119]
Abstract
Understanding how humans process time series data is more pressing now than ever amid a progressing pandemic. Current research draws on some fifty years of empirical evidence on laypeople’s (in-)ability to extrapolate exponential growth. Yet even canonized evidence ought not to be trusted blindly. As a case in point, I review a seminal study that is still highly (even increasingly) cited, although seriously flawed. This case serves as both a reminder of how readily even experienced, well-meaning researchers underestimate exponential dynamics, and an admonition for subsequent researchers to critically read and evaluate the research they cite in order to catch and correct errors quickly rather than carry them forward over decades.
Scientists prominently argue that the COVID-19 pandemic stems not least from people’s inability to understand exponential growth. They increasingly cite evidence from a classic psychological experiment published some 45 years prior to the first case of COVID-19. Despite—or precisely because of—becoming such a canonical study (more often cited than read), its critical design flaws went completely unnoticed. They are discussed here as a cautionary tale against uncritically enshrining unsound research in the “lore” of a field of research. In hindsight, this is a unique case study of researchers falling prey to just the cognitive bias they set out to study—undermining an experiment’s methodology while, ironically, still supporting its conclusion.