Response to Ed Yong’s Questions

Norbert Schwarz

http://sitemaker.umich.edu/norbert.schwarz/home

nschwarz@umich.edu

2 Oct 2012

Background

Ed Yong, a science journalist (http://notexactlyrocketscience.wordpress.com/), emailed questions in response to a call from Danny Kahneman that priming researchers should engage in a concerted effort to replicate findings. Ed’s questions are in italics. I share the answers in the form of a public Google Document because they are longer than Ed can possibly use, and I anticipate that there will be questions down the road about what I did or did not say once parts of my response are quoted.

Questions & Answers

EY:         Do you think that suspicions about social priming are as strong as he suggests?

NS:         Experiments are conducted to test theoretical predictions. No theoretical proposal stands or falls on the basis of a single, isolated finding. Instead, theoretical proposals are evaluated on the basis of a body of convergent findings and their compatibility with what else we know. Individual findings can provoke a rethinking of assumptions, but they are just one building block in a research program.

In his book “Thinking, Fast and Slow,” Danny Kahneman has done a masterful job of reviewing and integrating the diverse findings that some people loosely refer to as “priming research” (knowledge accessibility effects, automaticity, fluency, and so on). As his book shows, there is a large body of converging findings from labs around the world, accumulated over almost four decades of peer-reviewed research published in an array of journals. This work paints a coherent picture of the underlying processes, one that does not hinge on any single finding. Researchers familiar with this literature are also familiar with the large number of conceptual, and sometimes exact, replications and with the convergence documented in meta-analyses.

There is no empirical evidence that work in this area is more or less replicable than work in other areas. What distinguishes it is solely that some of its findings are more surprising to lay people than findings in other domains. Unfortunately, the surprise value of the findings has sometimes been in the foreground of the publications (and has always been in the foreground of popular reports). This gave some particularly surprising individual findings an iconic status that far exceeds their empirical contribution to theory testing. It also focused the popular discussion on individual results and away from the convergence of a large body of evidence, including many findings that are not eye-catching, and away from the rather straightforward processes that underlie the surprising effects.

This created a context in which the concerns of a few sceptics, focused on one or two iconic findings, received more attention than either the critics’ slim empirical evidence or the relevance of the iconic findings warrants. You can think of this as psychology’s version of the climate change debate: much as the consensus among the vast majority of climate researchers gets drowned out by the poorly supported and narrowly focused claims of a few persistent climate sceptics, the consensus of the vast majority of psychologists closely familiar with work in this area gets drowned out by the claims of a few persistent “priming” sceptics. Their scepticism is based on isolated nonreplications of individual findings, combined with a refusal to acknowledge the results of meta-analyses that would count as conclusive evidence in any other area. Their critiques find attention because the findings they doubt are counterintuitive and of interest to a wide audience -- a failure to replicate a ten-millisecond difference in a standard attention experiment would never be covered by you, Ed, or your colleagues. Hence, nonreplications in other domains of psychology rarely become the topic of public debate; that people care in the case of “priming” studies is a tribute to those who put these phenomena on the map in the first place. While much remains to be learned about these phenomena, a response of broad doubt is incompatible with the available body of consistent evidence and with that evidence’s convergence with related domains of knowledge (as Kahneman’s “Thinking, Fast and Slow” documents).

EY:         Would you agree with him that there is a "train wreck looming" and that priming researchers must take action to address the suspicions?

NS:        If there is a “train wreck” looming, it is one of public perception, not one of the quality of the vast majority of the scientific work. The perceived “suspicion” far exceeds what the critics’ supporting evidence warrants. But as the climate change debate illustrates, the perceptions created by such debates are difficult to change through scientific evidence. Obviously, Danny Kahneman is more optimistic on this count than I am -- he thinks that the “suspicions” are unwarranted and that the perceptions can be corrected by the daisy-chain replications he suggests.

EY:        Does his suggestion of a daisy-chain of labs carrying out replications make sense? Are you willing to take up the suggestion, and would others in the field do the same? If so/not, why?

NS:        A daisy-chain of replications is an interesting idea that could provide information about the reliability of new results more quickly than meta-analyses can. I will participate in such a daisy-chain if the field decides that it should be implemented broadly. I will not participate if it is directed merely at a single area of research that happens to be the target of poorly supported “suspicions” voiced by critics who find a few isolated results implausible and ignore the majority of the available research. (Independently of this, I will obviously provide what is needed for others to replicate findings from my lab, but that is not the point of the Kahneman proposal.)