Summary: Registered Reports are a new way of doing and publishing research that might lead to a quite radical change in scientific practice. The new format is a partial solution to the replication crisis in science. How do Registered Reports work? Initially, only the theoretical motivation for the study and a very precise methodology are submitted to the journal for peer review, before any data are collected or any analysis is performed. The paper is then accepted, rejected, or accepted conditional on revisions to the methodology. Only then are the data collected and the analyses performed. The paper is published regardless of the outcome (e.g., whether a drug worked or not). Here we argue that this approach is superior to traditional publishing. More widespread adoption of Registered Reports would change scientific practice for the better across many scientific disciplines, through increased replicability of studies, more theory-driven science, and potentially simplified grant-making. Advocacy for Registered Reports is relatively neglected by funders, and there is a unique window of opportunity to advocate for wider adoption of the format by more scientific journals. Despite some push-back from vested interests in the publishing industry, we believe that advocacy is tractable, and there is a chance that, with a little push through increased funding, Registered Reports might become the gold standard for hypothesis-driven science.


Table of Contents

What are Registered Reports?

Importance: What would be the advantages of more Registered Reports?

More methodologically rigorous research

What advantages do Registered Reports have over pre-registration?

Early peer review might lead to more theory-driven research and better methodologies

Increasing the credibility of non-randomized studies such as natural experiments

Improving the Efficiency of Grant and Journal Peer Review: Registered Reports Funding

Other potential applications to markets for ideas and prediction markets in science

High tractability due to right incentive structures

Incentive structures

Alternatives to Registered Reports: multiverse analysis and Bayes Factors

Criticism of Registered Reports

Neglectedness

References


What are Registered Reports?

Figure 1 shows a simple model of the Registered Reports review process:

Figure 1 (taken from [1], adapted from [2]): This figure shows a simplified model of the Registered Reports review process. Unlike the usual review format, an extra review stage is added after the study is designed and before the data are collected.

The process is described in more detail on the Center for Open Science website:

“Authors of Registered Reports initially submit a Stage 1 manuscript that includes an Introduction, Methods, and the results of any pilot experiments that motivate the research proposal. Following assessment of the protocol by editors and reviewers, the manuscript can then be offered in-principle acceptance (IPA), which means that the journal virtually guarantees publication if the authors conduct the experiment in accordance with their approved protocol. With IPA in hand, the researchers then implement the experiment. Following data collection, they resubmit a Stage 2 manuscript that includes the Introduction and Methods from the original submission plus the Results and Discussion. The Results section includes the outcome of the pre-registered analyses together with any additional unregistered analyses in a separate section titled “Exploratory Analyses”. Authors are also encouraged or required to share their data on a public and freely accessible archive such as OSF or Figshare and are encouraged to share data analysis scripts. The final complete article is published after this process is complete. A published Registered Report will thus appear very similar to a standard research report but will give readers confidence that the hypotheses and main analyses are free of questionable research practices. (Figure 1)” [3]

Figure 2 (adapted from [4]): The figure shows a more detailed submission pipeline and workflow for Registered Reports.

More information can be found in the Frequently Asked Questions on the Center for Open Science website [5].

For a very detailed academic treatment of Registered Reports, see [6]; for a treatment of Registered Reports in the popular press, see [7].

Importance: What would be the advantages of more Registered Reports?

Registered Reports might open up a very different mode of scientific practice. Across scientific fields, from social science to physics, research papers often have surprisingly similar characteristics in terms of structure, methodology, and statistical tests. This makes it possible for Registered Reports to affect many different areas of hypothesis-driven science at once. Improving this practice (the publishing of scientific research) might therefore have a wider reach, and thus the potential for higher impact, than other meta-research (e.g., meta-research that merely aims at improving frequentist statistics might not be as widely applicable, because some fields use different statistics).

In the following sections, we list the likely outcomes of more widespread adoption of the Registered Reports format. The most often highlighted outcome is that research will become more replicable and methodologically sound, but Registered Reports have several other advantages.

We go into all of the advantages in more detail below but, in brief, more widespread adoption of Registered Reports might lead to the following:

More methodologically rigorous research

The most often cited argument for Registered Reports is that they will make published research more rigorous and therefore less prone to statistical and methodological errors.

For instance, questionable research practices are surprisingly common in the sciences and include [8]: failing to report all of a study’s dependent measures; deciding to collect more data after checking whether the results are significant; selectively reporting studies that “worked”; excluding data after checking the impact of doing so; and reporting unexpected findings as having been predicted from the start.

Most of these issues would be addressed by the introduction of the Registered Reports submission format, though it might not completely prevent them. Figure 3A shows data from a Nature survey of 1,576 researchers about research reproducibility. Our interpretation of these data is that the research practices most often viewed as leading to irreproducibility are precisely those that Registered Reports would address.

Figure 3A: A survey by Nature [9] of 1,576 researchers who took a brief online questionnaire on reproducibility in research shows the factors that lead to irreproducible research. We argue that many of these issues can be addressed through more widespread adoption of the Registered Reports format.

The following is a selected list of questionable research practices that would be improved by more widespread adoption of the Registered Reports format [10] (roughly in order of importance). Some of these practices are so widespread that they have received their own shorthand terms, such as ‘p-hacking’ and ‘HARKing’: p-hacking and data dredging, i.e., exploiting flexibility in data collection and analysis until statistically significant results emerge [11], [12], [13]; HARKing, i.e., hypothesizing after the results are known [14]; publication bias against negative or null results [16], [17]; low statistical power [20], [21], [22]; lack of replication [23]; and lack of data sharing [24].

Figure 3B taken from [19]: “Source: Allen, C. & Mehler, D. Preprint at PsyArXiv https://psyarxiv.com/3czyt (2018)”

Figure 4 taken from (Chambers et al., 2014): “The hypothetico-deductive model of the scientific method is compromised by a range of questionable research practices (QRPs; red). Lack of replication impedes the elimination of false discoveries and weakens the evidence base underpinning theory. Low statistical power increases the chances of missing true discoveries and reduces the likelihood that obtained positive effects are real. Exploiting researcher degrees of freedom (p-hacking) manifests in two general forms: collecting data until analyses return statistically significant effects, and selectively reporting analyses that reveal desirable outcomes. HARKing, or hypothesizing after results are known, involves generating a hypothesis from the data and then presenting it as a priori. Publication bias occurs when journals reject manuscripts on the basis that they report negative or undesirable findings. Finally, lack of data sharing prevents detailed meta-analysis and hinders the detection of data fabrication.”
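
As a rough illustration of one form of p-hacking named in the caption of Figure 4, collecting data until the analysis returns a statistically significant effect, consider the following minimal simulation sketch in Python (our own illustration, assuming numpy and scipy are available; it is not taken from any of the cited sources). Even though the true effect is exactly zero, checking the p-value after every batch of observations and stopping at the first “significant” result inflates the false-positive rate well above the nominal 5%:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping(max_n=200, batch=10, alpha=0.05):
    """One simulated study on pure noise, re-testing after every batch."""
    data = []
    while len(data) < max_n:
        data.extend(rng.standard_normal(batch))  # true effect is zero
        _, p = stats.ttest_1samp(data, 0.0)
        if p < alpha:
            return True  # stop early and report a "significant" effect
    return False

n_sims = 2000
rate = sum(optional_stopping() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with optional stopping: {rate:.1%}")
# Typically around 20-30%, versus 5% for a single pre-specified test.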

What advantages do Registered Reports have over pre-registration?

Registered Reports can be seen as a special, more extensive form of pre-registration of trials. Pre-registration is already standard practice for clinical trials in medicine [26].

Munafò and colleagues [27] say the following about pre-registration:

“Pre-registration of study protocols for randomized controlled trials in clinical medicine has become standard practice. In its simplest form, it may simply comprise the registration of the basic study design, but it can also include a detailed pre-specification of the study procedures, outcomes, and statistical analysis plan.”

Munafò and colleagues [28] list many advantages of pre-registration that also apply to Registered Reports (see footnote [29]).

Munafò and colleagues [30] further write:

“The strongest form of pre-registration involves both registering the study (with a commitment to make the results public) and closely pre-specifying the study design, primary outcome, and analysis plan in advance of conducting the study or knowing the outcomes of the research. In principle, this addresses publication bias by making all research discoverable, whether or not it is ultimately published, allowing all of the evidence about a finding to be obtained and evaluated. It also addresses outcome switching, and P-hacking more generally, by requiring the researcher to articulate analytical decisions prior to observing the data, so that these decisions remain data-independent.”

The principal differences between pre-registration and Registered Reports are that Registered Reports add peer review of the full rationale and methodology before any data are collected, and that in-principle acceptance guarantees publication regardless of the results.

For this reason, simple pre-registration might not be as good as Registered Reports. For instance, in cancer trials, the descriptions of what will be measured are often of low quality (i.e., vague), leading to ‘outcome switching’ (switching between planned and published outcomes) [31], [32]. Moreover, data processing can often involve very many seemingly reasonable options for excluding or transforming data [33], which can then be used for data dredging even in pre-registered trials (“With 20 binary choices, 2^20 = 1,048,576 different ways exist to analyze the same data.” [34]). Theoretically, pre-registration could be more exhaustive and precise, but in practice it rarely is, because it is not peer-reviewed.
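
To make the quoted arithmetic concrete: every binary data-processing decision doubles the number of possible analysis pipelines, so 20 such decisions yield 2^20 = 1,048,576 pipelines. The short sketch below (our own illustration; the decision names are hypothetical examples) enumerates the pipelines implied by a few such choices:

from itertools import product

# Hypothetical binary data-processing decisions a researcher might face.
decisions = {
    "exclude_outliers": (True, False),
    "log_transform_outcome": (True, False),
    "control_for_age": (True, False),
}  # ...17 more such decisions would complete the quoted example

pipelines = list(product(*decisions.values()))
print(len(pipelines))   # 2**3 = 8 pipelines for three decisions
print(2 ** 20)          # 1048576 pipelines for twenty decisions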

To give a concrete example: imagine a clinical trial that tests the effect of an antidepressant on self-reported well-being collected over the course of 5 months. This trial might even be pre-registered on a trial registry such as clinicaltrials.gov. However, even if the study stipulates its outcome measures there, there might still be considerable flexibility in how to analyze the results. For instance, one can construct different measures of well-being, split the data into subgroups based on gender, take different averages of the tests across the 5 months, and so on. This can introduce false positives. This is sometimes referred to as ‘researcher degrees of freedom’: there are many different analysis pathways, and it is unclear what is exploratory and what is confirmatory research; this practice is common [35]. In contrast, in a Registered Report, one would have to write the complete methodology section before collecting the data, and thus concretely specify which tests are going to be run. This would reduce the number of false positives. Moreover, there is the issue of outcome switching, which refers to discrepancies between the pre-registered outcomes and what is actually reported in the published literature. Outcome switching in clinical trials is common, with estimates as high as 31% [36]. Discrepancies such as dropping or changing pre-registered outcomes, or adding new outcomes that were not pre-registered, lead to false positives being reported and thus distort the literature [37]. Registered Reports would help with this issue because outcomes cannot be switched as easily as with simple pre-registration. For instance, reviewers might rarely cross-check which outcomes were pre-registered on clinicaltrials.gov, but reviewers and readers would notice if the analysis and the outcomes differed from the registered methodology section of a Registered Report.
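
The inflation of false positives from this kind of flexibility is easy to demonstrate. The sketch below (again our own illustration, assuming numpy and scipy) simulates a trial in which the drug has no effect at all, but the analyst may choose among five outcome measures and three sample splits; reporting whichever of the fifteen tests comes out significant produces a “finding” far more than 5% of the time:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_patients, n_outcomes = 2000, 100, 5

hits = 0
for _ in range(n_sims):
    treated = rng.integers(0, 2, n_patients).astype(bool)  # random assignment
    female = rng.integers(0, 2, n_patients).astype(bool)
    outcomes = rng.standard_normal((n_patients, n_outcomes))  # drug does nothing
    everyone = np.ones(n_patients, dtype=bool)
    p_values = [
        stats.ttest_ind(outcomes[group & treated, k],
                        outcomes[group & ~treated, k]).pvalue
        for k in range(n_outcomes)
        for group in (everyone, female, ~female)  # 3 analysis paths per outcome
    ]
    hits += min(p_values) < 0.05  # report the "best" of 15 analyses

print(f"Chance of at least one significant result: {hits / n_sims:.1%}")
# Far above the nominal 5%; a Registered Report fixes one analysis in advance.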

Early peer review might lead to more theory-driven research and better methodologies

Registered Reports might lead researchers to think more about the underlying theories that can explain data beyond what they find in their experiments. We believe that this is an undervalued advantage of the format, perhaps even more important than its role in combating the poor methodology described above.

Recently, some researchers have criticized an increasing trend towards publishing empirical studies, with an emphasis on data and results and too little focus on the underlying and unifying theory that might explain existing and future data, i.e., generalizable theories that help make better predictions.

For instance, in neuroscience, researchers have argued that there is an overemphasis on experimental results and too little emphasis on theory [38]. Figure 5 shows the number of pages of the standard neuroscience textbook “Principles of Neural Science” by Eric Kandel et al. [39]. Of course, the growth is partly due to increased interest in neuroscience generally. But it also highlights the relative space that experimental results take up in the book. Researchers have used this point to argue for more focus on unifying theory that could explain more data, rather than amassing more data through experimental studies.

Figure 5 taken from [40]: “Why a computational approach to the brain?”.

A similar trend, and similar criticism, has emerged in economics. Figure 6 shows that more papers across all economic subfields (e.g., microeconomics, macroeconomics, labor economics, finance, development economics) are empirical as opposed to theoretical. As in neuroscience, there might be an overemphasis on data, because data can be collected easily and computers make it easy to analyze. In contrast, it is harder to come up with good theories and to publish theoretical papers. Some researchers argue that since the late 1990s there has been “a partial abandonment of economic theory in applied work in the ‘experimentalist paradigm’” [41]. For an alternative view, i.e., that the case for an empirical revolution in economics has been overstated, see [42]. While there might not be a very clear-cut distinction between empirical and theoretical work, and empirical work is certainly needed, our reading of both the neuroscience and economics literatures is that there is too little theory and too much focus on empirical data analysis.

Figure 6 taken from [43]: “In 1963, 1973 and 1983, the majority of the articles published in the American Economic Review, Journal of Political Economy and Quarterly Review of Economics, three of the field’s most influential journals, were works of theory -- with theory's dominance peaking in 1983. By 2011, theory's share was down to 27.9 percent.”

How would Registered Reports help with the overabundance of data and empirically focused work and the lack of theory? There is some evidence that Registered Reports force researchers to think more about the theoretical underpinnings of their work. For instance, the editors of a special issue of “Comparative Political Studies” that tested the Registered Reports format concluded that:

"The independent evaluations of the four special issue editors were in complete agreement regarding the rigor and focus of the reviews. All four of us were struck by the reviewers’ extensive focus on each manuscript’s theory and substance. The reviews were in comparable length to a regular journal review but did not have the same focus on the interpretation of results. Reviewers obviously made comments on the methodology, control variables, and issues with the empirical research design. But we judged these reviews as focusing much more on the “substance” of the manuscript and the relationship between the question, the theory, research design, and the potential contribution. [...] We believe that this outcome could very well be the greatest success of the special issue. [...] Our main findings from this exercise are in retrospect intuitive, but they were largely unanticipated. First, we found that reviewers placed a much greater focus on theory, the importance of the question, and most notably the relationship between theory and research design. This last point is worth emphasizing as some of our submissions had important theoretical contributions and rigorous research designs, but reviewers consistently commented on weak links between theory and analysis." [emphasis mine] [44]

This quote suggests that Registered Reports might indeed have the potential to improve theorizing in the sciences.

This also suggests that it is much better to have peer review at an earlier stage, when changes to the methodology and the analysis are still possible, creating a more collaborative process between investigators and reviewers, with potential for large cost savings. The current peer-review process is often seen as (unnecessarily) combative. Imagine a study with a flaw in its methodology, such as a clinical trial that includes people of an age range for which there is reason to think the drug might not work as well. Perhaps the investigator does not know that the drug works poorly for people in that age range, but reviewers highlight this before the study is run. It would then be trivial, and in the investigator’s best interest, to correct the design. However, if peer review happens after the data are collected and the manuscript is written up, this information is much less useful: the investigator might have to adjust for the flaw with statistical techniques, which is suboptimal, and the reviewer and the investigator might argue about it. In other words, it is better to get an outsider’s perspective earlier in the experimental process rather than later. A similar incident actually happened during the review of one Registered Report for a neuroscience journal [45], where reviewers pointed out problems in the methodology that were then fixed [46]. Reviewers might also suggest collecting additional data to test additional hypotheses, and the marginal cost of collecting such data at this stage is much lower than after a traditional peer review.

Increasing the credibility of non-randomized studies such as natural experiments

If natural experiments based on observational data were submitted as Registered Reports, they might become much more credible. Randomized controlled trials (RCTs; also called experiments) are often referred to as the gold standard of scientific evidence because they provide clearer causal evidence than other methods. Unlike in observational studies, where observed correlations are often not causal (cf. correlation does not imply causation [47]), RCTs often shed light on causality. However, some non-randomized methods, such as natural experiments (e.g., quasi-experimental approaches, regression discontinuities, difference-in-differences designs, and computational modeling), have recently become more popular. Natural experiments use observational, non-randomized data and exploit natural variation to mirror randomization; they have become increasingly popular in fields such as economics [48], [49], public health [50], [51], political science [52], and even biology [53].
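
To illustrate the core logic shared by many of these designs, here is a minimal difference-in-differences sketch in Python (our own toy example with synthetic data, not drawn from any of the cited studies): the before/after change in an untreated control region is subtracted from the change in the treated region, so that a shared time trend cancels out and only the policy effect remains:

import numpy as np

rng = np.random.default_rng(2)
n = 1000  # hypothetical individuals observed per region and period

def sample(region_level, time_trend, policy_effect):
    """Synthetic outcomes: region baseline + shared trend + policy + noise."""
    return region_level + time_trend + policy_effect + rng.standard_normal(n)

treat_pre, treat_post = sample(5.0, 0.0, 0.0), sample(5.0, 1.5, 2.0)
ctrl_pre, ctrl_post = sample(3.0, 0.0, 0.0), sample(3.0, 1.5, 0.0)

did = ((treat_post.mean() - treat_pre.mean())
       - (ctrl_post.mean() - ctrl_pre.mean()))
print(f"Difference-in-differences estimate: {did:.2f}")  # close to the true 2.0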

Natural experiments have some major advantages. They can observe phenomena that would be virtually impossible or very costly to study in an RCT, such as following thousands of people over long stretches of time (one could not randomize that many people), which gives them very high statistical power. They can also look at effects that would be unethical to randomize experimentally. At the same time, unlike purely correlational research, they suggest a causal mechanism. For instance, one recent natural experiment exploited changes in a nationwide iodine-fortification policy in India to compare test scores of school-aged children in naturally iodine-sufficient and iodine-deficient districts over time [54]. The authors found that children in iodine-poor districts who were exposed to an almost random decrease in iodine-fortified salt in early life were ‘1-8 percentage points less likely to be able to do any math or reading’. Another study, from China, showed that for females in rural areas, a one-standard-deviation reduction in goiter rates results in a roughly 15% increase in cognitive ability as measured by standardized math and verbal tests; females who benefited from the newly iodized salt also obtained 0.5 additional years of schooling and higher educational attainment. Other natural experiments show an even more dramatic effect of salt iodization [55], up to an increase of 15 IQ points. In contrast, while some randomized controlled trials suggest that correcting iodine deficiency in children improves cognitive function (summarized in a meta-analysis [56]), the evidence is not as clear-cut, because sample sizes in RCTs are much lower, participants were not followed over their whole development, and it is deemed unethical to deprive people of micronutrients they need. Why, then, do researchers not rely more on natural experiments?

Natural experiments have one commonly cited major disadvantage compared to RCTs: there are even more plausible ways to analyze the data, and even more researcher degrees of freedom, than in RCTs. For example, in a large observational dataset one can usually test a greater number of hypotheses and run more statistical tests than in an RCT, where it is usually costly to obtain more data [57]. For this reason, researchers using the natural-experiment approach often run a vast number of falsification tests (for instance, see [58]), but this does not completely overcome the issue.

But if natural experiments based on observational data were submitted as Registered Reports, they might become much more credible. Indeed, there is a first example of a natural experiment that was also a Registered Report [59]. For some data, it might even be possible to require researchers to submit Registered Reports before they can gain access to the data, preventing them from peeking at the data before submitting their analysis.

In sum, Registered Reports have the potential to substantially increase the credence we put in natural experiments, which are increasingly popular and can have many policy-relevant implications. Recall the salt-iodization example from above: if the natural experiments on salt iodization that show very large effect sizes had been submitted as Registered Reports and shown the same results, our credence in them would increase substantially, and we would weight the evidence much more highly. We believe this is an underemphasized point, even among Registered Reports advocates.

Improving the Efficiency of Grant and Journal Peer Review: Registered Reports Funding

Grant funding could be awarded based on the first stage of peer review: researchers could submit their introduction, methodology, and analysis plan, and peer reviewers could award funding relative to the cost of the study. This could lead to more meritocratic funding decisions, where the most interesting studies are run instead of those proposed by the researchers with the best track records (who are currently the most likely to be awarded grants). One paper argued that “Combining grant funding and publication decisions into a single, two-stage process promises to dramatically reduce the burden on reviewers” [60]. A few funders and journals are currently testing the awarding of grant funding based on the first stage of Registered Reports. For instance, the journal Nicotine and Tobacco Research has partnered with Cancer Research UK [61], and the journal PLOS ONE has partnered with the Children’s Tumor Foundation for the “Drug Discovery Initiative Registered Report (DDIRR) 2017 Awards” [62].

Other potential applications to markets for ideas and prediction markets in science

There are some other innovations in science that the Registered Reports format might help make more viable, such as prediction markets on the replicability of studies [63], [64], [65], [66], [67]. However, we do not believe there is a very high likelihood of these innovations emerging solely through Registered Reports advocacy.

High tractability due to right incentive structures

We think there is perhaps a unique window of opportunity for Registered Reports to quickly become the standard in scientific publishing. Registered Reports received high-profile coverage very recently, in September and October 2018, in Science [68] and Nature [69], and more and more journals are now accepting Registered Reports (see Figure 7). We believe additional funding for advocacy could capitalize on this momentum.

Figure 7 taken from [70]: “Planning ahead: Study preregistrations on the Open Science Framework (OSF) are doubling every year”; 140 journals have introduced Registered Reports.

Also, given that pre-registration is becoming standard practice in clinical trials, and Registered Reports simply take pre-registration to its logical conclusion (i.e., pre-registration should be as exhaustive and as concrete as possible), we think the core idea of Registered Reports (ever more stringent pre-registration) will very likely succeed eventually. This does not mean that the currently proposed manifestation of Registered Reports will succeed, or that success will necessarily happen in the near term. But given the current traction of pre-registration, and the awareness of the drawbacks that come with not pre-registering analyses, it seems quite unlikely to us that researchers would continue indefinitely without it.

In 2013, an open letter by scientists calling on all empirical journals in the life sciences to offer Registered Reports gathered more than 80 signatures [71]. Many of the signatories are respected academics at top universities. Another petition, by 162 researchers in linguistics, has called for linguistics journals to adopt Registered Reports [72]. The original papers introducing the concept of Registered Reports have each been cited about 100 times at the time of writing [73], [74].

For instance, some economists have recently endorsed experimenting with Registered Reports in economics. They argue that this is perhaps especially pertinent given the rise of experimental studies and pre-analysis plans in economics, as evidenced by the rapid growth of the American Economic Association registry, which will likely facilitate the eventual acceptance of Registered Reports [75].

The authors of one Registered Report [76] stated that they viewed the Registered Report submission process positively and would submit Registered Reports again [77], suggesting that the format works in practice.

Incentive structures

More generally, we believe advocacy for Registered Reports is quite tractable because incentive structures are aligned such that most stakeholders would benefit from increased adoption of Registered Reports. We think that lack of awareness and misunderstanding of the format are the main problems holding it back – precisely the things that advocacy for Registered Reports would address.

The public has an interest in Registered Reports simply because they will improve the quality of research, of which the public is the ultimate beneficiary. The replication crisis in science is increasingly covered in the popular media, which makes the public aware of the issue. Similarly, funders, who should have a vested interest in supporting higher-quality research, might become increasingly receptive to funding Registered Reports (some funders already do; see the section above on improving grant-making). Not only would they fund better research; as mentioned above, they might also be able to make grants based on the first stage of a Registered Report alone. By doing so, they can fund concrete research projects instead of vaguer grant proposals.

For researchers, the incentives to publish under the format are largely favorable [78]: in-principle acceptance guarantees publication regardless of the results, and early peer review improves a study before resources are spent collecting data. The main disincentives are the additional up-front work at Stage 1, the delay before data collection can begin, and a perceived loss of analytical flexibility [79], [80].

Journals and academic publishing houses have perhaps the weakest incentives. One researcher we talked to in the course of our research told us that they had spoken with an editor of a high-impact journal who worried about adopting the Registered Reports format because it would increase the number of published null results in the journal; since null results are typically cited less often, this would hurt the journal’s impact factor. More generally, for-profit academic publishing houses are incentivized to publish as many articles and journals as possible, regardless of quality, which they then sell to government-funded academic libraries and universities [81].

Alternatives to Registered Reports: multiverse analysis and Bayes Factors

There are several alternatives to Registered Reports that we are aware of that would address a similar set of methodological issues within science: multiverse analysis [82], Bayes factors [83], conditional equivalence testing [84], lowering the p-value threshold [85], [86], and cross-validation [87], [88].

In sum, our all-things-considered judgment is that none of these alternatives is better than Registered Reports.

Criticism of Registered Reports

We have reviewed the criticisms of Registered Reports and pre-registration, which we summarize here for completeness. Two peer-reviewed papers have argued against pre-analysis plans [89], [90].

There have also been some critics of Registered Reports in general – the criticism is varied, and the interested reader is referred to the footnote [91]. We believe that most of the criticisms are based on various misunderstandings, for instance that all analyses must be predicted in advance and that there is no room for exploratory analysis. A good response to these criticisms can be found in a recent book by Chris Chambers (a free relevant excerpt can be found here [92]; the Google Books preview of the book can be found in footnote [93]) and in the openly available Registered Reports FAQs of the Center for Open Science [94]. Given that there are quite a few critics, this admittedly makes Registered Reports advocacy less tractable and thus less effective. Ideally, one wants to promote policies that are “pulling the policy rope sideways” rather than getting into a policy tug-of-war [95].

There is one very recent paper that we believe offers more valid and constructive criticism of the format [96]. The authors highlighted a number of important implementation issues: for instance, only about half of accepted Stage 1 protocols were publicly available, and only 50% of journals required independent, public registration of accepted Stage 1 protocols. However, the authors ultimately concluded:

“With these caveats in mind, Registered Reports seem to be a promising initiative that improve the transparency, validity, and credibility of registered studies. Continuous evaluation of their performance will be helpful to assess whether they meet their goals and how their adoption can be optimized.”

A recent paper by Chris Chambers has addressed these criticisms [97].

Neglectedness

The Laura and John Arnold Foundation is a multi-billion-dollar foundation with a focus area on research integrity [98]. It has awarded grants to the Center for Open Science in the past [99], which does some advocacy for Registered Reports, but this is only one of the Center’s many initiatives. We are not aware of any other funders that fund advocacy for Registered Reports.

Also, as far as we are aware, there do not seem to be any researchers who work full-time or even part-time on Registered Reports advocacy.

References


[1] "Figure 1: The Registered Report workflow. : Nature Human Behaviour." http://www.nature.com/articles/s41562-016-0034/figures/1. Accessed 18 Jan. 2017.

[2] "Registered Reports - Center for Open Science." https://cos.io/our-services/registered-reports/. Accessed 18 Jan. 2017.

[3] "Registered Reports - Center for Open Science." https://cos.io/our-services/registered-reports/. Accessed 20 Jan. 2017.

[4] "Registered Reports - Center for Open Science." https://cos.io/our-services/registered-reports/. Accessed 18 Jan. 2017.

[5] "Registered Reports - Center for Open Science." https://cos.io/our-services/registered-reports/. Accessed 20 Jan. 2017.

[6] "Instead of “playing the game” it is time to change the rules - AIMS Press." http://www.aimspress.com/article/10.3934/Neuroscience.2014.1.4. Accessed 17 Jan. 2017.

[7] "Trust in science would be improved by study pre-registration | Science ...." 5 Jun. 2013, https://www.theguardian.com/science/blog/2013/jun/05/trust-in-science-study-pre-registration. Accessed 17 Jan. 2017.

[8] "Measuring the Prevalence of Questionable Research Practices With ...." https://www.cmu.edu/dietrich/sds/docs/loewenstein/MeasPrevalQuestTruthTelling.pdf. Accessed 21 Jan. 2017.

[9] "1,500 scientists lift the lid on reproducibility : Nature News & Comment." 25 May. 2016, https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970. Accessed 24 Sep. 2018.

[10] "Instead of “playing the game” it is time to change the rules: Registered ...." 8 May. 2014, http://www.aimspress.com/article/10.3934/Neuroscience.2014.1.4/fulltext.html. Accessed 22 Jan. 2017.

[11] "Data dredging - Wikipedia." https://en.wikipedia.org/wiki/Data_dredging. Accessed 22 Jan. 2017.

[12] "False-Positive Psychology: Undisclosed Flexibility in Data Collection ...." http://journals.sagepub.com/doi/pdf/10.1177/0956797611417632. Accessed 22 Jan. 2017.

[13] "P-hacking - FiveThirtyEight." https://projects.fivethirtyeight.com/p-hacking/. Accessed 24 Jan. 2017.

[14] "HARKing: hypothesizing after the results are known. - NCBI." https://www.ncbi.nlm.nih.gov/pubmed/15647155. Accessed 23 Jan. 2017.

[15] "A manifesto for reproducible science : Nature Human Behaviour." 10 Jan. 2017, http://www.nature.com/articles/s41562-016-0021?flip=true. Accessed 24 Jan. 2017.

[16] "“Positive” Results Increase Down the Hierarchy of the Sciences - Plos." 7 Apr. 2010, http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0010068. Accessed 23 Jan. 2017.

[17] "Publication bias: what is it? How do we measure it? How do we avoid ...." 4 Jul. 2013, https://www.dovepress.com/publication-bias-what-is-it-how-do-we-measure-it-how-do-we-avoid-it-peer-reviewed-article-OAJCT. Accessed 23 Jan. 2017.

[18] "PsyArXiv Preprints | Open Science challenges, benefits and tips in ...." 17 Oct. 2018, https://psyarxiv.com/3czyt/. Accessed 18 Oct. 2018.

[19] "First analysis of 'pre-registered' studies shows sharp rise in ... - Nature." 24 Oct. 2018, https://www.nature.com/articles/d41586-018-07118-1. Accessed 25 Oct. 2018.

[20] "Power failure: why small sample size undermines the ... - Nature." 10 Apr. 2013, http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html. Accessed 23 Jan. 2017.

[21] "Power failure: why small sample size undermines the ... - Nature." 10 Apr. 2013, http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html. Accessed 23 Jan. 2017.

[22] "Confidence and precision increase with high statistical power : Nature ...." 3 Jul. 2013, http://www.nature.com/nrn/journal/v14/n8/full/nrn3475-c4.html. Accessed 23 Jan. 2017.

[23] "The Reproducibility Wars: Successful, Unsuccessful, Uninterpretable ...." http://clinchem.aaccjnls.org/content/63/5/943. Accessed 17 Oct. 2018.

[24] "Willingness to Share Research Data Is Related to the Strength ... - Plos." 2 Nov. 2011, http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026828. Accessed 23 Jan. 2017.

[25] “We actually wrote the complete R code before data collection, and once data were collected, the actual analysis was straight-forward and very efficient,” from "Meet the authors: Ratner et al. talk Registered Reports | Publishing ...." 2 Dec. 2016, https://blogs.royalsociety.org/publishing/meet-the-authors-ratner-et-al-talk-registered-reports/. Accessed 23 Jan. 2017.

[26] "A manifesto for reproducible science : Nature Human Behaviour." 10 Jan. 2017, http://www.nature.com/articles/s41562-016-0021?platform=hootsuite. Accessed 17 Jan. 2017.

[27] "A manifesto for reproducible science : Nature Human Behaviour." 10 Jan. 2017, http://www.nature.com/articles/s41562-016-0021?platform=hootsuite. Accessed 17 Jan. 2017.

[28] "A manifesto for reproducible science : Nature Human Behaviour." 10 Jan. 2017, http://www.nature.com/articles/s41562-016-0021?platform=hootsuite. Accessed 17 Jan. 2017.

[29] “[Pre-registration] was introduced to address two problems: publication bias and analytical flexibility (in particular outcome switching in the case of clinical medicine). Publication bias 47, also known as the file drawer problem 48, refers to the fact that many more studies are conducted than published. Studies that obtain positive and novel results are more likely to be published than studies that obtain negative results or report replications of prior results 47,49,50. The consequence is that the published literature indicates stronger evidence for findings than exists in reality. Outcome switching refers to the possibility of changing the outcomes of interest in the study depending on the observed results. A researcher may include ten variables that could be considered outcomes of the research, and — once the results are known — intentionally or unintentionally select the subset of outcomes that show statistically significant results as the outcomes of interest. The consequence is an increase in the likelihood that reported results are spurious by leveraging chance, while negative evidence gets ignored. This is one of several related research practices that can inflate spurious findings when analysis decisions are made with knowledge of the observed data, such as selection of models, exclusion rules and covariates. Such data-contingent analysis decisions constitute what has become known as P-hacking 51, and pre-registration can protect against all of these.” … “It also effectively blinds the researcher to the outcome because the data are not collected yet and the outcomes are not yet known. This way the researcher’s unconscious biases cannot influence the analysis strategy“

“ "A manifesto for reproducible science : Nature Human Behaviour." 10 Jan. 2017, http://www.nature.com/articles/s41562-016-0021?platform=hootsuite. Accessed 17 Jan. 2017.

[30] "A manifesto for reproducible science : Nature Human Behaviour." 10 Jan. 2017, http://www.nature.com/articles/s41562-016-0021?platform=hootsuite. Accessed 17 Jan. 2017.

[31] “In oncology trials, primary outcome descriptions in ClinicalTrials.gov are often of low quality and may not reflect what is in the protocol, thus limiting the detection of modifications between planned and published outcomes.”

"Comparison of primary outcomes in protocols ... - Annals of Oncology." 22 Dec. 2016, http://annonc.oxfordjournals.org/content/early/2016/12/21/annonc.mdw682.short?rss=1. Accessed 20 Jan. 2017.

[32] "COMPare Trials." http://compare-trials.org/. Accessed 23 Jan. 2017.

[33] "Increasing Transparency Through a Multiverse ... - Sage Publications." http://journals.sagepub.com/doi/full/10.1177/1745691616658637. Accessed 23 Jan. 2017.

[34] "Meta-research: Why research on research matters - PLOS." 13 Mar. 2018, https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2005468. Accessed 17 Oct. 2018.

[35] ““We discuss in the context of several examples of published papers where data-analysis decisions were theoretically-motivated based on previous literature, but where the details of data selection and analysis were not pre-specified and, as a result, were contingent on data.” "The garden of forking paths: Why multiple comparisons can be a ...." 14 Nov. 2013, http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf. Accessed 23 Jan. 2017.

[36] “Twenty-seven studies were eligible for inclusion. The overall risk of bias among included studies was moderate to high. These studies assessed outcome agreement for a median of 65 individual trials (interquartile range [IQR] 25-110). The median proportion of trials with an identified discrepancy between the registered and published primary outcome was 31%; substantial variability in the prevalence of these primary outcome discrepancies was observed among the included studies (range 0% (0/66) to 100% (1/1), IQR 17-45%).” "Comparison of registered and published outcomes in randomized ...." 18 Nov. 2015, https://www.ncbi.nlm.nih.gov/pubmed/26581191. Accessed 21 Jan. 2017.

[37] “Why outcome switching matters: Before carrying out a clinical trial, all outcomes that will be measured (e.g. blood pressure after one year of treatment) should be pre-specified in a trial protocol, and on a clinical trial registry. This is because if researchers measure lots of things, some of those things are likely to give a positive result by random chance (a false positive). A pre-specified outcome is much less likely to give a false-positive result. Once the trial is complete, the trial report should then report all pre-specified outcomes. Where reported outcomes differ from those pre-specified, this must be declared in the report, along with an explanation of the timing and reason for the change. This ensures a fair picture of the trial results. However, in reality, pre-specified outcomes are often left unreported, while outcomes that were not pre-specified are reported, without being declared as novel. This is an extremely common problem that distorts the evidence we use to make real-world clinical decisions. “ "COMPare - Methods - Tracking switched outcomes in clinical trials." http://compare-trials.org/methods. Accessed 21 Jan. 2017.

[38] https://youtu.be/wTYHF4LAKQI?t=2373 

[39] "Principles of Neural Science - Wikipedia." https://en.wikipedia.org/wiki/Principles_of_Neural_Science. Accessed 23 Jan. 2017.

[40] https://youtu.be/wTYHF4LAKQI?t=2373 

[41] "Theory and Measurement: Emergence, Consolidation and ... - NBER." http://www.nber.org/papers/w22253. Accessed 23 Jan. 2017.

[42] “If there is an emancipation of empirical under way, economics is not there yet. All in all, it seems that 1) theory dominating empirical work has been the exception in the history of economics rather than the rule 2) there is currently a reequilibration of theoretical and empirical work. But an emancipation of the latter? Not yet. And 3) what is dying, rather, is exclusively theoretical papers. “ ... "Is there really an empirical turn in economics?." 29 Sep. 2016, https://www.ineteconomics.org/perspectives/blog/is-there-really-an-empirical-turn-in-economics. Accessed 23 Jan. 2017.

[43] "How Economics Went From Theory to Data - Bloomberg View." 6 Jan. 2016, http://origin-www.bloombergview.com/articles/2016-01-06/how-economics-went-from-theory-to-data. Accessed 23 Jan. 2017.

[44] "Can Results-Free Review Reduce Publication ... - Sage Publications." http://journals.sagepub.com/doi/abs/10.1177/0010414016655539. Accessed 23 Jan. 2017.

[45] "Mu suppression – A good measure of the human ... - ScienceDirect." 20 May. 2014, http://www.sciencedirect.com/science/article/pii/S0010945216300570. Accessed 23 Jan. 2017.

[46] “a reviewer pointed out an issue with her control stimuli. If she had conducted the study following the standard format, reviewers would only be able to point this out retrospectively when there is no option to change it” ... "Publishing a Registered Report as a Postgraduate Researcher ...." 9 Sep. 2016, http://blog.efpsa.org/2016/09/09/publishing-a-registered-report-as-a-postgraduate-researcher/. Accessed 23 Jan. 2017.

[47] "Correlation does not imply causation - Wikipedia." https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation. Accessed 23 Jan. 2017.

[48] "The Empirical Economist's Toolkit: From Models to Methods by ... - SSRN." 30 May. 2015, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2611236. Accessed 23 Jan. 2017.

[49] "Natural Experiments in Macroeconomics." 19 Jan. 2016, http://www.nber.org/papers/w21228. Accessed 23 Jan. 2017.

[50] "Natural experiments: An overview of methods, approaches, and ...." 16 Dec. 2016, http://eprints.gla.ac.uk/131594/. Accessed 23 Jan. 2017.

[51] "Natural experiments: an underused tool for public health? - NCBI." https://www.ncbi.nlm.nih.gov/pubmed/15913681. Accessed 23 Jan. 2017.

[52] "Can Results-Free Review Reduce Publication Bias? The Results and ...." http://cps.sagepub.com/content/early/2016/07/01/0010414016655539.abstract. Accessed 23 Jan. 2017.

[53] "Quasi-experimental methods enable stronger ... - Science Direct." https://www.sciencedirect.com/science/article/pii/S1439179116301323. Accessed 26 Sep. 2018.

[54] "The effect of iodine deficiency on test scores and child mortality in rural ...." 14 Dec. 2016, https://www.isid.ac.in/~epu/acegd2016/papers/WiktoriaTafesse.pdf. Accessed 26 Sep. 2018.

[55] "Did iodized salt raise the IQ of 50 million Americans by 15 points? - the ...." 7 Jan. 2016, https://www.givingwhatwecan.org/post/2016/01/are-we-underestimating-benefits-salt-iodization/. Accessed 26 Sep. 2018.

[56] "Therapy of endocrine disease: Impact of iodine ... - NCBI." 2 Oct. 2013, https://www.ncbi.nlm.nih.gov/pubmed/24088547. Accessed 26 Sep. 2018.

[57] Even though even in RCTs it is possible to e.g. adjust for many random confounders such as sex that were not hypothesized and thus increase the researcher degrees of freedom and the number of type II errors due to the combinatorial explosion.

[58] "The Cognitive Effects of Micronutrient Deficiency: Evidence from Salt ...." http://www.nber.org/papers/w19233.pdf. Accessed 23 Jan. 2017.

[59] "Can Politicians Police Themselves? Natural Experimental Evidence ...." http://journals.sagepub.com/doi/abs/10.1177/0010414015626436. Accessed 23 Jan. 2017.

[60] "Registered Reports Funding - Oxford Journals - Oxford University Press." 6 Apr. 2017, https://academic.oup.com/ntr/article/19/7/773/3106460. Accessed 29 Sep. 2018.

[61] "Registered Reports Funding - Oxford Journals - Oxford University Press." 6 Apr. 2017, https://academic.oup.com/ntr/article/19/7/773/3106460. Accessed 29 Sep. 2018.

[62] "CTF trials Registered Reports | EveryONE: The PLOS ONE blog." 26 Sep. 2017, https://blogs.plos.org/everyone/2017/09/26/registered-reports-with-ctf/. Accessed 2 Oct. 2018.

[63] "Using prediction markets to estimate the reproducibility of scientific ...." 15 Dec. 2015, http://www.pnas.org/content/112/50/15343.abstract. Accessed 23 Jan. 2017.

[64] "Evaluating replicability of laboratory experiments in economics | Science." 3 Mar. 2016, http://science.sciencemag.org/content/early/2016/03/02/science.aaf0918. Accessed 23 Jan. 2017.

[65] "Prediction market - Wikipedia." https://en.wikipedia.org/wiki/Prediction_market. Accessed 23 Jan. 2017.

[66] "Using prediction markets to estimate the reproducibility of scientific ...." 15 Dec. 2015, http://www.pnas.org/content/112/50/15343.abstract. Accessed 23 Jan. 2017.

[67] "Social Sciences Replication Project." http://www.socialsciencesreplicationproject.com/. Accessed 23 Jan. 2017.

[68] "A recipe for rigor | Science." 20 Sep. 2018, http://www.sciencemag.org/feature/recipe-rigor. Accessed 27 Sep. 2018.

[69] "First analysis of 'pre-registered' studies shows sharp rise in ... - Nature." 24 Oct. 2018, https://www.nature.com/articles/d41586-018-07118-1. Accessed 25 Oct. 2018.

[70] "A simple strategy to avoid bias—declaring in advance what ... - Science." 21 Sep. 2018, http://science.sciencemag.org/node/715589.full . Accessed 29 Sep. 2018.

[71] "Trust in science would be improved by study pre-registration | Science ...." 5 Jun. 2013, https://www.theguardian.com/science/blog/2013/jun/05/trust-in-science-study-pre-registration. Accessed 17 Jan. 2017.

[72] https://docs.google.com/spreadsheets/d/17dLaqKXcjyWk1thG8y5C3_fHXXNEqQMcGWDY62BOc0Q/edit#gid=0 

[73] "Registered Reports: Registered Reports: Social ... - Hogrefe eContent." 1 Jan. 2014, http://econtent.hogrefe.com/doi/full/10.1027/1864-9335/a000192. Accessed 17 Jan. 2017.

[74] "Registered Reports: a new publishing initiative at Cortex. - NCBI." 26 Dec. 2012, https://www.ncbi.nlm.nih.gov/pubmed/23347556. Accessed 17 Jan. 2017.

[75] "Transparency, Reproducibility, and the Credibility of Economics ...." 1 Jan. 2017, http://emiguel.econ.berkeley.edu/assets/miguel_research/78/Transparency-JEL-2016-12-20.pdf. Accessed 16 Jan. 2017.

[76] "The effects of exposure to objective coherence on perceived meaning ...." 23 Nov. 2016, http://rsos.royalsocietypublishing.org/content/3/11/160431. Accessed 23 Jan. 2017.

[77] [Author KR] “I would absolutely submit more Registered Reports to Royal Society Open Science. I was honestly surprised to learn that we were the first Registered Report to be accepted by the journal because the process was so smooth. Although I found the Registered Reports process to be more demanding upfront, I whole-heartedly believe that it was worth it. Once we had Stage 1 completed and approved, everything else felt like it fell into place and it was a welcomed change in contrast to the traditional publication process. [...] Author FT: All of us have a strong commitment to open science and pre-registration, so a future submission to Royal Society Open Science is likely. “ … "Meet the authors: Ratner et al. talk Registered Reports | Publishing ...." 2 Dec. 2016, https://blogs.royalsociety.org/publishing/meet-the-authors-ratner-et-al-talk-registered-reports/. Accessed 23 Jan. 2017.

[78] "Publishing a Registered Report as a Postgraduate Researcher ...." 9 Sep. 2016, http://blog.efpsa.org/2016/09/09/publishing-a-registered-report-as-a-postgraduate-researcher/. Accessed 24 Jan. 2017.

[79] "Will pre-registration of studies be good for psychology ... - Google Sites." https://sites.google.com/site/speechskscott/SpeakingOut/willpre-registrationofstudiesbegoodforpsychology. Accessed 26 Jan. 2017.

[80] "Will pre-registration of studies be good for psychology ... - Google Sites." https://sites.google.com/site/speechskscott/SpeakingOut/willpre-registrationofstudiesbegoodforpsychology. Accessed 26 Jan. 2017.

[81] "Is the staggeringly profitable business of scientific publishing bad for ...." 27 Jun. 2017, https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science. Accessed 28 Sep. 2018.

[82] "Increasing Transparency Through a Multiverse ... - Sage Publications." http://journals.sagepub.com/doi/full/10.1177/1745691616658637. Accessed 23 Jan. 2017.

[83] "How Bayes factors change scientific practice - ScienceDirect." 7 Jan. 2016, http://www.sciencedirect.com/science/article/pii/S0022249615000607. Accessed 26 Jan. 2017.

[84] "Conditional equivalence testing: An alternative remedy for ... - PLOS." 13 Apr. 2018, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0195145. Accessed 25 Oct. 2018.

[85] "The Proposal to Lower P Value Thresholds to .005 | Research ...." 22 Mar. 2018, https://jamanetwork.com/journals/jama/fullarticle/2676503. Accessed 26 Sep. 2018.

[86] "Lowering the P Value Threshold | JAMA | JAMA Network." 4 Sep. 2018, https://jamanetwork.com/journals/jama/fullarticle/2698915. Accessed 26 Sep. 2018.

[87] "Cross-validation (statistics) - Wikipedia." https://en.wikipedia.org/wiki/Cross-validation_(statistics). Accessed 25 Oct. 2018.

[88] "Ten ironic rules for non-statistical reviewers. - NCBI." 13 Apr. 2012, https://www.ncbi.nlm.nih.gov/pubmed/22521475. Accessed 25 Oct. 2018.

[89] "Pre-Analysis Plans Have Limited Upside, Especially Where ...." https://pubs.aeaweb.org/doi/pdf/10.1257/jep.29.3.81. Accessed 28 Sep. 2018.

[90] "Promises and Perils of Pre-Analysis Plans - MIT Economics." https://economics.mit.edu/files/10654. Accessed 28 Sep. 2018.

[91] "Will pre-registration of studies be good for psychology ... - Google Sites." 28 Jun. 2013, https://sites.google.com/site/speechskscott/SpeakingOut/willpre-registrationofstudiesbegoodforpsychology. Accessed 28 Sep. 2018.

[92] "A vaccine against bias | The Psychologist." 10 May. 2017, https://thepsychologist.bps.org.uk/vaccine-against-bias. Accessed 15 Oct. 2018.

[93] https://books.google.co.uk/books?id=qwhpDQAAQBAJ&lpg=PP1&dq=The%20Seven%20Deadly%20Sins%20of%20Psychology&pg=PA185#v=onepage&q&f=false

[94] "Registered Reports - The Center for Open Science." https://cos.io/rr/. Accessed 2 Oct. 2018.

[95] "Overcoming Bias : Policy Tug-O-War." 23 May. 2007, http://www.overcomingbias.com/2007/05/policy_tugowar.html. Accessed 24 Oct. 2018.

[96] "Mapping the universe of Registered Reports | Nature Human Behaviour." 1 Oct. 2018, https://www.nature.com/articles/s41562-018-0444-y. Accessed 2 Oct. 2018.

[97] "Protocol transparency is vital for Registered Reports | Nature Human ...." https://www.nature.com/articles/s41562-018-0449-6. Accessed 2 Oct. 2018.

[98] "Research Integrity - Laura and John Arnold Foundation." http://www.arnoldfoundation.org/initiative/research-integrity/. Accessed 11 Jan. 2017.

[99] "Center for Open Science - Wikipedia." https://en.wikipedia.org/wiki/Center_for_Open_Science. Accessed 11 Jan. 2017.