PSA Meta-research hub

META-RESEARCH HUB: Add your name, your project title, and a link to a document giving more information about your project. Make sure your document is open so that potential collaborators can add their comments.
Each entry lists: Proposer | Project title | Short description | Status | Planned starting date | Link to the project / email | Interested in the project? Add your email here.
Proposer: Patrick S. Forscher
Title: How many reviewers are required to obtain reliable evaluations of NIH R01 grant proposals?
Description: Uses data on 48 grant proposals that were reviewed by 412 reviewers. We use generalizability theory to investigate how the predicted reliability of judgments would change as the number of reviewers changes.
Status: Started
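The reliability-versus-reviewers question can be illustrated with the Spearman-Brown prophecy formula, the simplest special case of the generalizability-theory projection this project describes. This is only a sketch, not the project's actual analysis; the single-rater reliability value below is invented, not taken from the NIH data.

```python
def projected_reliability(single_rater_reliability: float, n_raters: int) -> float:
    """Spearman-Brown prophecy: reliability of the mean of n_raters ratings,
    given the reliability of a single rating."""
    r = single_rater_reliability
    return n_raters * r / (1 + (n_raters - 1) * r)

# Example: if one reviewer's score has reliability 0.2 (made-up value),
# how does the reliability of the averaged score grow as reviewers are added?
for k in (1, 2, 4, 8, 12):
    print(k, round(projected_reliability(0.2, k), 3))
```

The formula shows the diminishing returns at issue: each added reviewer raises the projected reliability of the averaged judgment, but by less than the one before.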
Proposer: Patrick S. Forscher
Title: What value do researchers place on different kinds of authorship?
Description: Sketch of the idea: take abstracts of real papers from different journals and ask researchers how much money they would be willing to pay for a certain kind of authorship (first, second, third, etc.) on each paper. With enough of these data, you could estimate the monetary value researchers place on authorship and how that value varies with the type of journal (high vs. low JIF, prestige vs. non-prestige, open access vs. not), the type of authorship (first, second, third, etc.), and characteristics of the researcher (gender, career stage, etc.).
Status: Open for collaboration
Proposer: Patrick S. Forscher
Title: Eliciting the smallest effect of substantive interest
Description: Use PSA 005 to obtain researchers' ideas about (1) the smallest stereotype threat effect they would consider interesting and (2) the effect they expect each threat procedure to have on performance. If we get pre-post information, we can also see how these two quantities change (if at all) once the data are in.
Status: Open for collaboration
Proposer: Balazs Aczel
Title: Establishing Retroactive Open Science
Description: It is widely discussed that, when asked, a proportion of researchers do not share their old data. Some of them, however, are happy to share upon request. These researchers probably value open science and keep their old data organised, but at the time their study was published they did not or could not share it publicly. The aim of this initiative is to promote retroactive open science. This aim has three requirements: (1) researchers learn about the practices by which they can link their old data to their publications; (2) journals and institutes find incentives that can convince researchers to share; (3) practices are developed so that researchers can retrieve and organise their old data, code, and materials.
***Join us if you are interested***
Status: Open for collaboration
Proposer: Balazs Aczel
Title: Developing a Lab Manual Template
Description: We propose the development of a Lab Manual Template that could help researchers incorporate more efficient, transparent, and reproducible practices into their daily work. During the hackathon, participants will help us identify important practices and guidelines to include in a lab manual template designed for social scientists. We will also develop a web infrastructure that enables the lab manual to be easily customized and updated according to the needs of individual labs.
***Join us if you are interested***
Status: Open for collaboration
Planned start: 2019-06-01
Proposer: Peder M. Isager
Title: Replicate Makel et al. (2012), "Replications in Psychology Research: How Often Do They Really Occur?"
Description: Makel et al. (2012) provides, as far as I know, the only estimate of replication rates in psychological research. However, their study predates many widely known replication projects, and it would be informative to know whether replication rates are actually increasing 7 years after their paper was published, given how much emphasis has been put on replication research in the years after 2012. In addition, Makel et al.'s analyses could be improved on. For example, it does not appear that they checked their false negative rate: there may have been papers that did not contain the regular expression "replicat*" but that nonetheless should be counted as replication studies.
Interested: jan.roeer@uni-wh.de
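The false-negative concern above can be made concrete with a small sketch. Only the "replicat*" screening rule comes from the project description; the two abstracts below are invented for illustration.

```python
import re

# The screening rule described above: flag a paper as a replication study
# if its text matches "replicat*". The second (hypothetical) abstract is
# arguably a replication but would be missed by the pattern — a false negative.
pattern = re.compile(r"replicat", re.IGNORECASE)

abstracts = [
    "We conducted a direct replication of a classic priming study.",
    "We repeated the original experiment with a new, larger sample.",
]

flagged = [bool(pattern.search(a)) for a in abstracts]
print(flagged)  # the second abstract is not flagged
```

Estimating the false-negative rate would mean hand-coding a sample of the unflagged papers and counting how many are in fact replications.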
Proposer: Peder M. Isager
Title: Estimating variance components in psychological research
Description: Calculate the variance partitioning components (see Westfall et al., 2014) for subject, stimulus, lab (if available), and culture (if available) for the data in Many Labs 1-3 and RP:P, to get an estimate of plausible variance component ranges in experimental psychology. This information could be extremely important for accurately estimating the power of future studies, both within the PSA and beyond.
Status: Open for collaboration
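A minimal sketch of what variance partitioning means in the simplest case: a balanced one-way random-effects design (e.g. observations nested in labs), estimated by the method of moments. The project itself targets crossed subject/stimulus/lab/culture effects, which need a mixed-effects model; the data here are made up.

```python
def variance_components(groups):
    """Method-of-moments variance components for a balanced one-way
    random-effects design. Returns (sigma2_between, sigma2_within).
    'groups' could be labs, subjects, or stimuli."""
    k = len(groups)
    n = len(groups[0])  # balanced design assumed
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    ms_within = sum(
        (x - m) ** 2 for g, m in zip(groups, means) for x in g
    ) / (k * (n - 1))
    return max((ms_between - ms_within) / n, 0.0), ms_within

# Two hypothetical "labs" with five observations each (made-up numbers)
labs = [[4.0, 5.0, 6.0, 5.0, 5.0], [7.0, 8.0, 7.0, 8.0, 8.0]]
between, within = variance_components(labs)
print(between, within)
```

The ratio of the between-lab component to the total variance is the quantity that matters for power planning: it tells you how much adding labs, rather than adding participants per lab, buys you.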
Proposer: Peder M. Isager
Title: How many papers are researchers expected to read to stay up to date on their fields?
Description: I do not know how we would go about estimating this, although I have some ideas. However, never-ending reading lists and information overload are among the most common complaints I hear from researchers, and ones I frequently experience myself. I think this is an important question to attempt to answer because the ability to stay up to date on science within and between fields is likely an important ingredient for psychological theory development. In addition, if we are not able to stay updated on the literature in a system that generates exponentially more research with each passing year, then we have to find more efficient strategies for disseminating information in (psychological) science.
Status: Open for collaboration
Proposer: Nick Fox
Title: How does the mode of statistical evidence affect readers' understanding of findings?
Description: There is increasing interest in presenting statistical evidence for scientific claims with Bayesian or likelihood estimates. Compared to results reported using frequentist methods (p < 0.05), do readers interpret equivalent findings differently if they are reported with different statistics?
Status: Open for collaboration
Proposer: Nick Fox
Title: Further understanding how researchers find scientific literature
Description: I've started collecting data on the role of digital metadata (download counts) in how researchers decide what papers to read. The real world is a much more stimulus-filled place, and different types of metadata potentially carry different amounts of weight (citations vs. downloads vs. visits). This project would aim to compare different metadata in a cross-world approach to determine how potential readers use the cues of previous reader behavior to drive scientific readership. Some of the prior work I've started was used in my dissertation here:
Status: Open for collaboration
Proposer: Chuan-Peng Hu
Title: How WEIRD are the samples in psychological studies?
Description: I've started collecting the age, sex, educational attainment, and SES (if available) from about 500 papers published in Chinese psychological journals, and planned to compare the samples with the whole population in terms of age, sex, and education. I'd like to extend this to all psychological papers published in any language, especially English-language "international" journals.
Status: Open for collaboration
Proposer: Gerit Pfuhl
Title: How gender-biased / gender-neutral is research?
Description: Background: gender gaps in citations, senior authorship, and higher academic positions. Working hypothesis: apart from time lag, is this due to different perceptions of deadlines and responsibilities/commitments (i.e. women treating journal deadlines as more "final" than men do), and/or due to men networking differently or earlier than women researchers (some evolutionary psychology to back that up: short-term alliances in males, more long-term commitments in females)? What should a gender-neutral academia look like?
Status: Open for collaboration
Planned start: 2020
Proposer: Gerit Pfuhl
Title: How interdisciplinary is psychology / the behavioural sciences?
Description: Related to a recent paper on "what happened to cognitive science". Would use: the composition of psychology departments (which faculty they sit in, number of non-psychology staff and their professions), the courses they offer, industry grants, etc. Maybe draw a random sample of x universities from 100+ countries/states; most of the information should be on the institutes' websites. Ideally also acquire data on what kinds of jobs graduates take, i.e. how versatile studying psychology is.
Status: Open for collaboration; sketching it out
Proposer: Gerit Pfuhl
Title: Who uses altmetrics?
Description: On trust in, and misconceptions (false hopes) about, diverse measures: from the impact factor (IF), to the h-index, to social media attention. Happy if someone else takes the lead, especially a librarian.
Status: Open for collaboration
Proposer: Gerit Pfuhl
Title: CRAZED research
Description: Instrumental irrationality: which incentives make us "bad" scientists?
Status: Started (2019)
Proposer: Sau-Chin Chen
Title: Can Experienced Researchers Forecast the Object Orientation Effect of a Language?
Description: I am proposing a prediction-markets plan for PSA 002 (object orientation). Every language will have a trading market gathering researchers' predictions of the significance and effect size.
Status: Open for collaboration
Planned start: 2019
Proposer: Olmo van den Akker
Title: Preregistration in practice: A comparison of published papers with their preregistrations
Description: In this project, we look at papers that have earned a Preregistration Challenge award (N = 179), papers that have earned a preregistration badge (N = 150), published registered reports (N = 158), and a control group of papers that have not been preregistered (N = 150) to check if and how published studies deviate from their preregistrations, and whether there is an association between the strictness of the different preregistration formats and the reproducibility of published studies. It would be great if people could help code the preregistrations and published papers (I'm working on our own preregistration now and will upload it here when finished).
Status: Open for collaboration
Planned start: 2019
Proposer: Anna van 't Veer
Title: Preregistration Planning and Deviation Documentation (PPDD) table
Description: A table was made during an APS hackathon (and may be continued at SIPS) that authors could fill out to provide some metadata about the locations of, and deviations from, the elements of their preregistration. Adding this table to a preregistered paper will help standardize preregistration as a practice over time and assist in evaluating preregistrations (e.g. by reviewers, editors, or anyone looking to find where an element is to be found). If during SIPS we see merit in the first version of such a standardized table, one way to implement it for existing preregistrations would be to ask PSA members (and others) to fill it out for their preregistered studies, providing (1) standardized information for their readers and (2) data on, e.g., what kinds of rationales researchers have for their deviations and what elements people include in their preregistrations. See the ongoing open review of the current table in the Google Doc here:
Tweet to elicit open review:
APS hackathon members: @LorneJCampbell @siminevazire @giladfeldman @AlxEtz @dstephenlindsay
Status: Open for collaboration
Proposer: Chuan-Peng Hu
Title: Flexibility of SES measurements in cognitive neuroscience
Description: We will conduct a systematic review of the literature that studies the brain-SES relationship and investigate how SES was measured in each of those papers. Then we will use open data from large-scale surveys to reproduce those different ways of measuring SES and estimate how much variation has been introduced by the flexibility of SES measurement.
Status: Open for collaboration
Proposer: Maximilian Primbs
Title: The Effects of Data Pre-Processing and Analysis Pathway on Statistical Outcomes in Reaction Time Data
Description: Breaking continuous flash suppression (b-CFS) has been established as an important tool in the study of consciousness and visual perception. Past reviews and re-analyses of highly disputed datasets indicate that the findings and conclusions of studies employing b-CFS may be strongly affected by the way the data are pre-processed and analyzed (Moors & Hesselmann, 2018b). In the current review we (1) establish the prevalence of all pre-processing techniques and analysis pathways employed in the b-CFS literature, (2) discuss the theoretical reasoning behind these choices, and (3) show that data pre-processing indeed drastically affects reportable statistical outcomes for several analysis pathways across multiple datasets. Finally, we conclude with several recommendations to researchers employing b-CFS and other reaction-time-based measures, to promote theoretically grounded, replicable data pre-processing and analyses.
Status: Open for collaboration; ongoing