SIPS 2021 Static Program

2021 Conference Program

Remote Edition

June 23-25, 2021

with support from

[Sponsor logos, including Prolific Academic]

Welcome to SIPS 2021!

The 2021 SIPS conference is not like a regular conference. There are no symposia or keynote speakers: the meeting will be action-oriented and will focus on initiating and conducting projects. In other words: no passive listening to boring talks. Instead, get your hands dirty and start improving psychological science!

The general principles:

Key Terms

Code of Conduct & Meeting Rules

The Society for the Improvement of Psychological Science (SIPS) aims to provide a harassment-free event experience for everyone, regardless of gender, gender identity, gender expression, race, ethnicity, caste, national origin, citizen status, age, sexual orientation, disability, appearance, body size, religion, socioeconomic status, other group status, or their intersection. We do not tolerate harassment of event participants in any form. Event participants violating these rules may be sanctioned, including being expelled without a refund. The full SIPS Event Code of Conduct is available here. Please see also the conference rules for the online 2021 meeting. In short:

Schedule and Abstracts

Wednesday, June 23

All times are in CEST (local time in Padova, Italy)

15:00
Opening Session: Ivy Onyeador

16:00
1A (Workshop): Getting Practice with Theory Building: Starting with What You Know…or What You Think You Know
1B (Workshop): Increasing Researcher Transparency and Reflection Through Positionality Statements: Lessons From Qualitative Research
1C (Workshop): Robust Mediation Analysis
1D (Unconference): How Do We Meaningfully Interpret Effect Size Indices?
1E (Unconference): Decolonizing Science
1F (Unconference): How to Dip Your Toes in Open Science and Bring It to Your Department
1G (Unconference): Accelerating the Adoption of Registered Reports in Clinical Psychology and Psychiatry
1H (Unconference): Getting Psychology to the People: Creating Topical 'Wikis' That Target Crisis-Relevant Issues

17:30
Break

18:00
2A (Hackathon): Decentering Whiteness Within Research Methods Courses
2B (Hackathon): Best Practices for Addressing Missing Data through Multiple Imputation
2C (Hackathon): Developing Best Practices for Publishing Theses, Dissertations, and Other Student Scholarship
2D (Workshop): Understanding and Incorporating Data Simulation into the Research Pipeline: A Practical Guide for the Novice Simulator
2E (Workshop): Machine Learning for Exploratory Research
2F (Workshop): Interaction Effect: Doing the Right Thing
2G (Workshop): Minimum-Effect Significance Testing (MEST) and Equivalence Testing: A Unified Framework and a Hands-On Tutorial
2H (Unconference): How Can We Improve Registered Reports for Authors, Reviewers, and Editors?

19:30
3F (Workshop): Statistical Frontiers for Selective Reporting and Publication Bias
3G (Unconference): Transparency in Coding Open-Ended Data: Best Practices from Inter-rater Reliability to Dissemination
3H (Unconference): (Too) Many Shades of Reaction Time Data Preprocessing

21:00
Social Hour in gather.town!

1A: Workshop: Getting Practice with Theory Building: Starting with What You Know…or What You Think You Know

Facilitators: Natasha Tonge, Marilyn Piccirillo

The “theory crisis” in psychological science is a longstanding issue that has recently been spotlighted in several papers. Although previous literature has contributed helpful outlines for building and testing stronger psychological theories, beginning a reproducible workflow can be daunting and may pose a barrier to implementing these practices. We draw from the first step of the theory construction methodology (TCM) in Borsboom (2020): identifying empirical phenomena. Our workshop will use our own experience of building a theory of social anxiety/depression comorbidity and will highlight the tools we used to create a replicable workflow for reviewing and consolidating the literature around our phenomenon of interest. The first part of our workshop will present our workflow in connection to theory building. In the second part, we will guide attendees as they brainstorm and plan their own workflows. We will conclude with a discussion of the proposed workflows and areas for improvement, and will solicit feedback from attendees.

1B: Workshop: Increasing Researcher Transparency and Reflection Through Positionality Statements: Lessons From Qualitative Research

Facilitators: Crystal Steltenpohl, Jaclyn A. Siegel, Kathryn R. Klement

Positionality statements are an excellent way to improve the transparency and reflexivity of any qualitative, quantitative, or mixed-methods project. In this workshop, learn how unchecked and biased assumptions have led us astray as a field, why and how people use positionality statements, who positionality statements are for, and how to use them to increase the transparency and rigor of your own work. Through guided activities, participants will begin drafting a positionality statement for a project of their choosing, and they are encouraged to use these processes to examine the tools we use, the questions we ask, and how we interpret our results.

1C: Workshop: Robust Mediation Analysis

Facilitators: Andreas Alfons, Nüfer Y. Ates

Mediation analysis is one of the most widely used statistical techniques in the behavioral sciences. The simplest form of a mediation model allows researchers to study how an independent variable (X) affects a dependent variable (Y) through an intervening variable called a mediator (M). The standard test for the indirect effect of X on Y through M is a bootstrap test based on ordinary least squares (OLS) regressions. However, this test is very sensitive to deviations from normality assumptions, such as outliers, heavy tails, or skewness. This poses a serious threat to empirical testing of theory about mediation mechanisms. The R package robmed implements a robust test for mediation analysis based on the fast and robust bootstrap methodology for robust regression estimators. This procedure yields reliable results for estimating the effect size and assessing its significance, even when the data deviate from the usual normality assumptions. In addition to simple mediation models, the package also provides functionality for mediation models with multiple mediators as well as control variables. Furthermore, several alternative bootstrap tests are included in the package.
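For readers who want a taste of the package before the workshop, a minimal sketch of a robust mediation test with robmed might look like the following (the variable names come from the package's bundled BSG2014 example data; check the package documentation for the current interface):

library(robmed)                      # robust mediation analysis
data("BSG2014")                      # example data shipped with the package
fit <- test_mediation(BSG2014,
                      x = "ValueDiversity",
                      y = "TeamCommitment",
                      m = "TaskConflict")   # robust bootstrap test of the indirect effect
summary(fit)                         # effect estimates and significance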

1D: Unconference: How Do We Meaningfully Interpret Effect Size Indices?

Facilitators: Kevin Peters, Rob Cribbie, Nataly Beribisky

Researchers are encouraged to report effect size indices in their work and advised to interpret these indices within the context of their research area. This advice seems very reasonable and an improvement over relying on fixed cutoffs for small, medium, and large effect sizes. Upon reflection, however, this advice is difficult to apply in practice. What exactly does context mean here, and how does a researcher go about interpreting their effect size indices in this manner? What factors do researchers consider (and not consider) when interpreting their effect size indices? The goal of this session will be to have an interactive discussion of issues surrounding effect size interpretation. In addition to discussing how they approach effect size interpretation, participants will also be asked to brainstorm ways in which our discipline can promote meaningful interpretations of effect size indices.

1E: Unconference: Decolonizing Science

Facilitators: Sarah A Sauvé, Elizabeth Phillips, Wyatt Schiefelbein

Many of the scientific practices upheld in universities and research institutions today are products of European rationalism and are deeply tied to colonialism. Searching for a single objective “truth,” constructing exclusive hierarchies and benchmarks of academic success, and performing research on minority populations while systematically repressing and ignoring their own knowledge creations and canons are just some of the ways colonialism manifests itself in Western science. In contrast, decolonized science emphasizes the value of different types of knowledge, doing research with minority populations, and including these populations and their traditional practices in knowledge creation.

In this unconference, the session leaders (two white settlers and a Métis scholar from Canada) will give an overview of how science – often unwittingly – upholds colonialism and perpetuates harmful patterns of extraction and power imbalance. We will then spend the remaining time discussing anti-racist, open science, and community-based methodologies that can change the way we do science and mitigate these harms.

1F: Unconference: How to Dip Your Toes in Open Science and Bring It to Your Department

Facilitators: Naseem Dillman-Hasso, Ummul-Kiram Kathawalla, Lena Ackermann

Targeted at graduate students and early career researchers, this session builds on the paper Easing Into Open Science: A Guide for Graduate Students and Their Advisors. Wanting to improve scientific practices and methodology does not always come naturally in academia, and acting on it can be intimidating if you lack support around you. We will talk about how to dip your toes into open science practices while bringing your department with you, how to deal with the intricacies of publishing quotas and departmental push-back, and we will hear personal anecdotes about what has worked and what hasn't. There will be a working document for this session where people can add resources and anecdotes; it will be published on the OSF page of the paper.

1G: Unconference: Accelerating the Adoption of Registered Reports in Clinical Psychology and Psychiatry

Facilitators: Olivia Kirtley, Ginette Lafit

Registered Reports (RRs) have been adopted in 250 journals as an article format that prioritises the quality of research questions and methods over results. RRs have specific requirements, including minimum statistical power, focus on specific data types, and timing of submission in relation to ethical approval. Clinical psychology and psychiatry journals have not yet widely adopted RRs, perhaps because some requirements raise particular challenges for these fields. Phenomena of interest may be rare, threatening statistical power. Pre-existing data is widely used, yet many journals do not support RRs for such data. Furthermore, ethical approval for clinical studies is often slow and intensive, and amendments required following RR peer-review may significantly delay the start of studies. In this unconference, we will 1) critically discuss barriers to RR use in clinical psychology and psychiatry; 2) discuss the creation of support structures to reduce/remove these barriers, whilst preserving the goals of RRs.

1H: Unconference: Getting Psychology to the People: Creating Topical 'Wikis' that Target Crisis-Relevant Issues

Facilitators: Dawn Holford, Ulrike Hahn

Our session asks how psychology as a discipline can better respond in a crisis. We will lead a discussion on some of the underlying barriers impeding psychologists from responding effectively to crises like COVID-19, including the difficulty of consolidating rapidly-emerging information from many sources, the challenges in generating consensus within the field, and the lack of contributions from a wider range of expertise—often due to lack of opportunity for diverse input—among others. We will also share how we created a living resource with expert-led consolidation of information to tackle the (then imminent, now real!) behavioural issue of vaccine hesitancy (the “COVID-19 Vaccine Communication Handbook and Wiki”), and use it as a springboard to discuss other grassroots projects with similar aims. We encourage attendees to identify other opportunities that could be addressed with a topical “Wiki” and give us feedback on how these ideas and processes could be improved.

2A: Hackathon: Decentering Whiteness within Research Methods Courses

Facilitators: Margaret Echelbarger, Tissyana Camacho

Much of psychology is WEIRD and White cultural norms dominate how we teach the science. These norms often center the experiences of White, WEIRD participants and privilege some methodologies over others (i.e., deeming some methods more rigorous than others). Further, centering the experiences of White, WEIRD participants can signal to students of color that their own experiences are less relevant to the science and/or are better discussed as “special topics.” We call for more culturally-affirming curricula that decenter Whiteness in research methods courses and offer students, especially students of color, the opportunity to see themselves in a science in which their own experiences are underrepresented (Camacho & Echelbarger, 2021). During this hackathon, we will generate a list of resources from which instructors can draw to: 1) develop more culturally-affirming research methods curricula, and 2) interrogate their own teaching practices in the service of positively moving psychological science forward.

2B: Hackathon: Best Practices for Addressing Missing Data through Multiple Imputation

Facilitators: Adrienne D. Woods, Pamela Davis-Kean, Jessica Logan, Max Halvorson, Kevin King, Menglin Xu

Adequately addressing missing data is a pervasive issue in the social sciences. Failure to correctly address missing data can lead to biased or inefficient estimation of parameters, confidence intervals, and significance tests. Multiple imputation (MI) is a statistical technique for handling missing data that involves using existing data to generate multiple datasets of plausible values for missing data that each incorporate random components to reflect their uncertainty. Each dataset is analyzed individually and identically, and parameter estimates are pooled into one set of estimates, variances, and confidence intervals. Although this technique is widely used, there is little consensus on what constitutes best practices in MI, including with regard to assessing the extent of missing data bias and reporting MI procedures in publications. The goal of this session is to collectively compile a list of resources or citations on multiple imputation and missing data, as well as to create MI coding templates for several prominent software languages (Stata, Mplus, R, SAS, Blimp). We will crowdsource these resources and templates to create an academic paper that can be used as a “roadmap” to MI, similar to previous SIPS products on preregistration and open science. Interested participants will be invited to coauthor this paper.
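As one hypothetical illustration of the impute-analyze-pool workflow described above, the R package mice can be used as follows (mice is an assumption here, one widely used implementation rather than necessarily the package the session will template):

library(mice)                           # multiple imputation in R
imp <- mice(nhanes, m = 5, seed = 123)  # generate 5 imputed datasets (nhanes ships with mice)
fit <- with(imp, lm(bmi ~ age + chl))   # analyze each imputed dataset identically
pool(fit)                               # pool estimates, variances, and CIs across imputations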

2C: Hackathon: Developing Best Practices for Publishing Theses, Dissertations, and Other Student Scholarship

Facilitators: Kathleen Schmidt, Hannah Moshontz

Transforming a thesis or dissertation into a journal article presents a number of practical and ethical challenges. Graduate students, postdocs, and early career faculty may struggle to find time, motivation, or guidance for adapting their theses and dissertations into publishable article manuscripts. Revising these works often requires substantial changes to their length, scope, and intended audience or focus. Authors may need to revisit analyses, add or emphasize arguments in light of new or unconsidered literature, or remove aspects of the work that were tailored to the idiosyncratic requests of a committee. Many student works are never published in any format or are published in part (e.g., including only positive results) or without full transparency. This hackathon will produce a guide outlining best practices for adapting theses, dissertations, and other academic projects into rigorous, transparent manuscripts suited for publication. We intend to submit this guide as a manuscript for publication shortly after the conference. We will draw on collective knowledge and experiences to identify and summarize problems and solutions for adapting both recently completed and long neglected student scholarship for publication.

2D: Workshop: Understanding and Incorporating Data Simulation into the Research Pipeline: A Practical Guide for the Novice Simulator

Facilitators: Mark C. Adkins, Udi Alter, Nataly Beribisky, Phil Chalmers, Y. Andre Wang

The proposed workshop will introduce researchers to data simulation methods in psychological research. Methodologists frequently rely on simulation experiments to create tools and make recommendations for research practices aimed at improving psychological science. Yet, empirical researchers often have little experience in, or knowledge of, data simulation techniques, which creates barriers to critically assessing simulation results and effectively using simulation-based tools. We seek to lower these barriers in the proposed workshop. The first half of the workshop will introduce the concept of Monte Carlo simulations, why and when they should be used, and how to interpret results from simulation studies. Attendees will be acquainted with pwrSEM, an open-source simulation-based application for power estimation, and learn how it can be flexibly adapted for their individual research programs. The second half will guide attendees through simulating data for various research purposes using the SimDesign package in R. This section will provide hands-on experience with constructing and interpreting a completely customized simulation study. The proposed workshop offers theoretical background, practical tools, and applied experience with simulation methods to improve attendees’ literacy and skills in quantitative methodology for psychological research.
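For a flavor of what the SimDesign portion might involve, here is a minimal sketch of the package's generate-analyse-summarise workflow, estimating the rejection rate of a t-test across assumed sample sizes and effect sizes (an illustration, not the workshop's actual materials):

library(SimDesign)

Design <- createDesign(N = c(20, 50, 100), d = c(0, 0.5))  # simulation conditions

Generate <- function(condition, fixed_objects = NULL) {
  with(condition, data.frame(                    # simulate two groups per condition
    group = rep(c("control", "treatment"), each = N),
    y = c(rnorm(N, 0), rnorm(N, d))))
}

Analyse <- function(condition, dat, fixed_objects = NULL) {
  c(p = t.test(y ~ group, data = dat)$p.value)   # analyze one simulated dataset
}

Summarise <- function(condition, results, fixed_objects = NULL) {
  c(rejection_rate = mean(results[, "p"] < .05)) # Type I error or power per condition
}

res <- runSimulation(Design, replications = 1000,
                     generate = Generate, analyse = Analyse,
                     summarise = Summarise)
res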

2E: Workshop: Machine Learning for Exploratory Research

Facilitators: Anna Szabelska

Machine learning is a popular and very handy tool in data science. Can psychology benefit from it? Yes, it can! In the workshop I will introduce machine learning, describe various techniques, and discuss the ways machine learning can be used in psychological research. In the practical part, we will very briefly go through the classic exploratory techniques (to provide context) and then focus on machine learning. Step by step, we will build a machine learning model, then brainstorm alternative ways of interpretation and discuss possible lines for generating predictions. The workshop will end with providing and discussing the best resources for further self-learning.

2F: Workshop: Interaction Effect: Doing the Right Thing

Facilitators: Sara Garofalo, Mariagrazia Benassi

Interaction effects are of special interest in the psychological sciences. They are observed whenever the impact of one independent variable changes based on the level of another independent variable. Experimental designs in psychology typically involve this kind of expectation, as they often consist of factorial (ANOVA) designs in which experimental and control groups or conditions are contrasted and compared. Despite being widely used, ANOVA interaction effects have historically been misinterpreted, and recent evidence still points to errors in the way they are analysed and explained. This workshop will be divided into two sessions: the first will provide an overview of the statistical assumptions behind interaction effects and the pros and cons of the most common approaches to their investigation (post-hoc tests, planned comparisons, t-tests); the second will present how to use confidence intervals and Bayesian informative hypotheses for a more powerful interpretation of interaction effects, based on a more descriptive/exploratory or a hypothesis-driven approach, respectively.
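A common pitfall with interactions is to compare the significance of separate simple effects rather than testing the interaction term itself. As a minimal sketch of the direct test on hypothetical data (the 2x2 design and effect values here are made up for illustration):

set.seed(42)
d <- expand.grid(A = c("a1", "a2"), B = c("b1", "b2"), rep = 1:30)
d$y <- rnorm(nrow(d), mean = ifelse(d$A == "a2" & d$B == "b2", 0.5, 0))
summary(aov(y ~ A * B, data = d))   # the A:B row tests the interaction directly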

2G: Workshop: Minimum-Effect Significance Testing (MEST) and Equivalence Testing: A Unified Framework and a Hands-On Tutorial

Facilitators: Adam Smiley, Jessica Glazier, Yuichi Shoda

Minimum-effect significance testing (MEST) allows researchers to test if the true effect in the population is large enough to be meaningful. No matter how large the sample size, MEST—unlike traditional null hypothesis testing—will never be significant if the observed effect is weaker than the smallest effect of consequence. When MEST is used in conjunction with equivalence testing (EqT), researchers now have a complete set of tools for testing if the effect is large enough to matter, too small to be of consequence, or if more evidence is needed to reach a conclusion. In this workshop, we will present the basic logic of a unified framework encompassing MEST, EqT, and traditional null hypothesis testing. We will then provide a hands-on tutorial with several examples of applications to data including use in registered reports, as well as suggested wording for reporting results. Additionally, we will facilitate conversations (both in small groups and as a full session) about how attendees can apply this framework to their own research.
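To make the unified framework concrete, here is a hypothetical base-R sketch for a one-sample mean, with delta standing in for the smallest effect of consequence (the data and the value 0.2 are illustrative assumptions, not workshop materials):

set.seed(1)
x <- rnorm(100, mean = 0.35)   # hypothetical sample
delta <- 0.2                   # smallest effect of consequence (assumed)

t.test(x, mu = 0)                                  # traditional NHST: does the effect differ from zero?
t.test(x, mu = delta, alternative = "greater")     # MEST: does the effect exceed delta?
t.test(x, mu = -delta, alternative = "greater")    # equivalence test (TOST), lower bound
t.test(x, mu =  delta, alternative = "less")       # equivalence test (TOST), upper bound; both one-sided tests must be significant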

2H: Unconference: How Can We Improve Registered Reports for Authors, Reviewers, and Editors?

Facilitators: Loukia Tzavella, Ben Meghreblian, Aoife O’Mahony

We invite researchers from all career levels to discuss how we can improve Registered Reports (RRs) for authors, reviewers, and editors. Discussion themes will include addressing the key challenges surrounding the adoption and implementation of RRs, expanding the format for more types of research, and increasing their overall accessibility. The insights gained from this discussion will be used to guide improvements to the RR process and the quality of RRs being published (e.g., RR study design templates, community feedback, quality monitoring). We plan to have a dedicated space for early career researchers (ECRs) who wish to discuss their experience with RRs and potential barriers or concerns. RRs can further be improved for reviewers and editors with standardised checklists of RR criteria and tailored guidance and/or training. Feedback from our survey, Slack channel, and the unconference will also inform initiatives that encourage the adoption of RRs by authors and journals.

3F: Workshop: Statistical Frontiers for Selective Reporting and Publication Bias

Facilitators: Maya Mathur, James E. Pustejovsky

This workshop will cover methods to investigate selective reporting in meta-analysis of statistically dependent effect sizes, which are a common feature of systematic reviews in psychology. The workshop is organized into two sections. In the first section, we will describe situations where dependent effect sizes occur and review methods for summarizing findings in the presence of dependent effects. We will then describe methods for creating and interpreting funnel plots, including tests of asymmetry, with dependent effect sizes. In the second section, we will present new statistical sensitivity analyses for publication bias, which perform well in small meta-analyses, those with non-normal or dependent effect sizes, and those with heterogeneity. The sensitivity analyses enable statements such as “For publication bias to shift the observed point estimate to the null, ‘significant’ results would need to be at least 10-fold more likely to be published than negative or ‘non-significant’ results” or “no amount of publication bias could explain away the average effect.” In both sections, we will demonstrate methods using R code and examples from real meta-analyses.
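The sensitivity analyses are new methods, but the funnel-plot portion can be previewed with standard tools. A minimal sketch using the metafor package and its bundled BCG vaccine data (an illustration under assumed defaults, not the workshop's own code, which also handles dependent effect sizes):

library(metafor)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)  # compute log risk ratios
res <- rma(yi, vi, data = dat)   # random-effects meta-analysis
funnel(res)                      # funnel plot of effects against standard errors
regtest(res)                     # Egger-type regression test of funnel asymmetry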

3G: Unconference: Transparency in Coding Open-Ended Data: Best Practices from Inter-rater Reliability to Dissemination

Facilitators: Talia Waltzer, Clare Conry-Murray

Quantifying open-ended data (e.g., verbal responses to questions, video-recorded behaviors) is a crucial part of social science. However, practices for coding and assessing reliability can vary widely across different groups of researchers, and practices are not always made clear in publications. How were coding categories developed? How was agreement between coders established? Many scholars are left to figure out the answers by themselves, or they inherit practices from their lab groups. Even though these decisions can influence statistical measures of reliability (e.g., Cohen’s κ), they are often omitted from published papers. This session aims to increase transparency about coding and reliability practices by fostering dialogue among folks who work with (or are interested in) open-ended data. If there is interest, we will also have an informal hackathon to (1) draft a document to summarize common practices and recommendations and (2) compile a list of key information that should be made transparent when disseminating research.

3H: Unconference: (Too) Many Shades of Reaction Time Data Preprocessing

Facilitators: Krzysztof Cipora, Hannah D. Loenneker

When doing research in cognitive psychology and measuring reaction times, the number of researcher degrees of freedom seems quite limited compared to more complex observational designs. Nevertheless, there are multiple possible data preprocessing pipelines (e.g., how to trim outlier reaction times, how to aggregate, etc.). Even when investigating the same phenomena, and using supposedly the same tasks, labs differ considerably in their data treatment routines. These differences might contribute to differences in observed effect sizes and in the reliabilities of observed effects. In this session, I would like to initiate a discussion on whether and how to account for these differences: Does it make sense to run a form of multiverse analysis on results of cognitive tasks? Shall we systematically investigate the effects of data treatment routines? Shall we build standards / best practices for each task / paradigm?
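As a toy illustration of the problem (entirely hypothetical data), two common trimming rules can yield different effect estimates from the very same trials:

set.seed(1)
dat <- data.frame(
  cond = rep(c("congruent", "incongruent"), each = 200),
  rt   = c(rlnorm(200, log(450), 0.25), rlnorm(200, log(480), 0.25)))

effect_ms <- function(d)                        # condition difference in ms
  mean(d$rt[d$cond == "incongruent"]) - mean(d$rt[d$cond == "congruent"])

pipe1 <- subset(dat, rt > 200 & rt < 1500)      # pipeline 1: absolute cutoffs
pipe2 <- do.call(rbind, lapply(split(dat, dat$cond), function(d)
  d[abs(d$rt - mean(d$rt)) < 2 * sd(d$rt), ]))  # pipeline 2: 2-SD trim per condition

c(absolute = effect_ms(pipe1), sd_based = effect_ms(pipe2))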

Thursday, June 24

All times are in CEST (local time in Padova, Italy)

12:00
4A (Hackathon): Guidelines on Including Non-WEIRD Populations in Psychological Science
4B (Workshop): Web Scraping Using R
4C (Workshop): Matching Stimuli (or Anything) Reproducibly
4D (Workshop): Taking Experiments Online with PsychoPy/Pavlovia
4E (Unconference): Improving Interdisciplinary Review
4F (Unconference): Replication and Meta-Analysis: When Similar and When Not?
4G (Hackathon): Bridging the Gap Between Research and the Public: Building an Online Resource Repository of Best Practices for Public Engagement and Research Communication

13:30
Break

14:00
Lightning talks

15:00
5A (Hackathon): Data Management Hackathon
5B (Hackathon): Developing Resources to Support Teaching Faculty and Integrate Open Scholarship Content Into Curricula
5C (Hackathon): Finalizing a Preregistration Template for ERP Studies
5D (Unconference): MetaSIPS: A Metascience Un-unconference
5E (Unconference): How Helpful Are Diversity Classifications Such As WEIRD/Non-WEIRD or Global North/South for Psychological Science?
5F (Unconference): Disseminating the Idea of a Standard Enabling Sustainable (Re)use of Research Data
5G (Unconference): Digital Trace Data for Psychological Research – How Can We Access Data That Enable Innovative Research While Avoiding Another Cambridge Analytica Case?
5H (Unconference): How Could We Create a Researcher Skills/Time Exchange Platform to Improve Psychology?

16:30
6E (Unconference): How Psych Science Can De-racialize for Its Improvement
6F (Unconference): Large-Scale Psychological Science: Reflecting on Lessons Learned
6G (Sponsored Workshop): Online Research Methods with Gorilla Experiment Builder
6H (Hackathon): Guidelines for Transparency in Open-Ended Data

18:00
Meet Prolific, see pre-data posters, attend roundtable discussions in Gather.town

4A: Hackathon: Guidelines on Including Non-WEIRD Populations in Psychological Science

Facilitators: James Montilla Doble, Arathy Puthillam, Hansika Kapoor

Previous studies have shown that research in mainstream psychology has been dependent on American (Arnett, 2008) or WEIRD (Western, educated, industrialized, rich, and democratic; Henrich et al., 2010) populations. Not much has changed in the past decade or so. A 2018 study, for example, found that over 70% of samples in research published in Psychological Science during 2017 were from North America, Europe, and Australia (Rad et al., 2018). A recent preprint also identified that USA-based researchers are overrepresented in editorial positions in psychology and neuroscience journals (Palser et al., 2021). In this hackathon, we aim to create guidelines on and standards for evaluating and increasing diversity and inclusion in psychological research. We have identified key stakeholder groups for whom these guidelines are intended, such as authors, journal editors, and reviewers.

4B: Workshop: Web Scraping Using R

Facilitators: Tobias Wingen, Felix Speckmann

The internet contains a broad range of data concerning people's online behavior. Using automated web scraping scripts, researchers can download large amounts of this data with relatively little effort. Types of data that can be publicly accessed are manifold, such as Amazon reviews, newspaper articles, movie ratings, or blog posts. In our web scraping workshop, we will explain how to use web scraping to systematically extract data from websites, effectively supplying researchers with additional approaches within their field of research. Our workshop will focus on the use of the package “rvest” in conjunction with the popular programming language “R”. The theoretical introduction to web scraping will be accompanied by practical exercises. As part of those exercises, participants will write their own basic scripts to extract data from the web. The workshop is an ideal primer for participants to conduct web scraping projects in their field of research.
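As a taste of the workflow, a minimal rvest sketch follows (the URL and CSS selectors are placeholders for illustration, not workshop materials):

library(rvest)   # rvest >= 1.0; older versions use html_nodes()

page <- read_html("https://example.com")   # download and parse the page
headings <- html_elements(page, "h1, h2")  # select elements via CSS selectors
html_text2(headings)                       # extract their text, whitespace-cleaned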

4C: Workshop: Matching Stimuli (or Anything) Reproducibly

Facilitator: Jack Taylor

Researchers often need to tightly control for confounding variables across conditions. Often, however, researchers are limited to using only a finite set of existing items. For example, you may be restricted to using a database of only a limited number of candidate words, or images of faces, or recordings of speech. Usually, people approach this problem by manually finding close matches on relevant dimensions. Manually crafting stimuli in this way is time-consuming and very difficult to do reproducibly. In this workshop, I'll show two solutions, using existing tools, for creating controlled stimuli reproducibly in R. The first solution uses an item-wise approach, creating directly comparable items in each condition. The second solution uses a distribution-wise approach, maximising the similarity in distributions across conditions. I’ll show how these two solutions are extremely flexible and can be applied to a range of different problems. Finally, I’ll discuss how using such an approach can aid reproducibility, replicability, and transparency of studies’ methods.
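To make the item-wise approach concrete, here is a hypothetical base-R sketch that greedily pairs each target item with the closest unused control item on a single matching dimension (the function and argument names are made up for illustration; the workshop's tools are more flexible):

match_itemwise <- function(targets, pool, var) {
  used <- rep(FALSE, nrow(pool))
  idx  <- integer(nrow(targets))
  for (i in seq_len(nrow(targets))) {
    dist <- abs(pool[[var]] - targets[[var]][i])  # distance on the matching variable
    dist[used] <- Inf                             # never reuse an already-matched item
    idx[i] <- which.min(dist)
    used[idx[i]] <- TRUE
  }
  pool[idx, ]                                     # matched controls, in target order
}
# e.g., match_itemwise(targets = words_a, pool = words_b, var = "frequency")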

4D: Workshop: Taking Experiments Online with PsychoPy/Pavlovia

Facilitators: Rebecca Hirst, Thomas Pronk

PsychoPy is free, open-source software for running behavioural studies that now supports online experiments through integration with Pavlovia.org. In this session we will demonstrate the basics of pushing a study online from PsychoPy, how to view the data, and how to make the most of Pavlovia, for example by using the thousands of publicly available experiments shared by the PsychoPy community.

4E: Unconference: Improving Interdisciplinary Review

Facilitators: Hannah Metzler, Jana Lasser

While the importance of interdisciplinary work is widely recognized, getting such work funded or published is often hard. One reason is the difficulty of simultaneously meeting the standards of the different disciplines by which reviewers judge the work. Although a body of work already recognizes this problem, concrete tips and guidelines for people reviewing and writing interdisciplinary articles and proposals are missing. In this session, we will first collect common problems that attendees have experienced in the interdisciplinary review of publications and research proposals, and then draft a list of points to include in practical guidelines for reviewers, reviewees, and journals/grant agencies. Potential issues to address include ways to deal with the partial expertise and confidence of reviewers, ideas for an expertise taxonomy for reviewers, and the adaptation of new peer-review tools (crowdsourcing, open peer review, etc.) to interdisciplinary contexts.

4F: Unconference: Replication and Meta-Analysis: When Similar and When Not?

Facilitator: Sera-Maren Wiechert

Carter and colleagues (2019) showed that meta-analyses suffer from field-dependent statistical biases, depending on the extent to which biases are present at the individual study level, e.g., the level of publication bias in the literature, heterogeneity, and/or the number of studies available. Therefore, across fields, topics, and paradigms, meta-analytic effect sizes may differ in magnitude (overestimating or underestimating the effect) when compared to controlled/pre-registered (larger-scale) replication effect sizes. But is this always the case? Under which circumstances are replication and meta-analysis effect sizes more similar? In an open discussion, new ideas may arise about the variables that affect this comparison and thereby contribute to the biases and to the divergence in effect sizes. These insights would be relevant not only from a theoretical standpoint, but would also give a better estimate of the meaning of replication/meta-analysis comparisons.

4G: Hackathon: Bridging the gap between research and the public: Building an online resource repository of best practices for public engagement and research communication

Facilitator: Annayah Prosser

Public engagement and research communication are becoming increasingly important skills for scientists and researchers looking to address societal challenges. However, training in these skills is scarce and widely dispersed, and resources can be difficult for researchers to find. It can be hard to know how best to communicate with the public on different platforms (e.g. broadcast media, social media, community partnerships), and how to discuss one's work while maintaining full rigour and transparency. In this hackathon, we'll work together to collate links to best practices for public engagement and research communication into an open-access online repository that any researcher can access. In doing this, we hope to highlight the important work already being done in this area and give researchers access to a variety of tools they can use to better bridge the gap between science and the public.

5A: Hackathon: Data Management Hackathon

Facilitators: Anna Wysocki, Michaela DeBolt, Kailey Lawson, Sarah Schiavone, Arianne Herrera-Bennett

The goal of this hackathon is to create an open-access syllabus on data management—a crucial skill that is rarely taught formally—that could be used or adapted for graduate seminars, advanced undergraduate courses, or individual study. Attendees will be provided a skeleton syllabus outlining potential modules and topics that could be included in the syllabus (e.g., data preparation, version control, data sharing). During the hackathon, attendees will collaborate to design, structure, and populate the syllabus. This will include proposing additional modules and determining which topics will be covered within each module. After creating a structure for the syllabus, attendees will add resources to each of the modules. The end product of this hackathon will be a syllabus that outlines critical components of data management and provides an integrated collection of resources for researchers to learn about the best practices in these areas.

5B: Hackathon: Developing resources to support teaching faculty and integrate open scholarship content into curricula

Facilitators: Olly Robertson, Sam Parsons, Madeleine Pownall, Flavio Azevedo, Mahmoud Elsherif, Martin Vasilev, and Alaa AlDoh

Developing educational resources is essential for facilitating engagement with, adherence to, and learning of research transparency, replicability, openness and reproducibility. To support instructors, we propose building resources which can be integrated into taught courses. Creating or changing course content can be onerous and time-consuming. We aim to make evidence-based, high-quality lesson plans and activities available to teaching faculty, thus reducing the labour required to develop and implement open scholarship content. This hackathon aims to create resources to support educators by progressing the “200+ Summaries of Open and Reproducible Science Literature” project and developing different activities and lesson plans for teaching open science. Attendees will collectively compile and review summaries of the key literature and create lesson plans/activities, categorizing them by theme, learning outcome, and method of delivery. Summaries and activities may then be mapped onto lesson plans for ease of use and will be made publicly available.

5C: Hackathon: Finalizing a Preregistration Template for ERP Studies

Facilitators: Gisela Govaart, Mariella Paul, Antonio Schettino

During a hackathon at SIPS 2019, attendees started a preregistration template for EEG research. Over the last two years, a community of active volunteers has been working on this template asynchronously (via Google Docs and Slack) and synchronously, during hackathons organized by the Open Science initiative at the Max Planck Institute for Human Cognitive and Brain Sciences. Now, the time has come to finalize the document. Three weeks before SIPS 2021, we will circulate a “minimally viable product” of the template to prospective attendees. In this clean version of the document, all lingering issues will be clearly marked as open for feedback. In preparation for the hackathon, prospective attendees can comment on the document. During the hackathon, the organizers will moderate discussions and incorporate feedback to achieve maximal consensus and have a final version of the template. Afterwards, the document will be sent to COS with the request to add it to the OSF preregistration templates.

5D: Unconference: MetaSIPS: A Metascience Un-unconference

Facilitator: Julia Bottesini

We propose a psychological metascience un-unconference: a 3-hour session made up of six 30-minute slots (15- to 20-minute talks followed by Q&A). Metascience is the examination of a scientific discipline’s processes, practices, and products using scientific methodology. Metascientific work in psychology is essential for addressing pressing questions in the field (e.g., which findings should we try to replicate? What is the optimal balance of individual versus team science? How can we improve measurement? How can we measure scientific progress within psychology?). Some of these questions, which are often discussed at SIPS, can be addressed with existing theoretical and empirical work. As such, the primary goal of this session is to collate a set of metascientific talks that could be used to improve future SIPS sessions and research by SIPS members. And let’s be honest: in the flurry of activity that is SIPS, a session where you can sit back and enjoy your coffee while you’re talked at will feel like a welcome break.

5E: Unconference: How Helpful are Diversity Classifications such as WEIRD/Non-WEIRD or Global North/South for Psychological Science?

Facilitators: Sakshi Ghai, Amy Orben, Michael Muthukrishna

The time has come to rethink the study of diverse populations in psychology. Many would agree that the WEIRD acronym (Western, educated, industrialized, rich, democratic) has sensitized our field to the importance of sample diversity. Indeed, diverse populations are a necessary condition for conducting high-quality research. However, these oft-mentioned terms – WEIRD vs. non-WEIRD or Global North vs. South – might risk overgeneralizing the extent of human diversity by inadvertently putting vastly different populations into unified boxes. This practice raises essential questions. Do we collectively assume that all non-WEIRD societies like Indians, Kenyans, and Brazilians are uneducated and poor? Do Eastern cultures like South Korea and Japan still count as non-WEIRD, given they are advanced economies? Are these terms mutually exclusive and collectively exhaustive? In this unconference, we will a) reflect on the perils and opportunities of using diversity classifications and b) discuss how we can make our science more inclusive.

5F: Unconference: Disseminating the Idea of a Standard Enabling Sustainable (Re)use of Research Data

Facilitators: Marie-Luise Müller, Katarina Blask, Marc Latz

With the Open Science movement has come a call for more transparency and openness within scientific research. As a result, making research data accessible to the broader public, in order to enable sustainable (re)use of data, has become increasingly important within psychological science. However, there currently exists no single standard allowing psychologists from all sub-disciplines to optimally prepare their data for reuse. To close this gap, we have started to develop a user-friendly curation standard which meets the necessary requirements to guarantee the long-term interpretability and reusability of research data. However, it is not enough to develop a standard without knowing how to spread it within the research community; a comprehensive dissemination concept is needed. The aim of this unconference is to identify and discuss strategically important action goals for the dissemination of the standard, as well as possible strategies for their implementation.

5G: Unconference: Digital Trace Data for Psychological Research – How Can We Access Data that Enable Innovative Research While Avoiding Another Cambridge Analytica case?

Facilitator: Johannes Breuer

The vast amounts of data generated by the use of digital technology are valuable resources for psychological research. Projects like mypersonality.org and numerous publications from different fields of psychology have demonstrated the great potential of these so-called digital trace data. At the same time, the Cambridge Analytica scandal has highlighted some of the risks related to such data, especially with regard to privacy and data protection. The Cambridge Analytica incident and its consequences have also shown that depending on commercial companies and their decisions about data access is risky for researchers. For example, data access via the Application Programming Interfaces (APIs) offered by many platforms can be drastically reduced or even shut off completely. Hence, there is a need for new ways of accessing digital trace data for psychological research. Recently, different models have been proposed, including partnerships with companies and data donation by platform users. Naturally, all of these options have specific pros and cons, and none of them are trivial to implement. The purpose of this session is to discuss what kind of data access we as researchers need and how it can be implemented in a way that enables innovative research while also adhering to legal regulations and ethical principles. These discussions also relate to questions of data sharing, as privacy concerns and platform terms of service can conflict with the ideals of open science (especially regarding the reproducibility of research).

5H: Unconference: How could we create a researcher skills/time exchange platform to improve psychology?

Facilitator: Emily Corwin-Renner

Despite the rise of team science, many research projects are still pursued more or less independently by one person or a small team. As a result, the quality of these studies and projects is limited by the skills, knowledge, and perspectives of the few people involved. Many steps of a project are carried out by a single person and never checked or tested by others, despite the potential for highly costly mistakes when working alone on certain tasks and the potential for much better design when working together on others. Even when people know others who would be capable of checking their work or providing advice, they often do not solicit support because “everyone is so busy” and people don’t like to burden those who are most willing to help, who likely receive much less help in return. In this unconference we will discuss possible benefits of, and approaches to, developing a platform for a research skills economy as a way to enable higher-quality research. On the platform, psychology researchers could help other researchers and get paid in a special currency which they could then spend when they want help from others.

6E: Unconference: How Psych Science Can De-racialize for Its Improvement

Facilitator: Dr. Vernita Perkins

Racism, a social construct, systemically resides in every aspect of our world and civilizational history, including psychological science, and its residue and detriment can be traced across centuries of modern history. Identifying and eradicating this systemic structure and the individual cognition that sustains it affords a scope to dismantle not only racism but all other forms of oppression, inequity, and exploitation. This unconference offers a rare opportunity to openly discuss how psychological science has been deprived by the inequities and exploitation of racism, and to brainstorm how, in a psychological science without racism and its siblings (sexism, ageism, genderism, ableism) and its parents (casteism and capitalism), the field can thrive in ideology, theory, and methodology by re-imagining terminology, training, and research practices, entering a new psychological science revolution.

6F: Unconference: Large-Scale Psychological Science: Reflecting on Lessons Learned

Facilitators: Maximilian Primbs, Jessica Kay Flake, Biljana Gjoneska, Gerit Pfuhl, Jordan Wagge, Erin M. Buchanan, Patrick Forscher, Miguel Silan, Nicholas Coles

The Psychological Science Accelerator (PSA) is a globally distributed network of psychological science laboratories that coordinates data collection for large-scale research projects. A short time ago, the PSA published its first research study (Jones et al., 2021). We want to take this opportunity to reflect on lessons learned from doing large-scale psychological research. Using our recent research projects as examples, we will highlight issues that arise in the recruitment of underrepresented minorities, the involvement of graduate and undergraduate students, translation, lab management, methodology and measurement, funding, manuscript writing, and other aspects of the team-science research process, and we will give advice to researchers on how to avoid these issues. Participants will have the opportunity to put questions to a plenum of researchers engaged in large-scale psychological research. Finally, we will invite discussion and ask attendees to share their perspectives on these issues.

6G: Sponsored Workshop: Online Research Methods with Gorilla Experiment Builder

Facilitators: Jo Evershed, Joshua Balsters, Ashleigh Johnstone

Before COVID-19, online research was a choice, but recently it has become a necessity. Since taking the leap, researchers are enjoying the benefits of the speed, scale, and reach of online research, but worry about data quality when they can't see their participants. In this lecture we aim to cover these benefits in more detail, along with how successful pioneers have overcome some of the key challenges associated with online behavioural research. We will also provide an overview of the Gorilla Experiment Builder, followed by a Q&A session.

6H: Hackathon: Guidelines for Transparency in Open-Ended Data

Facilitator: Clare Conry-Murray

In this hackathon, we will write a paper proposing guidelines for coding open-ended data in a way that is valid, transparent, and reproducible.

Friday, June 25

All times are in CEST (local time in Padova, Italy)

8:00
7A (Hackathon): Many Modelers
7B (Workshop): GitFun: Introduction to git and GitHub
7C (Workshop): Introduction to PsychOpen CAMA: Data, Methods, and User Interface for Replicable and Dynamic Meta-Analyses
7D (Hackathon): (Too) Many Shades of Reaction Time Data Preprocessing—A Hackathon
7E (Unconference): ManyMoments - Improving the Replicability and Generalizability of Intensive Longitudinal Studies

9:30
8C (Workshop): How to Write a Plain Summary of Your Research: Gain New Perspectives and Open Up Your Research to a Wider Audience
8D (Workshop): New Publishing Format: Research Modules
8F (Unconference): Introducing the Journal Editors Discussion Interface

11:00
Break

11:30
9A (Hackathon): Rolling Out The Red Carpet for Red Teams in Psychology
9B (Hackathon): From Talk to Action: Organizing Principles to Diversify Psych [12:00 CEST start]
9C (Hackathon): Small n but High Power? Manuscript and Preregistration Templates
9D (Workshop): Preregistration in Psychology
Reform Outside Traditional University Settings
Next Steps in Exploratory Research

13:00
Expanding the Global Reach of Scholarship: A Case Study of the Open Scholarship Knowledge Base
Theory-Building in Open Science: The Heliocentric Model of (Open) Science
The Future of SIPS

14:30
Closing Session: Adeyemi Adetula and Heather Urry

7A: Hackathon: Many Modelers

Facilitators: Noah van Dongen, Leonid Tiokin, Adam Finnemann, Jill de Ron, Shirley Wang, Denny Borsboom

“Nothing is as practical as a good theory.” (Lewin, 1943).

Scientific theories allow us to explain the world and inform possible causal interventions. For example, the theory of evolution explains why species exist and allows us to develop causal interventions to select for less-virulent pathogens.

Unfortunately, psychology lacks strong theory (Cummins, 2000). Many psychological theories exist, but their scope, assumptions, and explanatory power are often unclear. One way to evaluate theory veracity is to build a formal model that captures aspects of the theory and observe if the model can (re)produce relevant phenomena. Yet, any given theory can be instantiated with a wide range of models.

Here, we propose that a ‘many modelers’ approach can help. During this hackathon, teams of modelers and scientists will formalize a theory and test whether their model can reproduce the phenomena that the theory purports to explain. This will be fun (and maybe useful).

7B: Workshop: GitFun: Introduction to git and GitHub

Facilitator: Ana Martinovici

Version control is one of the tools you can use to improve the (numerical) reproducibility of your results. In this session you will learn how to use one of the most popular version control systems: git. There are many ways of using git; in this session, you will practice using RStudio (point-and-click menus, no command-line code) and GitHub. Target audience: anyone who uses data and/or code in their research but doesn’t use a version control system to keep track of changes to their files. By the end of the workshop, you will be able to: create repositories on GitHub, clone repositories to your device, make changes to files in repositories, commit and push changes to repositories, and collaborate with others on GitHub (both co-authors and researchers you don’t know).
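The workshop itself sticks to RStudio's point-and-click interface, but for readers who later want to script the same steps from R, packages such as gert expose the underlying git operations. A hypothetical sketch (placeholder URL and file name; not workshop material):

library(gert)   # git operations from R

git_clone("https://github.com/user/repo", path = "repo")  # clone a repository
setwd("repo")
# ... edit files ...
git_add("analysis.R")                 # stage a changed file
git_commit("Describe what changed")   # record a commit
git_push()                            # send commits to GitHub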

7C: Workshop: Introduction to PsychOpen CAMA: Data, Methods, and User Interface for Replicable and Dynamic Meta-Analyses

Facilitator: Tanja Burgard

PsychOpen CAMA is a platform enabling the publication of reproducible and dynamic meta-analyses in psychology. It is a service of ZPID (Leibniz Institute for Psychology) and provides a template to facilitate the updating and augmenting of existing meta-analyses by the research community. Standardized meta-analytic datasets are available via a point-and-click interface. In the background, analyses are conducted on an OpenCPU server with the help of an R package consisting of standardized data, metadata, and meta-analytic functions. The workshop will introduce attendees to the need for, and the concept of, Community-Augmented Meta-Analysis (CAMA) systems. PsychOpen CAMA is then presented in more detail, including the concrete architecture of the system as well as the data templates and underlying methodology. A demonstration will give an overview of the functionalities available on the platform. Furthermore, ways to contribute or extend data in PsychOpen CAMA are presented, and potential further ways of acquiring and extending datasets are discussed.

7D: Hackathon: (Too) Many Shades of Reaction Time Data Preprocessing—A Hackathon

Facilitator: Krzysztof Cipora

As a follow-up to our session “(Too) Many Shades of Reaction Time Data Preprocessing” (3H), we propose a hackathon under the same title. Participants of that session expressed interest in working on the project further. In the hackathon we want to develop frameworks for (1) using integrative data analysis to investigate how data preprocessing routines affect observed effect sizes for a specific cognitive phenomenon; (2) setting up multiverse analysis parameters for specific cognitive phenomena; and (3) building unified protocols for “gold standards” of reaction time data preprocessing for specific tasks/phenomena.

7E: Unconference: ManyMoments - Improving the Replicability and Generalizability of Intensive Longitudinal Studies

Facilitator: Julia Moeller

The increasing reach of the experience sampling method (ESM) and other intensive longitudinal sampling procedures has generated new opportunities to study people’s everyday experiences. At the same time, cumulative, replicable, and generalizable knowledge gain may be thwarted in some domains applying this method for various reasons, such as small, unrepresentative samples or limitation to a few contexts (e.g., one school district, one specific area). This unconference discusses challenges to the replicability and generalizability of ESM findings and aims to identify and generate solutions to these problems through collaborative brainstorming and debate. This session builds upon prior work by a group of experts who have gathered to help improve replicability and generalizability in ESM research: the ManyMoments Consortium, following the example of other multi-lab collaborations such as the ManyLabs study (Moshontz et al., 2018; Klein et al., 2018), the ManyBabies study (Frank et al., 2017; ManyBabies Consortium, 2020), and the ManyPrimates study (Altschul et al., 2019). In this unconference, we first summarize the challenges to replicable and generalizable ESM research that we have so far identified as specific to work with intensive longitudinal data. We then give an overview of existing solutions and suggest new ones that help solve these challenges, and we hope for much creative input from the participants in an open debate. With this unconference, we hope to start a debate about needs and solutions for replicable ESM research and to get participants interested in joining a collaborative ESM study.

8C: Workshop: How to Write a Plain Summary of Your Research: Gain New Perspectives and Open Up Your Research to a Wider Audience

Facilitators: Marlene Stoll, Anita Chasiotis

In this session, we will explore ways to communicate (psychological) scientific results in a lay-friendly, but not oversimplified or lurid, manner. Working with your own examples, we will guide you through evidence-based rules regarding linguistic and formal aspects. At the end of this workshop, you will be able to formulate your own plain language summary (PLS). Not only does providing a PLS open up your research to a larger audience; the PLS writing process can also give you a new perspective on your own work.

8D: Workshop: New Publishing Format: Research Modules

Facilitator: Chris Hartgerink

In this workshop, you will learn about research modules (i.e., individual components of a research project like theory, materials, data, code), how they can help you be a more effective researcher, and how to start publishing your own research modules. We start off by recapping some of the issues of research articles in light of reproducibility, after which we introduce research modules as a concept. You will learn what a research module is, when you would publish research modules in relation to research articles, and how they help you document your work in a more complete and intuitive manner. We will introduce the infrastructure (peer-to-peer commons) and software (Hypergraph) used to publish research modules, and what benefits you get in terms of control and innovation. After installing the software and an initial walkthrough, you will have time to publish your first research module during the workshop. Optional: Bring files for a recent step in a research project that excites you (e.g., collected data, analysis script).

8F: Unconference: Introducing the Journal Editors Discussion Interface

Facilitator: Priya Silverstein

This unconference session will introduce (and discuss ideas for further developing) the Journal Editors Discussion Interface (JEDI): a new community for social science journal editors to ask and answer questions, share information and expertise, and build a fund of collective knowledge. Although JEDI has been designed for discussing all issues related to editorial practices, a large part of discussions will focus on issues surrounding transparency, reproducibility, and diversity in publishing. Given the many demands on editors’ time – and given that most editors face similar processual challenges – there is great value to their interacting with each other about these key issues, and pooling their collective wisdom, sharing lessons, examples, insights, and solutions. The benefits can be further multiplied if experts on relevant topics (e.g. data management personnel, open science advocates) are included in the conversation. JEDI seeks to generate that interaction and those benefits.

9A: Hackathon: Rolling Out The Red Carpet for Red Teams in Psychology

Facilitator: Thomas Rhys Evans

Red Teams are individuals or groups who provide feedback from the perspective of an outsider or competitor, and are expected to take an active role in challenging decision-making and actions to improve the quality of the final work produced. Whilst norms of introducing critique early in the research cycle are slowly changing through initiatives such as Registered Reports, the use of Red Teams in psychological research remains highly uncommon, and Red Teams are often perceived as threatening (e.g. through fears of having ideas “scooped” or of receiving excessively harsh critique). Red Teams could be a valuable source of feedback and support (e.g. on research design, analysis code, measurement practices, etc.), yet little is known about how best to foster such positive collaborations and outcomes. The primary aim of this Hackathon is to develop open resources (a how-to guide and discussion manuscript) to support and change norms on the implementation of Red Teams in psychology.

9B: Hackathon: From Talk to Action: Organizing Principles to Diversify Psych

Facilitators: Emily Gwynn Turner, Aradhana Srinagesh

If academic psychology hopes to respond to declining mental health across communities, it must diversify who does psychology in order to diversify whom, and how, it serves. The current global moment, if harnessed effectively, can be a portal to transforming psychology research, training, and practice. The hackathon will train hackers in foundational principles of social organizing to empower the psychology community to actualize substantive diversity within its own ranks. During the event, hackers will crystallize diversity campaign goals, benchmarks, calls to action, and mobilizing strategies for harnessing struggle into collective power. Hackers will also be matched with an organizing mentor and other hackers with shared purpose to maximize impact through coalition building. This programming not only helps build skills that are important for collaboration and project management, but also delivers crucial, urgent activism.

9C: Hackathon: Small n but High Power? Manuscript and Preregistration Templates.

Facilitators: Alex Holcombe, Sarah McIntyre

Many journals have adopted policies encouraging, or even requiring, statistical power and sample size planning. However, the conventional power analyses commonly taught do not sit well with small-N, many-trials-per-participant studies, many of which are largely exploratory. This hackathon aims to provide templates for sample size planning and reporting for under-served designs. We are imagining that one template might provide text that would resemble the following: Psychophysical studies can be seen as providing strong evidence for a result within individual participants, with each participant being a sort of replication (Smith & Little, 2018). This comes from the ability to run large numbers of trials in multiple conditions on individual participants. Statistics can also be used, however, to license generalizing to a broader population, by using the between-participants statistical tests that are more popular in psychology broadly. Here we will use both approaches, by both testing individual participants extensively, and using a large enough sample that between-participant statistical tests may also be statistically significant.

9D: Workshop: Preregistration in Psychology

Facilitator: Lisa Spitzer

This workshop is aimed at psychological researchers who are relatively new to preregistration or who would like to learn more about different options for creating preregistrations. Specifically, this workshop will be divided into two parts: In the first part, I will illustrate what a preregistration is and why it is important that researchers preregister their studies. In the second part, I will guide the participants through the preregistration process and give practical advice. I will present various possible routes for creating preregistrations before narrowing in on a practical example. In this example, I will use the R package “prereg” and the PRP-QUANT template that was recently published by a collaboration of psychological societies (APA, BPS, DGPs). I will walk you through the whole process, from creating the preregistration with the template to submitting it to the preregistration platform “PreReg in Psychology” (prereg-psych.org).
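
For those who want a preview of the template-based route, here is a minimal sketch in R. It assumes the “prereg” package is installed; “cos_prereg” is one template bundled with the package, and newer versions also ship a PRP-QUANT template (the exact template name depends on your installed version, so check the package documentation).

  # Minimal sketch (R): drafting a preregistration from a template in
  # the "prereg" package. Template names vary by package version;
  # see the package documentation for the PRP-QUANT template's name.
  # install.packages("prereg")    # once
  rmarkdown::draft(
    "my_preregistration.Rmd",     # file to create
    template = "cos_prereg",      # a template bundled with "prereg"
    package  = "prereg",
    edit     = FALSE
  )

  # Fill in the sections, then render the document for upload to a
  # registry such as PreReg in Psychology (prereg-psych.org):
  rmarkdown::render("my_preregistration.Rmd")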

9E: Unconference: Reform Outside Traditional University Settings

Facilitators: Evan Nesterak, Alex Uzdavines, Natasha Tonge, Haijing Wu Hallenbeck, Lauren Ashley Anker, Chiara Varazzani, Paul E. Plonski

Around the world, behavioral science is being applied outside of traditional academic institutions. Working with governments, businesses, and not-for-profit organizations presents unique challenges to people interested in open science and improving research practices – movements which have often focused on academic institutions.  Despite opportunities to produce robust and transparent research in applied settings, there are barriers.  Without awareness, education, and incentives to implement best practices, there is a risk of conducting research that will lead to more scientific crises (like those that motivated the formation of SIPS).  This session will be focused on brainstorming how to develop, implement, and incentivize best practices.  We will aim to create a living resource that is accessible to applied behavioral scientists as they become interested in improving the quality of their work and the culture of their institutions.  Example questions we hope to discuss:  What are the main barriers to pre-registration, data sharing, and open access to information outside of academia? How can we incentivize a culture of transparency in the application of behavioral science? What would it take for applied groups to come together in a community around best practices in behavioral science?

9F: Unconference: Next Steps in Exploratory Research

Facilitator: Marjan Bakker

The unclear distinction between confirmatory and exploratory research has been cited as one of the main reasons for the reproducibility crisis in psychology (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012). In response, reform efforts have focused mostly on confirmatory research (e.g., multiple preregistration formats, the preregistration challenge). And although most papers mention that exploratory research is important in its own right (e.g., to generate hypotheses or when analyses are complex), we have little guidance on how to conduct these exploratory analyses or how to report them. Model validation techniques and blind analysis have been proposed but are not always applicable. Or is transparency the key? In this unconference session, we will start by creating an overview of the different ways to do exploratory research, how to report it, and the research situations in which each approach is and is not applicable.

10E: Unconference: Expanding the Global Reach of Scholarship: A Case Study of the Open Scholarship Knowledge Base

Facilitator: Marcy Reedy

Despite a professed desire by many to increase inclusion in scholarship, representation from the Global South remains scarce. What are the costs to teaching, learning, research, and scholarship of failing to achieve this goal? How well do current infrastructures support output from the Global South and reduce the risk of Vandana Shiva’s “monocultures of the mind”? The Open Scholarship Knowledge Base (OSKB), an open education resources repository, can potentially address some of these issues as a centralized hub for sharing scholarly resources; however, there are concerns that the OSKB and other OER tools may inadvertently perpetuate existing barriers to inclusion. This guided unconference invites the audience to explore barriers to representation from the Global South and develop strategies for increasing participation of a broader range of global scholars in knowledge exchanges. We hope to produce a list of recommendations that can be implemented by those seeking to increase inclusivity in scholarship.

10F: Unconference: Theory-Building in Open Science: The Heliocentric Model of (Open) Science

Facilitator: Monica Gonzalez-Marquez

We propose a theoretical model to describe the shift from a paper-centered scientific documentation ecology to one where the scientific process itself becomes the orchestrator of scientific documentation. We discuss implications for the division of labor in science and how this model supports greater transparency and accountability.

10G: Unconference: The future of SIPS

Facilitators: Julia Strand, Jennifer Gutsell, Randy McCarthy

The first SIPS meeting (in 2016) had 100 in-person attendees. SIPS2021 has 1000 scientists participating remotely around the globe. What should SIPS meetings look like in the years to come? In this unconference session, the SIPS program committee is eager to hear your input about what you’d like to see retained or changed in future meetings. How might we maintain continuity between SIPS meetings, make participation maximally accessible, and continue to facilitate this dynamic, collaborative environment? How might we balance the inclusivity of remote conferences with the opportunities of in-person ones? Come and chat with us to share your feedback about this year’s meeting and the future of SIPS!

Asynchronous sessions

Async1: Hackathon: How to Start a Revolution

Facilitator: Simine Vazire

Many fields are working towards improvements similar to those SIPS is striving for in psychology. Some have started similar societies (e.g., SORTEE in ecology and evolutionary biology, STORK in sports science). This hackathon will be a place for anyone interested in bringing the "improving ____" movement to their field. We will work together to learn from each other's efforts and help each other take concrete steps to organize similar groups in new fields. This hackathon is especially relevant for people in fields other than psychology who would like to do, or are already doing, something SIPS-like in their field.

Async2: Hackathon: Developing a Global (Yes, Really!) Meta-Data Inventory for Psychological Science

Facilitator: Monica Gonzalez-Marquez

We propose a theoretical model to describe the shift from a paper-centered scientific documentation ecology to one where the scientific process itself becomes the orchestrator of scientific documentation. We discuss implications for the division of labor in science and how this model supports greater transparency and accountability.

Roundtable discussions

  1. Cogs in the Machine: Psychologists in Bureaucracies Outside Universities (Alex Uzdavines)
  2. Increasing engagement in environmentally sustainable behaviours (Alaa)
  3. Low Power, Lots of Responsibility: The challenge of balancing good science with career progress for graduate students and other early career researchers (Sarah Heuckeroth)
  4. The sooner the better. Doing Open Science when you’re an undergraduate (Michalina Tańska)
  5. ReproducibiliTea – For anyone interested in or already a member of a ReproducibiliTea Journal Club (William Ngiam)
  6. From vision to reality: discussing the hidden struggles and winning strategies of open-science start-ups (Anton Lebed)
  7. Happiness and Well-Being Researchers (Elizabeth Jiang)
  8. Researchers studying the Psychology of Sex (Noelani del Rosario-Sabet)
  9. A micropublishing journal for meta-science? Smaller, faster articles and dynamic research campaigns (Nate Jacobs & Sarahanne Field)
  10. How can open scholarship support evidence-based learning in people with neurodiverse conditions? (Mahmoud Elsherif)
  11. Measuring the measurers: a call for all clinical measurement enthusiasts to discuss how we can better evaluate and compare the plethora of clinical progress tracking instruments (Benjamin Armstrong)
  12. Library Support for Improving Psychological Science: How can libraries and library/information professionals support people engaged in the work of Improving Psychological Science?  (Julie Vecchio)
  13. Negotiation and Conflict Management as Research Skills (Elliott Kruse)
  14. Measurement Enthusiasts: A place to discuss the idea of a measurement crisis in psychology; best practices in assessing latent construct validity; tools for improving psychological measurement, such as common method bias testing and measurement invariance assessment; and opportunities to work collaboratively to thoroughly evaluate measures commonly used in psychology. (Joseph McFall)

Pre-data Posters


1

Perception and justification of inequality as predictors of social demands for redistribution

Dayana Amante, Franco Bastias, Juan Carlos Castillo

Latin America has historically been characterized by high levels of poverty and inequality compared to the rest of the world. In this context, research programs on the perception and justification of inequality contribute to the understanding of behaviors and attitudes towards the redistribution of wealth in societies. Objectives. This research project seeks to make an original theoretical contribution at the Latin American level with respect to inequality. It evaluates social demands for redistribution, taking into account the predictive power of variables associated with the perception and justification of inequality. Method. The research design is causal-correlational. At least 300 individuals from each country under study (Argentina, Chile, and Peru) will participate. Data collection, to be carried out through an online survey, is scheduled for 2022. Discussion. The findings of this research are intended to contribute to a better understanding of social attitudes and actions concerning inequality.

2

“You’re awesome” vs. “Your actions are awesome”: Does implicating an actor’s prosocial personality (vs. behavior) in a thank-you note increase subsequent helping?

Anurada Amarasekera, Lara B. Aknin

Past research has shown that receiving a gratitude message can inspire subsequent generosity (Grant & Gino, 2010), but what features of a thank-you note make future assistance more likely? We plan to explore whether referencing the helper’s kind personality (as opposed to the helper’s kind actions) leads to a greater willingness to help in the future. We will conduct a two-part study in which participants provide help to a peer and then receive a thank-you message from the student they assisted. Importantly, participants will receive one of two randomly assigned thank-you messages that reference either the participant’s kind personality or action. Afterward, participants will receive a survey to assess their willingness to help the same person again and to help other students in the future.

3

Playstation Gaming and Well-being: A Panel Study of Objectively Tracked Playtime

Nick Ballou, Craig Sewall, David Zendle, Laurissa Tokarchuk, Sebastian Deterding

Recent years have seen intense research, media and policy attention paid to questions of whether the amount of time spent playing video games affects players’ well-being and/or mental health, with evidence for both positive and negative associations. However, the vast majority of this research has used self-report measures of play, which evidence shows are highly unreliable. Even more concerningly, how inaccurate people are when they self-report their technology use might itself be affected by their well-being, a crucial confound. To address this, we will conduct a 6-wave panel study of 500 participants over the course of 10 weeks, investigating within-person relationships between objectively-measured Playstation gaming, well-being, and self-report inaccuracy.

4

Assessing the short-term effects of detached mindfulness: A micro-intervention for repetitive negative thinking.

Teresa Bolzenkötter

When was the last time you ruminated or worried about something? It’s normal to do so occasionally. Excessive forms of repetitive negative thinking, however, can be harmful for mental health. Detached mindfulness is an intervention that teaches observing and releasing one’s thoughts. In my PhD project, I aim to implement detached mindfulness as a micro-intervention and assess its short-term effects on repetitive thinking and affect using experience sampling methodology. Participants with high levels of repetitive thinking will either practice detached mindfulness or a placebo intervention for several days. Repetitive thinking and affect will be assessed 15 and 30 minutes after each intervention. The effects of detached mindfulness will be compared to those of the placebo intervention as well as a non-intervention baseline phase. I am looking forward to getting feedback on the current study ideas and to discussing possible improvements with you.

5

On the individual prevalence of cognitive phenomena: integrative reanalysis of multiple unit-decade compatibility studies

Hannah Connolly, Julia Bahnmuller, Kristen Bowman, Thomas Faulkenberry, Krzysztof Cipora

Cognitive phenomena have typically been studied at the group level, with little consideration of individual differences. In this project, we propose a comprehensive framework for investigating the individual prevalence of cognitive phenomena which are indexed by differences between compatible and incompatible experimental conditions. To illustrate the approach, we will use the Unit-Decade Compatibility Effect (UDCE), a widely established and well-replicable phenomenon in numerical cognition. Despite its replicability at the group level, little is known about its prevalence at the individual level. In the planned project, we seek to leverage data from multiple research groups to robustly examine the individual prevalence of UDCE using four different approaches: a psychometric and two bootstrapping methods, as well as hierarchical Bayesian models. This framework allows for answering the “does everybody” question across cognitive phenomena, and for checking the robustness of individual prevalence estimates across analytical approaches.

6

Students Under Pressure: How Situational Features Influence Judgments About Cheating

Fiona DeBernardi, Talia Waltzer, Audun Dahl

Although students report that academic cheating is generally wrong, they also report that it is more acceptable in some circumstances. Students' evaluations of cheating in Studies 1-3 depended on context: cheating in high-pressure situations (high obligations to others, low access to resources, and low teacher flexibility) was rated more positively. To examine the boundary conditions of this effect, Study 4 will use different forms of cheating (fraud, faking drug tests, cheating in sports), ranging from severe cases (harmful to others) to less severe ones (victimless cheating). In each of these scenarios, two variables will be manipulated in the vignettes (high versus low access to resources and flexibility), and participants will be asked to rate whether the cheating in the scenario is understandable, good versus bad, ok or not ok, and whether they themselves would cheat in those circumstances.

7

Leader’s and Follower’s career calling

Sophie Gerdel

In this three-wave longitudinal study, I investigate whether leaders’ career calling trickles down to followers’ career calling. Based on social exchange theory, I hypothesize that leader-member exchange (LMX) fully mediates the relation between leaders’ and followers’ calling. I further predict that perceived supervisor support partially mediates the relation between leaders’ calling and LMX. I will collect data among newcomers nested within leaders in a large organization. To analyze the data, I will use multilevel structural equation models. This study will shed light on how leaders influence the development of calling in employees.

8

Does the Policy Work? A Survey of College Administrators’ Views on their Schools’ Academic Integrity Policies

Dakota B. Hughes, Talia Waltzer

Colleges across the US take a variety of approaches to curbing cheating and promoting academic integrity. This project builds on a content analysis of US colleges’ misconduct policies by surveying administrators at a range of schools (N = 60) about their views on their institution’s practices surrounding academic integrity. In 5-minute online surveys, administrators will report their institution’s number of academic dishonesty cases, perceived student understanding of the policy, and how punitive or restorative their policies are. We will compare these responses to characteristics of the schools (e.g., enrollment, private/public) and their policies (e.g., amount of information, extent of punitive language) to assess whether administrators’ perspectives on their policies are reflected in the policies utilized by their institutions. In doing so, this project will reveal the perceived and actual effectiveness of the different types of academic dishonesty policies and provide insight into top-down perspectives of academic integrity policies.

9

Developing a computer game for participatory simulation to explore parents’ strategies to feed preschool children

Dr. Megan Jarman, Prof. Jacqueline Blissett

Parental feeding strategies and children's eating behaviours likely exist in a feedback loop; however, traditional research methods have not allowed an exploration of their dynamic nature. This project aims to examine the feasibility of using interactive computer game simulations to explore the interplay between parents' feeding behaviour and children's dietary intake. We will create a computer game in which parents can create a child avatar and home environment similar to their own in real life and play out mealtimes and the feeding strategies they use. Parents will play multiple times, and the child avatar's responses will be based on what it has 'learned' from previous plays. We will pilot the modified game to collect data on the strategies parents use, the consequences for children's dietary intake, and how these interactions play out over time.

10

Compliance to Government's Orders during Covid-19 Pandemic: Does Religiosity Matter?  

Alma Jeftic

During the COVID-19 pandemic, most countries announced restrictions to prevent citizens from spending too much time outside. One of the measures was to ban religious activities, such as Friday prayers and Sunday mass/services, which caused additional stress to believers. Religious coping refers to the use of religious beliefs or practices to cope with stressful life situations (Pargament et al., 2005). The purpose of this research is to analyse whether religiosity mediates the relationship between COVID-19-related stress and compliance with governments’ orders during the pandemic. The large sample consists of 12,000 participants from 24 countries, collected as part of the COVIDiSTRESS survey. Participants filled out the PSS (Cohen et al., 1983), a six-item scale measuring overall compliance with preventive measures, and a two-item scale measuring level of religiosity. This is a quantitative study using a cross-sectional survey design. A mediation analysis is planned to test whether religious coping influences the relationship between stress and compliance with government orders. Results will be discussed in line with theories of religious coping.

11

Understanding Pandemic-Related Experiences Through an International Survey-Based Collaboration: The iCARE Study

Keven Joyal-Desmarais, Kim Lavoie, Simon Bacon; on behalf of the iCARE Team

In March 2020, the Montreal Behavioural Medicine Centre (MBMC) launched the “iCARE Study” (https://mbmc-cmcm.ca/covid19/). This is an international collaboration that involves a series of surveys that have been monitoring people’s experiences (e.g., behaviours, mental health) around the globe in relation to the COVID-19 pandemic. Every 6 weeks, we launch a new survey, and we are consistently updating the content. As we prepare new surveys, we are always looking for feedback in areas such as: suggestions for what to measure in new survey waves (e.g., health belief constructs), methods to improve the quality of the survey itself (e.g., quality checks), insights on new avenues for participant recruitment (e.g., reaching participants in low-income countries), tools to improve the validity/reproducibility of analyses (e.g., when working with 200+ collaborators/stakeholders with diverse levels of research expertise), and statistical insights (e.g., advanced predictive modelling, dealing with missing data).

12

Do people spontaneously mention more negative emotions when recalling a self-directed (vs. generous) spending experience?

Zohra Kantawala, Dr. Lara Aknin

Past research suggests that generous behavior, such as pro-social spending, leads to higher levels of self-reported positive emotion than self-beneficial behavior (e.g., Dunn, Aknin & Norton, 2008). However, does generosity influence the spontaneous expression of negative emotion? To examine this question, we will code 5,199 recollections of spending for spontaneous mentions of negative emotions (e.g., anxiety, sadness, hostility). Using LIWC (Linguistic Inquiry and Word Count) and third-party human coders, we will compare impromptu expressions of negative emotions between generous (pro-social) and self-directed (personal) spending recollections. We hypothesize that examining spontaneous mentions of negative emotions through these different coding mechanisms will provide greater insight into how pro-social action influences emotions, moving beyond self-reported feelings.

13

Communicating psychological evidence to non-scientists. How to deal with the complexity of psychological science?

Martin Kerwer, Mark Jonas, Gesa Benz, Marlene Stoll, Anita Chasiotis

Plain language summaries (PLS) aim to communicate scientific evidence to non-scientists in an easily understandable manner. In project PLan Psy, we aim to develop empirically validated guidelines on how to write such lay-friendly summaries for psychological meta-analyses. Two pre-registered experimental studies have been conducted so far and have generated interesting insights on how to structure psychological PLS. However, some fundamental research questions remain unanswered, and we would like to discuss our ideas for addressing them. More precisely, this poster outlines our plans for our next study, which will examine how target audience characteristics interact with the complexity of PLS, with the goal of maximizing the impact of our PLS. Against this background, we would like to discuss our ideas on theoretically sound and not overly simplistic ways of (1) assessing empowerment (i.e., laypeople’s ability to use PLS efficiently), and (2) communicating the risk of bias or the trustworthiness of psychological meta-analyses.

14

Test-retest reliability of model parameter estimates in human reinforcement learning.

Owen James Lee, Brendan Williams, Lily Fitzgibbon, Daniel Brady, Owen Lee, Paul Vanags, Niamh Bull, Safia Nait Daoud, Aamir Sohail, Anastasia Christakou

Computational modelling is increasingly used in psychological and neuroscience research, notably in reinforcement learning, to make inferences about cognitive characteristics (Lewandowsky & Farrell, 2011; Lockwood & Klein-Flügge, 2020). Often, the modelling process assumes that parameter estimates remain stable over time and relate to individual differences. As with any methodology, it is important to establish that our measures have test-retest reliability, which will in turn verify the stability assumption. We aim to test the stability of model parameter estimates, using an established reinforcement learning model as an example (Kanen et al., 2019). We will estimate the values of model parameters on two separate occasions for participants completing a probabilistic reversal learning task (Izquierdo & Jentsch, 2012), which has been shown to have good test-retest reliability in terms of participant performance (Freyer et al., 2009). We will then assess the reliability of these estimates over time within participants.

15

Urgent and irresistible: evaluating how time pressure and incentives influence fraud likelihood

Huanxu Liu, Yuki Yamada

Many studies have reported that time pressure affects fraud, with high cognitive load deemed a possible cause. However, inconsistent results in previous research and in our prior study indicate that incentives might play a decisive role in how time pressure affects fraud. Thus, to further clarify the effect of time pressure on fraud, we designed an experiment and plan to perform a two-way mixed-design analysis of variance with time pressure (presence vs. absence) as a between-participants factor and incentive (low vs. mid vs. high) as a within-participants factor. Based on a power analysis for detecting the interaction effect, we plan to recruit 22 participants per group (i.e., 44 in total) and use a "coin flip paradigm" to observe participants' tendency to commit fraud under different conditions. We predict a significant interaction between time pressure and incentives on fraud.

16

Testing arithmetic competences adaptively

Hannah Lönneker, Julia Huber, Krzysztof Cipora, Hans-Christoph Nuerk

Numerical cognition researchers currently use different instruments to measure arithmetic competences, relying on distinct definitions of the underlying construct and largely varying operationalizations thereof. A standardized, time-efficient and theoretically sound instrument is needed to validly and reliably assess arithmetic competences and to ensure comparability between studies.

The aim of this project is to develop an adaptive computerized instrument which assesses performance in the four basic arithmetic operations separately. Items will gradually vary in difficulty (e.g., using carry/borrow operations, increasing problem size) to allow for a precise estimate of the participant’s competence. A test-theoretical approach, such as item response theory, will be used to estimate person and item parameters. Convergent (arithmetic tests), divergent (reading test), and criterion-related (self-reported math grade) validity of the instrument will be estimated, as well as reliability (re-test). All material will be openly available so that the instrument can be standardized in different populations.

17

The role of physiological arousal in media induced stress recovery

Tamas Nagy, Éva Kovaliczky, Virág T. Fodor

People often use media to recover from the negative effects of daily stress and mental fatigue. However, the mechanism behind media-induced stress recovery is not well known, and some observations are counterintuitive. For example, leisure activities that elicit further stress — such as watching a frightening movie or playing a mentally challenging video game — may be the most effective for stress recovery.

In this study, we want to investigate whether physiologically and emotionally challenging media content can aid stress recovery and whether arousal has a moderating role. In a double-blind, parallel-groups experiment, we will induce fatigue by asking participants to complete a set of challenging tasks. Then we will manipulate physiological arousal by administering caffeine or placebo to participants. They will then play a video game that will either elicit negative emotions (a horror game) or not. Finally, participants will again solve challenging tasks similar to those from before, which will serve as the outcome measure.

18

Memory Performance on Social Media: The Effect of Retrieval Type and Attachment Dimensions

Aylin Ozdes, G. Koc-Arik, S. Kirman-Gungorer

The proposed study will test the effect of retrieval types used in social media on memory performance using an experimental design. The first aim of the study is to examine the effects of the types of retrieval used in social media (recording information to an external source, sharing information with an uncertain audience) on memory performance. Moreover, we aim to determine the moderation effect of the attachment dimensions (anxious, avoidant) on the relationship between retrieval types and memory performance for close relationship-related experiences. To reach these aims, participants will be asked to complete a recall task on a computer screen and complete a scale to measure the attachment dimensions. The findings will help to understand the negative effects of social media use on memory performance. In addition, it will contribute to the intervention programs to prevent these effects.

19

Lateralization shift: Can a phonological intervention shift the pattern of cerebral lateralization of written language in children at risk for dyslexia?

Nantia Papadopoulou, Marietta Papadatou-Pastou

A plethora of studies on the cerebral lateralization of language has established the dominance of the left hemisphere for oral language production in the majority of people. Neuroimaging studies have shown that this pattern is altered in cases of learning difficulties, such as dyslexia. Moreover, it was shown that it is possible to shift lateralization patterns in dyslexia to approximate the lateralization pattern of typically developing individuals through appropriate interventions. However, lateralization of written language has been investigated in very few studies and without including a sample of children, neurotypical or not, or assessing the effects of an intervention. The aim of this study is to examine the effect of a phonological intervention on the cerebral lateralization of written language in children at risk for dyslexia compared to typically developing children using functional Transcranial Doppler ultrasonography.

20

[Withdrawn]

21

Therapeutic Support for Racial Trauma and Substance Use: A DBT Group Approach

Krithika Prakash, Ellen Koch, PhD

"Oppression is the overarching umbrella for all sickness with drugs and alcohol", said a participant when looking at the link between racial trauma and substance use in American Indian communities (Skewes & Blume, 2019). Often substance use treatment tends to focus on the problem behavior itself; however, looking to address the socio-cultural context in which said behavior occurs might be beneficial.

Dialectical Behavior Therapy (DBT) uses a biosocial approach to understand and deal with problem behavior. In this study, I am looking to tailor and implement DBT to address substance abuse within the context of racial trauma. Racial and ethnic minority groups deal with societal and personal invalidation and discrimination. DBT may prove beneficial in addressing these concerns for minority groups, thereby alleviating distress and eventually leading to decreased substance use.

22

WARN-D. Designing a large-scale longitudinal online study on forecasting depression: How can we prevent drop-out and errors?

Carlotta Rieble, Ricarda Proppert, Eiko Fried

As depression treatment efficacy remains disappointing, focusing on prevention is crucial. We aim to develop a personalized early warning system, WARN-D, that forecasts depression reliably before it occurs. Starting fall 2021, we will prospectively follow 2,000 students from universities and vocational schools for 2 years. In the first 3 months, we will measure students’ daily mood and lifestyle, combining ecological momentary assessment and smartwatch activity tracking, followed up by quarterly surveys on their mental health and circumstances. As some students will likely experience substantial symptom increases during the study, we can capture the onset of depression. Based on these data, we will build state-of-the-art models that predict individuals’ risk of soon becoming depressed, combining insights from psychological networks, complex systems theory, and machine learning. We hope for input on reaching a diverse student population and achieving high retention, while implementing efficient, error-tight processes for this large-scale longitudinal online study.

23

Neuromodulatory role of context in a social perception task

Alejandra Rossi, FJ Parada, Stefanella Costa-Cordella

The processing of social cues is a fundamental condition of communication between agents. The effective use of these signals is an essential requirement for accessing the social world of which, as an intensely gregarious species, we are part. Furthermore, the social and cultural environment in which we develop is inseparable from cognitive functioning, so the context of socio-affective interaction should modulate cognitive activity. This project aims to analyze the behavioral and neurophysiological changes related to the affective and social support context in a robust experimental paradigm of social perception.

This project seeks to deepen our knowledge of the neuromodulatory role of context in social perception through a novel experimental design, in order to demonstrate the effects of context modulation at different levels of complexity: behavioral, neuroendocrine, and neurophysiological responses.

24

Is working memory differently loaded by specific verbal stimuli depending on individuals’ anxiety profile? A dual-task study.

Serena Rossi, Iro Xenidou-Dervou, Krzysztof Cipora

A negative anxiety-performance correlation is attributed to various cognitive factors. According to the Attentional Control Theory (ACT), anxiety raises an individual’s attention to threat-related stimuli, consequently facilitating the processing of task-irrelevant information and reducing resources - e.g., Working Memory (WM) capacity - necessary to perform an assigned cognitive task. We also know that there are different types of anxiety (e.g., general anxiety, test anxiety, or mathematics anxiety). This study will investigate whether the WM of individuals with different individual anxiety profiles (i.e., configurations of different anxiety types) is differentially affected by specific verbal stimuli. We will use a dual-task design consisting of a primary cognitive task, during which we will load participants’ WM by manipulating the valence of the presented verbal stimuli (neutral, emotional-related, and mathematics-related words). Results can help us identify ways to mitigate the negative link between anxiety and cognitive performance especially in the context of mathematics anxiety.

25

[Withdrawn]

26

Cerebral laterality as assessed by functional transcranial Doppler ultrasound in right-and left-handers: A comparison between pen-and-paper writing and typing.

Christos Samsouris, Marietta Papadatou-Pastou

Written language is traditionally produced using pen and paper, but typing on a PC keyboard has gained widespread popularity in the last decades and has become an equally (if not more) important form of transcription. Regardless, the cerebral laterality of written language production has received little attention, in contrast to the cerebral laterality of oral language production that has been studied extensively. Handedness is an indirect index of cerebral laterality, with right-handers and left-handers exhibiting differences in cerebral laterality during oral language production tasks. In the present study we aim to compare keyboard typing and pen-and-paper writing regarding cerebral laterality. We will use functional Trans-Cranial Doppler (fTCD) ultrasound technology which allows for reliable measurements of hemispheric dominance during language production tasks and is not affected by movements, such as the ones generated during writing. The differences between pen-and-paper writing and typing will further be examined between right-handers and left-handers.

27

Modeling Student Math Achievement Across Countries Using TIMSS 2015 and 2019

Apoorva Shivaram, Elizabeth Dworak

Children’s early math skills are critical for future academic success. To profile the most important predictors of student math achievement, we propose to explore a large-scale secondary dataset (TIMSS) by using empirically driven supervised machine learning methods on nested data across 34 countries. By using these iterative techniques, we seek to determine what features of student, home, teacher, and school characteristics are critical in predicting math achievement in 8th grade students. We are currently piloting these analyses on 4th grade data from 2015 and 2019 to assess the feasibility of these methods. We seek feedback on the methods used in this project prior to submitting a Stage 1 Registered Report. These methods may help us shed light on the contextual factors and/or culture that may account for differences in student math achievement, how analogous these modeled traits are across countries, and how stable these models are across time.

28

Published or lost in the file drawer? Publication rate of preregistered studies in psychology

Lisa Spitzer, Stefanie Mueller

Although publication bias can be investigated indirectly by measuring the proportion of positive results in published literature, it is more difficult to examine directly how many conducted studies are not published. In other scientific disciplines, mandatory registries or ethics applications have been used for this purpose, but no such research has been conducted with respect to psychological studies.

Using preregistrations, we aim to assess the publication rate and bias of psychological studies: For the N = 382 studies that were preregistered on OSF Registries between 2012 and 2018, we will search for corresponding publications in journals. We want to investigate the proportion of preregistered studies published in journals and whether the significance of results has an impact on the time until publication. Furthermore, a survey will be conducted among authors of preregistrations for which no publication in a journal can be identified, to assess reasons for non-publication.

29

Testing Methods to Capture Dynamic Social Context

Marie Stadel, Anna Langener, Laura Bringmann, Gert Stulp, Martien Kas, Marijtje van Duijn

Social context is an essential factor impacting mental health and well-being. Yet comprehensively capturing social context has proven challenging. First, social context is dynamic, whereas most traditional methods involve static measures and do not focus on individual variation. Second, several methods capture different parts of social context, such as daily social interactions, a person’s social network, or online social activity, but research that attempts to combine these methods is scarce. With this study, we aim to investigate how experience sampling methodology (ESM), personal social networks (PSN), and digital phenotyping (using the BEHAPP app) can be combined. Our aim is to find the most participant- and researcher-friendly way of obtaining a complete picture of a person’s dynamic social context.

30

Assessing the Reliability of Congruency Sequence Effect in Confound Minimized Online Tasks

Zsuzsa Szekely, Marton Kovacs

The examination of individual differences in cognitive control has been attracting increasing interest lately. However, the reliability of the congruency sequence effect (CSE), one of the most widely used indicators of cognitive control, is questionable. The lack of clear evidence regarding the reliability of the CSE raises theoretical and methodological concerns for the study of theories based on this construct. In our study, we will examine the reliability of the CSE through four confound-minimized, online conflict tasks (Stroop, Simon, flanker, prime-probe). We plan to investigate the question from two perspectives. First, we will use a between-subjects design, measuring the CSE in each task at two different times; this approach will provide information on the test-retest reliability of the construct. Second, we will use a within-subjects design in which participants complete all four tasks once; this method lets us examine whether CSE effect sizes correlate between different conflict tasks.

31

A Registered Report on Registered Reports: Investigating Potential Benefits of and Barriers to Adopting Registered Reports

Tristan Tibbe, Amanda Montoya, William Krenzer

Authors of registered reports and traditional peer-reviewed articles will be surveyed about their papers and research practices. The research goal will be to compare the processes of publishing registered reports versus traditional papers, and to examine how authors’ research practices differ across methods of publication, controlling for publication date and journal prestige. The findings of this research will contribute to the understanding of possible long-term benefits of adopting registered reports, such as what open science practices registered report authors adopt. The results will also reveal any differences that may exist in the publication processes experienced by authors of registered reports and traditional peer-reviewed articles (e.g., time to publication, number of journals submitted to).

32

Advancing knowledge on the development of child temperament

Lisa Wagner

Dimensions of temperament in children and of personality in adults are conceptually similar and may be integrated (e.g., Donnellan & Robins, 2009). However, in developmental psychology, temperament is frequently conceived as inborn and “stable”, whereas in personality psychology, there is growing interest in personality development. If anything, child temperament is typically seen as a predictor of ability attainment (e.g., Pérez-Pereira et al., 2016). I argue that influences in the other direction (and, of course, bidirectional relationships) are equally conceivable and that considering the relationships between individual differences in other areas of development could be key in understanding early development of personality. To address this question, I plan a three-wave panel study with parents of young children who will report on their children’s temperament repeatedly. Between waves, they will use the kleineWeltentdecker-App (Daum et al., 2020), a smartphone-based developmental diary assessing the age of attainment of developmental milestones in different areas.

33

Examining the Role Feedback and Metacognitive Judgement Play in Post-Error Slowing

Yiqiong Yang, Michelle Ellefson

Error monitoring helps learners make sense of their responses to errors and use external stimuli, such as feedback, to aid the learning process. Error monitoring is indexed by post-error slowing, a delayed response after an error is committed. It is necessary to gain a deeper understanding of individuals’ post-error adaptation and the extent to which it reveals one’s metacognitive abilities. My research project will focus on identifying the role of feedback and metacognitive monitoring in post-error slowing using tasks that incorporate numeracy and science knowledge judgements. Four groups of participants will complete a set of computerised tasks identical in content but differing in the provision of trial-wise feedback and block-wise performance prediction. They will also complete the State Metacognitive Inventory afterward to record their task-related metacognition. I will use a 2 × 2 × 3 mixed ANOVA and hierarchical regressions to answer my research question.

Lightning talks

Room 1

  1. What's new in PsychoPy (Jonathan Peirce)
    PsychoPy has been developed a great deal recently. While it was once developed purely by volunteers in evenings and weekends, it now has a development team of several full-time staff. That means bugs are being fixed and features being added faster than ever, but it’s still open source! Here we will highlight some of the developments and additions over the last year as well as some of the upcoming features.
  2. trackdown: an R package for collaborative writing and editing (Filippo Gambarota)
    R Markdown allows creating high-quality and reproducible documents, but collaborating on writing and editing is tricky. Common word processors (e.g., Microsoft Word or Google Docs) offer a much smoother experience in terms of real-time editing and reviewing that is not available in RStudio. trackdown offers a simple answer to collaborative writing and editing of R Markdown documents. Using trackdown, the local .Rmd file is uploaded as plain text to Google Drive where, thanks to the easily readable Markdown syntax and the well-known online interface offered by Google Docs, collaborators can easily contribute to the writing and editing of the document. After integrating all authors’ contributions, the final document can be downloaded and rendered locally. In this contribution, we will present the package and its main features (a sketch of the workflow appears after this list). trackdown aims to simplify and improve collaboration on scientific contributions by adopting a reproducible, efficient, and high-quality literate programming workflow.
  3. Fostering robustness and transparency in research by developing a point-and-click Multiverse tool (Vera Heininga)
    Multiverse analysis is an approach to data analysis in which the outcomes of all reasonable analytic decisions (e.g., control variables, (in)dependent variables, model, and subsets) are evaluated and interpreted collectively, fostering robustness and transparency. A multiverse analysis demonstrates the extent to which conclusions are robust to arbitrary analytic decisions. However, performing a multiverse analysis is demanding and requires good programming skills in R or Python. We want to take advantage of current multiverse packages and develop a point-and-click Multiverse tool; such a tool would greatly increase the accessibility of multiverse analysis for a large share of the scientific research community. (A bare-bones illustration of the approach appears after this list.)
  4. SampleSizePlanner: A Tool to Estimate and Justify Sample Size for Two-Group Studies (Marton Kovacs)
    Planning sample size requires researchers to identify a statistical technique and to make several choices during the calculation. There is currently a lack of clear guidelines on how to choose the appropriate technique and how to justify these choices. This presentation introduces SampleSizePlanner, a Shiny app and R package that helps researchers determine and justify their sample size with nine different statistical methods for independent two-group study designs. The application highlights the most important decision points for each procedure and suggests example justifications for them. The resulting sample size report can easily be downloaded from the app and added to a manuscript or preregistration.
  5. Autocorrelation screening of repeating response patterns in data (Jaroslav Gottfried)
    Data quality is pivotal for valid and reliable research results. Yet screening for invalid data is not always common practice. In this regard, we propose a new technique called “autocorrelation screening”, which should in theory detect repetitive or highly homogeneous response patterns in data much better than existing techniques. The fundamental idea behind autocorrelation screening is to find the highest absolute correlation of each respondent’s answers with time-shifted copies of themselves, sort respondents by the resulting value, and select those with the highest scores for a thorough validity check (a minimal sketch of the idea appears after this list). Autocorrelation screening runs fast even on large-N datasets and should be able to efficiently detect certain careless respondents. In addition, we present our Shiny application based on this idea, which allows psychological researchers to perform autocorrelation screening on their data easily and for free.
  6. Opening up Open Science: An adaptive toolbox to screen open science knowledge in students and reasons for the lack thereof (David J. Grüning)
    Over the past decade, open science has been gaining traction across disciplines. Strikingly, however, there is one academic group that is frequently under-represented in discussions revolving around open science: students in Bachelor and Master programs. Responding to this issue, we will first assess how much knowledge students have regarding open science. Second, we will evaluate potential barriers that keep students from getting involved with open science. The ultimate goal is to provide guidance for the creation of informed open science offers specifically tailored to students’ needs in different professional tracks and at different universities. We present a preliminary study template for this purpose. The template is deliberately open for adjustment: we are reaching out to researchers to not just reuse but also adapt it. Thereby, in a collaborative fashion, an adaptive toolbox will be created that assesses open science knowledge and barriers.
  7. Statistical Power for Mediation and Conditional Processes (Chris Aberson, Danielle Siegel, Josue Rodriquez)
    This talk presents newly developed tools aimed at making power and sensitivity analyses for mediation models more straightforward for users. The talk introduces the pwr2ppl R package. In addition to power analyses for a broad range of designs, pwr2ppl provides functions for estimating required sample size (i.e., a priori power) or sensitivity for a variety of mediation models. The package currently estimates power for mediation models with up to two predictors, up to four mediating variables, and a single dependent measure, as well as for serial mediation and conditional processes. Several applications from pwr2ppl are also available as Shiny apps for those not familiar with R.
  8. Error Tight: Exercises for Lab Groups to Prevent Research Mistakes (Julia Strand)
    No one is immune from making mistakes. However, it is possible to set up our labs and research systems in ways that reduce the likelihood that we will make mistakes and make it more likely that we’ll catch the ones that slip through. Error Tight is a set of exercises for lab groups to identify places in their research workflow where errors may occur and pinpoint ways to address them. It is intended to be completed during a lab meeting. This talk will briefly introduce the exercises and the benefits of implementing them with your lab group.
  9. Introducing the Critique of Research Ideas Collective (CRIC) - a new possibility for early error detection in research projects (Dwayne Lieck)
    Even though publications are subjected to a critical quality check through peer review, errors are often noticed when it is already too late to prevent them. Therefore, we propose the Critique of Research Ideas Collective (CRIC) as a potential way to improve error detection at an earlier stage of research projects. The idea for CRIC is based on the Red Team Challenge – a call to provide incentivized critique of studies – and the pre-mortem approach in business – where a failure is imagined and possible causes are identified. In CRIC, a group jointly and actively searches for problems with research ideas and finds potential solutions to these problems. The critical analysis of planned studies takes place before preregistration, so that errors can still be corrected. We hope CRIC can offer a free, open, and inclusive community for early error detection that has the potential to improve psychological science.
  10. Combining Data from Different Panels: A Valuable Resource for Research (Alexandra Bagaini)
    In this short talk I will discuss the value of combining data from several panels. Such datasets are generally free, well documented, cover a wide range of topics (e.g., life satisfaction, cognition), and contain responses from large and representative samples of the population. A workshop at the 2019 SIPS conference already addressed the importance of such datasets for research (and teaching), and I’d like to highlight this again. As an example, I will show how such datasets can be used to assess the temporal stability of risk preference, as there is an ongoing debate on whether risk preference is a stable psychological trait (like intelligence) or rather a contextual psychological state (like affect). I will highlight how the operationalisation of risk preference matters and how that can inform and hopefully improve future psychological research.
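
As a concrete illustration of the trackdown workflow described in talk 2, the R sketch below shows the upload/edit/download round trip. The file and folder names are hypothetical, and argument details may differ across package versions, so consult the package documentation before relying on it.

  # Sketch (R): a trackdown collaboration round trip. File and folder
  # names are hypothetical.
  library(trackdown)

  # Upload the local .Rmd to Google Drive as plain text; hide_code
  # keeps code chunks out of collaborators' way.
  upload_file(file = "manuscript.Rmd", gpath = "trackdown",
              hide_code = TRUE)

  # Collaborators edit and comment in Google Docs. If the local file
  # changes in the meantime, push the new version:
  update_file(file = "manuscript.Rmd", gpath = "trackdown",
              hide_code = TRUE)

  # Pull the accepted edits back into the local .Rmd ...
  download_file(file = "manuscript.Rmd", gpath = "trackdown")

  # ... and render the final document locally.
  rmarkdown::render("manuscript.Rmd")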
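
The multiverse idea from talk 3 can also be illustrated without any dedicated package: enumerate the reasonable analytic decisions, run the same model under every combination, and inspect the spread of estimates. The sketch below uses simulated data, and the decision points (covariate sets, a reaction-time exclusion rule) are hypothetical.

  # Minimal multiverse sketch (R) on simulated data. The decision
  # points (covariate sets, an RT-based exclusion) are hypothetical.
  set.seed(1)
  dat <- data.frame(outcome   = rnorm(100),
                    predictor = rnorm(100),
                    age       = rnorm(100),
                    gender    = sample(0:1, 100, replace = TRUE),
                    rt        = runif(100, 150, 600))

  # All combinations of the analytic decisions
  specs <- expand.grid(covariates   = c("1", "age", "age + gender"),
                       exclude_fast = c(FALSE, TRUE),
                       stringsAsFactors = FALSE)

  # Fit the same regression under every specification
  est <- sapply(seq_len(nrow(specs)), function(i) {
    d <- if (specs$exclude_fast[i]) subset(dat, rt > 200) else dat
    f <- as.formula(paste("outcome ~ predictor +", specs$covariates[i]))
    coef(lm(f, data = d))["predictor"]
  })

  # One row per analysis path: how robust is the estimate?
  cbind(specs, estimate = round(est, 3))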
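
And here is a minimal, hypothetical implementation of the autocorrelation screening described in talk 5: for each respondent, take the highest absolute correlation between their answer vector and time-shifted copies of itself, then flag the highest scorers. This is an illustration of the idea as described, not the authors’ code.

  # Sketch (R) of autocorrelation screening. Illustration only; note
  # that constant (straightlined) response vectors would need separate
  # handling, since their correlations are undefined.
  autocorr_screen <- function(responses, max_lag = 5) {
    apply(responses, 1, function(x) {
      n <- length(x)
      max(sapply(seq_len(max_lag), function(lag)
        abs(cor(x[1:(n - lag)], x[(1 + lag):n]))), na.rm = TRUE)
    })
  }

  # Example: 200 respondents, 40 Likert items; the highest scorers are
  # sent to a thorough manual validity check.
  set.seed(1)
  answers <- matrix(sample(1:5, 200 * 40, replace = TRUE), nrow = 200)
  scores  <- autocorr_screen(answers)
  head(order(scores, decreasing = TRUE))   # most suspicious respondents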

Room 2

  1. Outlier Exclusion Procedures Must Be Hypothesis-Blind (Quentin André)
    When researchers choose to identify and exclude outliers from their data, should they do so across all the data, or within experimental conditions? A survey of recent practice shows that both methods are widely used, and common data visualization techniques suggest that outliers should be excluded at the condition level. However, removing outliers by condition runs against the logic of hypothesis testing, and this practice leads to unacceptable increases in false-positive rates. I demonstrate that this conclusion holds true across a variety of statistical tests, exclusion criteria and cutoffs, sample sizes, and data types, and show in simulated experiments and in a re-analysis of existing data that by-condition exclusions can result in false-positive rates as high as 43%. I finally discuss that any outlier exclusion procedure that is not blind to the hypothesis that researchers want to test may result in inflated Type I errors. (A minimal simulation sketch of this point appears at the end of this list.)
  2. The effect of the analyst on statistical results (Szaszi Barnabas)
    Recent multi-analyst projects have revealed substantial variability in analytical strategies and conclusions when several analysts independently addressed the same question using the same dataset. Consequently, the conclusions of a single analyst usually leave us uncertain about the extent to which other acceptable analysis paths would have resulted in a different outcome. Multi-analyst projects map the analysis space and help bring to light inferential uncertainty that usually remains hidden. Here we introduce recently developed, consensus-based guidance for conducting and documenting multi-analyst studies. In addition, we introduce an ongoing study, Multi100, in which multiple analysts will be recruited to independently test a selected hypothesis from 100 published papers from the behavioral and social sciences. Results from the Multi100 project will provide insights into the extent to which different analysts arrive at the same conclusions and at the same effect estimates.
  3. How do causal mechanistic explanations affect perceptions of research findings? (Randy T. Lee, Yuichi Shoda and Vivian Zayas)
    In psychological science, requests for an experimental analysis of mediation are not uncommon (see Bullock & Green, 2010). But what are the implications for perceptions of research findings when they are presented with vs. without mechanisms? Are they seen as more interesting, surprising, and important? Are they seen as more worthy of funding? In the present work, we examined how social psychological and personality findings are perceived with vs. without mechanisms. In a preliminary pre-registered study with college students (N = 205), we examined perceptions of research findings using six articles from the Journal of Personality and Social Psychology. Research findings that described mechanisms were judged as more obvious and less surprising, yet viewed as more important and more worthy of receiving funding. These effects on judgments of fund-worthiness were moderated by science orientation and by the specific finding presented. We raise possible future directions for this program of research and discuss this seemingly contradictory set of findings.
  4. Qualitative Registered Reports: The Hypothesis Paradox (Veli-Matti Karhulahti)
    Registered reports (RRs) operate primarily with quantitative methods, often expecting testable hypotheses. This is rarely possible with qualitative methods, as there are few, if any, reliable means to collect and quantify qualitative data for hypothesis testing. Some have argued that, because of this, RRs should not be applied to qualitative studies. I argue that RRs should be applied to qualitative studies, but with specific “qualitative hypotheses” (QHs). The goal of a QH study is not to seek evidence for an alternative hypothesis (or the null), as qualitative research is not well suited for null hypothesis significance testing; in qualitative research, goals are usually nonconfirmatory. However, such research, too, comes with existing biases, which the RR format can help counter. In qualitative RRs, QHs are needed for disclosing biases: what is expected from the data based on the literature, pilot data, and so on. QHs serve not to test hypotheses, but to disclose hypothetical biases.
  5. Accelerating cultural change with collective action: A Project FOK update (Cooper Smout)
    Academia functions like a ‘tragedy of the commons’ dilemma: Open Science practices have the potential to benefit the entire research community (and beyond), but remain underutilised due to incentive structures that prioritise novelty over reliability and reward publications in high-impact journals. In recent years, ‘conditional pledge’ platforms (e.g., Kickstarter, Collaction) have been increasingly used to tackle comparable collective action problems, but this concept remains to be implemented within academia. Here, I will present Project Free Our Knowledge, a new platform that aims to overcome cultural inertia by organising the collective adoption of open and reproducible research practices. I’ll give a quick background to the project, introduce some new campaigns that have been posted in the past year, and invite conference attendees to propose additional campaigns for the future.
  6. Adversarial collaboration for advancing knowledge, not perpetuating debates (Robert H Logie)
    Scientific debates can be a major driver of scientific advance, but debates can also perpetuate themselves indefinitely. Scientists tend to work with like-minded colleagues, motivated by promoting a particular theory and demonstrating how alternative views are inadequate rather than by seeking resolution. Newell (1973) commented “You can’t play 20 questions with nature and win”, and listed 24 binary oppositions reflecting decades of debate in cognitive psychology. Many of those binary oppositions remain unresolved and might be considered ‘stalled’ by debate. Watkins (1984) likened cognitive theories to toothbrushes: we all need one but would not want to use one belonging to someone else. In this lightning talk, I will illustrate the approach of adversarial collaboration, in which researchers from different sides of a debate agree to work on a common project aimed at resolving the debate and advancing understanding, rather than letting it self-perpetuate with no resolution in sight.
  7. A peer-review intervention to improve transparent reporting (Robert Thibault)
    Preregistration aims to increase the trustworthiness of research, in part by clearly demarcating exploratory and confirmatory design choices and analytic decisions. Clinical trial research uses a comparable procedure that several organizations mandate: prospective registration. In practice, departures from registered study plans often go undisclosed in publications. We conducted a systematic review and meta-analyses finding that 10-68% (95% prediction interval, I2 = 86%) of studies have at least one discrepant primary outcome between their registration and associated publication, and 13-95% (95% prediction interval, I2 = 90%) have at least one discrepant secondary outcome. We then ran a feasibility study to test the implementation of a peer-review intervention we call ‘discrepancy review’. For this intervention, journal editors invited a member of our team to peer review submitted manuscripts specifically for discrepancies. If successful, discrepancy review may present one feasible solution for improving the trustworthiness of published research.
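    For readers unfamiliar with the figures above: in a random-effects meta-analysis, a 95% prediction interval estimates the range within which the rate in a new, comparable study is expected to fall, and I2 is the share of total variance attributable to between-study heterogeneity. A standard formulation (Higgins, Thompson, & Spiegelhalter, 2009), offered here as background rather than as the exact estimator the authors used, is

        $\hat{\mu} \pm t_{k-2}^{0.975}\,\sqrt{\hat{\tau}^{2} + \widehat{\mathrm{SE}}(\hat{\mu})^{2}}$

    where $\hat{\mu}$ is the pooled estimate, $\hat{\tau}^{2}$ the estimated between-study variance, and $k$ the number of studies.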
  8. Personal and Interpersonal Uses of Peer Review Checklists (Jay Patel)
    Each encounter with a scholarly poster, talk, or paper requires a careful assessment of its rigor, usefulness, and clarity. In both academic and lay communities, such assessments are unfortunately made without checklists that would ensure reliable and comprehensive judgements. If a variety of peer-review checklists were developed and iteratively tested to guide reviews, scholars could ensure that publications meet basic thresholds of transparency, openness, rigor, and clarity (the interpersonal use). Checklists could benefit authors too, by enabling them to self-monitor their adherence to domain-general principles of sound scholarship and communication (the personal use). In this talk, I will outline a year's worth of thinking about and prototyping of peer-review checklists and the associated visualizations that summarize them. I believe they can stimulate peer-review reforms across the psychological sciences.
  9. Bottom-up Curriculum Change: Student Recommendations on More Effective Education about Better Research Practices (Izabelė Jonušaitė, Michelle Kühn, Ava Q. Ma de Sousa, Andreas Pingouras, Mona M.L. Zimmerman, Judit Campdepadrós Barrios, Destiny Carbello, Margot Steijger, Marta Stojanović)
    One of the goals of the student-led Open Science Initiative at the Brain and Cognitive Sciences Research Master (University of Amsterdam) has been to increase awareness of good research practices within the student body. In addition to organising extracurricular workshops and journal club sessions on various aspects of Open Science for students already interested in these topics, we also worked together with the programme coordinator on reviewing and providing suggestions on the core curriculum of the programme. In that way, we have aimed to ensure that every student of the programme has the chance to learn about the existing issues within the field and the potential ways of overcoming them. In this lightning talk, we will discuss three main recommendations that we made on the curriculum. We will also discuss how these suggestions were received by the course convenors, and the extent to which they have been successfully implemented in the programme.
  10. The Junior Researcher Programme - An initiative providing opportunities to early-career researchers in psychology (Hannes Jarke)
    Opportunities to gain hands-on experience developing a project from research question to publication are often limited in psychology degrees. This talk introduces the Junior Researcher Programme (JRP): a volunteer-led initiative that enables students from across the world to work together on an international research project under the supervision of early-career researchers. Each cohort starts with a one-week summer school, where students and supervisors meet to plan their projects. Supported by the JRP, teams carry out their research over the next 13 months, after which they meet again to present their findings. The programme has been successfully running for the past ten years, and we are proud to have an active international alumni network, which provides further opportunities for career development. Whether you are interested in becoming a supervisor, participating as a student, volunteering with us, or becoming an independent reviewer, join us at the lightning talk to learn more.

Room 3

  1. Meta-analysis, pre-registration, and behavioural priming (Robert Ross)
    Meta-analyses of psychology studies typically find that effect sizes are highly heterogeneous. Moreover, this heterogeneity tends to be very difficult to interpret because meta-analytic methods that aim to correct for biases rely on untestable assumptions about the processes that generate biases and the magnitude of biases. In this talk I argue that a useful (if imperfect) approach for dealing with biases is to conduct meta-analyses that include only those studies that were pre-registered. I make my case using examples drawn from the behavioural priming literature.
  2. Meta-scientific project: Object orientation effects across languages (Sau-Chin Chen)
    Object orientation effects reflect the mental simulation of an object’s orientation while reading a sentence. The original studies in English and some European languages showed weak effects, and researchers have not yet identified the causes of this relatively low evidential value. Building on the framework of the Psychological Science Accelerator, we collected data from more languages than past studies. Our initial analysis confirmed that the effects were as weak as in past studies in English and European languages; the current database, however, shows relatively larger effects in Asian languages. To help researchers investigate this diversity among languages, the project will provide the basis for advanced meta-analyses and initial effect-size estimates for designing experiments in particular languages.
  3. Experiencing open neuroscience through the reanalysis of two fMRI studies: Tom et al. (2007, Science) and Botvinik-Nezer et al. (2020, Nature) (Chun-Chia Kung, Hanshin Jo, Anastazija Popesku, Yan-Ru Chen, Chun-Yi Chien, Siao-Shan Shen, and Le-Si Wang)
    While the open science movement, if dated from the Nosek et al. (2015) Science paper, has spawned many reforms, neuroscience (here, neuroeconomics in particular) has yet to catch up. In this talk, we (the instructor and students of the 2019 spring and 2020 fall fMRI graduate courses) reanalyzed the raw data released on openneuro.org (datasets 0007 and 1734, respectively). Since the second study (in part, N = 54) was a registered replication of the first (N = 16), and only one of the nine preregistered hypotheses was partially supported, we dug further: first replicating both papers’ main findings with identical software and procedures (a sanity check), then showing, by comparing both studies on common ground, that shrinking replication effect sizes, rather than methodological differences, are the more likely reason for the discrepancies. These activities enrich students’ learning through hands-on experience with contemporary papers, methodologies, and suggested practices.
  4. Too WEIRD, Too Fast: Preprints about COVID-19 in the Psychological Sciences (Arathy Puthillam)
    Previous research has shown that an overwhelming majority of research published in the psychological and behavioral sciences focuses on WEIRD (Western, Educated, Industrialized, Rich, Democratic) countries. This is a cause for concern, especially when one considers how research is utilized in applied and diverse contexts. It was highlighted in the past year where responses to the global pandemic were concerned. The present study aims to understand preprints about the pandemic published on PsyArXiv. Specifically, we collected data about the sample sizes and nationality of participants, along with the gender (if known) and nationality of the authors, in two waves: preprints published between February and April 2020, and between April and December 2020. We also test which of the papers from the first wave were published, and in which journals; we contend that papers with WEIRD samples and authors are more likely to be published, and in journals with higher impact factors.
  5. Caution, preprint! Brief explanations allow non-scientists to differentiate between preprints and peer-reviewed journal articles (Tobias Wingen, Jana Berkessel, Simone Dohle)
    A growing number of research findings are initially published as preprints. Preprints are not peer-reviewed and thus have not undergone the established scientific quality-control process. Many researchers hence worry that preprints might reach non-scientists (e.g., policymakers or practitioners) who do not differentiate them from the peer-reviewed literature. Across five studies in Germany and the US, we investigated whether this concern is warranted and whether the problem can be solved by providing non-scientists with a brief explanation of preprints and the peer-review process. Studies 1 and 2 showed that, without an explanation, non-scientists perceived research findings published as preprints as equally credible as findings published as peer-reviewed articles. However, an explanation of preprints and the peer-review process reduced the perceived credibility of preprints (Studies 3 to 5). Adding such an explanation to preprints thus allows us to harvest the benefits of preprints while reducing concerns about public overconfidence in the presented findings.
  6. Open Science cross-culturally: First results of the OSCC-project and their implications for Open Science (Myriam A. Baum, Alexander Hart)
    Researchers apply Open Science (OS) practices differently: some practice them frequently, while others have never heard of (many of) them at all. The reasons for these differences, across disciplines and across countries, might be manifold and dependent on local peculiarities. The ongoing OSCC project investigates, in an international survey, how frequently certain OS practices are used and which obstacles might prevent a researcher from practicing them. The intended sample consists of researchers from different career stages and various disciplines all over the world, and the project aims to provide valuable insights into the current state of OS dissemination. In this lightning talk, we will present first results, showing which practices are more and which are less common in contemporary science, as well as which barriers prevent researchers from engaging in certain OS practices more frequently.
  7. Measuring ideology: Current practices, their consequences, and recommendations (Flavio Azevedo)
    Political ideologies are foundational to a broad range of social science fields, such as political science and social and political psychology. While scholars use diverse and wide-ranging approaches to its study, all have in common the measurement of an individual’s (latent) political ideology. To investigate this practice in detail, we conducted an exhaustive literature review of over 400 scientific articles, spanning the 1930s to the 2020s, across a wide range of social science subfields. Furthermore, and importantly, it is standard practice to assume that ideological inventories can be used interchangeably. This untested assumption, if shown not to hold, may pose a threat to the comparability and generalizability of findings. Indeed, we show empirically, with a high-powered nationally representative sample, that at least five established 'traditional' findings in ideological research can change as a function of the instrument used. We then discuss the consequences and offer recommendations.
  8. How Can We Do Experiments on How People Learn Hard Things? (Laura Fries, Ji Y. Son, James W. Stigler)
    Psychological science has long been interested in understanding and improving how people learn. Constraints of time, resources, and experimental design have meant that much of what we know comes from laboratory studies conducted over relatively short periods of time with contrived or simple subject matter. Our work sets out to change this with a new approach to studying teaching and learning on a novel technology platform, CourseKata.org, which allows for large-scale experiments in real courses. We use this approach in an introductory statistics course in which students learn complex concepts over the span of weeks and months. We are currently conducting large-scale A/B tests with hundreds of students, and are able to examine the variability that occurs at the level of the instructor, course, and student as we make incremental improvements to the curriculum and, on a larger scale, to our theories of teaching and learning.

Room 4

  1. The role of health technology assessment in strengthening public mental health systems (Apurvakumar Pandya)
    There has been rapid generation, continuous innovation, and incremental improvement of health technologies to prevent, diagnose, and treat mental ailments. Public mental health systems worldwide need to ensure safety, clinical and cost-effectiveness, equity, and sustainability. However, not all technology innovations result in overall health gains, nor does their implementation always yield cost-efficient solutions. Health technology assessment (HTA) is a well-acknowledged method for assessing the cost-effectiveness of two or more health technologies. HTA is concerned with the medical, organizational, economic, patient-related, legal, ethical, and societal consequences of implementing health technologies or interventions within the health system. It is increasingly seen as an innovative way to sustain and improve health systems. However, very few HTA studies assess technologies for public mental health. This lightning talk presents a framework for identifying technologies best suited for HTA, the process of carrying out an HTA study, and ways of advocating the uptake of study findings by policy-makers.
  2. Addressing Multiplicities in the Experiences of Marginalised People: The Application of Pluralistic Qualitative Analysis (Siobhan Thomas)
    Pluralistic qualitative analysis refers to the process of analysing a single dataset through two or more qualitative methods, thus allowing researchers to draw on multiple epistemological frameworks. This approach enables a comprehensive treatment of complex topics by scaffolding a multi-dimensional and more nuanced understanding of the data. The novelty of the approach, however, means that potential applications to different populations have yet to be explored. I argue that pluralistic qualitative analysis can be particularly beneficial to research that explores the experiences of marginalised and stigmatised populations. By providing a methodological and epistemological reflection on the varied ways that marginalised people experience themselves and the world around them, pluralism lends itself to an analysis that better represents the intricacies of stigmatised people’s experiences. In this way, it offers researchers the chance to understand experiences of marginalisation more thoroughly.
  3. Open science awareness and practices in ethnic minority and cultural psychology (Linh Nguyen, Jocelyn Li, Wendy Schlinsog, Qilin Zhang, Moin Syed)
    Previous research has provided descriptive results on the prevalence of questionable research practices, researchers’ opinions of them, and proposed reforms to research practices. We investigated these questions among researchers in ethnic minority and cultural psychology. Although it has been suggested that this subfield faces unique challenges to adopting proposed reforms, such as confidentiality concerns with sensitive qualitative data, this claim has not been empirically tested. Data were collected from 352 participants who recently published in relevant journals in the subfield, 265 of whom provided complete quantitative responses. Among the ten questionable practices included, researchers indicated the lowest endorsement of and engagement in undisclosed data imputation and the highest in selective reporting of statistical models. Most researchers were aware of the proposed reforms of posting data, posting instruments, and preregistration; yet fewer than a quarter reported active engagement in these practices. Prevalent reasons for disengagement included proprietary materials, risk of deidentification, and lack of time. Notably, 44% reported not having preregistered due to unfamiliarity with the practice. Thematic analyses of open-ended responses highlighted themes of extrinsic/intrinsic motivation, practicality, lack of knowledge, relational ethics, and perceptions of open science group identity. This descriptive research provides important insights to calibrate outreach efforts and understand researchers’ concerns in adopting proposed reforms to scientific practices.
  4. [Withdrawn]
  5. Collecting Data in Collaboration with Nonprofit Organizations (Ruth Ditlmann)
    Collecting data in collaboration with nonprofit organizations (NPOs) has great potential for advancing psychological science. It allows us to test whether our theories hold up in the field, collect data with greater ecological validity than in lab or survey research, improve the outreach of our science, and include the perspectives of the communities who participate in our research. I will share my experience from two multi-year collaborations with NPOs, including how the collaborations came about, how we funded the research, digital tools that I found useful, and challenges I encountered.
  6. The future of digital mental health care amidst the psychologisation of the COVID-19 crisis (Pragya Lodha)
    The COVID-19 pandemic pushed various lifestyle choices to go digital, and the global lockdown brought with it the digitalization of psychological services. The feasibility, affordability, and cost-saving solutions offered by digital technology have been well received by mental health practitioners and amplified by a surge in research literature claiming that the future of mental health is digital. During the devastating pandemic, numerous researchers and practitioners characterized it as a psychological crisis. This calls into question the difference between the digitalization of mental health and the psychologization of the COVID-19 crisis. The psychologization of the crisis means looking at the pandemic (broadly) from a psychological standpoint and aggrandizing the parallel pandemic of mental health concerns. Digital mental health care contributes to the pathologization of individuals’ psychological reactions to the pandemic, leaving no room for the subjectivity of experiences. Understanding that COVID-19 is a collective trauma and that an array of psychologically distressed reactions are normative responses to a pandemic, the drive to datafy and digitize our subjective experiences in the 21st century should be met with questions. Inevitably, the technological revolution will forge ahead and present us with advanced ways of understanding mental health; the potential of digital mental health to contribute to mental health literacy cannot be denied. However, it is important to reflect upon how much we know about being human and, hence, whether digital means can suffice to sustain our mental well-being in the time to come. Attention needs to return to the bio-psycho-social model at a time when the biomedical and neurobiological focus on mental health is expanding. This calls on us as mental health practitioners to brainstorm and uphold a psychotherapeutics that acknowledges and celebrates individual differences. We invite you to think about securing the future of mental health care in a digitally advancing world.
  7. Community-Engagement Pedagogy: Inviting Community Professionals into the Classroom and Guiding Students' Involvement in their Communities (Shlomit Flaisher-Grinberg)
    The employment of community-engagement practices within academia has been demonstrated to facilitate research, professional training, and instructional pedagogy. In an attempt to improve students’ learning outcomes in the context of civic engagement, a new community-engaged curricular item was created within the undergraduate psychology program. The “Canine Learning and Behavior” course offers students the opportunity to foster shelter dogs, train them, and improve their behavioral repertoire using the knowledge offered by the field of psychology. While community professionals teach within the academic classroom, enrolled students engage in the community and learn to understand its dynamics and needs. An assessment of course effects demonstrated that it facilitated students’ success, comprehension of the material, and graduate school/professional career preparation. It also increased students’ confidence in acquired skills, positively influenced their attitudes towards community involvement, and created high satisfaction rates among the community partners. The implications, limitations, and future directions of this pedagogy will be discussed.