2021 Conference Program
Remote Edition
June 23-25, 2021
with support from
The 2021 SIPS conference is not like a regular conference. There are no symposia or keynote speakers: the meeting will be action-oriented and will focus on initiating and conducting projects. In other words: no passive listening to boring talks. Instead, get your hands dirty and start improving psychological science!
The general principles:
The Society for the Improvement of Psychological Science (SIPS) aims to provide a harassment-free event experience for everyone, regardless of gender, gender identity, gender expression, race, ethnicity, caste, national origin, citizen status, age, sexual orientation, disability, appearance, body size, religion, socioeconomic status, other group status, or their intersection. We do not tolerate harassment of event participants in any form. Event participants violating these rules may be sanctioned, including being expelled without a refund. The full SIPS Event Code of Conduct is available here. Please see also the conference rules for the online 2021 meeting. In short:
Wednesday, June 23 | ||||||||
All times are in CEST (local time in Padova, Italy) | ||||||||
Time | A | B | C | D | E | F | G | H
15:00 | Opening Session: Ivy Onyeador | |||||||
16:00 | Getting Practice with Theory Building: Starting with What you Know…or What You Think You Know | Increasing Researcher Transparency and Reflection Through Positionality Statements: Lessons From Qualitative Research | Robust Mediation Analysis | How Do We Meaningfully Interpret Effect Size Indices? | Decolonizing Science | How to Dip Your Toes in Open Science and Bring It to Your Department | Accelerating the Adoption of Registered Reports in Clinical Psychology and Psychiatry | Getting Psychology to the People: Creating Topical 'Wikis' That Target Crisis-Relevant Issues |
17:30 | Break | |||||||
18:00 | Decentering Whiteness Within Research Methods Courses | Best Practices for Addressing Missing Data through Multiple Imputation | Developing Best Practices for Publishing Theses, Dissertations, and Other Student Scholarship | Understanding and Incorporating Data Simulation into the Research Pipeline: A Practical Guide for the Novice Simulator | Machine Learning for Exploratory Research | Interaction Effect: Doing the Right Thing | Minimum-Effect Significance Testing (MEST) and Equivalence Testing: A Unified Framework and a Hands-On Tutorial | How Can We Improve Registered Reports for Authors, Reviewers, and Editors? |
19:30 | Statistical Frontiers for Selective Reporting and Publication Bias | Transparency in Coding Open-Ended Data: Best Practices from Interrater Reliability to Dissemination | (Too) Many Shades of Reaction Time Data Preprocessing | |||||
21:00 | Social Hour in gather.town! | |||||||
Hackathons | Workshops | Unconferences |
Facilitators: Natasha Tonge, Marilyn Piccirillo
The “theory crisis” in psychological science is a longstanding issue that has recently been spotlighted in several papers. Although previous literature offers helpful outlines for building and testing stronger psychological theories, beginning a reproducible workflow can be daunting and may pose a barrier to implementing these practices. We draw on the first step of the theory construction methodology (TCM) in Borsboom (2020): identifying empirical phenomena. Our workshop will draw on our own experience of building a theory of social anxiety/depression comorbidity and will highlight the tools we used to create a replicable workflow for reviewing and consolidating the literature around our phenomenon of interest. The first part of the workshop will present our workflow in connection to theory building. In the second part, we will guide attendees as they brainstorm and plan their own workflows. We will conclude by discussing the proposed workflows and areas for improvement, and by soliciting feedback from attendees.
Facilitators: Crystal Steltenpohl, Jaclyn A. Siegel, Kathryn R. Klement
Positionality statements are an excellent way of improving the transparency and reflexivity of any qualitative, quantitative, or mixed-methods project. In this workshop, you will learn how unchecked and biased assumptions have led us astray as a field, why and how people use positionality statements, who positionality statements are for, and how to use them to increase the transparency and rigor of your own work. Through guided activities, participants will begin drafting a positionality statement for a project of their choosing, and they are encouraged to use these processes to examine the tools we use, the questions we ask, and how we interpret our results.
Facilitators: Andreas Alfons, Nüfer Y. Ates
Mediation analysis is one of the most widely used statistical techniques in the behavioral sciences. The simplest form of a mediation model allows researchers to study how an independent variable (X) affects a dependent variable (Y) through an intervening variable called a mediator (M). The standard test for the indirect effect of X on Y through M is a bootstrap test based on ordinary least squares (OLS) regressions. However, this test is very sensitive to deviations from normality assumptions, such as outliers, heavy tails, or skewness. This poses a serious threat to empirical testing of theory about mediation mechanisms. The R package robmed implements a robust test for mediation analysis based on the fast and robust bootstrap methodology for robust regression estimators. This procedure yields reliable results for estimating the effect size and assessing its significance, even when the data deviate from the usual normality assumptions. In addition to simple mediation models, the package also provides functionality for mediation models with multiple mediators as well as control variables. Furthermore, several alternative bootstrap tests are included in the package.
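For orientation, a minimal sketch of what a robust mediation test with robmed might look like in R follows; the data and variable names are made up for illustration, and details may differ from the workshop materials.

```r
# Illustrative sketch (not workshop material): robust bootstrap test of an
# indirect effect with the robmed package, using simulated data.
# install.packages("robmed")
library(robmed)

set.seed(123)
n <- 200
X <- rnorm(n)                       # independent variable
M <- 0.5 * X + rnorm(n)             # mediator
Y <- 0.4 * M + 0.2 * X + rnorm(n)   # dependent variable
dat <- data.frame(X, M, Y)

# Robust bootstrap test of the indirect effect of X on Y through M
fit <- test_mediation(dat, x = "X", y = "Y", m = "M", robust = TRUE)
summary(fit)
```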
Facilitators: Kevin Peters, Rob Cribbie, Nataly Beribisky
Researchers are encouraged to report effect size indices in their work and advised to interpret these indices within the context of their research area. This advice seems very reasonable and an improvement over relying on fixed cutoffs for small, medium, and large effect sizes. Upon reflection, however, this advice is difficult to apply in practice. What exactly does context mean here, and how does a researcher go about interpreting their effect size indices in this manner? What factors do researchers consider (and not consider) when interpreting their effect size indices? The goal of this session will be to have an interactive discussion of issues surrounding effect size interpretation. In addition to discussing how they approach effect size interpretation, participants will also be asked to brainstorm ways in which our discipline can promote meaningful interpretations of effect size indices.
Facilitators: Sarah A Sauvé, Elizabeth Phillips, Wyatt Schiefelbein
Many of the scientific practices upheld in universities and research institutions today are products of European rationalism and are deeply tied to colonialism. Searching for a single objective “truth,” constructing exclusive hierarchies and benchmarks of academic success, and performing research on minority populations while systematically repressing and ignoring their own knowledge creations and canons are just some of the ways colonialism manifests itself in Western science. In contrast, decolonized science emphasizes the value of different types of knowledge, doing research with minority populations, and including these populations and their traditional practices in knowledge creation.
In this unconference, the session leaders (two white settlers and a Métis scholar from Canada) will give an overview of how science – often unwittingly – upholds colonialism and perpetuates harmful patterns of extraction and power imbalance. We will then spend the remaining time discussing anti-racist, open science, and community-based methodologies that can change the way we do science and mitigate these harms.
Facilitators: Naseem Dillman-Hasso, Ummul-Kiram Kathawalla, Lena Ackermann
Targeted at graduate students and early career researchers, this session builds on the paper Easing Into Open Science: A Guide for Graduate Students and Their Advisors. Wanting to improve scientific practices and methodology does not always come naturally in academia and can be intimidating if you lack support around you. We will talk about how to dip your toes into open science practices while bringing your department with you, how to deal with the intricacies of publishing quotas and departmental push-back, and we will hear personal anecdotes about what has and has not worked. There will be a working document for this session where people can add resources and anecdotes; it will be published on the OSF page of the paper.
Facilitators: Olivia Kirtley, Ginette Lafit
Registered Reports (RRs) have been adopted by 250 journals as an article format that prioritises the quality of research questions and methods over results. RRs have specific requirements, including minimum statistical power, a focus on specific data types, and the timing of submission in relation to ethical approval. Clinical psychology and psychiatry journals have not yet widely adopted RRs, perhaps because some requirements raise particular challenges for these fields. Phenomena of interest may be rare, threatening statistical power. Pre-existing data are widely used, yet many journals do not support RRs for such data. Furthermore, ethical approval for clinical studies is often slow and intensive, and amendments required following RR peer review may significantly delay the start of studies. In this unconference, we will 1) critically discuss barriers to RR use in clinical psychology and psychiatry, and 2) discuss the creation of support structures to reduce or remove these barriers whilst preserving the goals of RRs.
Facilitators: Dawn Holford, Ulrike Hahn
Our session asks how psychology as a discipline can better respond in a crisis. We will lead a discussion on some of the underlying barriers impeding psychologists from responding effectively to crises like COVID-19, including the difficulty of consolidating rapidly-emerging information from many sources, the challenges in generating consensus within the field, and the lack of contributions from a wider range of expertise—often due to lack of opportunity for diverse input— among others. We will also share how we created a living resource with expert-led consolidation of information to tackle the (then imminent, now real!) behavioural issue of vaccine hesitancy (the “COVID-19 Vaccine Communication Handbook and Wiki”), and use it as a springboard to discuss other grassroots projects with similar aims. We encourage attendees to identify other opportunities that could be addressed with a topical “Wiki” and give us feedback on how these ideas and processes could be improved.
Facilitators: Margaret Echelbarger, Tissyana Camacho
Much of psychology is WEIRD and White cultural norms dominate how we teach the science. These norms often center the experiences of White, WEIRD participants and privilege some methodologies over others (i.e., deeming some methods more rigorous than others). Further, centering the experiences of White, WEIRD participants can signal to students of color that their own experiences are less relevant to the science and/or are better discussed as “special topics.” We call for more culturally-affirming curricula that decenter Whiteness from research methods courses and offer students, especially students of color, the opportunity to see themselves in a science in which their own experiences are underrepresented (Camacho & Echelbarger, 2021). During this hackathon, we will generate a list of resources from which instructors can draw to: 1) develop more culturally-affirming research methods curricula, and 2) interrogate their own teaching practices in the service of positively moving psychological science forward.
Facilitators: Adrienne D. Woods, Pamela Davis-Kean, Jessica Logan, Max Halvorson, Kevin King, Menglin Xu
Adequately addressing missing data is a pervasive issue in the social sciences. Failure to correctly address missing data can lead to biased or inefficient estimation of parameters, confidence intervals, and significance tests. Multiple imputation (MI) is a statistical technique for handling missing data that uses the observed data to generate multiple complete datasets in which missing values are replaced by plausible values, each incorporating a random component to reflect uncertainty. Each dataset is analyzed individually and identically, and parameter estimates are pooled into one set of estimates, variances, and confidence intervals. Although this technique is widely used, there is little consensus on what constitutes best practices in MI, including with regard to assessing the extent of missing data bias and reporting MI procedures in publications. The goal of this session is to collectively compile a list of resources or citations on multiple imputation and missing data, as well as to create MI coding templates for several prominent software languages (Stata, Mplus, R, SAS, Blimp). We will crowdsource these resources and templates to create an academic paper that can be used as a “roadmap” to MI, similar to previous SIPS products on preregistration and open science. Interested participants will be invited to coauthor this paper.
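For reference (an illustrative sketch, not a product of this session), a minimal multiple-imputation workflow in R with the mice package looks roughly like this:

```r
# Illustrative MI workflow with the mice package: impute, analyze each
# completed dataset identically, then pool estimates with Rubin's rules.
# install.packages("mice")
library(mice)

data(nhanes)                                              # example data with missing values
imp  <- mice(nhanes, m = 5, seed = 1, printFlag = FALSE)  # 5 imputed datasets
fits <- with(imp, lm(chl ~ age + bmi))                    # identical analysis on each dataset
summary(pool(fits))                                       # pooled estimates and inference
```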
Facilitators: Kathleen Schmidt, Hannah Moshontz
Transforming a thesis or dissertation into a journal article presents a number of practical and ethical challenges. Graduate students, postdocs, and early career faculty may struggle to find time, motivation, or guidance for adapting their theses and dissertations into publishable article manuscripts. Revising these works often requires substantial changes to their length, scope, and intended audience or focus. Authors may need to revisit analyses, add or emphasize arguments in light of new or unconsidered literature, or remove aspects of the work that were tailored to the idiosyncratic requests of a committee. Many student works are never published in any format or are published in part (e.g., including only positive results) or without full transparency. This hackathon will produce a guide outlining best practices for adapting theses, dissertations, and other academic projects into rigorous, transparent manuscripts suited for publication. We intend to submit this guide as a manuscript for publication shortly after the conference. We will draw on collective knowledge and experiences to identify and summarize problems and solutions for adapting both recently completed and long neglected student scholarship for publication.
Facilitators: Mark C. Adkins, Udi Alter, Nataly Beribisky, Phil Chalmers, Y. Andre Wang
This workshop will introduce researchers to data simulation methods in psychological research. Methodologists frequently rely on simulation experiments to create tools and make recommendations for research practices aimed at improving psychological science. Yet, empirical researchers often have little experience in, or knowledge of, data simulation techniques, which creates barriers to critically assessing simulation results and effectively using simulation-based tools. We seek to lower these barriers in this workshop. The first half of the workshop will introduce the concept of Monte Carlo simulations, why and when they should be used, and how to interpret results from simulation studies. Attendees will be acquainted with pwrSEM, an open-source simulation-based application for power estimation, and learn how it can be flexibly adapted for their individual research programs. The second half will guide attendees through simulating data for various research purposes using the SimDesign package in R. This section will provide hands-on experience with constructing and interpreting a completely customized simulation study. The workshop offers theoretical background, practical tools, and applied experience with simulation methods to improve attendees’ literacy and skills in quantitative methodology for psychological research.
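To convey the basic Monte Carlo logic the workshop builds on (a generic base-R sketch under assumed parameters, not workshop code; the workshop itself uses pwrSEM and the SimDesign package):

```r
# Generic Monte Carlo power simulation: generate data under an assumed effect,
# analyze each replicate, and summarise across replications.
set.seed(42)

simulate_p <- function(n_per_group, effect_size) {
  g1 <- rnorm(n_per_group, mean = 0)
  g2 <- rnorm(n_per_group, mean = effect_size)
  t.test(g1, g2)$p.value
}

p_values <- replicate(2000, simulate_p(n_per_group = 50, effect_size = 0.4))
mean(p_values < .05)   # estimated power for d = 0.4 with n = 50 per group (~.50)
```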
Facilitator: Anna Szabelska
Machine learning is very popular in data science nowadays, and it is a handy tool. Can psychology benefit from it? Yes, it can! In this workshop I will introduce machine learning, describe various techniques, and discuss the ways in which machine learning can be used in psychological research. In the practical part we will very briefly go through classic exploratory techniques (to provide context) and then focus on machine learning. Step by step, we will build a machine learning model, brainstorm alternative ways of interpreting it, and discuss possible approaches to generating predictions. The workshop will end by providing and discussing the best resources for further self-study.
Facilitators: Sara Garofalo, Mariagrazia Benassi
Interaction effects are of special interest in the psychological sciences. They are observed whenever the impact of one independent variable changes based on the level of another independent variable. Typically, experimental designs in psychology involve this kind of expectation, as they often consist of factorial (ANOVA) designs in which experimental and control groups or conditions are contrasted and compared. Despite being widely used, ANOVA interaction effects have historically been misinterpreted, and recent evidence still points to errors in the way they are analysed and explained. This workshop will be divided into two sessions: the first will provide an overview of the statistical assumptions behind interaction effects and the pros and cons of the most common approaches to their investigation (post-hoc tests, planned comparisons, t-tests); the second will present how to use confidence intervals and Bayesian informative hypotheses for a more powerful interpretation of interaction effects, based on a more descriptive/exploratory or hypothesis-driven approach, respectively.
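For a flavor of the confidence-interval approach covered in the second session (a minimal R sketch with simulated data; not workshop code):

```r
# Illustrative 2x2 design: estimate the interaction term of a linear model and
# report its confidence interval rather than relying only on the ANOVA p-value.
set.seed(1)
d <- expand.grid(group = c("control", "treatment"),
                 condition = c("easy", "hard"))
d <- d[rep(1:4, each = 30), ]
d$y <- rnorm(nrow(d)) +
  0.5 * (d$group == "treatment") * (d$condition == "hard")   # true interaction

fit <- lm(y ~ group * condition, data = d)
summary(fit)
confint(fit)["grouptreatment:conditionhard", ]   # CI for the interaction term
```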
Facilitators: Adam Smiley, Jessica Glazier, Yuichi Shoda
Minimum-effect significance testing (MEST) allows researchers to test if the true effect in the population is large enough to be meaningful. No matter how large the sample size, MEST—unlike traditional null hypothesis testing—will never be significant if the observed effect is weaker than the smallest effect of consequence. When MEST is used in conjunction with equivalence testing (EqT), researchers now have a complete set of tools for testing if the effect is large enough to matter, too small to be of consequence, or if more evidence is needed to reach a conclusion. In this workshop, we will present the basic logic of a unified framework encompassing MEST, EqT, and traditional null hypothesis testing. We will then provide a hands-on tutorial with several examples of applications to data including use in registered reports, as well as suggested wording for reporting results. Additionally, we will facilitate conversations (both in small groups and as a full session) about how attendees can apply this framework to their own research.
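As a rough illustration of the logic (a base-R sketch, not the workshop's materials), both tests can be run as one-sided t-tests against a shifted null; here the smallest effect size of interest is assumed to be a raw mean difference of 0.3:

```r
# Illustrative minimum-effect and equivalence tests as shifted-null t-tests.
# The smallest effect size of interest (SESOI) of 0.3 is purely illustrative.
set.seed(7)
x     <- rnorm(100, mean = 0.6)   # e.g., within-person difference scores
sesoi <- 0.3

# Minimum-effect test: is the mean reliably LARGER than the SESOI?
t.test(x, mu = sesoi, alternative = "greater")

# Equivalence test (two one-sided tests): is the mean reliably INSIDE [-SESOI, SESOI]?
t.test(x, mu =  sesoi, alternative = "less")      # against the upper bound
t.test(x, mu = -sesoi, alternative = "greater")   # against the lower bound
# If both one-sided tests are significant, the effect is statistically smaller
# in magnitude than the SESOI.
```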
Facilitators: Loukia Tzavella, Ben Meghreblian, Aoife O’Mahony
We invite researchers from all career levels to discuss how we can improve Registered Reports (RRs) for authors, reviewers, and editors. Discussion themes will include addressing the key challenges surrounding the adoption and implementation of RRs, expanding the format for more types of research, and increasing their overall accessibility. The insights gained from this discussion will be used to guide improvements to the RR process and the quality of RRs being published (e.g., RR study design templates, community feedback, quality monitoring). We plan to have a dedicated space for early career researchers (ECRs) who wish to discuss their experience with RRs and potential barriers or concerns. RRs can further be improved for reviewers and editors with standardised checklists of RR criteria and tailored guidance and/or training. Feedback from our survey, Slack channel, and the unconference will also inform initiatives that encourage the adoption of RRs by authors and journals.
Facilitators: Maya Mathur, James E. Pustejovsky
This workshop will cover methods to investigate selective reporting in meta-analysis of statistically dependent effect sizes, which are a common feature of systematic reviews in psychology. The workshop is organized into two sections. In the first section, we will describe situations where dependent effect sizes occur and review methods for summarizing findings in the presence of dependent effects. We will then describe methods for creating and interpreting funnel plots, including tests of asymmetry, with dependent effect sizes. In the second section, we will present new statistical sensitivity analyses for publication bias, which perform well in small meta-analyses, those with non-normal or dependent effect sizes, and those with heterogeneity. The sensitivity analyses enable statements such as “For publication bias to shift the observed point estimate to the null, ‘significant’ results would need to be at least 10-fold more likely to be published than negative or ‘non-significant’ results” or “no amount of publication bias could explain away the average effect.” In both sections, we will demonstrate methods using R code and examples from real meta-analyses.
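As a small point of reference (illustrative only, with simulated independent effect sizes; the workshop will use its own code and real meta-analyses, and dependent effect sizes require the multilevel and robust-variance extensions discussed in the session), a basic funnel plot and asymmetry test can be produced with the metafor package:

```r
# Illustrative funnel plot and regression test for funnel-plot asymmetry with
# metafor, using simulated independent effect sizes.
# install.packages("metafor")
library(metafor)

set.seed(11)
k  <- 30
vi <- runif(k, 0.01, 0.10)                         # sampling variances
yi <- rnorm(k, mean = 0.2, sd = sqrt(vi + 0.02))   # observed effect sizes

res <- rma(yi = yi, vi = vi)   # random-effects meta-analysis
funnel(res)                    # funnel plot
regtest(res)                   # Egger-type regression test for asymmetry
```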
Facilitators: Talia Waltzer, Clare Conry-Murray
Quantifying open-ended data (e.g., verbal responses to questions, video-recorded behaviors) is a crucial part of social science. However, practices for coding and assessing reliability can vary widely across different groups of researchers, and practices are not always made clear in publications. How were coding categories developed? How was agreement between coders established? Many scholars are left to figure out the answers by themselves, or they inherit practices from their lab groups. Even though these decisions can influence statistical measures of reliability (e.g., Cohen’s κ), they are often omitted from published papers. This session aims to increase transparency about coding and reliability practices by fostering dialogue among folks who work with (or are interested in) open-ended data. If there is interest, we will also have an informal hackathon to (1) draft a document to summarize common practices and recommendations and (2) compile a list of key information that should be made transparent when disseminating research.
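For concreteness (an illustrative base-R calculation, not a session product), Cohen's κ for two coders can be computed directly from their agreement table; packages such as irr or psych provide equivalent functions.

```r
# Illustrative by-hand Cohen's kappa for two coders of the same responses.
coder1 <- c("yes", "yes", "no", "no", "yes", "no", "yes", "no",  "yes", "yes")
coder2 <- c("yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes")

tab <- table(coder1, coder2)
po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
(po - pe) / (1 - pe)                                  # Cohen's kappa
```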
Facilitators: Krzysztof Cipora, Hannah D. Loenneker
When doing research in cognitive psychology and measuring reaction times, the number of researcher degrees of freedom seems quite limited compared to more complex observational designs. Nevertheless, there are multiple possible data preprocessing pipelines (e.g., how to trim outlier reaction times, how to aggregate, etc.). Even when investigating the same phenomena, and using supposedly the same tasks, labs differ considerably in data treatment routines. These differences might contribute to differences in observed effect sizes and in the reliabilities of observed effects. In this session, I would like to initiate a discussion on whether and how to account for these differences: Does it make sense to run a form of multiverse analysis on results of cognitive tasks? Should we systematically investigate the effects of data treatment routines? Should we build standards / best practices for each task / paradigm?
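To make these degrees of freedom concrete (an illustrative base-R sketch, not session material), even a single outlier-trimming decision can be implemented in several defensible ways, each yielding a different condition mean:

```r
# Illustrative: several common reaction-time trimming rules applied to the
# same (simulated) data produce different means.
set.seed(3)
rt <- rlnorm(500, meanlog = 6.2, sdlog = 0.4)    # simulated RTs in ms

mean(rt)                                         # no trimming
mean(rt[rt > 200 & rt < 2000])                   # fixed cutoffs
mean(rt[abs(scale(rt)) < 2.5])                   # within +/- 2.5 SD of the mean
mean(rt[rt < median(rt) + 3 * mad(rt)])          # below median + 3 MAD
```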
Thursday, June 24 | ||||||||
All times are in CEST (local time in Padova, Italy) | ||||||||
Time | A | B | C | D | E | F | G | H
12:00 | Guidelines on Including Non-WEIRD Populations in Psychological Science | Web Scraping Using R | Matching Stimuli (or Anything) Reproducibly | Taking Experiments Online with PsychoPy/ Pavlovia | Improving Interdisciplinary Review | Replication and Meta-Analysis: When Similar and When Not? | Bridging the gap between research and the public: Building an online resource repository of best practices for public engagement and research communication | |
13:30 | Break | |||||||
14:00 | Lightning talks | |||||||
15:00 | Data Management Hackathon | Developing Resources to Support Teaching Faculty and Integrate Open Scholarship Content Into Curricula | Finalizing a Preregistration Template for ERP Studies | MetaSIPS: A Metascience Un-unconference | How Helpful Are Diversity Classifications Such As WEIRD/Non-WEIRD or Global North/South for Psychological Science? | Disseminating the Idea of a Standard Enabling Sustainable (Re)use of Research Data | Digital Trace Data for Psychological Research – How Can We Access Data That Enable Innovative Research While Avoiding Another Cambridge Analytica Case? | How could we create a researcher skills/time exchange platform to improve psychology?
16:30 | How Psych Science Can De-racialize for Its Improvement | Large-Scale Psychological Science: Reflecting on Lessons Learned | Sponsored Workshop: Online Research Methods with Gorilla Experiment Builder | Guidelines for Transparency in Open-Ended Data | ||||
18:00 | Meet Prolific, see pre-data posters, attend roundtable discussions in Gather.town | |||||||
Hackathons | Workshops | Unconferences |
Facilitators: James Montilla Doble, Arathy Puthillam, Hansika Kapoor
Previous studies have shown that research in mainstream psychology has been dependent on American (Arnett, 2008) or WEIRD (Western, educated, industrialized, rich, and democratic; Henrich et al., 2010) populations. Not much has changed in the past decade or so. A 2018 study, for example, found that over 70% of samples in research published in Psychological Science during 2017 were from North America, Europe, and Australia (Rad et al., 2018). A recent preprint has also identified that USA-based researchers were overrepresented in editorial positions in psychology and neuroscience journals (Palser et al., 2021). In this hackathon, we aim to create guidelines on and standards for evaluating and increasing diversity and inclusion in psychological research. We have identified key stakeholder groups for whom these guidelines are intended, such as authors, journal editors, and reviewers.
Facilitators: Tobias Wingen, Felix Speckmann
The internet contains a broad range of data concerning people's online behavior. Using automated web scraping scripts, researchers can download large amounts of this data with relatively little effort. Types of data that can be publicly accessed are manifold, such as Amazon reviews, newspaper articles, movie ratings, or blog posts. In our web scraping workshop, we will explain how to use web scraping to systematically extract data from websites, effectively supplying researchers with additional approaches within their field of research. Our workshop will focus on the use of the package “rvest” in conjunction with the popular programming language “R”. The theoretical introduction to web scraping will be accompanied by practical exercises. As part of those exercises, participants will write their own basic scripts to extract data from the web. The workshop is an ideal primer for participants to conduct web scraping projects in their field of research.
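As a taste of the approach (an illustrative sketch; the URL and CSS selector are placeholders rather than workshop materials), scraping with rvest typically follows a read–select–extract pattern:

```r
# Illustrative rvest pattern: read a page, select elements, extract text.
# The URL and CSS selector below are placeholders for a real target page.
# install.packages("rvest")
library(rvest)

page <- read_html("https://example.com/articles")   # placeholder URL

headings <- page |>
  html_elements("h2.article-title") |>              # placeholder CSS selector
  html_text2()                                      # extract cleaned text

head(headings)
```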
Facilitator: Jack Taylor
Researchers often need to tightly control for confounding variables across conditions. Often, however, researchers are limited to using only a finite set of existing items. For example, you may be restricted to using a database of only a limited number of candidate words, or images of faces, or recordings of speech. Usually, people approach this problem by manually finding close matches on relevant dimensions. Manually crafting stimuli in this way is time-consuming and very difficult to do reproducibly. In this workshop, I'll show two solutions, using existing tools, for creating controlled stimuli reproducibly in R. The first solution uses an item-wise approach, creating directly comparable items in each condition. The second solution uses a distribution-wise approach, maximising the similarity in distributions across conditions. I’ll show how these two solutions are extremely flexible and can be applied to a range of different problems. Finally, I’ll discuss how using such an approach can aid reproducibility, replicability, and transparency of studies’ methods.
Facilitators: Rebecca Hirst, Thomas Pronk
PsychoPy is free, open-source software for running behavioural studies that now supports online experiments through integration with Pavlovia.org. In this session we will demonstrate the basics of pushing a study online from PsychoPy, how to view the data, and how to make the most of Pavlovia, for example by using the thousands of publicly available experiments shared by the PsychoPy community.
Facilitators: Hannah Metzler, Jana Lasser
While the importance of interdisciplinary work is widely recognized, getting such work funded or published is often hard. One reason is the difficulty of simultaneously meeting the standards of different disciplines, against which reviewers judge the work. Although a body of work already recognizes this problem, concrete tips and guidelines for people reviewing and writing interdisciplinary articles and proposals are missing. In this session, we will first collect common problems in the interdisciplinary review of publications and research proposals from the attendees’ experience, and then draft a list of points to include in practical guidelines for reviewers, reviewees, and journals/grant agencies. Potential issues to address include ways to deal with partial expertise and confidence of reviewers, ideas for an expertise taxonomy for reviewers, and the adaptation of new tools for peer review (crowdsourcing, open peer review, etc.) to interdisciplinary contexts.
Facilitator: Sera-Maren Wiechert
According to Carter and colleagues (2019), meta-analyses are subject to field-dependent statistical biases, depending on the extent to which biases are present at the individual study level, e.g., the level of publication bias in the literature, heterogeneity, and/or the number of studies available. As a result, meta-analytic effect sizes across fields, topics, and paradigms may differ in magnitude from the effect sizes obtained in controlled, preregistered (larger-scale) replications. But is this always the case? Or under which circumstances are replication and meta-analytic effect sizes more similar? In an open discussion, we hope to generate new ideas about the variables that may affect this comparison and thereby contribute to these biases and to the divergence in effect sizes. These insights would be relevant not only from a theoretical standpoint, but would also give a better sense of how comparisons between replications and meta-analyses should be interpreted.
Facilitator: Annayah Prosser
Public engagement and research communication are increasingly important skills for scientists and researchers looking to address societal challenges. However, training in these skills is scarce and widely dispersed, and resources can be difficult for researchers to find. It can be hard to know how best to communicate with the public on different platforms (e.g., broadcast media, social media, community partnerships), and how to discuss one's work while maintaining full rigour and transparency. In this hackathon, we'll work together to collate links to best practices for public engagement and research communication into an open-access online repository that any researcher can access. In doing this, we hope to highlight the important work already being done in this area and give researchers access to a variety of tools they can use to better bridge the gap between science and the public.
Facilitators: Anna Wysocki, Michaela DeBolt, Kailey Lawson, Sarah Schiavone, Arianne Herrera-Bennett
The goal of this hackathon is to create an open-access syllabus on data management—a crucial skill that is rarely taught formally—that could be used or adapted for graduate seminars, advanced undergraduate courses, or individual study. Attendees will be provided a skeleton syllabus outlining potential modules and topics that could be included in the syllabus (e.g., data preparation, version control, data sharing). During the hackathon, attendees will collaborate to design, structure, and populate the syllabus. This will include proposing additional modules and determining which topics will be covered within each module. After creating a structure for the syllabus, attendees will add resources to each of the modules. The end product of this hackathon will be a syllabus that outlines critical components of data management and provides an integrated collection of resources for researchers to learn about the best practices in these areas.
Facilitators: Olly Robertson, Sam Parsons, Madeleine Pownall, Flavio Azevedo, Mahmoud Elsherif, Martin Vasilev, and Alaa AlDoh
Developing educational resources is essential for facilitating engagement with, adherence to, and learning of research transparency, replicability, openness and reproducibility. To support instructors, we propose building resources which can be integrated into taught courses. Creating or changing course content can be onerous and time-consuming. We aim to make evidence-based, high-quality lesson plans and activities available to teaching faculty, thus reducing the labour required to develop and implement open scholarship content. This hackathon aims to create resources to support educators by progressing the “200+ Summaries of Open and Reproducible Science Literature” project and developing different activities and lesson plans for teaching open science. Attendees will collectively compile and review summaries of the key literature; create lesson plans/activities and categorize them based on their theme, learning outcome, and method of delivery. Summaries and activities may then be mapped onto lesson plans for ease of use and will be made publicly available.
Facilitators: Gisela Govaart, Mariella Paul, Antonio Schettino
During a hackathon at SIPS 2019, attendees started a preregistration template for EEG research. Over the last two years, a community of active volunteers has been working on this template asynchronously (via Google Docs and Slack) and synchronously, during hackathons organized by the Open Science initiative at the Max Planck Institute for Human Cognitive and Brain Sciences. Now, the time has come to finalize the document. Three weeks before SIPS 2021, we will circulate a “minimally viable product” of the template to prospective attendees. In this clean version of the document, all lingering issues will be clearly marked as open for feedback. In preparation for the hackathon, prospective attendees can comment on the document. During the hackathon, the organizers will moderate discussions and incorporate feedback to achieve maximal consensus and have a final version of the template. Afterwards, the document will be sent to COS with the request to add it to the OSF preregistration templates.
Facilitator: Julia Bottesini
We propose a psychological metascience un-unconference: a 3-hour session made up of six 30-minute slots (15- to 20-minute talks followed by Q&A). Metascience is the examination of a scientific discipline’s processes, practices, and products using scientific methodology. Metascientific work in psychology is essential for addressing pressing questions in the field (e.g., what findings should we try to replicate? What is the optimal balance of individual versus team science? How can we improve measurement? How can we measure scientific progress within the domain of psychology?). Some of these questions, which are often discussed at SIPS, can be addressed with existing theoretical and empirical work. As such, the primary goal of this session would be to collate a set of metascientific talks which could be used to improve future SIPS sessions and research by SIPS members. And let’s be honest, in the flurry of activity that is SIPS, a session to sit back and enjoy your coffee while you're talked at will feel like a welcome break.
Facilitators: Sakshi Ghai, Amy Orben, Michael Muthukrishna
The time has come to rethink the study of diverse populations in psychology. Many would agree that the WEIRD acronym (Western, educated, industrialized, rich, democratic) has sensitized our field to the importance of sample diversity. Indeed, diverse populations are a necessary condition for conducting high-quality research. However, these oft-mentioned terms – WEIRD vs. non-WEIRD or Global North vs. South – might risk overgeneralizing the extent of human diversity by inadvertently putting vastly different populations into unified boxes. This practice raises essential questions. Do we collectively assume that all non-WEIRD populations, such as Indians, Kenyans, and Brazilians, are uneducated and poor? Do Eastern societies such as South Korea and Japan still count as non-WEIRD, given that they are advanced economies? Are these terms mutually exclusive and collectively exhaustive? In this unconference, we will a) reflect on the perils and opportunities of using diversity classifications and b) discuss how we can make our science more inclusive.
Facilitators: Marie-Luise Müller, Katarina Blask, Marc Latz
With the rise of the Open Science movement, calls for more transparency and openness in scientific research have grown. As a result, making research data accessible to the broader public, in order to enable sustainable (re)use of data, has become increasingly important within psychological science. However, there currently exists no single standard that allows psychologists from all sub-disciplines to optimally prepare their data for reuse. To close this gap, we have started to develop a user-friendly curation standard that meets the requirements necessary to guarantee the long-term interpretability and reusability of research data. However, it is not enough to develop a standard without knowing how to spread it within the research community. Therefore, a comprehensive dissemination concept is needed. The aim of this unconference is to identify and discuss strategically important action goals for the dissemination of the standard, as well as possible strategies for their implementation.
The vast amounts of data generated by the use of digital technology are valuable resources for psychological research. Projects like mypersonality.org and numerous publications from different fields of psychology have demonstrated the great potential of these so-called digital trace data. At the same time, the Cambridge Analytica scandal has highlighted some of the risks related to such data, especially with regard to privacy and data protection. What the Cambridge Analytica incident and its consequences have also shown is that depending on commercial companies and their decisions for data access is risky for researchers. For example, data access via the Application Programming Interfaces (API) offered by many platforms can be drastically reduced or even shut off completely. Hence, there is a need for new ways of access to digital trace data for psychological research. Recently, different models have been proposed, including partnerships with companies or data donation by platform users. Naturally, all of those options have specific pros and cons, and none of them are trivial to implement. The purpose of this session is to discuss what kind of data access we as researchers need and how this can be implemented in a way that enables innovative research while also adhering to legal regulations and ethical principles. In addition to data access, these discussions also relate to questions of data sharing as privacy concerns and platform terms of service can conflict with ideals of open science (especially also regarding the reproducibility of research).
Facilitator: Dr. Vernita Perkins
Racism, a social construct, systemically resides in every aspect of our world and civilizational history, including psychological science. Its residue and detriment can be traced across more than five centuries of modern history. Identifying and eradicating this systemic structure and individual cognition affords scope to dismantle not only racism, but all other forms of oppression, inequity, and exploitation. This unconference offers a rare opportunity to openly discuss how psychological science has been impoverished by the inequities and exploitation of racism, and to brainstorm how a psychological science without racism and its siblings (sexism, ageism, genderism, ableism) and its parents (casteism and capitalism) could thrive in ideology, theory, and methodology by re-imagining terminology, training, and research practices, entering a new psychological science revolution.
Facilitators: Maximilian Primbs, Jessica Kay Flake, Biljana Gjoneska, Gerit Pfuhl, Jordan Wagge, Erin M. Buchanan, Patrick Forscher, Miguel Silan, Nicholas Coles
The Psychological Science Accelerator (PSA) is a globally distributed network of psychological science laboratories that coordinates data collection for large-scale research projects. The PSA recently published its first research study (Jones et al., 2021). We want to take this opportunity to reflect on lessons learned from doing large-scale psychological research. Using our recent research projects as examples, we will highlight issues that arise regarding the recruitment of underrepresented minorities, the involvement of graduate and undergraduate students, translation, lab management, methodology and measurement, funding, manuscript writing, and other aspects of the team science research process, and give advice to researchers on how to avoid these issues. Participants will have the opportunity to ask questions of a panel of researchers engaged in large-scale psychological research. We will then invite discussion and ask attendees to share their perspectives on these issues.
6G: Sponsored Workshop: Online Research Methods with Gorilla Experiment Builder
Facilitators: Jo Evershed, Joshua Balsters, Ashleigh Johnstone
Before COVID-19, online research was a choice, but recently it has become a necessity. Since taking the leap, researchers are enjoying the benefits of the speed, scale, and reach of online research, but worry about data quality when they can't see their participants. In this lecture we aim to cover these benefits in more detail, along with how successful pioneers have overcome some of the key challenges associated with online behavioural research. We will also provide an overview of the Gorilla Experiment Builder and a Q&A session.
Facilitator: Clare Conry-Murray
We will write a paper proposing guidelines for coding open-ended data in a way that is valid, transparent, and reproducible.
Friday, June 25 | |||||||
All times are in CEST (local time in Padova, Italy) | |||||||
Time | A | B | C | D | E | F | G
8:00 | Many Modelers | GitFun: Introduction to git and GitHub | Introduction to PsychOpen CAMA: Data, Methods, and User Interface for Replicable and Dynamic Meta-Analyses | (Too) many shades of reaction time data preprocessing—a hackathon | ManyMoments - Improving the Replicability and Generalizability of Intensive Longitudinal Studies | ||
9:30 | How to Write a Plain Summary of Your Research: Gain New Perspectives and Open Up Your Research to a Wider Audience | New Publishing Format: Research Modules | Introducing the Journal Editors Discussion Interface | ||||
11:00 | Break | ||||||
11:30 | Rolling Out The Red Carpet for Red Teams in Psychology | [12:00 CEST start] From Talk to Action: Organizing Principles to Diversify Psych | Small n but High Power? Manuscript and Preregistration Templates | Preregistration in Psychology | Reform Outside Traditional University Settings | Next Steps in Exploratory Research | |
13:00 | Expanding the Global Reach of Scholarship: A Case Study of the Open Scholarship Knowledge Base | Theory-Building in Open Science: The Heliocentric Model of (Open) Science | The future of SIPS | ||||
14:30 | Closing Session: Adeyemi Adetula and Heather Urry | ||||||
Hackathons | Workshops | Unconferences |
Facilitators: Noah van Dongen, Leonid Tiokhin, Adam Finnemann, Jill de Ron, Shirley Wang, Denny Borsboom
“Nothing is as practical as a good theory.” (Lewin, 1943).
Scientific theories allow us to explain the world and inform possible causal interventions. For example, the theory of evolution explains why species exist and allows us to develop causal interventions to select for less-virulent pathogens.
Unfortunately, psychology lacks strong theory (Cummins, 2000). Many psychological theories exist, but their scope, assumptions, and explanatory power are often unclear. One way to evaluate theory veracity is to build a formal model that captures aspects of the theory and observe if the model can (re)produce relevant phenomena. Yet, any given theory can be instantiated with a wide range of models.
Here, we propose that a ‘many modelers’ approach can help. During this hackathon, teams of modelers and scientists will formalize a theory and test whether their model can reproduce phenomena that the theory purports to explain. This will be fun (and maybe useful).
Facilitator: Ana Martinovici
Version control is one of the tools you can use to improve the (numerical) reproducibility of your results (link). In this session you will learn how to use one of the most popular version control systems: git. There are many ways of using git; in this session, you will practice using RStudio (point-and-click menus, no command-line code) and GitHub. Target audience: anyone who uses data and/or code in their research but doesn’t use a version control system to keep track of changes to their files. By the end of the workshop, you will be able to: create repositories on GitHub, clone repositories on your device, make changes to files in repositories, commit and push the changes to repositories, and collaborate with others on GitHub (both co-authors and other researchers you don’t know).
Facilitator: Tanja Burgard
PsychOpen CAMA is a platform enabling the publication of reproducible and dynamic meta-analyses in psychology. It is a service of ZPID (Leibniz Institute for Psychology) and provides a template to facilitate updating and augmenting existing meta-analyses by the research community. Standardized meta-analytic datasets are available via a point-and-click interface. In the background, analyses are conducted via an OpenCPU server with the help of an R package consisting of standardized data, metadata, and meta-analytic functions. The workshop will introduce attendees to the need for and concept of Community-Augmented Meta-Analysis (CAMA) systems. PsychOpen CAMA will be presented in more detail, including the concrete architecture of the system as well as the data templates and underlying methodology. A demonstration will give an overview of the available functionalities on the platform. Furthermore, ways to contribute or extend data in PsychOpen CAMA will be presented, and potential further ways of acquiring and extending datasets in PsychOpen CAMA will be discussed.
Facilitator: Krzysztof Cipora
As a follow-up to our session “(Too) many shades of reaction time data preprocessing” (3H), we propose a hackathon under the same title. Participants of the session expressed their interest in working on the project further. In the hackathon we want to develop frameworks for (1) using integrative data analysis to investigate how data preprocessing routines affect observed effect sizes for a specific cognitive phenomenon; (2) setting up multiverse analysis parameters for specific cognitive phenomena; (3) building unified protocols for “gold standards” of reaction time data preprocessing for specific tasks / phenomena.
Facilitator: Julia Moeller
The increasing reach of the experience sampling method (ESM) and other intensive longitudinal sampling procedures has generated new opportunities to study people’s everyday experiences. At the same time, cumulative, replicable, and generalizable knowledge gain may be thwarted in some domains applying this method for various reasons, such as small, unrepresentative samples or limitations to a few contexts (e.g., one school district, one specific area). This unconference discusses challenges to the replicability and generalizability of ESM findings and aims to identify and generate solutions to these problems in a collaborative brainstorming and debate. This session builds upon prior work by a group of experts who have gathered to help improve replicability and generalizability in ESM research: the ManyMoments Consortium, following the example of other multi-lab collaborations, such as the ManyLabs study (Moshontz et al., 2018; Klein et al., 2018), the ManyBabies study (Frank et al., 2017; ManyBabies Consortium, 2020), and the ManyPrimates study (Altschul et al., 2019). In this unconference, we first summarize the challenges to replicable and generalizable ESM research that we have so far identified as specific to work with intensive longitudinal data. We then give an overview of existing solutions, suggest new ones that help solve these challenges and increase replicability, and hope for much creative input from the participants in an open debate. With this unconference, we hope to start a debate about needs and solutions for replicable ESM research and to get participants interested in joining a collaborative ESM study.
Facilitators: Marlene Stoll, Anita Chasiotis
In this session, we will explore ways to communicate (psychological) scientific results in a lay-friendly, but not oversimplified or lurid manner. Working with your own examples, I will guide you with evidence-based rules regarding linguistic and formal aspects. At the end of this workshop, you will be able to formulate your own plain language summary (PLS). Not only does the provision of such PLS open up your research to a larger audience - the PLS writing process can also give you a new perspective on your own work.
Facilitator: Chris Hartgerink
In this workshop, you will learn about research modules (i.e., individual components of a research project like theory, materials, data, code), how they can help you be a more effective researcher, and how to start publishing your own research modules. We start off by recapping some of the issues of research articles in light of reproducibility, after which we introduce research modules as a concept. You will learn what a research module is, when you would publish research modules in relation to research articles, and how they help you document your work in a more complete and intuitive manner. We will introduce the infrastructure (a peer-to-peer commons) and software (Hypergraph) used to publish research modules, and the benefits you get in terms of control and innovation. After installing the software, and an initial walkthrough, you will have time to publish your first research module during the workshop. Optional: bring files for a recent step in a research project that excites you (e.g., collected data, analysis script).
Facilitator: Priya Silverstein
This unconference session will introduce (and discuss ideas for further developing) the Journal Editors Discussion Interface (JEDI): a new community for social science journal editors to ask and answer questions, share information and expertise, and build a fund of collective knowledge. Although JEDI has been designed for discussing all issues related to editorial practices, a large part of discussions will focus on issues surrounding transparency, reproducibility, and diversity in publishing. Given the many demands on editors’ time – and given that most editors face similar processual challenges – there is great value to their interacting with each other about these key issues, and pooling their collective wisdom, sharing lessons, examples, insights, and solutions. The benefits can be further multiplied if experts on relevant topics (e.g. data management personnel, open science advocates) are included in the conversation. JEDI seeks to generate that interaction and those benefits.
Facilitator: Thomas Rhys Evans
Red Teams are individuals or groups who provide feedback from the perspective of an outsider or competitor, and are expected to take an active role in challenging decision-making and actions to improve the quality of the final work produced. Whilst norms of introducing critique early in the research cycle are slowly changing through initiatives such as Registered Reports, the use of Red Teams in psychological research is highly uncommon, and they are often perceived as threatening (e.g., through fear of having ideas “scooped” or of receiving excessively harsh critique). Red Teams could be a valuable source of feedback and support (e.g., on research design, analysis code, measurement practices, etc.), yet little is known about how best to foster such positive collaborations and outcomes. The primary aim of this hackathon is to develop open resources (a how-to guide and discussion manuscript) to support and change norms on the implementation of Red Teams in psychology.
Facilitators: Emily Gwynn Turner, Aradhana Srinagesh
If academic psychology hopes to respond to declining mental health across communities, it must diversify who does psychology in order to diversify who and how it serves. The current global moment, if harnessed effectively, can be a portal to transforming psychology research, training and practice. The hackathon will train hackers in foundational principles of social organizing to empower the psychology community to actualize substantive diversity within its own ranks. During the event, hackers will crystallize diversity campaign goals, benchmarks, calls to action, and mobilizing strategies for harnessing struggle into collective power. Hackers will also be matched with an organizing mentor and other hackers with shared purpose to maximize impact through coalition building. This programming not only helps build skills that are important for collaboration and project management, but also delivers crucial, urgent activism.
Facilitators: Alex Holcombe, Sarah McIntyre
Many journals have adopted policies encouraging, or even requiring, statistical power and sample size planning. However, the conventional power analyses commonly taught do not sit well with small-N, many-trials-per-participant studies, many of which are largely exploratory. This hackathon aims to provide templates for sample size planning and reporting for under-served designs. We are imagining that one template might provide text resembling the following: “Psychophysical studies can be seen as providing strong evidence for a result within individual participants, with each participant being a sort of replication (Smith & Little, 2018). This comes from the ability to run large numbers of trials in multiple conditions on individual participants. Statistics can also be used, however, to license generalizing to a broader population, by using the between-participants statistical tests that are more popular in psychology broadly. Here we will use both approaches, by both testing individual participants extensively, and using a large enough sample that between-participant statistical tests may also be statistically significant.”
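To illustrate the kind of planning such a template might support (a generic base-R sketch under assumed parameters, not a hackathon product), one can simulate a small-N, many-trials design and estimate how often an individual participant would show a significant effect:

```r
# Illustrative simulation for a small-N, many-trials design: estimate the
# probability that each individual participant yields a significant effect,
# assuming a true within-participant effect of d = 0.25 per trial.
set.seed(2021)
n_participants <- 8
n_trials       <- 400    # trials per condition per participant
d_within       <- 0.25   # assumed standardized effect per trial

prop_significant <- replicate(1000, {
  p <- replicate(n_participants, {
    a <- rnorm(n_trials, mean = 0)
    b <- rnorm(n_trials, mean = d_within)
    t.test(a, b)$p.value
  })
  mean(p < .05)          # proportion of participants significant in this replicate
})
mean(prop_significant)   # expected proportion of significant participants (~.94)
```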
Facilitator: Lisa Spitzer
This workshop is aimed at psychological researchers who are relatively new to preregistration or who would like to learn more about different options for creating preregistrations. Specifically, the workshop will be divided into two parts: in the first part, I will illustrate what a preregistration is and why it is important that researchers preregister their studies. In the second part, I will guide participants through the preregistration process and give practical advice. I will present various possible routes for creating preregistrations before narrowing in on a practical example. In this example, I will use the R package “prereg” and the PRP-QUANT template that has recently been published by a collaboration of psychological societies (APA, BPS, DGPs). I will walk you through the process of creating the preregistration using the template, up to submitting it to the preregistration platform “PreReg in Psychology” (prereg-psych.org).
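For reference (a minimal sketch; the template used in the workshop may differ), the prereg package ships R Markdown preregistration templates that can be opened with rmarkdown::draft():

```r
# Illustrative: create a preregistration draft from a template bundled with the
# prereg package. The "cos_prereg" template is one example; the PRP-QUANT
# template shown in the workshop may have a different template name.
# install.packages("prereg")
library(prereg)

rmarkdown::draft("my_preregistration.Rmd",
                 template = "cos_prereg",
                 package  = "prereg",
                 edit     = FALSE)
# Knit my_preregistration.Rmd, then submit the result to a registry such as
# "PreReg in Psychology" (prereg-psych.org).
```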
Facilitators: Evan Nesterak, Alex Uzdavines, Natasha Tonge, Haijing Wu Hallenbeck, Lauren Ashley Anker, Chiara Varazzani, Paul E. Plonski
Around the world, behavioral science is being applied outside of traditional academic institutions. Working with governments, businesses, and not-for-profit organizations presents unique challenges to people interested in open science and improving research practices – movements which have often focused on academic institutions. Despite opportunities to produce robust and transparent research in applied settings, there are barriers. Without awareness, education, and incentives to implement best practices, there is a risk of conducting research that will lead to more scientific crises (like those that motivated the formation of SIPS). This session will be focused on brainstorming how to develop, implement, and incentivize best practices. We will aim to create a living resource that is accessible to applied behavioral scientists as they become interested in improving the quality of their work and the culture of their institutions. Example questions we hope to discuss: What are the main barriers to pre-registration, data sharing, and open access to information outside of academia? How can we incentivize a culture of transparency in the application of behavioral science? What would it take for applied groups to come together in a community around best practices in behavioral science?
Facilitator: Marjan Bakker
The unclear distinction between confirmatory and exploratory research is cited as one of the main causes of the reproducibility crisis in psychology (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012). In response, the field has focused mostly on confirmatory research (e.g., multiple preregistration formats, the preregistration challenge). And although most papers mention that exploratory research is important in its own right (e.g., to generate hypotheses or when analyses are complex), there is little guidance on how to conduct exploratory analyses or how to report them. Model validation techniques and blind analysis have been proposed but are not always applicable. Or is transparency the key? In this unconference session, we will start by creating an overview of the different ways to do exploratory research, how to report it, and the research situations in which each approach is and is not applicable.
Facilitator: Marcy Reedy
Despite a professed desire by many to increase inclusion in scholarship, there remains a paucity of representation from the Global South. What are the consequences for teaching, learning, research, and scholarship of failing to achieve this goal, and how can we avoid them? How well do current infrastructures support output from the Global South and reduce the risk of Vandana Shiva’s “monocultures of the mind”? The Open Scholarship Knowledge Base (OSKB), an open educational resources (OER) repository, can potentially address some of these issues as a centralized hub for sharing scholarly resources; however, there are concerns that the OSKB and other OER tools may inadvertently perpetuate existing barriers to inclusion. This guided unconference invites the audience to explore barriers to representation from the Global South and to develop strategies for increasing the participation of a broader range of global scholars in knowledge exchange. We hope to produce a list of recommendations that can be implemented by those seeking to increase inclusivity in scholarship.
Facilitators: Julia Strand, Jennifer Gutsell, Randy McCarthy
The first SIPS meeting (in 2016) had 100 in-person attendees. SIPS2021 has 1000 scientists participating remotely around the globe. What should SIPS meetings look like in the years to come? In this unconference session, the SIPS program committee is eager to hear your input about what you’d like to see retained or changed in future meetings. How might we maintain continuity between SIPS meetings, make participation maximally accessible, and continue to facilitate this dynamic, collaborative environment? How might we balance the inclusivity of remote conferences with the opportunities of in-person ones? Come and chat with us to share your feedback about this year’s meeting and the future of SIPS!
Facilitator: Simine Vazire
Many fields are working toward improvements similar to those SIPS is striving for in psychology. Some have started similar societies (e.g., SORTEE in ecology and evolutionary biology, STORK in sports science). This hackathon will be a place for anyone interested in bringing the "improving ____" movement to their field. We will work together to learn from each other's efforts and help each other take concrete steps to organize similar groups in new fields. This hackathon is especially relevant for people in fields other than psychology who would like to do, or are already doing, something SIPS-like in their field.
Async2: Hackathon: Developing a Global (Yes, Really!) Meta-Data Inventory for Psychological Science
Facilitator: Monica Gonzalez-Marquez
We propose a theoretical model to describe the shift from a paper-centered scientific documentation ecology to one where the scientific process itself becomes the orchestrator of scientific documentation. We discuss implications for the division of labor in science and how this model supports greater transparency and accountability.
# | Title | Authors | Abstract |
1 | Perception and justification of inequality as predictors of social demands for redistribution | Dayana Amante, Franco Bastias, Juan Carlos Castillo | Latin America has historically been characterized by high levels of poverty and inequality compared to the rest of the world. In this context, research programs on the perception and justification of inequality contribute to the understanding of behaviors and attitudes towards the redistribution of wealth in societies. Objectives. This research project seeks to make an original theoretical contribution at the Latin American level with respect to inequality. It evaluates social demands for redistribution, taking into account the predictive power of variables associated with the perception and justification of inequality. Method. The research design will be causal-correlational. At least 300 individuals from each country under study (Argentina, Chile, and Peru) will participate. Data collection, carried out through a virtual survey, is scheduled for 2022. Discussion. The findings of this research are intended to contribute to a better understanding of social attitudes and actions regarding inequality. |
2 | “You’re awesome” vs. “Your actions are awesome”: Does implicating an actor’s prosocial personality (vs. behavior) in a thank-you note increase subsequent helping? | Anurada Amarasekera, Lara B. Aknin | Past research has shown that receiving a gratitude message can inspire subsequent generosity (Grant & Gino, 2010), but what features of a thank-you note make future assistance more likely? We plan to explore whether referencing the helper’s kind personality (as opposed to the helper’s kind actions) leads to a greater willingness to help in the future. We will conduct a two-part study in which participants provide help to a peer and then receive a thank-you message from the student they assisted. Importantly, participants will receive one of two randomly assigned thank-you messages that reference either the participant’s kind personality or action. Afterward, participants will receive a survey to assess their willingness to help the same person again and to help other students in the future. |
3 | Playstation Gaming and Well-being: A Panel Study of Objectively Tracked Playtime | Nick Ballou, Craig Sewall, David Zendle, Laurissa Tokarchuk, Sebastian Deterding | Recent years have seen intense research, media and policy attention paid to questions of whether the amount of time spent playing video games affects players’ well-being and/or mental health, with evidence for both positive and negative associations. However, the vast majority of this research has used self-report measures of play, which evidence shows are highly unreliable. Even more concerningly, how inaccurate people are when they self-report their technology use might itself be affected by their well-being, a crucial confound. To address this, we will conduct a 6-wave panel study of 500 participants over the course of 10 weeks, investigating within-person relationships between objectively-measured Playstation gaming, well-being, and self-report inaccuracy. |
4 | Assessing the short-term effects of detached mindfulness: A micro-intervention for repetitive negative thinking. | Teresa Bolzenkötter | When was the last time you ruminated or worried about something? It’s normal to do so occasionally. Excessive forms of repetitive negative thinking, however, can be harmful to mental health. Detached mindfulness is an intervention that teaches observing and releasing one's thoughts. In my PhD project, I aim to implement detached mindfulness as a micro-intervention and assess its short-term effects on repetitive thinking and affect using experience sampling methodology. Participants with high levels of repetitive thinking will practice either detached mindfulness or a placebo intervention for several days. Repetitive thinking and affect will be assessed 15 and 30 minutes after each intervention. The effects of detached mindfulness will be compared to those of the placebo intervention as well as a non-intervention baseline phase. I am looking forward to getting feedback on the current study ideas and to discussing possible improvements with you. |
5 | On the individual prevalence of cognitive phenomena: integrative reanalysis of multiple unit-decade compatibility studies | Hannah Connolly, Julia Bahnmuller, Kristen Bowman, Thomas Faulkenberry, Krzysztof Cipora | Cognitive phenomena have typically been studied at the group level, with little consideration of individual differences. In this project, we propose a comprehensive framework for investigating the individual prevalence of cognitive phenomena which are indexed by differences between compatible and incompatible experimental conditions. To illustrate the approach, we will use the Unit-Decade Compatibility Effect (UDCE), a widely established and well-replicable phenomenon in numerical cognition. Despite its replicability at the group level, little is known about its prevalence at the individual level. In the planned project, we seek to leverage data from multiple research groups to robustly examine the individual prevalence of UDCE using four different approaches: a psychometric and two bootstrapping methods, as well as hierarchical Bayesian models. This framework allows for answering the “does everybody” question across cognitive phenomena, and for checking the robustness of individual prevalence estimates across analytical approaches. |
6 | Students Under Pressure: How Situational Features Influence Judgments About Cheating | Fiona DeBernardi, Talia Waltzer, Audun Dahl | Although students report that academic cheating is generally wrong, they also report that it is more acceptable in some circumstances. Students' evaluations of cheating in studies 1-3 were dependent on the context; high pressure situations (high obligations to others, low access to resources, and low teacher flexibility) were rated more positively. In order to examine the boundary conditions of this effect, study 4 will use different forms of cheating (fraud, faking drug tests, cheating in sports), ranging from severe cases (harmful to others) to not as severe (victimless cheating). In each of these scenarios, two variables will be manipulated in the vignettes (high versus low access to resources and flexibility), and participants will be asked to rate whether the cheating in the scenario is understandable, good versus bad, ok or not ok, and if they themselves would cheat in those circumstances. |
7 | Leaders’ and followers’ career calling | Sophie Gerdel | In this three-wave longitudinal study, I will investigate whether leaders’ career calling trickles down to followers’ career calling. Based on Social Exchange Theory, I hypothesize that leader-member exchange (LMX) fully mediates the relation between leaders’ and followers’ calling. I further predict that perceived supervisor support partially mediates the relation between leaders’ calling and LMX. I will collect data among newcomers who are nested within leaders in a large organization. To analyze the data, I will use multi-level structural equation models. This study will shed light on how leaders influence the development of calling in employees. |
8 | Does the Policy Work? A Survey of College Administrators’ Views on their Schools’ Academic Integrity Policies | Dakota B. Hughes, Talia Waltzer | Colleges across the US take a variety of approaches to curbing cheating and promoting academic integrity. This project builds on a content analysis of US colleges’ misconduct policies by surveying administrators at a range of schools (N = 60) about their views on their institution’s practices surrounding academic integrity. In 5-minute online surveys, administrators will report their institution’s number of academic dishonesty cases, perceived student understanding of the policy, and how punitive or restorative their policies are. We will compare these responses to characteristics of the schools (e.g., enrollment, private/public) and their policies (e.g., amount of information, extent of punitive language) to assess whether administrators’ perspectives on their policies are reflected in the policies utilized by their institutions. In doing so, this project will reveal the perceived and actual effectiveness of the different types of academic dishonesty policies and provide insight into top-down perspectives of academic integrity policies. |
9 | Developing a computer game for participatory simulation to explore parents’ strategies to feed preschool children | Dr. Megan Jarman, Prof. Jacqueline Blissett | Parental feeding strategies and children's eating behaviours are likely to exist in a feedback loop; however, traditional research methods have not allowed exploration of their dynamic nature. This project aims to examine the feasibility of using interactive computer game simulations to explore the interactive nature of parents’ feeding behaviour and children's dietary intake. We will create a computer game in which parents can create a child avatar and home environment similar to their own in real life and play out mealtimes and the feeding strategies they use. Parents will play multiple times, and the child avatar's responses will be based on what it has 'learned' from previous plays. We will pilot the modified game to collect data on the strategies parents use, the consequences for children's dietary intake, and how these interactions play out over time. |
10 | Compliance with Government Orders during the COVID-19 Pandemic: Does Religiosity Matter? | Alma Jeftic | During the COVID-19 pandemic, most countries announced restrictions to prevent citizens from spending too much time outside. One of the measures was to ban religious activities, such as Friday prayers and Sunday mass/service, which caused additional stress to believers. Religious coping refers to the use of religious beliefs or practices to cope with stressful life situations (Pargament et al., 2005). The purpose of this research is to analyse whether religiosity mediates the relationship between COVID-19-related stress and compliance with governments’ orders during the pandemic. The sample consists of 12,000 participants from 24 countries, collected as part of the COVIDiSTRESS survey. Participants filled out the PSS (Cohen et al., 1983), a six-item scale measuring overall compliance with preventive measures, and a two-item scale measuring level of religiosity. This is a quantitative study using a cross-sectional survey design. Mediation analysis is planned to test whether religious coping influences the relationship between stress and compliance with government orders. Results will be discussed in line with theories of religious coping. |
11 | Understanding Pandemic-Related Experiences Through an International Survey-Based Collaboration: The iCARE Study | Keven Joyal-Desmarais, Kim Lavoie; Simon Bacon; on behalf of the iCARE Team | In March 2020, the Montreal Behavioural Medicine Centre (MBMC) launched the “iCARE Study” (https://mbmc-cmcm.ca/covid19/). This is an international collaboration that involves a series of surveys monitoring people’s experiences (e.g., behaviours, mental health) around the globe in relation to the COVID-19 pandemic. Every 6 weeks we launch a new survey, and we continually update its content. As we prepare new surveys, we are always looking for feedback in areas such as: suggestions for what to measure in new survey waves (e.g., health belief constructs), methods to improve the quality of the survey itself (e.g., quality checks), insights on new avenues for participant recruitment (e.g., reaching participants in low-income countries), tools to improve the validity/reproducibility of analyses (e.g., when working with 200+ collaborators/stakeholders with diverse levels of research expertise), and statistical insights (e.g., advanced predictive modelling, dealing with missing data). |
12 | Do people spontaneously mention more negative emotions when recalling a self-directed (vs. generous) spending experience? | Zohra Kantawala, Dr. Lara Aknin | Past research suggests that generous behavior, such as pro-social spending, leads to higher levels of self-reported positive emotion than self-beneficial behavior (e.g., Dunn, Aknin & Norton, 2008). However, does generosity influence the spontaneous expression of negative emotion? To examine this, we will code 5,199 recollections of spending for spontaneous mentions of negative emotions (e.g., anxiety, sadness, hostility). Using LIWC (Linguistic Inquiry and Word Count) and third-party human coders, we will compare impromptu expressions of negative emotions between generous (pro-social) and self-directed (personal) spending recollections. We expect that, through the use of these different coding approaches, spontaneous mentions of negative emotions will provide greater insight into how pro-social action influences other types of emotion, moving beyond self-reported feelings. |
13 | Communicating psychological evidence to non-scientists. How to deal with the complexity of psychological science? | Martin Kerwer, Mark Jonas, Gesa Benz, Marlene Stoll, Anita Chasiotis | Plain language summaries (PLS) aim to communicate scientific evidence to non-scientists in an easily understandable manner. In project PLan Psy, we aim to develop empirically validated guidelines on how to write such lay-friendly summaries for psychological meta-analyses. Two pre-registered experimental studies have been conducted so far and have generated interesting insights on how to structure psychological PLS. Some fundamental research questions, however, remain unanswered, and we would like to discuss our ideas for addressing them. More precisely, this poster outlines our plans for our next study, which will examine how target-audience characteristics interact with the complexity of PLS, so as to maximize the impact of our PLS. Against this background, we would like to discuss our ideas on theoretically sound and not overly simplistic ways of (1) assessing empowerment (i.e., laypeople’s ability to use PLS efficiently), and (2) communicating the risk of bias or the trustworthiness of psychological meta-analyses. |
14 | Test-retest reliability of model parameter estimates in human reinforcement learning. | Owen James Lee, Brendan Williams, Lily Fitzgibbon, Daniel Brady, Owen Lee, Paul Vanags, Niamh Bull, Safia Nait Daoud, Aamir Sohail, Anastasia Christakou | Computational modelling is increasingly used in psychological and neuroscience research, notably in reinforcement learning, to make inferences about cognitive characteristics (Lewandowsky & Farrell, 2011; Lockwood & Klein-Flügge, 2020). Often, the modelling process assumes that parameter estimates remain stable over time and relate to individual differences. As with any methodology, it is important to establish that our measures have test-retest reliability, which would in turn support the stability assumption. We aim to test the stability of model parameter estimates, using an established reinforcement learning model as an example (Kanen et al., 2019). We will estimate the model parameters on two separate occasions for participants completing a probabilistic reversal learning task (Izquierdo & Jentsch, 2012), which has been shown to have good test-retest reliability in terms of participant performance (Freyer et al., 2009). We will then assess the reliability of these estimates over time within participants. |
15 | Urgent and irresistible: evaluating how time pressure and incentives influence fraud likelihood | Huanxu Liu, Yuki Yamada | Many studies have reported that time pressure affects fraud, with high cognitive load deemed a possible cause. However, inconsistent results in previous research and in our prior study indicate that incentives might play a decisive role in how time pressure affects fraud. To further clarify the effect of time pressure on fraud, we designed an experiment and plan to perform a two-way mixed-design analysis of variance with time pressure (presence vs. absence) as a between-participants factor and incentives (low vs. mid vs. high) as a within-participants factor. Based on a power analysis for detecting the interaction effect, we plan to recruit 22 participants per group (i.e., 44 in total) and use a "coin flip paradigm" to observe participants' tendency to commit fraud under different conditions. We predict that a significant interaction between time pressure and incentives on fraud will be observed. |
16 | Testing arithmetic competences adaptively | Hannah Lönneker, Julia Huber, Krzysztof Cipora, Hans-Christoph Nuerk | Numerical cognition researchers currently use different instruments to measure arithmetic competences, relying on distinct definitions of the underlying construct and largely varying operationalizations thereof. A standardized, time-efficient, and theoretically sound instrument is needed to validly and reliably assess arithmetic competences and to ensure comparability between studies. The aim of this project is to develop an adaptive computerized instrument that assesses performance in the four basic arithmetic operations separately. Items will gradually vary in difficulty (e.g., using carry/borrow operations, increasing problem size) to allow for a precise estimate of the participant’s competence. A test-theoretical approach such as Item Response Theory will be used to assess person and item parameters. Convergent (arithmetic tests), divergent (reading test), and criterion-related (self-reported math grade) validity of the instrument will be estimated, as well as (re-test) reliability. All material will be openly available so that the instrument can be standardized in different populations. |
17 | The role of physiological arousal in media-induced stress recovery | Tamas Nagy, Éva Kovaliczky, Virág T. Fodor | People often use media to recover from the negative effects of daily stress and mental fatigue. However, the mechanism behind media-induced stress recovery is not well known, and some observations are counterintuitive. For example, leisure activities that elicit further stress, such as watching a frightening movie or playing a mentally challenging video game, may be the most effective for stress recovery. In this study, we want to investigate whether physiologically and emotionally challenging media content can aid stress recovery and whether arousal has a moderating role. In a double-blind, parallel-groups experiment, we will induce fatigue by asking participants to complete a set of challenging tasks. We will then manipulate physiological arousal by administering caffeine or a placebo. Participants will then play a video game that either elicits negative emotions (a horror game) or does not. Finally, participants will again solve challenging tasks similar to those at the start, which will serve as the outcome measure. |
18 | Memory Performance on Social Media: The Effect of Retrieval Type and Attachment Dimensions | Aylin Ozdes, Koc-Arik, G., Kirman-Gungorer, S. | The proposed study will test the effect of retrieval types used in social media on memory performance using an experimental design. The first aim of the study is to examine the effects of the types of retrieval used in social media (recording information to an external source, sharing information with an uncertain audience) on memory performance. Moreover, we aim to determine the moderation effect of the attachment dimensions (anxious, avoidant) on the relationship between retrieval types and memory performance for close relationship-related experiences. To reach these aims, participants will be asked to complete a recall task on a computer screen and complete a scale to measure the attachment dimensions. The findings will help to understand the negative effects of social media use on memory performance. In addition, it will contribute to the intervention programs to prevent these effects. |
19 | Lateralization shift: Can a phonological intervention shift the pattern of cerebral lateralization of written language in children at risk for dyslexia? | Nantia Papadopoulou, Marietta Papadatou-Pastou | A plethora of studies on the cerebral lateralization of language has established the dominance of the left hemisphere for oral language production in the majority of people. Neuroimaging studies have shown that this pattern is altered in cases of learning difficulties, such as dyslexia. Moreover, it was shown that it is possible to shift lateralization patterns in dyslexia to approximate the lateralization pattern of typically developing individuals through appropriate interventions. However, lateralization of written language has been investigated in very few studies and without including a sample of children, neurotypical or not, or assessing the effects of an intervention. The aim of this study is to examine the effect of a phonological intervention on the cerebral lateralization of written language in children at risk for dyslexia compared to typically developing children using functional Transcranial Doppler ultrasonography. |
20 | [Withdrawn] | ||
21 | Therapeutic Support for Racial Trauma and Substance Use: A DBT Group Approach | Krithika Prakash, Ellen Koch, PhD | "Oppression is the overarching umbrella for all sickness with drugs and alcohol", said a participant when looking at the link between racial trauma and substance use in American Indian communities (Skewes & Blume, 2019). Substance use treatment often focuses on the problem behavior itself; however, addressing the socio-cultural context in which that behavior occurs might be beneficial. Dialectical Behavior Therapy (DBT) uses a biosocial approach to understand and address problem behavior. In this study, I am looking to tailor and implement DBT to address substance abuse within the context of racial trauma. Racial and ethnic minority groups deal with societal and personal invalidation and discrimination. DBT may prove beneficial in addressing these concerns for minority groups, thereby alleviating distress and eventually leading to decreased substance use. |
22 | WARN-D. Designing a large-scale longitudinal online study on forecasting depression: How can we prevent drop-out and errors? | Carlotta Rieble, Ricarda Proppert, Eiko Fried | As depression treatment efficacy remains disappointing, focusing on prevention is crucial. We aim to develop a personalized early warning system, WARN-D, that forecasts depression reliably before it occurs. Starting fall 2021, we will prospectively follow 2,000 students from universities and vocational schools for 2 years. In the first 3 months, we will measure students’ daily mood and lifestyle, combining ecological momentary assessment and smartwatch activity tracking, followed up by quarterly surveys on their mental health and circumstances. As some students will likely experience substantial symptom increases during the study, we can capture the onset of depression. Based on these data, we will build state-of-the-art models that predict individuals’ risk of soon becoming depressed, combining insights from psychological networks, complex systems theory, and machine learning. We hope for input on reaching a diverse student population and achieving high retention, while implementing efficient, error-tight processes for this large-scale longitudinal online study. |
23 | Neuromodulatory role of context in a social perception task | Alejandra Rossi, FJ Parada, Stefanella Costa-Cordella | The processing of social cues is a fundamental condition of communication between agents. The effective use of these signals is an essential requirement for accessing the social world of which, as an intensely gregarious species, we are part. Furthermore, the social and cultural environment in which we develop is inseparable from cognitive exercise, so the context of socio-affective interaction should modulate cognitive activity. This project aims to analyze the behavioral and neurophysiological changes related to the affective and social support context in a robust experimental paradigm of social perception. It seeks to deepen knowledge of the neuromodulatory role of context in social perception through a novel experimental design, demonstrating the effects of context modulation at different levels of complexity: behavioral, neuroendocrine, and neurophysiological responses. |
24 | Is working memory differently loaded by specific verbal stimuli depending on individuals’ anxiety profile? A dual-task study. | Serena Rossi, Iro Xenidou-Dervou, Krzysztof Cipora | A negative anxiety-performance correlation is attributed to various cognitive factors. According to the Attentional Control Theory (ACT), anxiety raises an individual’s attention to threat-related stimuli, consequently facilitating the processing of task-irrelevant information and reducing resources - e.g., Working Memory (WM) capacity - necessary to perform an assigned cognitive task. We also know that there are different types of anxiety (e.g., general anxiety, test anxiety, or mathematics anxiety). This study will investigate whether the WM of individuals with different individual anxiety profiles (i.e., configurations of different anxiety types) is differentially affected by specific verbal stimuli. We will use a dual-task design consisting of a primary cognitive task, during which we will load participants’ WM by manipulating the valence of the presented verbal stimuli (neutral, emotional-related, and mathematics-related words). Results can help us identify ways to mitigate the negative link between anxiety and cognitive performance especially in the context of mathematics anxiety. |
25 | [Withdrawn] | ||
26 | Cerebral laterality as assessed by functional transcranial Doppler ultrasound in right-and left-handers: A comparison between pen-and-paper writing and typing. | Christos Samsouris, Marietta Papadatou-Pastou | Written language is traditionally produced using pen and paper, but typing on a PC keyboard has gained widespread popularity in the last decades and has become an equally (if not more) important form of transcription. Regardless, the cerebral laterality of written language production has received little attention, in contrast to the cerebral laterality of oral language production that has been studied extensively. Handedness is an indirect index of cerebral laterality, with right-handers and left-handers exhibiting differences in cerebral laterality during oral language production tasks. In the present study we aim to compare keyboard typing and pen-and-paper writing regarding cerebral laterality. We will use functional Trans-Cranial Doppler (fTCD) ultrasound technology which allows for reliable measurements of hemispheric dominance during language production tasks and is not affected by movements, such as the ones generated during writing. The differences between pen-and-paper writing and typing will further be examined between right-handers and left-handers. |
27 | Modeling Student Math Achievement Across Countries Using TIMSS 2015 and 2019 | Apoorva Shivaram, Elizabeth Dworak | Children’s early math skills are critical for future academic success. To profile the most important predictors of student math achievement, we propose to explore a large-scale secondary dataset (TIMSS) by using empirically driven supervised machine learning methods on nested data across 34 countries. By using these iterative techniques, we seek to determine what features of student, home, teacher, and school characteristics are critical in predicting math achievement in 8th grade students. We are currently piloting these analyses on 4th grade data from 2015 and 2019 to assess the feasibility of these methods. We seek feedback on the methods used in this project prior to submitting a Stage 1 Registered Report. These methods may help us shed light on the contextual factors and/or culture that may account for differences in student math achievement, how analogous these modeled traits are across countries, and how stable these models are across time. |
28 | Published or lost in the file drawer? Publication rate of preregistered studies in psychology | Lisa Spitzer, Stefanie Mueller | Although publication bias can be investigated indirectly by measuring the proportion of positive results in published literature, it is more difficult to examine directly how many conducted studies are not published. In other scientific disciplines, mandatory registries or ethics applications have been used for this purpose, but no such research has been conducted with respect to psychological studies. Using preregistrations, we aim to assess the publication rate and bias of psychological studies: For the N = 382 studies that were preregistered on OSF Registries between 2012 and 2018, we will search for corresponding publications in journals. We want to investigate the proportion of preregistered studies published in journals and whether the significance of results has an impact on the time until publication. Furthermore, a survey will be conducted among authors of preregistrations for which no publication in a journal can be identified to assess reasons for non-publication. |
29 | Testing Methods to Capture Dynamic Social Context | Marie Stadel, Anna Langener, Laura Bringmann, Gert Stulp, Martien Kas, Marijtje van Duijn | Social context is an essential factor affecting mental health and well-being, yet capturing it comprehensively has been challenging. First, social context is dynamic, whereas most traditional methods involve static measures and do not focus on individual variation. Second, several methods capture different parts of social context, such as daily social interactions, a person’s social network, or online social activity, but research that attempts to combine these different methods is scarce. With this study, we aim to investigate how experience sampling methodology (ESM), personal social networks (PSN), and digital phenotyping (using the BEHAPP app) can be combined. Our aim is to find the most participant- and researcher-friendly way of obtaining a complete picture of a person’s dynamic social context. |
30 | Assessing the Reliability of Congruency Sequence Effect in Confound Minimized Online Tasks | Zsuzsa Szekely, Marton Kovacs | The examination of individual differences in connection with cognitive control has shown an increasing trend lately. However, the reliability of congruency sequence effect (CSE), one of the most used indicators of cognitive control, is questionable. The lack of clear evidence regarding the reliability of CSE implies theoretical and methodological concerns for the study of theories based on this construct. In our study, we will examine the reliability of CSE through four confound-minimized, online conflict tasks (Stroop, Simon, flanker, prime-probe). We plan to investigate the question from two perspectives. First, using a between-subjects design by measuring the CSE in each task at two different times. This approach will provide information on the test-retest reliability of the construct. Second, using a within-subjects design, in which participants will complete all four tasks once. By using this method, we can examine whether CSE effect sizes correlate between different conflict tasks. |
31 | A Registered Report on Registered Reports: Investigating Potential Benefits of and Barriers to Adopting Registered Reports | Tristan Tibbe, Amanda Montoya, William Krenzer | Authors of registered reports and traditional peer-reviewed articles will be surveyed about their papers and research practices. The research goal will be to compare the processes of publishing registered reports versus traditional papers, and how authors' research practices differ across methods of publication controlling for publication date and journal prestige. The findings of this research will contribute to the understanding of possible long-term benefits of adopting registered reports, such as what open science practices registered report authors adopt. The results will also reveal any differences that may exist in the publication processes experienced by authors of registered reports and traditional peer-reviewed articles (e.g., time to publication, number of journals submitted to). |
32 | Advancing knowledge on the development of child temperament | Lisa Wagner | Dimensions of temperament in children and of personality in adults are conceptually similar and may be integrated (e.g., Donnellan & Robins, 2009). However, in developmental psychology, temperament is frequently conceived as inborn and “stable”, whereas in personality psychology, there is growing interest in personality development. If anything, child temperament is typically seen as a predictor of ability attainment (e.g., Pérez-Pereira et al., 2016). I argue that influences in the other direction (and, of course, bidirectional relationships) are equally conceivable and that considering the relationships between individual differences in other areas of development could be key in understanding early development of personality. To address this question, I plan a three-wave panel study with parents of young children who will report on their children’s temperament repeatedly. Between waves, they will use the kleineWeltentdecker-App (Daum et al., 2020), a smartphone-based developmental diary assessing the age of attainment of developmental milestones in different areas. |
33 | Examining the Role Feedback and Metacognitive Judgement Play in Post-Error Slowing | Yiqiong Yang, Michelle Ellefson | Error monitoring helps learners make sense of their responses to errors and suggests ways to use external stimuli, such as feedback, to aid learning. Error monitoring is indexed by post-error slowing, a delayed response in actions after error commissions. A deeper understanding of individuals’ post-error adaptation is needed, including the extent to which it reveals one’s metacognitive abilities. My research project will focus on identifying the role of feedback and metacognitive monitoring in post-error slowing using tasks that incorporate numeracy and science knowledge judgements. Four groups of participants will complete a set of computerised tasks identical in content but differing in the provision of trial-wise feedback and block-wise performance prediction. They will also complete the State Metacognitive Inventory afterward to record their task-related metacognition. I will use a 2 × 2 × 3 mixed ANOVA and hierarchical regressions to answer my research question. |
Room 1
Room 2
Room 3
Room 4