Experimental Psychology Conference 2023

The Australian National University (ANU) is delighted to host the Australasian Experimental Psychology Conference on April 12-14, 2023. The Australasian Society for Experimental Psychology Conference (known as "EPC") is an annual meeting for the presentation of scientific work in experimental psychology, with a focus on human perception and cognition. For more information about EPC and its history (including previous conference abstracts), please see https://www.asep.com.au/

EPC2023 will be held on ANU campus at the new flagship Kambri precinct, nestled amongst thriving cafes and restaurants. April is a glorious time in Canberra with autumn colours decorating this unique city. Canberra is home to the National Gallery, National Portrait Gallery, National Arboretum, Australian National Botanic Gardens, Australian War Memorial, and National Science and Technology Centre. One of Canberra’s most recent developments is the hipster-friendly Braddon restaurant and bar district, within easy walking distance of the ANU Campus. The Canberra region also boasts multiple award-winning wineries.


Academic Organising Committee

Stephanie Goodhew, Mark Edwards, and Amy Dawel

Student Organising Committee

Ben Steward, Nicholas Wyche, Liz Miller, and David Denniston

Q&A Academic Career Advice Panel for Students and Early Career Researchers

Panellists: David Badcock, Branka Spehar, Bradley Jack, and Louisa Talipski

Facilitator: Amy Dawel

Student awards

We are pleased to offer four best student presentation awards for EPC2023: three for students giving talk presentations, and one for a student giving a poster presentation. Each awardee will receive $250. The judging panel consists of David Badcock, Steve Most, Branka Spehar, and Frances Martin.

Schedule:

https://tinyurl.com/ybmp7pnc  


Keynote Address:

Motivated perception: How motivation and emotion shape what we see

By Steve Most, The University of New South Wales

Steve Most is Associate Professor of Psychology at the University of New South Wales in Sydney, where he directs the Motivated Attention and Perception Lab. He and his team use behavioural and physiological measures to understand how attention, motivation, and emotion shape perception and memory, as well as their implications for wellbeing in the real world (e.g., road safety). He is best known for his work on inattentional blindness and on emotion-induced blindness. Steve received his B.A. in psychology and creative writing at Brandeis University and his Ph.D. in psychology from Harvard, followed by postdoctoral training at Vanderbilt and Yale. Before joining UNSW, he was tenured as Associate Professor at the University of Delaware in the US, where he continues to maintain an affiliated appointment. Steve's research has been recognized through grants, fellowships, and awards from the Australian Research Council, NIH, and the New York Academy of Sciences, and it has been covered by media outlets including the New York Times, The Economist, and The Discovery Channel. His teaching and mentoring have been recognized through the Alpha Lambda Delta Excellence in Teaching Award at the University of Delaware and through an Outstanding Postgraduate Supervisor Award, an Exemplary Teaching Practice Award in the category of "Inspiring Students", and an Education Excellence Award at UNSW. In 2021, he co-authored (with Marvin Chun) a new textbook on Cognition, available from Oxford University Press, which received the Most Promising New Textbook Award from the Textbook and Academic Authors Association.

Keynote Abstract: Can our motivations and emotions shape what we perceive of the world around us? This has been a longstanding and divisive question within cognitive science. In this talk, I provide an overview and update on a program of research in my lab that tackles this question, focusing on what my colleagues and I call "emotion-induced blindness". In emotion-induced blindness, task-irrelevant emotional stimuli are so prioritised by the visual system that they outcompete and briefly disrupt people's ability to see targets that they are looking right at. The effect is robust and hard to overcome. Intriguingly, it appears to involve mechanisms that are distinct from other commonly used measures of emotion-driven attentional bias. I argue that emotion and motivation can indeed shape what we perceive, and that work delineating mechanisms of how they do so can serve as a platform for dialogue between cognitive and clinical science, aiding efforts to specify the nature of clinically relevant individual differences in information processing.


Abstracts

 

Learning & Conditioning

 

Electrophysiological activity from over the cerebellum and cerebrum during eye blink conditioning in human subjects

 

Neil Todd, Sendhil Govender, Peter Keller, James Colebatch

 

University of New South Wales, Western Sydney University

 

Email: n.todd@unsw.edu.au

 

Following the pioneering work of Eccles in elucidating the physiology of the cerebellum, some 50 years ago theorists David Marr and James Albus devised cerebellum-based models of classical conditioning. To date these models have not been tested in human subjects. We report the results of an experiment in which electrophysiological activity was recorded non-invasively from the human cerebellum and cerebrum in a sample of 14 healthy subjects before, during and after a classical eye-blink conditioning procedure with an auditory tone as conditional stimulus and a maxillary nerve unconditional stimulus. Electrodes recorded EMG and EOG at peri-ocular sites, EEG from over the frontal eye-fields, and the electro-cerebellogram (ECeG) from over the posterior fossa. Of the 14 subjects, half conditioned strongly, while the other half were resistant. We confirmed that conditionability was linked under our conditions to extraversion-introversion. Inhibition of cerebellar activity was shown prior to the conditioned response, as predicted by Albus. However, pausing in high-frequency ECeG and the appearance of a contingent negative variation in both central leads occurred in all subjects. We conclude that while conditioned cerebellar pausing may be necessary, it is not sufficient alone to produce overt behavioural conditioning, implying the existence of another central mechanism.

 

Fear reinstatement: old fear, new fear or no fear?

 

Sarah Olsson, Camilla Luck, Luke Ney, Ottmar Lipp

 

Queensland University of Technology, Curtin University

 

Email: sarah.olsson@hdr.qut.edu.au

 

In studies of human fear conditioning, one conditional stimulus (CS+) is paired with an aversive unconditional stimulus (US) whereas a second (CS-) is presented alone. Fear is indicated through larger electrodermal responding to CS+ than CS-. Fear reinstatement is induced by presenting the US alone after an extinction phase during which only the CSs are presented, and is assessed in a subsequent extinction re-test. Past studies yielded patterns of either differential reinstatement (increased responding to CS+ but not CS-) or generalised reinstatement (increased responding to both CS+ and CS-), relative to extinction. The latter pattern may, however, reflect sensitization, the increased physiological responding to any stimulus following an intense stimulus, rather than a return of conditional fear. Reanalysis of data (N=149) from 4 experiments which overall yielded a pattern of generalized reinstatement indicated that increased responding to CS+ and CS- was fully accounted for by an increase in arousal (indicated by skin conductance level). This was not the case for the pattern of differential reinstatement observed in a subset of participants (N=59). The current results have implications for the interpretation of past studies and suggest improvements for experimental procedures aimed at assessing the reinstatement of human fear.

 

High frequency heart rate variability as a biomarker for learning capacity

 

Andrew Campbell, David Neumann, Tamara Ownsworth

 

Griffith University

 

Email: andrew.campbell@griffith.edu.au

 

The project examined the potential for brief measures of autonomic function to serve as indicators of the capacity to learn new cognitive skills. Healthy adults (N = 270) completed a novel working memory task while High Frequency Heart Rate Variability (HF-HRV) was measured. Participants completed a pre-test n-back task, followed by a 30-minute adaptive n-back training task, and a post-test n-back task. HF-HRV was measured at baseline, during the pre-test and post-test, and during the two halves of the adaptive training task to observe HF-HRV patterns during learning. To examine the association between HF-HRV and cognitive strategy formation, participants were randomly assigned to groups that were either provided with an n-back cognitive strategy, or that were required to formulate their own cognitive strategy during the task. Learning was evidenced by n-back performance improvement in both conditions. However, preliminary results indicated that the degree of improvement was related to the participants’ cognitive strategy and their patterns of HF-HRV during the testing session. The study outcomes may provide evidence for the utility of HF-HRV as a biomarker for learning capacity.
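
For readers unfamiliar with the measure, HF-HRV is conventionally quantified as the spectral power of the heart-period (RR-interval) series within the 0.15-0.4 Hz band. The abstract does not describe the authors' processing pipeline, so the sketch below shows only that standard computation; the resampling rate, interpolation method, and Welch parameters are illustrative assumptions.

```python
# Minimal sketch of a standard frequency-domain HF-HRV computation.
# Not the authors' pipeline: resampling rate, interpolation, and Welch
# settings below are illustrative assumptions.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hf_hrv(rr_ms, fs=4.0, band=(0.15, 0.40)):
    """High-frequency HRV power (ms^2) from RR intervals in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                      # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)            # even time grid
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)   # resampled tachogram
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))
    hf = (f >= band[0]) & (f <= band[1])
    return pxx[hf].sum() * (f[1] - f[0])               # integrate HF band
```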

 

Revisiting the illusory causation effect

 

Jessica Lee, Julie Chow, Peter Lovibond

 

University of Sydney, University of New South Wales

 

Email: jessica.c.lee@sydney.edu.au

 

The illusory causation effect describes the tendency to judge an unrelated cue and outcome to be causally related in a contingency learning task. The standard procedure for assessing causal illusions makes two implicit assumptions: 1) that participants start as naïve observers with no prior beliefs about the likely relationship between the cue and outcome, and 2) that learning can be adequately captured as a single (point-estimate) causal rating after null contingency training. Here, we use a novel distributional measure to assess participants' beliefs over a range of causal relationships prior to, as well as after, exposure to a non-contingent cue and outcome. In two experiments with different causal scenarios and 50% cue and outcome density, we find evidence against these two assumptions. We conclude that distributional measures and assessment of prior beliefs can offer novel insights in understanding the mechanisms behind the illusory causation effect.

 

Presenting unpaired unconditional stimuli during extinction reduces renewal due to an increase in (unpleasant) arousal

 

Ottmar Lipp, Luke Ney, Camilla Luck, Allison Waters, Michelle Craske

 

Queensland University of Technology, Curtin University, Griffith University, University of California at Los Angeles

 

Email: ottmar.lipp@qut.edu.au

 

Presenting unpaired unconditional stimuli (US) during extinction reduces the return of fear as indexed by renewal, extinction re-tests or slow reacquisition. The present study investigated whether this effect is due to an increase in arousal mediated by these unpaired USs. Using an ABA renewal paradigm that trained extinction in a context different from acquisition and test, participants (N=126) either received five unpaired presentations of the same aversive US used during acquisition (electro-tactile or scream; group Same), of an aversive US that was different from that used during acquisition (group Different), or five presentations of a non-aversive US, a reaction time task (group RT). Extinction was followed by tests for renewal and re-acquisition. Re-acquisition did not differ between the groups. Renewal of electrodermal conditional responses was observed in group RT, but not in groups Same or Different indicating that the effect of unpaired USs on extinction learning is not limited to the USs used during acquisition. This finding is consistent with an arousal account assuming that presentations of the non-aversive USs were not sufficient to elevate arousal. The observation that self-reported anxiety decreased across extinction in group RT, but not in groups Same or Different supports this interpretation.

 

Perception

 

Similarity judgements for naturalistic image regions

 

Emily J. A-Izzeddin, Jason B. Mattingley, William J. Harrison

 

University of Queensland

 

Email: e.aizzeddin@uq.net.au

 

Human perception is known to be influenced by priors – internalisations of statistical regularities present in the natural world. Analyses of naturalistic images reveal that the correlation of low-level features (e.g., contrast) between two regions of space is spatially dependent (e.g., the closer together the regions are, the more strongly correlated their features). However, it is unclear whether humans' similarity judgements for such regions reflect these statistical properties, which would suggest the internalisation of a prior for spatially dependent feature relationships. We therefore had participants (N=20) indicate which two of three image patches came from the same scene, and built a model to predict their reports based on simple image statistics. Two target patches were cropped from the same larger photograph, separated by varying displacement distances and displacement azimuths, with the third, a foil, drawn from a different photograph. Observers' performance across spatial conditions followed the pattern predicted by low-level feature correlations, including luminance and contrast. By testing observers' performance with edge-only and two-tone images, we further find that similarity performance depends on more than just edges or large-scale structures. Hence, our results indicate that humans' similarity judgements for naturalistic image regions can be accounted for by the matching of low-level statistical patterns.

 

What makes self-occluding contours so special?

 

Barton L. Anderson, Hua-Chun Sun, Phillip J. Marlow

 

University of Sydney, University of Giessen

 

Email: barton.anderson@sydney.edu.au

 

All objects are globally convex, which causes them to form a special class of contours in images known as 'self-occluding contours' (SOCs). For smoothly curving surfaces, these contours are geometrically 'special' because their 2D image shape provides unambiguous information about 3D surface geometry. The importance of SOCs in the perception of 3D shape has been studied and debated in the context of static images, but they provide even greater computational puzzles in motion displays where objects rotate. The 'motion' of the contours in these images is not defined, because each new view represents the accretion or deletion of a portion of the surface. We performed a series of experiments to assess the role of SOCs in the perception of 3D shape. We assessed the relative contribution of SOCs, surface shading, and specular reflections in the perception of 3D shape by constructing novel cue-conflict stimuli in which SOCs signal a much thicker 3D shape than either shading or specular reflections. Our results demonstrate that the pattern of accretion and deletion of surfaces along SOCs can dominate our experience of global 3D shape. Our results highlight the importance of 'unmatchable features' in the perception of global shape in motion displays.

 

Efficient coding in the human visual system, Part 1: Sensory encoding schemes are revealed by generative modelling

 

Reuben Rideaux, Paul Bays, William J. Harrison

 

University of Sydney, University of Cambridge, University of Queensland

 

Email: reuben.rideaux@sydney.edu.au

 

Sensory representations are thought to be tuned to behaviourally relevant statistics of natural environments over evolutionary and developmental timescales. Edges and contours in natural images are primarily oriented along the cardinal axes. There is a corresponding anisotropy in the orientation selectivity of visual neurons in several mammalian species that prioritises the encoding of cardinally oriented information. Analogously, humans are superior on a range of visual tasks for stimuli that are oriented around cardinal orientations relative to oblique orientations. Several computational accounts have attempted to unify the influence of environmental statistics on the properties of sensory neurons, as well as perception, but have been unable to address empirically how such encoding is implemented at the neural level. Using a data-driven approach, here we extract the brain’s representation of visual orientation, using forward encoding of EEG signals, and compare this with simulations from different sensory coding schemes. We find that the tuning of the human visual system is highly conditional on stimulus-specific variations in a way that is not predicted by previous proposals. More broadly, we introduce a new set of generalizable analytic tools that can be used to reveal neural encoding schemes within the human brain.
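
As a rough illustration of the forward-encoding approach mentioned above, the sketch below fits a set of idealised orientation channels to training EEG and then inverts the fitted weights to reconstruct channel responses for held-out trials. The abstract does not specify the authors' implementation; the basis functions, data shapes, and parameter values are assumptions in the spirit of standard inverted encoding analyses.

```python
# Illustrative forward (inverted) encoding model for orientation.
# Basis functions and data shapes are assumptions, not the authors' code.
import numpy as np

def make_basis(orientations_deg, n_channels=8):
    """Idealised orientation channels: half-wave-rectified sinusoids
    raised to a power, tiling 0-180 degrees. Returns trials x channels."""
    centres = np.arange(n_channels) * 180.0 / n_channels
    delta = np.deg2rad(orientations_deg)[:, None] - np.deg2rad(centres)[None, :]
    return np.maximum(np.cos(2 * delta), 0) ** 5

def fit_and_invert(eeg_train, ori_train, eeg_test, n_channels=8):
    """Fit sensor weights per channel on training data (forward model),
    then invert the weights to reconstruct test-trial channel responses."""
    C = make_basis(ori_train, n_channels)              # trials x channels
    W, *_ = np.linalg.lstsq(C, eeg_train, rcond=None)  # channels x sensors
    C_test, *_ = np.linalg.lstsq(W.T, eeg_test.T, rcond=None)
    return C_test.T                                    # trials x channels
```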

 

Efficient coding in the human visual system, Part 2: The functional architecture of Bayesian inference

 

William J. Harrison, Paul M. Bays, Reuben Rideaux

 

University of Queensland, University of Cambridge, University of Sydney

 

Email: willjharri@gmail.com

 

The prevailing view in cognitive neuroscience is that perception is a process of active inference: the brain estimates the most likely structure of an environment by combining noisy sensory signals with prior expectations. This Bayesian framework has been critical to our understanding of how the brain deals with uncertainty in the external world as well as uncertainty inherent in the spiking of individual neurons. A critical gap in our understanding remains, however, undermining the biological plausibility of leading models of active inference: we do not know how the prior is coded by the brain. In our investigation, we bridged information theoretic approaches and forward models of neural coding and have discovered how Bayesian inference is instantiated in the human visual system. We first found that Bayes’ theorem can be simplified by a sensory encoding scheme in which the prior is embedded within tuning curves that are distributed according to the principles of efficient coding. We then used a data-driven approach to recover the sensory encoding scheme of the human visual system, and found that it matches the theoretic solution. We therefore show that optimal (Bayesian) inference can be performed by the functional architecture within the earliest stages of cortical processing.

 

Physical, perceived and represented complexity

 

Branka Spehar, Lindsay Peterson, Colin Clifford

 

University of New South Wales

 

Email: b.spehar@unsw.edu.au

 

The subjective experience and effects of complexity are pervasive in nearly all domains of perception and cognition. Not only can we effortlessly assess the complexity of nearly everything that we encounter, but complexity often influences where we allocate our attention, what we like, and how well we learn and remember objects, scenes, and events around us. However, the relevant factors involved in the experience of various forms of complexity, and how it is mentally represented, remain unclear.

Here we investigate perceived complexity and the corresponding mental representation in dynamic synthetic noise patterns varying in their 1/f spatiotemporal amplitude spectra. Participants either rated the complexity of dynamic spatiotemporal patterns or were asked to report any structure that they perceived in these stimuli. Overall, perceived complexity increased linearly as a function of the 1/f spatiotemporal spectra, with shallower 1/f spectral slopes perceived as more complex in both the spatial and temporal domains. Interestingly, the simplest and most complex spatiotemporal patterns were associated with a lower number of reported interpretations compared to those with intermediate complexity, indicating richer mental representations associated with intermediate levels of complexity.
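
As an aside for readers who wish to visualise such stimuli, noise with a target 1/f amplitude spectrum can be synthesised by shaping random-phase Fourier spectra. The study used spatiotemporal (movie) stimuli and its generation code is not given in the abstract; the sketch below covers the spatial case only, with illustrative parameters.

```python
# Illustrative generator for spatial noise with a 1/f**alpha amplitude
# spectrum (shallower alpha tends to look "busier", i.e., more complex).
import numpy as np

def noise_1f(size=256, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(size)[:, None]
    fy = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                                    # avoid divide-by-zero at DC
    amplitude = f ** -alpha                          # target amplitude spectrum
    phase = rng.uniform(0, 2 * np.pi, (size, size))  # random phase per component
    img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (img - img.mean()) / img.std()            # normalise contrast
```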

 

Attention & Individual Differences

 

Anxiety, threat and disengagement of attention

 

Poppy Watson, Agnes Musikoyo, Mike E. Le Pelley

 

University of New South Wales

 

Email: poppy.watson@unsw.edu.au

 

The tendency to focus attention on signals of threat is argued to play a significant role in the development and maintenance of anxiety. Such ‘attentional biases’ are typically studied by examining the degree to which visual search performance is impaired in the presence of task-irrelevant but anxiety-provoking stimuli (e.g., angry faces). Many studies have purported to show that high-anxious individuals are not necessarily more likely to orient to signals of threat (i.e. threat stimuli are not more likely to capture attention), but that once their attention is on such a signal, it lingers for longer.  However, most studies claiming to demonstrate such ‘difficulties in disengaging attention’ from threatening stimuli did not use paradigms that could reliably disentangle attentional capture from attentional disengagement effects. Furthermore, no motivation was ever provided to participants to disengage their attention quickly, so any delayed disengagement could be entirely voluntary rather than representing an involuntary ‘difficulty to disengage’. I will present data from a series of experiments examining the question of whether individuals scoring high in anxiety show delayed disengagement from signals of threat, using a task that can separate attentional capture from disengagement and under conditions where it was beneficial to disengage attention quickly.

 

Individual differences in social attention 'in the wild'

 

Monique Piggott, David White, James Dunn, Bojana Popovic, Alice Towler, Sebastien Miellet, Victor Varela

 

University of New South Wales, University of Queensland, University of Wollongong

 

Email: M.piggott@unsw.edu.au

 

Our understanding of social attention is largely based on laboratory studies and group-level analyses. The present study uses state-of-the-art mobile eye tracking technology and an innovative automatic person annotation program to explore individual differences in social attention in naturalistic settings. Seventy-one student participants wore eye-tracking glasses while (i) engaging in a one-on-one conversation and (ii) navigating a busy university campus. Participants also completed a battery of lab-based measures including face recognition tasks and personality questionnaires. Our findings show that people with higher face recognition ability are more likely to fixate on faces during a one-on-one conversation, but not when navigating their environment. Results also revealed that individual differences in fixations on faces and bodies were stable for each observer within the same task (i.e., during conversation or during navigation) but not between different tasks (i.e., between conversation and navigation). Together, we conclude that patterns of individual differences in social attention depend on social context and/or the task people are performing.

 

Improving the reliability of the emotion-induced-blindness paradigm

 

Mark Edwards, David Denniston, Camryn Bariesheff, Nicholas J. Wyche, Stephanie C. Goodhew

 

Australian National University

 

Email: mark.edwards@anu.edu.au

 

People typically have an attentional bias to emotionally-salient compared to neutral stimuli. Individual differences in the magnitude of this bias exist, and it is thought to be linked to individual differences in anxiety and negative affect. However, studies that have investigated these potential relationships have yielded mixed findings. If such relationships exist, one potential reason for these mixed findings is the poor reliability of a popular paradigm used to measure this bias: the emotion-induced-blindness (EIB) paradigm. The aim of our study was to improve EIB reliability. In Experiment 1 we included mid-intensity emotionally-salient stimuli to try to obtain a wider range of EIB magnitudes (the difference in performance between the neutral and emotionally-salient conditions) to improve reliability. Reliability for the High- and Mid-intensity EIB conditions was low, while reliability of the scores for the individual conditions (Neutral, High-, and Mid-intensity) was high. We thought that the Neutral condition may also be tapping a degree of attentional control, so we developed a modified version of EIB, which resulted in the same pattern of results as obtained in Experiment 1. We discuss the results in relation to the utility of EIB for individual-differences research and what it measures.

 

The impact of age on the detection of targets in low prevalence visual search

 

Stephanie C. Goodhew, Mark Edwards

 

Australian National University

 

Email: stephanie.goodhew@anu.edu.au

 

When performing multiple successive visual searches, the prevalence of the target has a profound impact on target detection, such that low prevalence targets are at elevated risk of being missed. This has important implications for real-world visual search tasks, such as diagnostic medical imaging (e.g., searching for a cancer) and airport baggage security screening (e.g., searching for a weapon), which are characterized by low prevalence targets and potentially dire consequences of target misses. Previous work on low prevalence visual search indicates that individuals who spontaneously respond more slowly miss fewer targets, and previous aging research indicates that older adults typically respond more slowly across multiple task contexts. Synthesizing these two separate lines of research, here we tested whether this would therefore translate into a performance benefit for older adults in low prevalence visual search. In two experiments, older adults consistently responded slower, and irrespective of age, those who responded slower missed fewer targets. In Experiment 1 there was a modest age benefit on accuracy, but there was no age-related impairment, whereas in Experiment 2, there was evidence for a clear age-related improvement such that older adults missed fewer low prevalence targets. Theoretical and practical implications are discussed.

 

Mental Imagery & Inner Speech

 

How mental images and real images interfere with one another

 

Alexander Sulfaro, Amanda Robinson, Thomas Carlson

 

University of Sydney, University of Queensland

 

Email: alexander.sulfaro@sydney.edu.au

 

Research suggests that mental imagery and veridical perception share neural resources and should therefore interfere with one another. Yet, mental images are intrinsically private experiences, making it difficult to investigate how they interact with real sensory content. While self-reported measures of imagery quality exist (i.e. vividness ratings), they have limited interpretability. Here, we assessed the degree to which imagined content interacts with real stimuli by measuring neural responses more directly. We asked participants to visualise white bars recently displayed at specific orientations, using a rhythmic cue to increase the consistency of when each instance of imagery occurred. On half of the trials, a bar at an incongruent angle would appear briefly onscreen during imagery. We applied multivariate pattern analysis to electroencephalography (EEG) data recorded during each trial to investigate whether real stimulus information would amplify, or detract from, imagined information. While both imagined and real orientation could be decoded from brain recordings separately, only indirect evidence was found for interference between real and imagined representations generally. Additionally, shared neural representations between real and imagined stimuli were minimal. Our findings suggest that representational overlap between real and imagined perception is limited for low-level features such as orientation, aligning with recent imagery models.

 

People who have intense imagined audio and visual experiences tend to have low resonance brain activity

 

Derek H. Arnold, Blake Saurels, Natasha Anderson, Isabella Andresen, Dietrich S. Schwarzkopf

 

University of Queensland, University of Auckland

 

Email: d.arnold@psy.uq.edu.au

 

Most people can conjure images and sounds that they experience in their minds. There are, however, marked individual differences, with people ranging from being unable to generate imagined sensory experiences (aphantasics) to people who have unusually intense imagined experiences (hyper-phantasics). We have examined the dynamics of brain activity that predict these variable outcomes. Like others, we find that alpha band (8-12 Hz) oscillatory brain activity is linked to the act of sensory imagery, but these dynamics do not predict differences in the subjective intensity of imagined experiences. Rather, the subjective intensity of imagined audio and visual experiences was inversely scaled with the resonance of theta (5-7 Hz), beta (13-29 Hz) and gamma band (30-40 Hz) activity, both when people tried to imagine having experiences and, even more so, when they tried to meditate with their eyes closed. This suggests these dynamics are a generalized product of brains that predicts the subjective intensity of imagined experiences, as opposed to being a marker of specific cognitive operations involved in generating imagery. Overall, our data suggest that people who are prone to having intense imagined audio and visual experiences tend to have generally less powerfully resonant brain activity.

 

Investigating the role of visual imagery in associative memory of objects and spatial locations through studying aphantasia

 

Rebecca Keogh, Zoey Isherwood, Anina Rich

 

Macquarie University, University of Nevada, Reno

 

Email: rebecca.keogh@mq.edu.au

 

Our ability to visualise has been implicated in many forms of memory. However, recent research has shown that some forms of memory that were thought to require visual imagery can be completed by individuals who lack visual imagery (aphantasia). For example, aphantasic individuals can still perform well on short-term and visual working memory tasks. Conversely, there is evidence that these individuals have poor autobiographical memories. Here we expanded on these findings by assessing associative visual memories in aphantasia. In this online study, we recruited aphantasic individuals and control individuals with intact visual imagery. Participants completed an associative memory task which involved memorising four unique object, location, and colour pairings. There was a significant interaction between performance on the feature being tested (object or location) and group membership. Aphantasic individuals outperformed controls on the associated object and location pairings, with no significant differences in their performance across the two types of associations. In contrast, control participants were significantly better at remembering associated locations than objects. The key to these differences may lie in the strategies used to memorise the associations, with aphantasic individuals reporting that they relied mainly on a verbal labelling strategy while controls relied more heavily on visual imagery.

 

Semantic processing for inner and overt speech: An ERP study

 

Bradley N. Jack, Kirralee Poslek, Lachlan Hall, Mike E. Le Pelley, Thomas J. Whitford

 

Australian National University, University of New South Wales

 

Email: bradley.jack@anu.edu.au

 

There is a long-standing debate as to whether the neural processes associated with inner and overt speech are the same or different. Watson (1913) claimed that the only difference between them is that inner speech does not produce an audible sound, whereas Vygotsky (1934) argued that they are completely different, in that inner speech does not contain detailed semantic information. To distinguish between these possibilities, in two experiments, participants watched an animation which provided them with precise knowledge about when they should produce inner or overt speech. Approximately 1 second later, they heard a sound that was either semantically congruent or incongruent with their speech. We found that incongruent sounds elicited a larger N1 – an event-related potential (ERP) associated with auditory processing – and N400 – an ERP associated with semantic processing – than congruent sounds. In Experiment 1, we found that inner speech yielded a larger N400 than overt speech; in Experiment 2, we controlled for the presence of an overlapping N2 – an ERP associated with deviance detection – and found no difference in the N400 between inner and overt speech. These results suggest that inner and overt speech share similar neural processes, at least in the context of semantic processing.

 

Facial Emotion Perception

 

Perception of genuine emotional facial expressions in the broader autism phenotype

 

Ellen Bothe, Romina Palermo, Amy Dawel, Bronte Donatti-Liddelow, Linda Jeffery

 

University of Western Australia, Australian National University, Curtin University

 

Email: ellenjadebothe@gmail.com

 

Autistic people and people with higher levels of autistic-like personality traits often have difficulty reading facial expressions of emotion. There is wide individual variation in levels of difficulty. It has been argued that difficulties are not features of autism per se but of a co-occurring personality trait, alexithymia, which describes difficulty processing internal sensations of emotion. Previous research into the ability to label deliberately posed, i.e., faked, expressions demonstrates that alexithymia indeed largely accounts for difficulties associated with autism and autistic-like traits. In two non-clinical samples (Ns = 149, 201) we aimed to extend these findings by investigating the role of alexithymia in two other expression-reading abilities: assigning emotion labels to naturalistic, rather than posed, expressions, and judging whether expressions reflect genuine emotion. Autistic-like traits were associated with difficulties in both abilities, but we found no evidence that difficulties could be attributed to co-occurring alexithymia. Instead, individual variation was tied to levels of autistic-like traits in separate domains, namely communication, social skills, and attention to detail. Results suggest that alexithymia cannot consistently explain expression-reading difficulties associated with autistic-like traits and it is necessary to examine specific domains of autistic traits to understand sources of individual variation.

 

Meta-analysis of face and visual context interactions in emotion perception

 

Ben A. Steward, Paige Mewton, Romina Palermo, Eryn Newman, Amy Dawel

 

Australian National University, University of Western Australia

 

Email: ben.steward@anu.edu.au

 

Basic emotions theory argues that core emotions (e.g., anger) are universally expressed on people's faces in standard, consistent ways (Ekman, 1992). However, there is now compelling evidence that surrounding visual context (e.g., body posture) influences how emotion is perceived in faces, and vice-versa. We used meta-analyses to quantify these bidirectional effects for the first time and to build understanding of the factors influencing them. We searched PsycInfo, Web of Science, and Scopus to identify studies capturing emotion perception, faces, and visual context. Sixty-three studies met the inclusion criteria, of which 38 had data available or provided on request for meta-analysis. Data were analysed using multi-level mixed-effects meta-analytic models. Results reveal large effects of both visual context on emotion perception in faces (gav = 1.08) and faces on emotion perception of visual context (gav = 3.06). These effects were moderated by the congruency of emotions across face and visual context stimuli, and by the ambiguity of facial emotions, but not by the ambiguity of visual contextual information. These findings suggest that emotion perception from faces may be more complex than basic emotions theory implies, highlighting the need to consider stimulus ambiguity when using highly prototypical emotional stimuli to draw conclusions.
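
For context, gav is a standardised mean difference for within-subject designs that scales the mean difference by the average of the two condition SDs and applies Hedges' small-sample correction. The abstract does not give the authors' exact formulas, so the sketch below shows one common convention rather than their pipeline.

```python
# One common convention for g_av (average-SD standardised mean difference
# with Hedges' correction); an assumption, not the authors' exact formula.
import numpy as np

def g_av(x, y):
    """x, y: paired condition scores from the same n participants."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    d_av = (x.mean() - y.mean()) / ((x.std(ddof=1) + y.std(ddof=1)) / 2)
    j = 1 - 3 / (4 * (n - 1) - 1)   # small-sample correction, df = n - 1
    return d_av * j
```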

 

Emotional expression production and recognition in parent-child dyads

 

Nicole Nelson, Frankie T.K. Fong, Hanne-Marie Buttle

 

University of Adelaide, Max Planck Institute for Evolutionary Anthropology, University of Queensland

 

Email: nicole.nelson@adelaide.edu.au

 

Children's emotion expression recognition and production skills appear to develop slowly (Fong et al., 2020). To determine whether children's expression skills might be more apparent while interacting with a common social partner like a parent, we recruited 33 parent-child dyads to participate in an emotion guessing game. Children (4-9 years) and parents were presented with a series of boxes, each containing one of four emotion-related objects (a sticker, a broken balloon, a spider, and poop). Parents and children took turns opening the boxes one at a time, and then generated an expression based on the object inside. Their partner then guessed what object was in the box. Children's expressions were also video-recorded, and later presented to naïve adults (N=88), who judged what emotional expression the children were displaying. We found that parents' emotion judgments of their children's expressions were more accurate than judgments made by naïve adults. In addition, children's expressions of happiness were most recognizable, followed by disgust, sadness, and finally fear. We find that the expressions children generate for parents are not generally recognizable by all adults, and that parents' familiarity with their own children's expressions may assist them in correctly guessing the meaning of children's expressions.

 

Emotion judgement in Schizophrenia Spectrum Disorder (SSD): Meta-analytic evidence of a specific deficit and how it is concealed

 

Paige Mewton, Amy Dawel, Yiyun Shou, Elizabeth J. Miller, Bruce Christensen

 

Australian National University, National University of Singapore

 

Email: paige.mewton@anu.edu.au

 

People with SSD perform worse than community samples when judging emotion on faces; however, it is unclear whether this is a specific deficit that exceeds impairments when judging non-emotional information from faces (e.g., age, identity). The literature provides conflicting evidence, likely due to variation in sample and task characteristics. We present a meta-analysis (N=103 studies, 603 effect sizes) that looks beyond variation across individual studies and quantifies SSD-related deficits when judging emotion and non-emotion facial information. We consider: 1) whether deficits are larger when judging emotion and, if so, 2) whether emotion-specific deficits are moderated by sample characteristics (i.e., SSD severity; matching SSD and comparison samples on intelligence) and/or task characteristics (i.e., task difficulty; memory-dependency of non-emotion tasks). Results show that people with SSD have larger deficits when judging emotions, which occurs regardless of SSD severity, intelligence matching of samples, and task difficulty. However, when non-emotion tasks are memory-dependent, SSD-related deficits are comparable across emotion and non-emotion tasks. This is the first meta-analysis to robustly demonstrate an emotion-specific deficit in SSD. Recommendations are presented for considering the processes involved in face judgement tasks to avoid confounded or misleading results.

 

Investigating the impact of alexithymia on the detection of threatening faces

 

Stephanie McGowan, Megan Bartlett, Mike Nicholls

 

Flinders University

 

Email: stephanie.mcgowan@flinders.edu.au

 

Alexithymia affects an individual's ability to identify and describe emotional facial expressions, and the processing of negative emotions may be particularly impacted. This study investigated the impact of alexithymia on the detection of social threats (i.e., angry faces). Participants (n = 91) were recruited online from Prolific and completed a 'face-in-the-crowd' task in which matrices of schematic faces expressing happy, angry, or neutral expressions were presented, with a target face (either angry or happy) present on 50% of trials. Participants determined whether the faces within the matrix belonged to the same emotion category and identified the emotion of the target face. Participants also completed the Perth Alexithymia Questionnaire. Results revealed that reaction times were significantly faster for the high-alexithymia condition across all trial types. Furthermore, angry targets were detected faster than happy targets, and targets were detected faster across smaller set sizes overall. No significant differences in accuracy were observed between conditions. The results suggest angry faces are detected faster than happy faces. Furthermore, contrary to expectations, individuals higher in alexithymia appear to be more efficient at detecting emotional targets than those lower in alexithymia.

 

Does intelligence predict the ability to recognise naturalistic facial expressions?

 

Louisa A. Talipski, Amy Dawel, Gilles E. Gignac, Linda Jeffery, Clare A. M. Sutherland, Romina Palermo

 

Australian National University, University of Western Australia

 

Email: louisa.talipski@anu.edu.au

 

The ability to recognise facial expressions of emotion is critical to many aspects of social functioning. At the same time, individuals differ in the degree to which they are proficient at recognising others’ emotions, and considerable research has been devoted to uncovering the individual-level variables that can predict expression-recognition ability. One variable that has been found to predict this ability is intelligence (Schlegel et al., 2020). However, studies investigating the relationship between recognition ability and intelligence have used expression-recognition tests consisting of posed (i.e., fake) expressions, which limits their ecological validity. Using a new test composed of naturalistic expressions, as well as three measures of intelligence—the Single Word Comprehension Test, Baddeley’s Grammatical Reasoning Test, and the Paper Folding Test—we re-examined the relationship between expression-recognition ability and intelligence in a sample of 330 participants. Medium associations were found between each measure of intelligence and overall scores on the naturalistic test of recognition ability. Implications of these results for theories of emotion recognition are discussed.

 

Language and reading

 

Language experience predicts music processing in a half-million speakers of fifty-four languages

 

Courtney B. Hilton, Jingxuan Liu, Elika Bergelson, Samuel Mehr

 

University of Auckland, Columbia University, Duke University

 

Email: courtney.hilton@auckland.ac.nz

 

From our earliest years, we are surrounded by people speaking and singing, shaping the development of our auditory system. But does linguistic experience shape music processing? While music and speech both use sound, each does so in specialised ways, so the answer isn't trivial. Here, we focus on the claim that using pitch at the syllable level in speech (i.e., in tonal languages like Mandarin) shapes melody processing in music. We first meta-analysed this purported effect, finding some support, but in studies limited by small sample sizes and covering only a few tonal languages and countries. Addressing these issues, we used web-based citizen science to test this question on a global scale. We measured music perception abilities in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba) and compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian). Speakers of tonal languages had a clear advantage for melodies but, curiously, had a disadvantage for processing the beat, suggesting a tradeoff in attention to different acoustic features. These results held across our diverse sample and after controlling for confounds like musical training, suggesting a general effect.

 

Lexical tone in bilingual spoken word recognition

 

Xin Wang, Bob McMurray

 

Macquarie University, University of Iowa

 

Email: x.wang1@mq.edu.au

 

Spoken word recognition is characterized by competition, as the lexical processor needs not only to interpret the unfolding speech input, but also to inhibit the activation of non-target candidates. This competition has been extended to investigations in bilingualism to understand how bilingual listeners recognize spoken words in one language that sound similar to words in the other. One linguistic dimension, namely lexical tone, has been shown to provide independent cues for lexical access within a tonal language. If tones are crucial in spoken word recognition, a key question is whether this linguistic knowledge is utilized in bilingual spoken word recognition. To address this question, we used the Visual World Paradigm (VWP) due to its temporally sensitive measures of lexical activation and competition (Tanenhaus et al., 1995). The experimental manipulation is realized through the presence of a competitor, the name of which bears a phonological relationship with the target. In a within-participant design, we observed a competition effect in the segment + tone condition, but not in the segmental condition. These results are the first to demonstrate the obligatory role of lexical tones in cross-language lexical competition in the VWP.

 

The agent frontrunner in a race against the causer

 

Margaret Ryan, Linda Cupples, Iain Giblin, Lyndsey Nickels, Paul Sowman

 

Macquarie University

 

Email: margaret.ryan1@students.mq.edu.au

 

Unravelling the causes of reading difficulty allows us to better target literacy instruction and remediation. We isolated the relative reading benefits of canonical, left-to-right ordering of thematic roles on our proposed agent>experiencer>causer/theme hierarchy. Fluent English speakers rated the intent of the first nouns of object-experiencer (OE) and agentive actives, and the second nouns of their corresponding passives (NP1-('was')VP('by')-NP2-PP). Participants revealed these sentences at their own reading pace, phrase by phrase. The verbs fell naturally into "high"- and "low"-rated groups: consistently "high" ratings identified an agent in OE "eventives" and agentives; however, a spread of ratings indicated labile interpretation for the "low" group. Splitting this group at its mean, we compared a low-intent causer/theme interpretation to an agent interpretation of the same verbs, with the "high" group. The ordering of the rated role compared to the necessary OE experiencer or agentive theme determined canonicity. Similar first-noun reading speed was maintained at the verb for actives, with passives lagging. By the second noun, agent-first sentences led causer/theme-first sentences. Remarkably, passives with experiencers first and causers/themes second were quickest at sentence end. Across different verbs or differing interpretations of labile verbs, reading was fastest when agents were first, canonical sentences with no agent followed, and noncanonical sentences came last.

 

A meta-analysis of syntactic priming experiments in children

 

Shanthi Kumarage, Evan Kidd, Seamus Donnelly

 

Australian National University, Max Planck Institute of Psycholinguistics

 

Email: shanthi.kumarage@anu.edu.au

 

Syntactic priming is the tendency to re-use previously heard or used structures (e.g., hearing the passive sentence the boy was bitten by the dog increases the likelihood of a speaker using another passive structure). A substantial literature exists using this methodology with children to test hypotheses regarding the acquisition of syntax, under the assumption that priming effects reveal both the presence of syntactic knowledge and the underlying nature of the learning mechanism supporting the acquisition of grammar. Here we present the first meta-analysis of syntactic priming studies in children. We identified 36 eligible studies. Our analysis confirmed that syntactic priming is a large and robust effect, with syntactic primes increasing later production of the primed structure by 2.5 to 4.5 times. Several variables moderated the magnitude of priming in children, including (i) study design (within- or between-subjects), (ii) repetition of the prime, and (iii) the animacy configuration of syntactic arguments. Interestingly, lexical overlap between prime and target, which has been shown to significantly affect priming in adults, did not moderate the effect, and neither did age. We discuss how the results bear upon theories of the acquisition of syntax.

 

The influence of task demands on lexical processing across the visual field

 

Aaron Veldre, Erik D. Reichle, Lili Yu, Sally Andrews

 

University of Sydney, Macquarie University

 

Email: aaron.veldre@sydney.edu.au

 

During reading, word processing begins prior to direct fixation—when words are in the low-acuity parafoveal region of vision. However, it remains unclear precisely how the visual constraints of parafoveal processing impact letter and word identification. This study comprised six experiments in which skilled adult readers made speeded binary decisions about letter strings that were displayed for 100 vs. 300 ms at different retinal eccentricities in the left vs. right visual field to examine how these variables and task demands influence word-identification accuracy and latency. Across the experiments, lexical-processing performance decreased with eccentricity, but to a lesser degree for words displayed in the right visual field, replicating previous reports. However, the effect of eccentricity was attenuated for the two tasks that required “deep” semantic judgments (e.g., discriminating words that referenced animals vs. objects) relative to the tasks that required “shallow” letter and/or lexical processing (e.g., detecting words containing a pre-specified target letter, discriminating words from nonwords). These results suggest that lexical and supra-lexical knowledge play a significant role in supporting lexical processing, especially at greater eccentricities, thereby allowing readers to extend the visual span, or region of effective letter processing, into the perceptual span, or region of useful information extraction.

 

Culturally fair assessment of phonological awareness in an Indigenous context

 

Melissa Freire, Evan Kidd, Kristen Pammer

 

Australian National University, University of Newcastle

 

Email: melissa.freire@anu.edu.au

 

Assessing a child's readiness to read typically involves assessing pre-linguistic skills such as phonological awareness (PA). PA is a unique predictor of reading skill that involves understanding and deconstructing components of words in the language of reading instruction. Cultural and linguistic biases occur when the language of instruction is not the first language of the child being assessed. Such is the case for many Indigenous children living in very remote communities in the Northern Territory who speak an Indigenous first language but learn to read in English. This study explores the utility of two nonword tasks, the Syllable Repetition Task (SRT) and the Nonword Test of Auditory Analysis (NoTAAS), for assessing phonological awareness in remote Australian Indigenous populations while minimising cultural and linguistic bias. Assessment of task equivalence showed that both tasks were suitable for assessing the PA of Indigenous and non-Indigenous emergent readers. However, while both tasks positively predicted reading for the non-Indigenous group, only the NoTAAS positively predicted reading for the Indigenous group; a negative relationship between the SRT and reading ability was evident for the Indigenous group. We suggest the relationship between SRT and reading may be influenced by top-down processing and/or associated with cumulative ear health issues.

 

Poster session

 

Task demands and the oddball Fast Periodic Visual Stimulation paradigm

 

Samantha Curtis, Bianca De Wit

 

Macquarie University

 

Email: samantha.curtis1@hdr.mq.edu.au

 

A novel paradigm called oddball Fast Periodic Visual Stimulation (FPVS) has been used to find evidence of automatic brain processes dedicated to word-meaning discrimination without an explicit word-discrimination task. This result is surprising in a context that presumes task demands drive visual word discrimination. This study investigated the automatic nature of the oddball FPVS paradigm for visual word recognition through task manipulations. Participants were fitted with electroencephalography (EEG) electrodes and viewed 60-second sequences of flickering pseudo-words (e.g., SLIEM) presented at 10 Hz, with an oddball word (e.g., SMILE) inserted at every 5th presentation (i.e., at 2 Hz). The word discrimination response is indexed as a unique and significant periodic response to the words at 2 Hz (and its harmonics). During the presentation, participants performed two tasks: (1) an orthogonal colour-changing fixation task and (2) a lexical task that manipulated the participant's attentional awareness of the word stimuli. The main finding was that manipulating the task with an attentional shift increased the word discrimination response. These findings question the automaticity of this word discrimination response, as it appears vulnerable to attentional mechanisms.
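
For readers unfamiliar with the paradigm, the oddball response is typically indexed by Fourier-transforming the EEG epoch and comparing the amplitude at the oddball frequency and its harmonics against neighbouring frequency bins. The authors' analysis is not detailed in the abstract; in the sketch below the sampling rate, neighbour-bin counts, and number of harmonics are illustrative assumptions.

```python
# Illustrative oddball-FPVS index: signal-to-noise ratio at each oddball
# harmonic relative to surrounding bins. All parameters are assumptions.
import numpy as np

def oddball_snr(epoch, fs=256, oddball_hz=2.0, n_harmonics=4, n_neighbours=10):
    """n_harmonics=4 deliberately stops short of the 10 Hz base rate,
    which reflects general visual stimulation, not word discrimination."""
    amp = np.abs(np.fft.rfft(epoch)) / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    snrs = []
    for h in range(1, n_harmonics + 1):
        i = np.argmin(np.abs(freqs - h * oddball_hz))   # bin at the harmonic
        noise = np.r_[amp[i - n_neighbours:i - 1],      # skip adjacent bins
                      amp[i + 2:i + n_neighbours + 1]]
        snrs.append(amp[i] / noise.mean())              # SNR vs local noise
    return snrs
```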

 

The representational dynamics of food in the human brain

 

Denise Moerel, James Psihoyos, Thomas A. Carlson

 

University of Sydney

 

Email: denise.moerel@sydney.edu.au

 

The human brain can quickly categorise visually perceived objects. Although previous research has focussed on object and category representations in the brain, it is still unclear how one category of ecological importance, food, is represented. In this study, we utilised time-resolved multivariate analyses of electroencephalography (EEG) data to investigate the time-course of food representations in the brain. We recorded EEG while participants engaged in a fixation monitoring task, where the stimuli were not task relevant, and a food naturalness categorisation task, where the stimuli were task relevant. Our results demonstrate that the brain can differentiate between food and non-food items from as early as 84 milliseconds after stimulus onset. Additionally, the neural signal contained information about the naturalness, level of transformation, and perceived caloric content of food. Importantly, this information was present regardless of task relevance. Furthermore, the recorded brain activity predicted the behavioural responses of different groups of participants who completed an odd-item-out task on three presented food items. Overall, our findings enhance our understanding of how the human brain processes visually presented food.
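
As an illustration of the time-resolved multivariate approach described above, a classifier can be trained and tested separately at each time point, yielding an accuracy time course from which an onset (such as the 84 ms reported here) can be read off. The sketch below is a generic version under assumed data shapes and classifier choice, not the authors' pipeline.

```python
# Generic time-resolved EEG decoding sketch; classifier choice,
# cross-validation scheme, and data shapes are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def decode_over_time(X, y, cv=5):
    """X: trials x channels x timepoints EEG array; y: trial labels
    (e.g., food vs non-food). Returns mean decoding accuracy per time point."""
    accuracy = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        accuracy[t] = cross_val_score(LinearDiscriminantAnalysis(),
                                      X[:, :, t], y, cv=cv).mean()
    return accuracy
```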

 

Arts experience enhances aesthetic enjoyment of dynamic human movement

 

Courtney E. Casale, Emily S. Cross, Ryssa E. Moffat

 

Macquarie University, Western Sydney University

 

Email: courtney.casale@students.mq.edu.au

 

Reflective of human socialisation and communication, dance offers insights into the understanding of socially intentioned body movements. Individuals differ in their perceptions and enjoyment of dance, with inter-individual differences in prior dance experience (embodied expertise) and cumulative arts experience shaping evaluations of dance aesthetics. Arts experience warrants further exploration, as no study has assessed the influence of various arts experiences on aesthetic appreciation of movement, despite the integral role of art in socialisation throughout history. This preregistered study investigates how embodied and cumulative arts experience shape aesthetic preferences for body movements.

Participants first completed a questionnaire about their previous arts (visual, musical, dramatic) and sports experience. Next, they viewed and rated dance videos for aesthetic qualities (liking, familiarity, reproducibility) before and after learning a dance choreography, made up of the same steps shown in half the video stimuli. We found that after dance training, participants rated learned movements as more familiar, reproducible, and enjoyable than unlearned movements. Enjoyment was enhanced by sports experience and theatre enjoyment, familiarity by painting experience and musical enjoyment, and reproducibility by painting experience. Our findings demonstrate the importance of considering embodied and cumulative arts experience when assessing aesthetic perceptions of the human body in motion.

 

Visual cuing to learn spatial orientation in virtual reality

 

John Salamon, Mike Nicholls, Oren Griffiths

 

Flinders University, University of Newcastle

 

Email: john.salamon@flinders.edu.au

 

Interpretation of SONAR time-bearing “broadband” displays requires submariners to rapidly recognise and translate 2D, exocentric data into 3D, egocentric bearings. Novel methods to assist submariners to perform this translation more efficiently include visual cuing in virtual reality. However, the effect of such cues on operator performance once the cue is removed is unexplored. Keeping operator performance optimal even when machine assistance is unavailable is essential in this domain. We therefore examined the effect of presenting cues to optimise operator learning, not just performance. A virtual reality experiment was developed in which participants learnt to read a virtual SONAR time-bearing display and point to targets in 3D space over three phases: (a) baseline, (b) training, and (c) test. Visual assistance (present vs. absent) was manipulated between subjects in the training phase, and unassisted performance was measured at test. Behavioural and psychophysiological measures including accuracy, EEG, and eye-tracking were recorded. We anticipated the cue would assist performance when present, but that exposure to cueing might reduce subsequent, unsupported performance. Interim findings partially support these hypotheses. These data highlight how onscreen cueing can impact ongoing learning.

 

Attention distribution during a visual search task in screen-based versus virtual reality displays

 

David Nicoll, Salvatore Russo, Megan Bartlett, John Salamon, Mike Nicholls, Tobias Loetscher, Oren Griffiths

 

Flinders University, University of South Australia

 

Email: david.nicoll@flinders.edu.au

 

Do participants alter their search strategy based on the display format? The existing applied psychology and human factors literature has yielded mixed findings, likely because of variance in the contexts, resolution, scale, gaze metrics, and immersion of the environments used. We investigated gaze behaviour in undergraduate volunteers during the same experiment delivered in a traditional screen-based display format and via a virtual reality head-mounted display. In addition, we explored the impact of valid and invalid cueing to support target detection in our search task, which we modelled on overwatch duties performed by a dismounted soldier. We found similar rates of gross attentional errors across display formats. Yet we noted pervasive changes in other gaze metrics across display formats, such as a lower proportion of dwell time on distractor characters in virtual reality (24.8%) than in the screen-based format (45.7%). These findings may inform the relative weight we should place on traditional, screen-based cognitive research versus virtual reality-based research. They can also provide guidance on which contextual elements determine how strongly the increased immersion of virtual reality affects individual measures.

 

Meaning explains seeing when we least expect it: Expectation and semantic relatedness in inattentional blindness

 

Suzanne Chu, Anne Aimola Davies

 

Australian National University

 

Email: suzanne.chu@anu.edu.au

 

Researchers investigating Inattentional Blindness (IB)—the failure to perceive an unexpected object in plain sight when attention is engaged elsewhere—have identified a ‘semantic-congruency effect’. The present study explored the role of expectation and the semantic-congruency effect in attention and perception. A series of nine IB experiments was conducted in which participants completed seven trials of a picture-naming task that included primary-task and distractor pictures. Four trials preceded the ‘critical’ trial, in which an unexpected six-letter word was presented at fixation on the computer screen, simultaneously with the pictures. A significantly higher percentage of participants reported words semantically related to the category of primary-task pictures than semantically-unrelated words (i.e., participants were more likely to report the word BEETLE when looking for insects than the word WRENCH). Additionally, when primary-task expectations were violated, by changing the expected semantic category of the primary-task pictures from insects to tools, a significantly higher percentage of participants reported the word WRENCH, which was semantically related to the new, and unexpected, category of primary-task pictures, than the semantically-unrelated word VIOLIN. In conclusion, when primary-task expectations are violated, the internal model of what is meaningful is updated, subsequently guiding attention toward meaningful stimuli using the updated model.

 

Animal, plant or mineral: Disentangling object concepts from visual features

 

Sophia Shatek, Thomas Carlson

 

University of Sydney

 

Email: sophia.shatek@sydney.edu.au

 

Recent studies have attempted to disentangle how the brain processes visual features from semantic knowledge. Many of these approaches employ some form of visual “scrambling” that renders objects unrecognisable. Here, we use images of real but unfamiliar animals, plants, and minerals from Schmidt et al. (2017) to investigate how object perception occurs in the absence of knowledge about true category. We recorded brain activity with electroencephalography (EEG) while participants actively classified the ambiguous stimuli as animals, plants, or minerals, and during a fixation monitoring task. Our results indicate that participants were largely naive to the true classes: they liberally classified objects as minerals but were more conservative for animals and plants. Despite poor classification performance, the true class of the stimulus was distinguishable from multivariate patterns of neural activity in both the classification and fixation monitoring tasks. However, representations in the brain more closely matched behavioural classifications of the stimuli than the true class. Overall, our findings demonstrate that even without top-down knowledge of an object’s ‘true category’, the brain encodes both the ‘true category’, based primarily on mid-level shape features, and the expected category.

 

The effect of image variability on unfamiliar face identification performance across simultaneous and sequential presentation

 

Niamh Hunnisett, Simone Favelle, Harold Hill

 

University of Wollongong

 

Email: nok915@uowmail.edu.au

 

Previous research into within-person variability information in unfamiliar face recognition has produced discordant results. Some experiments show improved face matching accuracy with multiple images while others show no advantage. It has been suggested that multiple images only improve face matching performance when a memory component encourages viewers to abstract a representation of a face from the multiple images. The aim of this research was to clarify the conditions under which within-person variability information can improve unfamiliar face identification performance. Two experiments (simultaneous and sequential presentation) were conducted in which participants completed both a same/different face matching task and a two-alternative forced choice (2AFC) face matching task, with both high and low variability images. Results were consistent with previous studies, finding a variability advantage for the sequential same/different task but not the simultaneous version. Interestingly, there was a variability advantage in both the simultaneous and sequential 2AFC tasks. We argue that the nature of the 2AFC task encourages viewers to use an abstracting strategy, thereby allowing the advantage of multiple images to emerge in both experiments. These findings suggest that abstraction is essential for the multiple-image benefit to occur; however, it can be induced by more than just a memory component.

 

Recognition of compound expressions of emotion: Conclusions from a face-to-label matching task

 

Emily Keough, Simone Favelle, Steven Roodenrys

 

University of Wollongong

 

Email: eck978@uowmail.edu.au

 

The ability to identify facial expressions is important for interpersonal communication and social functioning. Most research exploring expression recognition uses only six “basic” expressions, although humans are capable of producing a much wider range that is used more often in social interaction. The current study aimed to measure people’s ability to match a visual stimulus (a facial expression) with a verbal label (an emotion label). Unlike previous research, this study employed both basic and compound expressions of emotion, increasing the complexity of the task and providing greater insight into how complex expressions of emotion are perceived. Eighty participants completed a face-to-label matching task consisting of 242 matched and 242 mismatched trials, across 22 different emotion categories. Results showed that humans can accurately match basic and complex expressions to a label, although accuracy varied significantly across emotion categories. Further, accuracy was greater for match than mismatch trials. Criterion data showed participants were biased to report a match between the label and the expression. Implications for how humans perceive and categorise expressions of complex emotions are discussed.

 

Cognitive lifestyle and task-switching in young and older adults: A pilot study

 

Xuanning He, Erin Walsh, Cobie Brinkman, Anne M. Aimola Davies

 

Australian National University

 

Email: Xuanning.He@anu.edu.au

 

Task-switching refers to the ability to alternate sequentially between multiple tasks. As individuals age, their ability to switch between tasks may decline, possibly leading to reduced complexity in their cognitive lifestyles (i.e., a reduced capacity to engage in a diversity of daily tasks). To examine the association between current cognitive lifestyle and task-switching performance in young and older adults, the Activity Lifestyle Questionnaire and a task-switching paradigm were used, while taking into account individuals’ current physical activity level and cognitive reserve (as measured by the International Physical Activity Questionnaire and the Cognitive Reserve Index Questionnaire). A pilot study was conducted with both young and older adults to compare their performance on two task-switching paradigms—switching between luminance discrimination and letter discrimination tasks versus switching between digit parity and digit comparison tasks. The study also aimed to assess whether the Activity Lifestyle Questionnaire was effective in capturing participants’ lifestyle variations (i.e., the different varieties of daily tasks they choose to engage in) in the Canberra population. Understanding the relationship between cognitive lifestyle and task-switching abilities in older adults has practical implications for designing interventions that maintain cognitive function and promote healthy lifestyles in later life.

 

Estimating statistical power for ERP studies using the auditory N1, Tb, and P2 components

 

Lachlan Hall, Amy Dawel, Lisa-Marie Greenwood, Conal Monaghan, Kevin Berryman, Bradley N. Jack

 

Australian National University

 

Email: lachlan.hall@anu.edu.au

 

The N1, Tb, and P2 components of the event-related potential (ERP) are thought to reflect the sequential processing of auditory stimuli in the human brain. Despite their extensive use, there are no guidelines for how to appropriately power ERP studies using these components. Here, we used Monte Carlo simulations to investigate how the number of trials, number of participants, effect magnitude, and study design influence the probability of finding a statistically significant effect. We found that as the number of trials, number of participants, and effect magnitude increased, so did statistical power. We also found that increasing the number of trials had a bigger effect on statistical power for within-subjects designs than for between-subjects designs, and that within-subjects designs required fewer trials and participants than between-subjects designs to provide the same level of statistical power for a given effect magnitude. These results show that it is important to carefully consider these factors when designing ERP studies, rather than relying on tradition or anecdotal evidence. We hope that these results will allow researchers to estimate the statistical power of previous studies, as well as help design appropriately powered studies in the future.
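
To illustrate the general approach, the sketch below shows a minimal Monte Carlo power simulation for a within-subjects ERP amplitude difference: simulate subject-level condition means whose trial noise shrinks with the number of trials averaged, run the planned test, and count the proportion of significant simulations. All parameter values (effect size in microvolts, noise levels, sample sizes) are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy import stats

def power_within(n_subjects, n_trials, effect_uv=1.0,
                 trial_sd=20.0, subject_sd=2.0, n_sims=2000, alpha=0.05):
    """Monte Carlo power for a within-subjects ERP amplitude difference.

    Each simulated subject contributes a trial-averaged amplitude in two
    conditions differing by `effect_uv` microvolts; averaging over trials
    shrinks trial-level noise by sqrt(n_trials).
    """
    rng = np.random.default_rng(1)
    hits = 0
    for _ in range(n_sims):
        subj_means = rng.normal(0, subject_sd, n_subjects)
        noise_sd = trial_sd / np.sqrt(n_trials)
        cond_a = subj_means + rng.normal(0, noise_sd, n_subjects)
        cond_b = subj_means + effect_uv + rng.normal(0, noise_sd, n_subjects)
        _, p = stats.ttest_rel(cond_a, cond_b)  # paired test for within-subjects design
        hits += p < alpha
    return hits / n_sims

print(power_within(n_subjects=24, n_trials=60))
```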

 

Facial first impressions and social decisions in schizotypy

 

Simone Favelle, Callie MacGregor, Emma Barkus

 

University of Wollongong, Northumbria University

 

Email: simone_favelle@uow.edu.au

 

Social functioning and face processing deficits are associated with schizophrenia and schizotypy. While findings are scant, there is some evidence that the intensity of first impression judgements – the immediate, subconscious evaluation of personal traits from faces – is increased in those with schizophrenia. In this study we investigated first impression judgements in schizotypy, a precursor to schizophrenia. We also tested whether schizotypy and first impression judgements influence social decision making in an economic trust game and a novel dominance endorsement task. Results showed a positive correlation between levels of schizotypy and ratings of aggressiveness and meanness, but not of trustworthiness or dominance. The overall trustworthiness premium (more money invested with trustworthy than untrustworthy faces in the economic trust game) was significant, and the size of the premium varied with schizotypy but not with trustworthiness rating intensity. Results of the dominance task unexpectedly showed greater endorsement of less dominant faces, and this did not vary with schizotypy or dominance rating intensity. Schizotypy, like schizophrenia, appears to be characterised by differences in rating intensity for some traits. But our results show that it is schizotypy, and not trait rating intensity, that impacts social decisions involving trust.

 

Exploring the temporal dynamics of perceptual, conceptual and contextual properties of objects in EEG

 

Ariel Kim, Genevieve Quek, Denise Moerel, Olivia Gorton, Thomas Carlson

 

University of Sydney, Western Sydney University

 

Email: ariel.kim@sydney.edu.au

 

Object knowledge encompasses information about an object’s perceptual features (e.g., dogs are furry, have legs), its conceptual attributes (e.g., dogs are animals, eat food), and its contextual associations (e.g., dogs are commonly seen with humans and at kennels). We used electroencephalography (EEG) and multivariate pattern analysis to investigate the processing dynamics of these three dimensions of object knowledge in the brain. Models of perceptual and conceptual similarity, and of the strength of contextual associations, for 190 naturalistic objects were created using behavioural data. Using representational similarity analysis, we compared these models to two EEG datasets in which participants passively viewed the stimuli. Our results show that an object’s perceptual characteristics uniquely predicted the neural response evoked by object images in both early and later stages of visual processing, while conceptual properties were strongly represented later in time. There was limited evidence of objects' contextual associations uniquely explaining object representations. These findings suggest that the neural representation of object knowledge is complex, involving multiple dimensions that are differentially represented in the human brain, and they contribute to our understanding of how the brain represents and processes information about objects.
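
As a point of reference, the core step in representational similarity analysis can be written compactly: build a representational dissimilarity matrix (RDM) from the neural patterns, build another from a candidate model, and rank-correlate their lower triangles. The sketch below uses random stand-in data and a single time point; the real analysis was time-resolved and assessed each model's unique contribution, which this minimal example omits.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: EEG patterns (n_objects x n_channels) at one time point,
# and a model of perceptual similarity (n_objects x n_features).
rng = np.random.default_rng(2)
n_objects = 190
eeg_patterns = rng.normal(size=(n_objects, 64))
model_features = rng.normal(size=(n_objects, 10))

# Representational dissimilarity matrices: condensed pairwise distances
neural_rdm = pdist(eeg_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# RSA: rank-correlate the lower triangles of the two RDMs
rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"model-brain similarity: rho = {rho:.3f}")
```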

 

Developing and validating novel night-time driving hazard perception tests

 

Catherine Kennon, Joanne Wood, Alex Black, Philippe Lacherez, Allison McKendrick

 

Queensland University of Technology, University of Western Australia

 

Email: catherineann.kennon@hdr.qut.edu.au

 

Driving at night is challenging, due to reduced visibility under the low light levels of night-time roads and glare from oncoming headlights. Currently there are no validated methods for measuring driving performance at night-time. The Hazard Perception Test (HPT) and DriveSafe Test have been used as off-road screening tools to assess driving skills, such as the ability to respond to potential hazards and awareness of the driving environment, but these are based on daytime scenes. This study developed two novel night-time driving assessment tools: a video-based HPT (N-HPT) and an image-based test (the Night-time Item Recognition Test; NIRT). As part of the validation process for these tests, this study assessed whether blur significantly reduces performance on these night-time driving tests. Thirty-four visually normal drivers (age = 42.2 ± 13.5 years) performed both the N-HPT and NIRT under two conditions: best-corrected vision and moderate blur (+1.00DS). Blur significantly decreased performance on both the N-HPT (p < .001) and NIRT (p < .001), validating that night hazard perception ability in these novel tests is partially driven by visual performance. These findings contribute to understanding night-time driving safety by providing tools that can be used to provide an index of night-time driving performance.

 

Quantifying observed motor synchrony: Movement predictability and inter-individual traits predict accuracy

 

Ryssa E. Moffat, Emily S. Cross

 

Macquarie University, Western Sydney University

 

Email: ryssa.moffat@mq.edu.au

 

Evidence that motor synchrony is a powerful form of social glue abounds: Matching another person’s body movements can enhance one’s affect, interpersonal rapport with one’s partner, and prosocial behaviour more globally. But what about the influence of synchrony on an observer? Are we sensitive to the degree of motor synchrony in dyadic interactions? Can we quantify it accurately?

In this pre-registered study, we assess how accurately observers quantify the degree of synchrony in stick-figure dyads playing the mirror game, and whether this accuracy is influenced by inter-individual differences in self-reported embodied expertise (ability to reproduce movements), psychosocial resources (extraversion, self-esteem, body awareness, and body competence), or social tendencies (empathy, autistic traits).

Preliminary analyses suggest observers quantify synchrony with high accuracy, particularly for highly predictable movements. Greater embodied expertise and enjoyment, as well as fewer autistic traits, are associated with improved accuracy. We also explored observers’ enjoyment of synchronous dyadic movement, finding that enjoyment correlates positively with movement similarity and observers’ extraversion, but negatively with movement predictability and observers’ autistic traits.

In sum, observers’ accuracy in quantifying motor synchrony in dyadic movement is contingent on movement predictability, as well as on observers’ autistic traits, embodied expertise, and enjoyment relating to the movements.

 

Detecting a loss of situational awareness

 

Stephen R. Pickard, Ami Eidels, Eric J. Beh, Leslie M. Blaha

 

University of Newcastle, 711th Human Performance Wing, Air Force Research Laboratory

 

Email: stephen.pickard@uon.edu.au

 

Based within the aviation context, this research attempts to detect a loss of situational awareness (SA) using Detection Response Tasks (DRT). The research has two phases: an initial survey and an experimentation phase. A survey was distributed to gain an understanding of what current aviation personnel understand of the construct of SA, how it is currently taught and assessed, and what the observed indicators of a loss of SA are. The survey results will be used to validate some previously identified elements of SA, including the link between SA and cognitive capacity and workload, and the cognitive and physical indicators of SA loss. The survey results have provided input into the experimentation phase of the project. In the experimentation phase, a DRT-style experiment using the Modifiable Multitasking Environment (ModME) software will simulate pilot workloads and behaviours and, through gradual increases in workload, use the DRT responses as a cognitive performance indicator of a loss of SA.

 

Judgement & Decision-making

 

Does allowing for changes of mind influence initial responses?

 

Augustine Nguyen, Grant Taylor, Nathan Evans

 

University of Newcastle, University of Queensland

 

Email: augustine.nguyen10@uon.edu.au

 

Evidence accumulation models (EAMs) have become the dominant framework for rapid decision-making, and while many theoretically distinct models exist, model comparisons have proved challenging due to their similar predictions about choice response time data. One solution is to subject these models to additional sources of data beyond the standard single response time and choice. Double responding, a second response made after the initial response, can serve as one such additional source of data to constrain models. However, instructing participants that they are allowed to change their mind (explicit double responding) could influence their strategy for initial responding, meaning that a paradigm of this nature may not generalise to standard paradigms. Here, we provide a validation of explicit double responding paradigms by assessing whether participants' initial decisions, as measured by diffusion model parameters, differ based on whether or not they were instructed that they could change their response after the initial one. Across two experiments, our results consistently indicate that allowing for changes of mind does not influence initial responses, with Bayesian analyses providing at least moderate evidence in favour of the null in all cases. Our findings suggest that explicit double responding paradigms should generalise to standard paradigms, validating the use of explicit double responding in future rapid decision-making studies.

 

Explaining away, credibility, and the ‘illusion of consensus’

 

Saoirse Connor Desai, Jacqueline Fai, Jaimie Lee, Brett Hayes

 

University of Sydney, University of New South Wales

 

Email: saoirse.c.d@gmail.com

 

Previous research has demonstrated an illusion of consensus such that people are often equally convinced by the same claim from multiple independent social sources (true consensus) and repetition from a single source (false consensus). Across three experiments, we examined whether people interpret repetitions as a cue to credibility in a false consensus. Participants rated their confidence in claims from different news outlets reporting election predictions from independent pollsters (true consensus) or the same pollster (false consensus). In some false-consensus conditions, we added explanations designed to decrease or increase the credibility of the repeated claim. We replicated our previous finding that people give more weight to true than false consensus when the independence of sources in the former condition is salient. However, explaining the repetitions had no effect on confidence (Exp 1), regardless of its salience (Exp 2) or frequency (Exp 3). Results suggest that source credibility contributes to the interpretation of repetitions but does not solely account for the weight given to repetitions in a false consensus.

 

Reframing the framing effect: The importance of information leakage

 

Omid Ghasemi, Adam J.L. Harris, Ben R. Newell

 

University of New South Wales, University College London

 

Email: omidreza.ghasemi21@gmail.com

 

The framing effect is a widely acknowledged phenomenon wherein logically equivalent options trigger different preferences (e.g., 90% fat-free vs 10% fat). This effect has been interpreted as evidence of deviation from normative decision-making. However, the information leakage account suggests that frames convey choice-related information to decision-makers, making them informationally non-equivalent and causing the choice of frame to "leak" information to listeners. For example, decision-makers might interpret a positive frame (90% fat-free) as an implicit recommendation. Therefore, in contrast to traditional accounts, the information leakage account views framing effects as normatively defensible. In a series of experiments, we eliminated the informativeness of frames by minimising a speaker's freedom to choose a frame and by varying the communication context between speaker and listener from collaborative to competitive. The information leakage account predicts a less pronounced framing effect in these situations, where the leaked information conveys no useful cue to decision-makers. The results indicate that manipulations designed to block information leakage have inconsistent effects on the extent of framing. The findings contribute to our broad understanding of people’s susceptibility to framing, and of the rationality of such effects.

 

A model-based approach to investigating the mechanisms of congruency sequence effects

 

Ping-shien Lee, David Sewell

 

University of Queensland

 

Email: pingshien.lee@uqconnect.edu.au

 

The congruency sequence effect (CSE), or Gratton effect, describes diminished congruency effects (i.e., faster responses for <<<<< vs. <<><< stimuli in a flanker task) on trials following an incongruent trial relative to trials following a congruent trial. Traditionally, the CSE is regarded as an index of conflict adaptation. Accounts of the CSE have typically emphasised either top-down (cognitive control) or bottom-up (associative) processes. To disentangle top-down and bottom-up contributions to the CSE, we compared performance on standard versions of the Simon and flanker tasks with versions that control for the memory and learning confounds present in the standard tasks. We analysed the data using a recently developed model that explains conflict effects in terms of attention-shifting dynamics, the revised diffusion model for conflict tasks (RDMC). When memory and learning effects are controlled for, the CSE is mainly driven by across-trial changes in the way attention is loaded onto distractor information, consistent with top-down control. In the standard version of the flanker task, when both stimulus and response were repeated across trials, the data were best explained by facilitated memory retrieval, consistent with the bottom-up account. However, in the standard Simon task, neither the top-down nor the bottom-up account alone produced good fits to the data. Instead, model fits suggest a combination of top-down and bottom-up influences that differ across congruent and incongruent trials. Performance benefits on congruent trials are driven by adjustments to how attention is loaded onto distractor information based on the previous trial's congruency (i.e., a top-down influence), whereas on incongruent trials, asymmetrical repetition benefits are mainly due to faster memory retrieval (i.e., a bottom-up influence).

 

Climate endgame? (How) do people understand projections about future climate change?

 

Ben Newell, Alice Mason

 

University of New South Wales, University of Warwick

 

Email: ben.newell@unsw.edu.au

 

Recent calls have urged the scientific community to face the challenge of improving communication about worst-case scenarios for climate change. Such scenarios are informed by climate projections, which are often presented as a numerical range – for example, by 2100 global surface temperatures will increase by between 4 and 7.2 degrees. Built into these projections are assumptions about how long-term climate responses will be affected by different actions on greenhouse-gas emissions and associated socio-economic activity. Across four experiments (N = 798), we examine how people interpret the uncertainty associated with the likelihood of different scenarios (e.g., best vs. worst case) and how this interacts with assumptions about the distributions underlying projected ranges (e.g., uniform, normal, skewed). We find evidence for an apparent ‘optimism bias’ and/or judgments anchored on participants’ prior beliefs about temperature increases. Specifically, many participants think higher temperatures will be more likely under best-case scenarios (a left-skewed distribution), but that lower temperatures will dominate worst-case scenarios (right-skewed). We discuss the implications of these results for climate communication and prudent climate-risk management.

 

Face Perception

 

How unfamiliar faces become familiar: The role of variability in a face matching task

 

Janice Choi Tung Yung, Alice Towler, Mehera San Roque, Richard Kemp

 

University of New South Wales, University of Queensland

 

Email: j.yung@student.unsw.edu.au

 

Past research on face learning has often used separate learning and test phases to study how factors like image set variability, number of images and length of exposures affect the development of face familiarity.  Fewer studies have considered the process of building familiarity with an identity whilst making face matching decisions.  In our study, we investigated whether the type of exposure (either singular or variable) affects the familiarity people form with novel faces and whether this confers any immediate benefits on face matching.  Participants completed a face matching task where they compared a still image with video clips of a person performing either 4 different activities or 1 activity, and decided whether they showed the same person or different people.  While participants had significantly greater confidence in their responses when familiarity was formed from variable exposure, the same increase was not observed in their accuracy.  Our findings suggest that variability when learning a new face only improves performance on match trials, and not on non-match trials.  We will discuss implications of this research and its new insight into the role of variability and the use of video clips for developing familiarity with a face.

 

Differences in information sampling support expertise in face matching

 

James Dunn, Sebastien Miellet, David White

 

University of New South Wales, University of Wollongong

 

Email: j.d.dunn@unsw.edu.au

 

Accurate face matching is important in forensic settings, but the perceptual processing involved is not well understood. We compared the visual sampling of novices with that of two high-performing groups: (i) ‘super-recognisers’, who have naturally superior face matching ability; and (ii) forensic facial examiners, experts with years of professional experience in face matching tasks. Participants viewed pairs of passport photos and decided if they showed the same person, while their eye movements were tracked. We found higher accuracy for super-recognisers and examiners than for novices on both natural-view and spotlight trials. There were also significant differences in the information sampled by each group. Examiners analysed the same facial regions with and without a spotlight, while super-recognisers flexibly adapted their use of local and global information based on task conditions. These findings suggest that examiners and super-recognisers achieve similarly high accuracy through different approaches: examiners rely on exhaustive local analysis of all facial regions, while super-recognisers are more flexible in their use of local and global information depending on viewing conditions.

 

A dynamic model of individual-level and group-level processing of faces

 

Daniel Skorich

 

Australian National University

 

Email: Daniel.Skorich@anu.edu.au

 

Face processing models typically separate face perception into that for changeable aspects of the face – eye gaze, emotional expression, and lip movements – and that for the invariant individual identity of the face. As such, these models generally neglect another invariant aspect of the face: its group-level characteristics (e.g., age, ethnicity, and gender). In this talk, I will present a new model of face processing that describes the processing of both individual-level and group-level information in faces. First, I will briefly present some empirical research suggesting that the processing of individual-level and group-level information in faces is largely equivalent. I will then present the new model with reference to Bruce and Young’s (1986) and Haxby and colleagues’ (2000; 2011) models. In particular, I will show that each individual-level processing component in those models has a group-level counterpart, including those related to: determining familiarity; detecting face instances for constructing and identifying a facial representation; activating semantic information associated with a facial representation; and retrieving a name associated with a facial representation. Finally, I will discuss the mechanisms by which a perceiver can dynamically switch between individual-level and group-level processing of faces, and the downstream consequences for social perception, cognition, and interaction.

 

Computer-generated (CG) faces in research: Systematic survey and meta-analyses

 

Amy Dawel, Elizabeth J. Miller, Yong Zhi Foo, Paige Mewton, Annabel Horsburgh, Patrice Ford

 

Australian National University, University of Western Australia

 

Email: amy.dawel@anu.edu.au

 

Recent advances have made it easy to generate artificial face images for use in research. Computer-generated (CG) faces offer several advantages, including high experimental control, image diversity, and affordability. Accordingly, our systematic survey of studies using faces as stimuli across psychology and neuroscience (N = 12 journals and >3,300 articles coded) shows there has been a strong increase in the use of CG faces since the turn of the century. However, CG images are often being used as stand-ins for human faces (77% of studies) without clear evidence they engage face processing systems in the same way or to the same extent. We therefore conducted a meta-analysis of studies that directly compared people’s responses to CG and human faces, across multiple face processing domains and integrating findings from computer science, psychology, and neuroscience. Results revealed people’s responses were impaired for CG stimuli relative to human ones in many domains, and that there is a critical lack of data testing whether CG faces properly engage face processing mechanisms (e.g., very few studies have tested inversion, the other-race effect, or the N170). Overall, findings provide direction for the appropriate use of CG faces and raise important questions as hyper-realistic GAN faces enter this space.

 

White participants misclassify White AI faces as human and are convinced they are correct

 

Elizabeth Miller, Ben Steward, Zak Witkower, Clare A.M. Sutherland, Eva G. Krumhuber, Amy Dawel

 

Australian National University, University of Toronto, University of Aberdeen, University College London

 

Email: elizabeth.miller@anu.edu.au

 

A recent high-profile study (Nightingale & Farid, 2022) found that faces generated by artificial intelligence (AI) are now indistinguishable from human faces. However, error rates were higher for White AI faces than for AI faces of other races, indicating a White bias. Our reanalysis of Nightingale and Farid’s data and a new replication experiment (N = 124) revealed that White AI faces were (erroneously) classified as human significantly more often (66-69%) than actual human face images (51-52%, where chance = 50%). Worryingly, the participants who were least accurate at this task were the most confident, showing the least insight into their abilities. A second experiment (N = 190) found that ratings related to having a human mind (creativity, curiosity, imaginativeness) did not differ for AI versus human faces. A final study (N = 610) collected ratings of 14 attributes potentially driving the bias to perceive AI faces as human (e.g., facial distinctiveness, skin smoothness), finding they accounted for 62% of the variance in AI-human classification rates. While there were reliable physical differences between AI and human faces, participants interpreted these cues in the wrong direction, accounting for the AI bias. These findings raise concerns about misuse of overly convincing AI faces online (e.g., revenge porn, misinformation, cyber warfare).

 

WM & Intelligence

 

Spatial deployments of attention and working memory load: The role of temporal dynamics

 

Nicholas J. Wyche, Stephanie C. Goodhew, Mark Edwards

 

Australian National University

 

Email: nicholas.wyche@anu.edu.au

 

The relationship between spatial deployments of attention and working memory load is an important topic of study, with clear implications for real-world tasks such as driving. Previous research has shown that attentional breadth broadens under higher load, while exploratory eye movement behaviour also appears to change with increasing load. However, relatively little research has compared the effects of working memory load on different kinds of spatial deployment, especially in conditions that require updating or manipulation of the contents of working memory rather than simple retrieval. The present study measured participants’ attentional breadth and exploratory eye movement behaviour under low and high updating working memory loads. While spatial aspects of task performance were unaffected by the load manipulation, the exploratory dynamics of the free viewing task (including fixation durations and saccadic amplitudes) changed under increasing load. These findings suggest that temporal dynamics, rather than the spatial extent of exploration, are the primary mechanism affected by working memory load during spatial deployment of attention. Further, individual differences in exploratory behaviour were observed on the free viewing task: all metrics were highly correlated across working memory load blocks. These findings suggest a need for further investigation of individual differences in eye movement behaviour.

 

Efficient measurement of dynamic visual working memory

 

Garry Kong, Isabelle Frisken, Gwenisha J. Liaw, Robert Keys, David Alais

 

Waseda University, University of Sydney

 

Email: kong.garry@aoni.waseda.jp

 

Here we introduce a novel tracking paradigm for measuring visual working memory, inspired by continuous psychophysics and multiple object tracking. Participants viewed a sequence of stimuli moving along variable paths and were asked to reproduce each path by tracing it on a touchscreen. The reproduction was then compared to the original stimulus path to determine error, and thus visual working memory performance. Across four experiments, we found that this new method is efficient, reliable, and powerful, with only 10 trials per condition (less than 5 minutes of testing) required for stable performance estimates. We also demonstrated that the method shows minimal effects of perceptual or attentional confounds. Most importantly, the method allows for the investigation of how visual working memory changes across time. By averaging equivalent time points across trials, we can identify influences from both primacy and recency effects, and quantify performance around particularly important points along the motion path. The visual working memory tracking paradigm is therefore especially useful when experimental time is limited, the number of experimental conditions is extensive, or the time-course is the key interest. The method also opens up the study of visual working memory with dynamic stimuli.
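
To make the error measure concrete, below is a minimal sketch of how a traced path might be scored against the original stimulus path: resample both paths to a common length and take the mean point-wise Euclidean distance. The resampling scheme and function names are illustrative assumptions, not the authors' scoring procedure.

```python
import numpy as np

def path_error(stimulus_xy, traced_xy, n_points=100):
    """Mean point-wise error between a stimulus path and its reproduction.

    Both inputs are (n, 2) arrays of x/y samples (hypothetical touchscreen
    data); each is resampled to a common length before comparison.
    """
    def resample(path):
        path = np.asarray(path, dtype=float)
        idx = np.linspace(0, len(path) - 1, n_points)
        return np.column_stack([
            np.interp(idx, np.arange(len(path)), path[:, 0]),
            np.interp(idx, np.arange(len(path)), path[:, 1]),
        ])
    a, b = resample(stimulus_xy), resample(traced_xy)
    return np.linalg.norm(a - b, axis=1).mean()

# Example: a traced path offset from the stimulus path by 5 px in x and y
t = np.linspace(0, 2 * np.pi, 50)
stimulus = np.column_stack([np.cos(t), np.sin(t)]) * 100
traced = stimulus + 5.0
print(f"mean error: {path_error(stimulus, traced):.1f} px")
```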

 

Testing the role of perceptual anisotropies in working memory capacity limits

 

Eugene Cho, William J. Harrison

 

University of Queensland

 

Email: eugene.cho@uqconnect.edu.au

 

Visual working memory is an important cognitive ability underpinning most daily functions, but its neural basis is hotly debated. The sensory recruitment hypothesis states that working memory representations are maintained in early visual cortex, which is supported by neuroimaging and recent psychophysical experiments. At least one such psychophysical experiment, however, was likely confounded: the chosen behavioural manipulation could have induced perceptual differences across working memory conditions, potentially giving false evidence for sensory recruitment. Hence, the aim of the present study was to confirm or rule out this potential confound through two experiments (n = 30). Experiment 1 established the plausibility of the confound by quantifying a well-understood perceptual bias (radial-tangential anisotropy), while Experiment 2 tested whether this bias could account for effects previously attributed to sensory recruitment during working memory. While the perceptual bias found in Experiment 1 suggests that the confound was indeed plausible, working memory performance in Experiment 2 was independent of this bias. Overall, these results indicate that perceptual biases did not confound the results of the previous behavioural study, implying that our data are consistent with sensory recruitment, although more direct tests of sensory recruitment are required to confirm this latter claim.

 

Relational binding and integration in the Latin Square Task and relational monitoring task

 

Damian Birney, Yueting Zhan

 

University of Sydney

 

Email: damian.birney@sydney.edu.au

 

Relational integration is thought to be localised within the prefrontal cortex. However, the behavioural mechanisms linked to individual differences in relational integration remain mostly vague, metaphorically described, and methodologically challenging. With the goal of understanding the nature of relational binding as a precursor to individual differences in relational integration, we consider process accounts, based on relational complexity (RC) theory, of two tasks: the Latin Square Task (LST; a timed and an untimed version, each with three levels of RC) and the Relational Monitoring Task (RMT; with three levels of RC). Both tasks have been proposed as indicators of individual differences in relational integration and both correlate significantly with fluid intelligence (Gf). Using a linear mixed-effects (LMER) approach, we aim to isolate the relational binding demands of the LST and RMT to test the extent to which individual differences in the capacity to manage these demands explain relational integration costs. A total of 214 participants completed all variants of the LST and RMT, as well as two measures of fluid intelligence. In reporting the results, we reflect on the challenges of developing a unified process account with sufficient task specificity that is nevertheless generic enough to span different tasks, and a level of analysis that can incorporate individual differences methods.

 

Action

 

TMS reveals distinct patterns of proactive and reactive inhibition in motor system activity

 

Dominic Tran, Illeana Prieto, Ross Otto, Evan Livesey

 

University of Sydney, Macquarie University, McGill University

 

Email: minh.d.tran@sydney.edu.au

 

Response inhibition—our ability to suppress actions—is achieved through coordinated reactive and proactive processes that may recruit different neurophysiological mechanisms. Here we adapted a two-step continuous performance task in which the decision to respond depends on a combination of an initial context cue and a subsequent target probe. Using transcranial magnetic stimulation (TMS), we mapped changes in corticospinal excitability and inhibition, providing indices of reactive and proactive processes that influence the state of the motor system in the lead-up to initiating or suppressing a response. We found distinct changes in corticospinal excitability at critical timepoints when participants were preparing in advance to inhibit a response (during the cue) and while inhibiting a response (during the probe). Motor system activity during early timepoints correlated with behavioural indices of proactive capacity and predicted whether participants would later successfully inhibit their response.

 

Visual detection while walking: Sensitivity modulates over the gait cycle

 

Cameron Phan, David Alais, Frans Verstraten, Matthew Davidson

 

University of Sydney

 

Email: cpha4652@uni.sydney.edu.au

 

We investigated visual sensitivity over the gait cycle for stimuli presented at two eccentricities. Participants (n = 33) wore a virtual reality headset and walked along a smooth, level path while making a trigger-pull response to indicate whether a briefly flashed visual target was detected. The small ellipsoid target was varied in contrast against a grey background to determine detection thresholds. Thresholds were measured at eccentricities of 4° and 12° from a central fixation cross, while standing still or walking at natural speed. There were 190 stimuli per condition, presented at jittered time points along the path. Head position data were used to divide the walking sequence into individual steps so that the data could be pooled into a single, densely sampled gait cycle. Performance modulated over the gait cycle in an approximately sinusoidal pattern, and fitted Fourier functions for response accuracy and response time revealed a modulation rate of 3.21 Hz for both variables. Accuracy modulated in phase with the gait cycle, but response time was phase-shifted. Overall, contrast thresholds were higher for peripheral targets regardless of motion condition. These results uncover the effect of walking on visual detection ability and its interaction with visual eccentricity. What would its effect be on higher visual abilities? Would other visual phenomena show different interactions?
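
To illustrate the style of analysis, the sketch below bins detection accuracy by phase within the pooled gait cycle and fits a sinusoid to the binned values. The simulated data, one-cycle-per-step modulation, and fitting choices are illustrative assumptions; the authors fitted Fourier functions and report a 3.21 Hz modulation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: detection accuracy (0/1) and each trial's phase within
# the pooled gait cycle (0-1), as described in the abstract.
rng = np.random.default_rng(3)
phase = rng.uniform(0, 1, 2000)
accuracy = rng.binomial(1, 0.6 + 0.1 * np.sin(2 * np.pi * phase))

def sinusoid(ph, amp, phase0, baseline):
    # one modulation cycle per gait cycle (an assumption for illustration)
    return baseline + amp * np.sin(2 * np.pi * ph + phase0)

# Bin accuracy by gait phase, then fit the sinusoid to the binned means
bins = np.linspace(0, 1, 21)
centres = (bins[:-1] + bins[1:]) / 2
binned = [accuracy[(phase >= lo) & (phase < hi)].mean()
          for lo, hi in zip(bins[:-1], bins[1:])]
params, _ = curve_fit(sinusoid, centres, binned, p0=[0.1, 0.0, 0.6])
print(f"fitted modulation amplitude: {params[0]:.3f}")
```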

 

Action decision congruence between actions of humans and deep reinforcement learning agents during a cooperative herding task

 

Gaurav Patil, Phillip Bagala, Patrick Nalepka, Rachel W. Kallen, Michael J. Richardson

 

Macquarie University

 

Email: gaurav.patil@mq.edu.au

 

Anticipating each other’s actions and reciprocally responding to them during multi-agent tasks is not only essential for effective human-human interaction but also a characteristic of expert human behaviour. With regard to creating artificial agents (AAs) that can replace humans in these scenarios, deep reinforcement learning (DRL) has been shown to train agents that exceed human levels of performance. However, the behaviours exhibited by these AAs are not guaranteed to be human-like or human-compatible. This poses a problem if the goal is to design AAs capable of collaborating with humans or augmenting human actions in cooperative tasks. The current study explored the skill learning performance of human learners working alongside heuristic and DRL AAs during a collaborative shepherding task, and evaluated the effectiveness of the same AAs as action decision recommender systems to aid learning. In addition to evaluating skill learning performance, the current study also tested the congruence between the decisions of the AAs and decisions made by humans. Results demonstrate that human performance was significantly worse when working alongside the DRL AA than alongside the heuristic AA. Additionally, participants' action decisions showed less alignment with the recommendations made by the DRL AA.

 

The power of self-pacing: Memory benefits for active compared to passive image presentation

 

Briana L. Kennedy, Steven B. Most, Vanessa K. Bowden

 

University of Western Australia, University of New South Wales

 

Email: briana.kennedy@uwa.edu.au

 

Advances in technology have made serial visual presentations a common way to display information. Social media feeds, for example, often present stimuli one by one. Phenomena observed in the lab could replicate on these mediums, but a core difference between typical laboratory experiments and real-life paradigms lies in the way information is presented: stimuli are usually presented at a fixed speed in the laboratory, whereas users tend to self-pace through stimuli on social media feeds. Does actively self-pacing through images, compared to passively watching them, change the way they are remembered? In this study, 842 participants viewed streams of seven landscape images presented at either a self-paced (active) or automatic (passive) rate. Critically, the speed of automatic trials was yoked to match self-paced trials. After each stream, participants indicated which of four images they remembered from the stream, as well as their confidence in their memory. Both memory accuracy and confidence increased for images shown on self-paced compared to automatic trials. This finding has important implications both for the way designers could capitalise on self-paced designs to relay important information, and for the way theories should consider relationships between action, attention, perception, and memory.

 

Perception

 

Rotational self-motion inhibits opposed visual motion

 

Kate Pickard, David Alais, Sujin Kim, Robert Keys, Frans Verstraten

 

University of Sydney

 

Email: kpic0319@uni.sydney.edu.au

 

Distinguishing whether retinal motion is caused by self-motion or external motion is essential for effectively interacting with the world, and the brain combines vestibular, proprioceptive, and re-afferent motor signals to make this distinction. This study seeks to better understand how visual and vestibular rotation signals interact in motion perception. In Study 1, participants wore a virtual reality headset while seated, turning their heads right or left or remaining stationary while judging whether a visual stimulus translated left or right. Thresholds for this task were measured by varying the signal-to-noise ratio of the motion stimulus. When head-turn and retinal motion were congruent (both left, or both right), thresholds did not differ from the stationary condition. However, when head-turn and retinal motion were incongruent (opposed directions), thresholds were elevated. In Study 2, the visual motion was presented at various angles between horizontal and vertical to measure direction perception. Results suggest no significant angular tuning of this effect. These results may be explained by known visual-vestibular interactions in area MSTd, where only incongruently tuned visual-vestibular cells have been found for rotation. Such cells could serve to detect unwanted reafferent signals produced by head rotations and suppress them from perception.

 

The effects of rotational dynamics on weight perception in more ecologically valid settings

 

Philippe A. Chouinard, Jarrod W.C. Harris

 

La Trobe University

 

Email: p.chouinard@latrobe.edu.au

 

Previous research suggests that the rotational dynamics of an object influence our perception of its weight. With 3D printing, we created a set of stimuli that enabled us to examine the generalisability of this account in a more ecological way than the stimuli used in earlier research. All the stimuli had the same mass (245 g). We varied mass distribution (i.e., mass concentrated either at the top, bottom, centre, near the edges, or evenly distributed throughout the object) and lifting approach (i.e., lifting directly by hand or indirectly using a handle or string). Twenty-one individuals were recruited from our university. The results were in line with our predictions. A Mass Distribution × Lifting Approach interaction (F(6.54, 45.95) = 2.41, p = .03, ηp² = .11) was found, whereby conditions with higher rotational dynamics made stimuli feel heavier than those with lower rotational dynamics. These findings demonstrate rotational dynamic effects in a more everyday experience of weight perception than demonstrated before. We conclude that rotational dynamics play an important role in how we perceive the weight of objects.

 

Investigating the mechanisms underlying motion silencing in dynamic orientation discrimination tasks

 

Tabea-Maria Haase, Denise Moerel, Kevin R. Brooks, Iain D. Gilchrist, Christopher Kent , Anina N. Rich

 

Macquarie University, University of Sydney, University of Bristol

 

Email: tabeamaria.haase@hdr.mq.edu.au

 

Detecting changes in our visual environment is key for survival. However, the Motion Silencing Effect shows that we cannot reliably perceive feature changes when objects are moving quickly. In a series of experiments, participants discriminated dynamic orientation changes in sinusoidal gratings arranged in an annulus around fixation, where the annulus moved at different rotational velocities. The results show that silencing occurs for orientation changes and that covertly directing spatial attention moderates motion silencing. Using magnetoencephalography (MEG), we then examined the difference in the neural signal between perceived and silenced change trials, to determine the neural time course of instances where visual changes were not perceived. These studies give insight into both the behavioural and neural underpinnings of motion silencing and enhance our understanding of motion and orientation perception.

 

Detecting motion through apertures: How do we account for aperture motion?

 

David R. Badcock, J. Edwin Dickinson, Mark Edwards

 

University of Western Australia, Australian National University

 

Email: david.badcock@uwa.edu.au

 

Objects in motion are often occluded by foreground scenery which may itself be moving so here we examine how we perceive the object motion viewed through multiple apertures (i.e., small windows). Signals from a set of stationary windows with different, but appropriate, combinations of grating speeds and directions (vectors) can be combined, using a method called Intersection of Constraints (IOC), into a single percept of rigid coherent motion. We also know that moving apertures affect the perceived motion of the gratings and this influences the overall motion percept. Here, in four experiments, we determined the rules describing how the aperture and grating motions are combined.  Our results show that if the coherent grating motion is specified relative to coordinates in the world, the cloud of static patches appears to move in the direction of the IOC specified motion of the gratings. More generally, this direction is consistent with summing vectors representing the average motions of the envelopes and the coherent IOC motion of the gratings specified relative to their envelopes. We hypothesize that vision combines these different motions into a vector sum to reveal the instantaneous direction of motion of an object that is partially obscured by a moving foreground.

 

Forensic

 

Perceptions of emotions from female and male complainants of sexual violence in criminal trials

 

Faye Nitschke, Blake McKimmie, Eric Vanman, Sophie Johnson-Holmes

 

University of Newcastle, University of Queensland

 

Email: Faye.Nitschke@newcastle.edu.au

 

Both social context and individual characteristics can influence how the emotions we show are interpreted by others, in everyday life and in high-stakes decision-making contexts. The emotions shown by female adult sexual violence complainants affect jurors' decisions about complainant credibility and defendant guilt, even though complainant emotion is not accurate information on which to base these decisions (Nitschke et al., 2019; 2022). However, because gender identity also shapes expectations about the type of emotions people expect others to show (Brody et al., 2016), we report on three studies exploring how emotion affects perceptions of male and female complainants of sexual violence. In Study 1 (N = 550) and Study 2 (N = 362), participants read a trial synopsis in which a female complainant was portrayed as distressed or unemotional, through either photographic images or her described behaviour in the trial synopsis. Female complainants who showed strong visible distress were perceived as more credible than those who appeared unemotional. In Study 3 (N = 186), participants evaluated a male complainant of sexual violence who displayed either anger, distress, or no emotion. Results will be discussed in relation to jury decision-making in sexual violence trials.

 

Inattentional blindness and eyewitness recall: Does recall type matter?

 

Hayley Cullen, Zoe Crittenden, Ella Tobin

 

University of Newcastle

 

Email: hayley.cullen@newcastle.edu.au

 

As crimes are unexpected, witnesses are at risk of experiencing inattentional blindness, failing to notice crimes because their attention was focused elsewhere. There are mixed findings regarding how experiencing inattentional blindness affects witness recall memory. These discrepant findings may be explained by the way in which different recall tasks (free vs. cued recall) allow witnesses to regulate the information they provide. Therefore, the current study explored whether the negative effects of experiencing inattentional blindness on witness memory depend on the recall task. Two-hundred and six participants completed an attention-demanding task while viewing a video containing an unexpected physical assault. Inattentional blindness for the crime was assessed immediately afterwards. After a filler task, participants received correct and incorrect information about the crime, completed a free or cued recall task about their memory of the video, and rated their memory confidence. Overall, witnesses who experienced inattentional blindness were less accurate, detailed, and confident in their recall, but no more likely to accept misinformation than participants who noticed the crime. Significant interactions with recall type emerged, such that these reductions in memory accuracy and confidence were only present under cued recall questioning. The implications of the findings for police questioning will be discussed.

 

Gauging the effects of post-event drawing on memory for a traumatic event

 

Georgina A. Maddox, Glen E. Bodner, Ryan P. Balzan

 

Flinders University

 

Email: georgina.maddox@flinders.edu.au

 

Drawing is often used to facilitate recall and communication about traumatic events. Relative to verbal debriefing, drawing can enhance voluntary memory, and performing visual-motor tasks (akin to drawing) immediately after a traumatic event can reduce involuntary memory (i.e., intrusions). Yet little is known about the effects of drawing traumatic events on intrusions, a key contributing criterion of posttraumatic stress disorder. To close this gap, using the trauma-film paradigm, we tracked the effects of a drawing versus a verbal task on both memory types over 3 consecutive days. In Experiment 1 (n = 60), the task occurred immediately after viewing a trauma film (Day 1). Contrary to previous findings, drawing failed to enhance voluntary memory accuracy or the amount of information reported (Days 1-3). Drawing led to significantly more intrusions 24 hrs after the trauma film (Day 2). In Experiment 2 (n = 60), when the tasks were performed after a delay (Day 2), no differences between drawing and verbalising were found. In sum, drawing did not yield consistent positive effects on memory. Our results highlight the need for further investigation into the effects of drawing traumatic events on memory, given its widespread use in many applied settings (e.g., forensic interviews, art therapy).

 

The effect of fingerprint expertise on visual short-term memory

 

Brooklyn Corbett, Jason Tangen, Rachel Searston, Matthew Thompson, Samuel Robson

 

University of Queensland, University of Adelaide, Murdoch University, University of New South Wales

 

Email: b.corbett@uq.edu.au

 

A landmark finding in research on the nature of expertise is that experts have superior memory for domain-specific information. For instance, expert chess players can recall the configuration of pieces on a chessboard almost perfectly after only a brief exposure, whereas novices have much poorer memory performance (Chase & Simon, 1973). Across two experiments, we investigated whether expertise in fingerprint examination is accompanied by enhanced domain-specific visual short-term memory. We demonstrate that experts outperform novices on a test of recognition memory and that both experts and novices have improved memory performance for distinctive fingerprints compared to non-distinctive fingerprints. We suspect that many of the tasks performed by fingerprint examiners rely on short-term memory. Examiners spend their time analysing specific patterns of crime-scene prints. They then sort through a bank of possible matching suspect prints until they find one that is suitable for a more thorough comparison. Examiners appear to use short-term memory to accurately represent the features of the latent print, allowing for rapid and flexible investigation of a large number of potential matching impressions. These findings have important implications for understanding the development of expertise and may have practical implications for training.

 

Attention

 

Emphasising response speed or accuracy does not modulate the distractor-quitting threshold effect

 

Rebecca Lawrence, Brett Cochrane, Ami Eidels, Zach Howard, Lisa Lui, Jay Pratt

 

Griffith University, University of Aberdeen, University of Newcastle, University of Western Australia, University of Toronto

 

Email: rebecca.lawrence@griffith.edu.au

 

Highly distracting objects can make participants prematurely terminate visual search; response times for target-absent trials are faster and error rates for target-present trials are greater when a salient distractor is present (Moher, 2020). Here, we tested how task instructions and feedback modulate this distractor-quitting threshold effect. Participants completed three blocks of a visual search task in which a salient distractor was either present or absent. During the first block of trials, participants received no feedback about performance. However, during the second and third blocks, participants received both instructions and feedback that emphasised either accuracy or speed. Across two counterbalanced blocks, points were awarded for correct (fast) responses and time delays imposed for incorrect (slow) responses. The findings indicate that distractors lowered quitting thresholds across all three blocks of the search task. As such, the distractor-quitting threshold effect appears to be a robust phenomenon, not easily eradicated via task instructions or trial-by-trial feedback.

This research is supported by a Griffith University New Researcher Grant awarded to RL.

 

Media multitasking and mind wandering

 

Myoungju (Jay) Shin, Karen Murphy, Astrid Linke, Dimitar Taseski, Eva Kemps

 

Charles Sturt University, Griffith University, Flinders University

 

Email: jshin@csu.edu.au

 

Media multitasking refers to the simultaneous use of more than one form of media. Previous findings on media multitasking, sustained attention, and inhibitory control have been mixed, with some studies linking heavy media multitasking to impaired sustained attention and inhibitory control, whereas others have failed to find any relationship. In this study, we examined the relationship between media multitasking and mind wandering. Study 1 investigated media multitasking and mind wandering as a function of task difficulty in an n-back task. Study 2 examined media multitasking, mind wandering, and different types of attentional errors in a Sustained Attention to Response Task. Study 1 showed that heavy media multitaskers mind wandered more than intermediate media multitaskers at more challenging levels of the n-back task, suggesting that this mind wandering is due to executive function failures. Study 2 showed that heavy media multitaskers mind wandered more without awareness, made more attentional errors, and made more anticipatory responses (RT < 100 ms), suggesting that heavy media multitaskers are more likely to disengage from the task at hand and tend to make automatised responses to stimuli without full conscious processing. Taken together, the results show that media multitasking is linked to poorer sustained attention and inhibitory control.

 

Five lessons about attention from studies of multiple object tracking

 

Alex O. Holcombe

 

University of Sydney

 

Email: alex.holcombe@sydney.edu.au

 

While early visual processing is massively parallel, visual cognition has very low processing capacity. Selective attention is a major determinant of which aspects of the visual scene become available to cognition. Multiple object tracking (MOT) is one task that researchers have used to study this. Over four decades of MOT research, we have learned several things about the relationship between perception and cognition. Some of these lessons emerged from the study of MOT, while others will be exemplified by particular studies of MOT. The five lessons are:

1. Selecting an object does not entail knowing anything about it apart from its location.

2. Object selection is limited by a specific capacity within each hemisphere of the brain.

3. A unitary (not hemisphere-specific) resource can also contribute to object selection, which can interfere with researcher efforts to study capacity limits.

4. Focused attention is often necessary for feature binding.

5. Object selection is constrained by spatial and temporal crowding, but only temporal crowding is markedly worsened as the number of objects to select increases.

Speculation about the mechanisms underlying these findings will be offered, including about the neural underpinnings.

 

Metacognition

 

Can metacognitive ratings improve learning and transfer?

 

Kit Double, Micah Goldwater, Damian Birney

 

University of Sydney

 

Email: kit.double@sydney.edu.au

 

A growing body of evidence suggests that eliciting metacognitive ratings (e.g., confidence ratings, judgements of learning, etc.) from participants can impact their cognitive performance. Evidence that ratings improve performance is particularly strong for tasks that involve learning relationships between stimuli. We examined whether metacognitive ratings improve relational category learning and whether the ratings shift people away from memorisation strategies in favour of rule abstraction strategies. The results across multiple experiments support the notion that metacognitive ratings improve rule abstraction and facilitate knowledge transfer; however, there are important judgement, person, and task characteristics that moderate this effect. In particular, we will present experiments showing that the framing of metacognitive ratings can shift their effect on participants' learning, such that they either impair or improve learning and transfer depending on how they are worded. We will discuss what these findings tell us about metacognitive monitoring, as well as the potential educational benefits of using metacognitive self-evaluation in the classroom.

 

Linking metacognition with cognitive flexibility

 

Yueting Zhan, Damian Birney, Kit Double, Micah Goldwater

 

University of Sydney

 

Email: yueting.zhan@sydney.edu.au

 

One notion of cognitive flexibility is as a meta-competency for adapting to novel situations. To detect and act on situational changes, a combination of cognitive and conative processes is needed. One of the conative factors potentially involved is metacognitive monitoring. Metacognitive monitoring is generally assessed with self-report measures (e.g., confidence ratings, judgements of learning) with the goal of understanding how people control and regulate their own learning. Recent research suggests that metacognitive self-evaluations made while solving a task can be reactive, in that they affect ongoing task performance. In Experiment 1 (N=100), we found reactivity in a complex problem-solving task in which participants were required to control a system by learning consistent underlying relations. Eliciting metacognitive self-evaluations appeared to negatively affect relational learning relative to a control group. To further understand the link between metacognition and cognitive flexibility, in Experiment 2 (N=94) we investigated reactivity in situations that involve changes in the relations to be learnt. Based on our findings, we reflect on how flexibility might be improved by manipulating the content of metacognitive self-evaluations.

 

Confidence in perceptual decisions is shaped by priors for natural image statistics

 

Rebecca K West, Emily J A-Izzeddin, William J. Harrison

 

University of Queensland

 

Email: rebecca.west1@uq.net.au

 

Decision confidence enables humans to make adaptive decisions in a noisy perceptual world. The underlying computations that lead to confidence judgements, however, are not well understood. In the current study, we investigated one of the leading theoretical frameworks for understanding confidence: Bayesian models. Specifically, we sought to determine if participants use a prior probability distribution to inform their confidence. However, in contrast to previous research, we used a novel psychophysical paradigm which did not require participants to learn the parameters of the prior distribution within the limited time context of an experiment. Instead, we utilised well-established priors for natural image statistics, which are known to play a crucial role in guiding perception. Participants (N = 21) were asked to report the subjective upright of naturalistic image target patches, followed by their confidence in their orientation responses. We used image processing and computational modelling to relate the statistics of the targets to participants' responses. Our results reveal that participants use natural image priors to inform their perceptual judgements and, importantly, they use the same priors to inform their confidence judgements. Overall, our findings support a Bayesian characterisation of confidence and highlight the influence of environmental priors on confidence in perceptual decision-making.
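A minimal sketch may help to fix ideas about how a single prior can inform both the perceptual choice and confidence. Below, a Bayesian observer judges whether a patch is tilted left or right of upright under a cardinal-peaked orientation prior (a common summary of natural image statistics); confidence is the posterior probability of the chosen alternative. The prior's functional form, its strength, and the noise level are all illustrative assumptions, not the authors' fitted model.

import numpy as np

theta = np.linspace(-90, 90, 721)                    # orientation in degrees
prior = np.exp(1.5 * np.cos(np.deg2rad(4 * theta)))  # peaks at 0 and +/-90 deg (assumed form)
prior /= prior.sum()

def choice_and_confidence(measurement, noise_sd=10.0):
    # Posterior over orientation = Gaussian likelihood x natural-image prior.
    likelihood = np.exp(-0.5 * ((theta - measurement) / noise_sd) ** 2)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    p_right = posterior[theta > 0].sum()   # mass on "tilted right of upright"
    choice = "right" if p_right > 0.5 else "left"
    return choice, max(p_right, 1 - p_right)  # confidence = p(chosen side)

print(choice_and_confidence(8.0))   # e.g., ('right', ~0.7)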

 

Human-computer interaction & human factors & virtual reality

 

Group performance in human-human and human-bot teams

 

Ami Eidels, Laiton Hedley, Murray Bennet, Jonathon Love, Joseph Houpt, Scott Brown

 

University of Newcastle, University of Texas at San Antonio

 

Email: ami.eidels@newcastle.edu.au

 

Complex tasks may require the division of labour across multiple team members. Yet assigning multiple agents to collaborate does not guarantee efficiency. Miscommunication or limited resources may hamper the performance of the team, compared with what one might expect based on the individual performance of each operator alone. We study the performance of human-human and human-bot teams in an arcade-like computer game, Team Spirit. Two players each controlled a horizontally-moving paddle and had to prevent bouncing balls from hitting the virtual floor. Each team completed three conditions: separate, in which they operated individually to maximise their own personal score while ignoring the other player; collaborative; and competitive. In another set of experiments, we paired human players with a bot. The behaviour of one bot-type was driven by reinforcement learning; another bot-type was loosely based on the principles of an ideal observer. Broadly, our research programme aims to scale up cognitive modelling techniques that have been used to understand individuals' behaviour, and apply them to small groups. Here, I present measures and analyses of group performance. A follow-up talk [Hedley] will present analyses of behavioural patterns within teams, using sensitive, dynamic spatiotemporal measures of players' movement.

 

The relationship between teaming behaviours and joint capacity of hybrid human-machine teams

 

Laiton G. Hedley, Murray S. Bennett, Jonathon Love, Joseph Houpt, Ami Eidels, Scott D. Brown

 

University of Newcastle, University of Texas at San Antonio

 

Email: c3299957@uon.edu.au

 

Artificial machine agents are becoming fully fledged autonomous team members, working alongside human co-actors to achieve outcomes neither could alone, forming the hybrid Human-Machine (HM) team. Human-Human (HH) teams are sensitive to the social context of their environment; their behaviour changes depending on whether the context is Collaborative or Competitive. How the behaviour of HM teams is influenced by these contexts remains unclear. Furthermore, teaming behaviours may influence the team's ability to handle both task demands and teamwork processes, what we refer to as Joint Capacity. However, global performance measures (such as accuracy and reaction time) alone cannot capture Joint Capacity or the dynamic behaviour of teams. To overcome this limitation, we adapted the Capacity Coefficient (to measure Joint Capacity; Townsend & Nozawa, 1995) and state-of-the-art spatiotemporal analyses of behaviour. We compared the Joint Capacity and behavioural patterns across team types (HH vs. HM) under Collaborative and Competitive conditions. Team behaviour predicted Joint Capacity: less correlated behaviour was associated with better Joint Capacity, and the behaviour of HH teams was less correlated than that of HM teams. It is not surprising that two humans demonstrated superior abilities in handling both task demands and team coordination compared with the hybrid teams.
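A rough sketch of the workload-capacity logic behind the Capacity Coefficient may help here. In Townsend and Nozawa's (1995) OR-task form, C(t) = H_team(t) / (H_A(t) + H_B(t)), where H(t) = -log S(t) is the cumulative hazard of the response-time survivor function; C(t) near 1 indicates capacity equal to two independent solo operators. The Python below is a minimal illustration on simulated data, not the authors' analysis pipeline; the simple empirical survivor estimate and all parameter values are our assumptions.

import numpy as np

def cumulative_hazard(rts, t):
    # H(t) = -log S(t), with S(t) estimated by the empirical survivor function.
    rts = np.sort(np.asarray(rts, dtype=float))
    surv = 1.0 - np.searchsorted(rts, t, side="right") / len(rts)
    return -np.log(np.clip(surv, 1e-6, 1.0))

def capacity_coefficient(rt_team, rt_a, rt_b, t):
    return cumulative_hazard(rt_team, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_b, t))

rng = np.random.default_rng(1)
rt_a = rng.gamma(4, 80, 500)    # solo RTs, operator A (ms; assumed)
rt_b = rng.gamma(4, 90, 500)    # solo RTs, operator B (ms; assumed)
# Treat the team like an independent parallel race (first responder wins):
rt_team = np.minimum(rng.gamma(4, 80, 500), rng.gamma(4, 90, 500))

t = np.linspace(200, 600, 9)
print(np.round(capacity_coefficient(rt_team, rt_a, rt_b, t), 2))  # ~1 throughout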

 

Audio-visual integration in depth using virtual reality

 

Mick Zeljko, Philip Grove, Laurence Harris, Ada Kritikos

 

University of Queensland, York University

 

Email: m.zeljko@uq.edu.au

 

Much of the research on audio-visual integration has been conducted using highly simplified stimuli and protocols, which typically involve participants sitting in a dark and quiet room making button-press responses to simple visual stimuli presented on two-dimensional computer screens and simple auditory tones presented over headphones. It remains largely unknown whether the findings from these settings generalise to real-life situations that are three-dimensional, sensory rich, involve meaningful stimuli, require the interaction of multiple cognitive processes like attention, prioritisation, and prediction, and necessitate contextually relevant responses. Here, we examine audio-visual interactions in 3D space in an immersive, naturalistic, and complex visual environment, using virtual reality (VR) as an intermediate step between the lab and the real world. We examine participants' detection and discrimination of unisensory and multisensory static and looming stimuli in left and right, and near and far, space while controlling for potential stimulus intensity and presentation confounds. Analyses of reaction times, multisensory benefits, and race model violations demonstrate location-based differences that depend on confound control.
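For readers unfamiliar with race-model violations: the standard benchmark (Miller, 1982) bounds the redundant-target response-time distribution by the sum of the unisensory distributions, F_AV(t) <= F_A(t) + F_V(t); exceeding the bound implies integration beyond statistical facilitation. The sketch below illustrates the test on simulated data; the distributions, sample sizes, and quantiles are illustrative assumptions, not the authors' data or pipeline.

import numpy as np

def ecdf(rts, t):
    # Empirical cumulative distribution of response times at each time point t.
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / len(rts)

rng = np.random.default_rng(0)
rt_a  = rng.normal(420, 60, 400)   # auditory-only RTs (ms; assumed)
rt_v  = rng.normal(450, 60, 400)   # visual-only RTs (ms; assumed)
rt_av = rng.normal(360, 55, 400)   # audio-visual RTs (ms; assumed)

# Miller's bound: F_AV(t) <= F_A(t) + F_V(t); positive values below violate it.
t = np.percentile(np.concatenate([rt_a, rt_v, rt_av]), np.arange(5, 100, 10))
bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
print(np.round(ecdf(rt_av, t) - bound, 3))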

 

Incidental coupling of perceptual-motor behaviors associated with solution insight during physical collaborative problem-solving

 

Patrick Nalepka, Finn O'Connor, Rachel W. Kallen, Michael J. Richardson

 

Macquarie University

 

Email: patrick.nalepka@mq.edu.au

 

Solving problems with others not only reduces the time required to complete a challenge but may also enable the discovery of novel strategies that qualitatively change how a problem is approached. At the dyadic level, the laboratory-based ‘shepherding task’ demonstrated that, when tasked to contain evasive agents to a centralized location, some participants discover a non-obvious but optimal strategy to solve the task. This paper quantified the interactions between participants engaged in the task using Multidimensional Cross-Recurrence Quantification Analysis (MdCRQA), applied to each participant’s gaze and hand movements. The results demonstrated that strategy discoverers exhibited greater amounts of incidental coupling than non-discoverers prior to discovery. Once discovered, the strategy reduced the strength of coupling between participants, indicating that the strategy also reduced coordination demands. Future work will investigate whether differences in problem-solving can be attributable to differences in the perceptual features participants use which scaffold the discovery of task-optimal solutions.

 

Representing the 360-degree horizon on a single display: The impact of panel arrangement on operator situation awareness

 

Jason Bell, Steph Michailovs, Stephen Pond, Zachary Howard, Troy Visser, Madison Fitzgerald, Shayne Loft

 

University of Western Australia, Defence Science Technology Group

 

Email: Jason.Bell@uwa.edu.au

 

Advances in digital technology provide an opportunity to reconsider how imagery is represented on human-machine interfaces. In the submarine periscope context, technological developments could allow the entire 360-degree environment to be represented in horizontal panels on a single display, yielding performance advantages over conventional line-of-sight optical periscopes. The current study considers the impact of panel configuration on operator spatial awareness (SA) in a simulated submarine control room environment. We tested configuration concepts that mapped the horizon akin to a clockwise periscope sweep, and several novel designs aiming to improve spatial mapping but sacrificing panel continuity. We also manipulated participant orientation such that they were seated parallel or perpendicular to the direction of travel of the simulated submarine (Ownship). To assess SA, participants (N=76) moved a joystick in the direction of where they believed each visual target to be, relative to Ownship. Participants had the best SA when the panels of the display were arranged to maximally align with their physical orientation within Ownship. Computational modelling suggests that aligned configurations improved processing speed and minimised response conflict. Our results accord with broader research showing that there are performance costs for imagery layouts that require greater spatial transformations by an operator.

 

Self & Social Cognition

 

Does task set modulate the underlying decisional mechanisms responsible for the self-reference effect?

 

Ashleigh Vella, David Sewell, Timothy Ballard, Ada Kritikos

 

University of Queensland

 

Email: ashleigh.vella1@uqconnect.edu.au

 

People have better memory for information related to themselves compared to information related to another person, a phenomenon called the self-reference effect (SRE). Recently, investigation into the decisional mechanisms underlying self-biased cognition, through computational modelling, has provided conflicting results regarding enhanced perceptual processing. Different decisional mechanisms may underlie self-biased recognition (old/new) and source memory (self/other/new) judgements, as task set modulates other forms of self-biased cognition. This paper investigates the underlying decisional mechanisms in recognition (Experiment 1) and source (Experiment 2) memory for self-relevant stimuli. Participants were first informed that they and an other-referent had won miscellaneous items and needed to sort them into the correct owner's shopping bag, based on a cue. Subsequently, a surprise memory test was conducted in which participants indicated whether the displayed item was "old" or "new" (Experiment 1), measuring recognition memory, or "self-owned", "other-owned", or "new" (Experiment 2), measuring source memory. As hypothesized, both experiments showed an SRE, with self-owned items better remembered than other-owned items. To investigate the underlying decisional parameters, behavioural results were submitted to the Hierarchical Linear Ballistic Accumulator (HLBA) model. Results from both experiments indicated that the underlying factor was a threshold difference, with a stricter decision threshold for self-owned compared with other-owned items. Hence, self-biased recognition and source memory effects are underpinned by a common mechanism, reflecting decision strategy rather than enhanced perceptual processing.
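To make the threshold account concrete, the following is a minimal, illustrative simulation of a Linear Ballistic Accumulator race, showing how raising only the response threshold slows responses while improving accuracy, with no change in drift (evidence quality). All parameter values are assumptions for illustration; this is not the authors' fitted HLBA model.

import numpy as np

rng = np.random.default_rng(7)

def lba_trial(b, v_correct=1.0, v_error=0.6, s=0.3, A=0.5, t0=0.2):
    # Two racing accumulators: start k ~ U(0, A), drift d ~ N(v, s),
    # response time = time for the winner to travel from k to threshold b.
    starts = rng.uniform(0.0, A, 2)
    drifts = np.maximum(rng.normal([v_correct, v_error], s), 1e-6)
    times = (b - starts) / drifts
    winner = int(np.argmin(times))
    return times[winner] + t0, winner == 0   # RT (s), correct?

for label, b in [("self (stricter threshold)", 1.3), ("other", 1.0)]:
    rts, correct = zip(*(lba_trial(b) for _ in range(5000)))
    print(f"{label}: mean RT = {np.mean(rts):.3f} s, accuracy = {np.mean(correct):.3f}")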

 

Ownership, agency, and temporal binding

 

Jennifer Day, Alan Pegna, Ada Kritikos

 

University of Queensland

 

Email: jennifer.day@uq.net.au

 

Agency refers to the sense that the outcomes of an action are caused by one's own action. When a person experiences a greater sense of agency, they report a shorter perceived time between action and outcome relative to when they are not given agency. This is known as intentional binding. People also show differences in actions performed towards owned property relative to the property of others. Less inhibitory control is required to act with owned property, which could be thought of as a form of enhanced agency. In this study, we compared intentional binding when performing an action on owned property with acting on a stranger's property. Results show that participants initially show no difference between the two, but eventually shift to showing greater intentional binding for the property of a stranger. Contrary to predictions, this suggests that there may be a period of inhibition followed by an enhanced sense of agency when given the capacity to use others' property.

 

The child-reference effect: Attenuating the self-reference effect in a Western context

 

Harrison Paff, Nikki Castrosis, Ada Kritikos

 

University of Queensland

 

Email: h.paff@uq.edu.au

 

Encoding information in relation to the self produces a memory advantage compared with other encoding methods (the self-reference effect: SRE). Typically, the SRE is demonstrated when people evaluate the self-relevancy of to-be-remembered target stimuli. Notably, prior work suggests the SRE is attenuated when Chinese individuals encode information relative to self vs. close-other, compared with self vs. stranger. Conversely, the SRE remains stable when Western individuals encode information relative to a close-other, suggesting that culture influences whether others are incorporated into self-representation. However, prior research has exclusively used parents and best friends as close-others, limiting the findings' generalisability. In the current study, we asked whether the type of close-other relationship attenuates the SRE by investigating whether the SRE extends to mothers' own children. During encoding, participants indicated whether trait-adjectives were descriptive of a stranger-adult (n = 47), stranger-child (n = 20), own-child (n = 39), or primed own-child (n = 6). Preliminary findings suggest better source memory for words paired with one's own name vs. another name in the stranger-adult and stranger-child conditions, but not in the own-child conditions with and without priming. The absence of an SRE for the own-child conditions suggests that not only culture but also relationship type influences close-other SRE extension.

 

Mouse tracking, the self and Value Modulated Attentional Capture (VMAC)

 

Tessa Clarkson, Ada Kritikos, Sheila Cunningham, Catherine Haslam

 

University of Queensland, Abertay University

 

Email: t.clarkson@uq.edu.au

 

The perceived value or importance of a stimulus influences visual search performance. Value-modulated attentional capture (VMAC) tasks can measure the degree of influence a stimulus has on attention. In these tasks, when a non-target is made salient and represents high reward potential, attentional resources are redirected to the non-target, resulting in slower response times. This study aims to measure the effect of egocentric cognitions on attentional capture and to measure the impact on motor movements using mouse trajectories. In Experiment 1, we replicated a traditional VMAC effect with button-press response times. Additionally, using mouse tracking, we showed that trajectories are longer under high reward potential compared to low reward potential. In Experiment 2, we explored the effect of an ownership manipulation, in which participants had an equal opportunity to earn rewards for themselves or a fictitious 'Other'. Participants showed no VMAC effect, and response times for earning a reward for themselves versus another person were comparable. This research sheds new light on the relationship between reward, visual search performance, and motor behaviour, as well as the effect of egocentric cognitions on visual search. Additionally, we will discuss details of an ongoing third experiment.

 

Sample space (Un)packing effects on judgments under growing awareness

 

Michael Smithson, Yimeng Cheng

 

Australian National University

 

Email: Michael.Smithson@anu.edu.au

 

Halevy et al. (2022) demonstrated in Israeli and USA samples that refining a group (e.g., unpacking “Israel” into its right-wing, centre, and left-wing blocs) increases the allocation of blame to it for an intergroup conflict (Israel-Palestinians). We report an extension of their experiments, investigating the effect of expansion (adding a third group: Saudi Arabia) in factorial-design combination with refinement of the original two groups. Our experiments closely replicated the findings in the Halevy et al. experimental conditions for our Israeli and USA samples. Both samples reduced blame to the Palestinians when Saudi Arabia was in the sample frame. There was also an unexpected tendency to remove some blame from Israel, although this was substantially greater in the USA than in the Israeli sample. Expansion did not moderate (un)packing effects on relative blame to the Palestinians or Israel, although it did reduce the Israel unpacking effect on raw blame for both the Israeli and USA samples. The ratios of mean relative blame (Palestinians/Israel) were quite similar in the expansion and no-expansion conditions, providing support for the “reverse-Bayes” hypothesis in the recent behavioural economics literature.

 

Well-being & Clinical

 

Playful physical activity improves mood through self-efficacy, self-esteem, enjoyment and emotion regulation

 

Indra Carey, Ivanka Prichard, Eva Kemps

 

Flinders University

 

Email: indra.carey@flinders.edu.au

 

Background: Previous research has split exercise in various ways to assess its effects on mood, but none has analysed the impact of play in physical activity. Based on recent theories of exercise and mood, this study investigated how playful activity impacts mood through self-esteem, self-efficacy, enjoyment and emotion regulation. Method: Using a cross-sectional design, 136 Australians (17-45 years, 108 women) completed an online survey that incorporated measures of physical activity (intensity and type, coded as playful or not), positive and negative affect over the past week, and psychological constructs (general and physical self-efficacy, self-esteem, trait playfulness, emotion regulation, physical activity enjoyment). Findings: Serial mediation analyses showed that playful activity had a positive indirect effect on positive affect, first through general self-efficacy (0.37) and enjoyment (0.23) independently, and second through emotion regulation. Playful activity also had a negative indirect effect on negative affect, first through general self-efficacy (-1.10), enjoyment (-0.59) and self-esteem (-0.73) independently, and second through emotion regulation. Discussion: The results demonstrate the psychological health benefits of engagement in playful physical activity. Playful exercise will likely be more enjoyable than non-playful exercise and better enhance general self-efficacy and self-esteem, resulting in better emotion regulation and further improving mood.

 

Mindful appraisal of faces

 

Yeow Khoon Pua, Reilly Innes, Scott Brown, Frances Martin

 

University of Newcastle

 

Email: yeowkhoon.pua@uon.edu.au

 

Mindfulness is a form of mental clarity that has been linked to better mental and emotional health, and better relationships. Since it is an internal state, Mindfulness is mainly assessed through self-reports. This can result in inaccuracies: people may differ in their understanding of the questions and may consciously or unconsciously give incorrect ratings of themselves. An objective assessment of Mindfulness would be useful. But first, we needed to identify and quantify mental qualities that may be indicative of Mindfulness. We achieved that through computational cognitive modelling. I will go through the steps we took and the results we found. It may be that Mindfulness is associated with less variability and “noise”.

 

The effect of autonomy and intensity on affective responses to physical exercise

 

David L Neumann, Kate Brayley

 

Griffith University

 

Email: d.neumann@griffith.edu.au

 

The manipulation of variables may provide a mechanism by which to promote participation in or improve feeling states during physical exercise. The independent and interactive effects of two variables were investigated in the present study: autonomy and exercise intensity. Participants (N = 60, Mean age = 20.43 years) completed a 20-minute treadmill trial at either a low or moderate intensity (65% or 75%, respectively, of age-dependent maximal heart rate). In a between-groups design, participants were randomly allocated to one of three conditions: No choice-Low intensity (NC-65%), No choice-Moderate intensity (NC-75%), and Choice of either low (C-65%) or moderate (C-75%) intensity. Physiological effort and feeling states were measured before, during, and after the exercise trial. The NC-75% group experienced a greater positive change in affect from pre-trial to post-trial than the NC-65% group, consistent with positive affect induced by moderate physical activity. However, there was no difference between intensity conditions for the two choice conditions (C-75% vs. C-65%). In addition, participants across both No choice conditions experienced greater negative affect after the trial than participants in the Choice conditions. The findings show that autonomy is an important factor that determines feeling states during exercise, even when the level of exertion is below the ventilatory threshold. The key to engaging individuals in exercise may be to encourage them to choose among activities that are of an intensity that is enjoyable and challenging, but still within their capabilities.

 

Blue poo and the brain: What can a novel measure of gut function tell us about the microbiota-gut-brain axis?

 

Alexandra Adams, Reilly Innes, Layla Solomon, Alexandra Tabley

 

University of Newcastle

 

Email: Alexandra.Adams@newcastle.edu.au

 

Although the microbiota-gut-brain axis has been linked to a range of important psychological processes – from being implicated in psychiatric and neurological disease, through to associations with stress, emotion, cognitive functioning, and social behaviour – research in this area remains relatively limited. We suggest that one of the limiting factors for further consideration of the gut microbiota in psychological studies could be the general reliance on expensive methodological techniques. Indeed, these techniques often require specialised equipment and staff, and can also be quite inconvenient and burdensome for participants. The current study therefore utilised a novel, indirect marker of gut microbiome function that circumvents these issues. Specifically, the blue dye method was used to measure participants' gut transit time, as this measurement has previously been shown to correlate strongly with gut microbial alpha diversity and gut microbiome composition. In addition, participants completed a battery of measures to provide indices of diet, general health status, cognitive function, and emotional wellbeing, as well as additional measures of gut health. Key findings will be discussed and interpreted with respect to potential limitations of the current study. Suggestions for future research will also be provided in light of these limitations.

 

Perception

 

Event probabilities tend to scale inversely with neural measures of prediction error

 

Blake Saurels, Alan Johnston, Kielan Yarrow

 

University of Queensland, University of Nottingham, City, University of London

 

Email: b.saurels@uq.edu.au

 

The oddball paradigm is perhaps the most popular means of studying the neural and perceptual consequences of implicit predictions in the human brain, including implicit visual predictions. The traditional paradigm involves presenting a sequence of identical repeated events that is eventually broken by a novel ‘oddball’ presentation. Oddball presentations have been linked to increased neural responding, and to an exaggeration of perceived duration relative to repeated events. Because the number of repeated events in such protocols is circumscribed, the conditional probability of a further repeated event diminishes as more repeats are encountered, whereas the conditional probability of an oddball presentation increases. However, these facts have not always been appreciated in analyses of visual oddball protocols. Rather, repeats and oddballs have been treated as binary categories of event. This risks an underappreciation of the impact of event probabilities on measures of neural response and perception. Here we show that event probabilities tend to scale inversely with measures of neural response, and positively with measures of perceived duration, resulting in a negative correlation between measures of perceived duration and neural responding. This relationship is opposite to a popular account of how perceived duration might be linked to neural responding, but it is consistent with the suggestion that perceived durations might scale with the degree of anticipatory attention allocated to an event.
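The probability argument can be made concrete with a toy hazard-rate calculation. Suppose, purely for illustration, that the oddball is equally likely to occur at serial positions 5 through 9; then the conditional probability that the next event is the oddball rises with every repeat the observer has already seen:

import numpy as np

positions = np.arange(5, 10)                  # oddball equally likely at 5..9 (assumed design)
p = np.full(len(positions), 1 / len(positions))

for n_repeats in range(4, 9):                 # repeats observed so far
    not_yet = positions > n_repeats           # sequences still unbroken
    p_next = p[positions == n_repeats + 1].sum() / p[not_yet].sum()
    print(f"after {n_repeats} repeats: P(oddball next) = {p_next:.2f}")

# Prints 0.20, 0.25, 0.33, 0.50, 1.00: the oddball becomes ever more expected.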

 

Tactile adaptation to orientation produces a robust tilt aftereffect and exhibits cross modal transfer when tested in vision

 

Guandong Wang, David Alais

 

The University of Sydney

 

Email: guandong.wang@sydney.edu.au

 

Orientation processing is one of the most fundamental functions in both visual and somatosensory perception. Converging findings over the years suggest that the processing of orientation in the two modalities is closely linked: somatosensory neurons share an orientation organisation similar to that of visual neurons, and the visual cortex has been found to be heavily involved in tactile orientation perception. The tilt aftereffect (TAE) is a demonstration of orientation adaptation and is used widely in behavioural experiments to investigate orientation mechanisms in vision. By testing the classic TAE paradigm in both tactile and cross-modal orientation tasks between vision and touch, we show that tactile perception of orientation exhibits a very robust TAE, similar to its visual counterpart. Orientation adaptation in touch also transferred to produce a TAE in vision, but not vice versa. This provides concrete evidence that vision and touch engage a similar orientation processing mechanism, while the asymmetry in cross-modal transfer provides further insight into the underlying mechanism of this link.

 

Sensory suppression from repeating, alternating, and unpredictable sounds

 

Imogen A. Clarke, Lisa-Marie Greenwood, Bruce K. Christensen, Kirralee Poslek, Lilli Donovan, Bradley N. Jack

 

Australian National University

 

Email: imogen.clarke@anu.edu.au

 

Sensory suppression refers to the phenomenon that sensory input generated by our own actions elicits smaller neural responses than sensory input generated by external agents. It is often explained via the internal forward model in which an efference copy of the motor command is used to compute a corollary discharge, which acts to predict and suppress sensory input. In the present study, we sought to determine whether corollary discharges suppress sounds when presented in a complex sequence. To investigate this, we measured the N1 component of the event-related potential elicited by self- and externally-generated sounds that were presented in a repeating (e.g., AAAAAA… or BBBBBB…), alternating (e.g., ABABAB…), or unpredictable (e.g., ABBABA…) sequence. As expected, we found that the repeating sequence yielded N1-suppression, in that self-generated sounds elicited a smaller N1 than externally-generated sounds (BF10 = 6.46), whereas the unpredictable sequence did not (BF10 = 0.18). Unexpectedly, there was no difference between self- and externally-generated sounds for the alternating (BF10 = 0.18) sequence, despite it being predictable. These results suggest that corollary discharges do not predict and suppress sounds in a complex sequence, indicating that sensory suppression might not be due to predictive processing.

 

The time course of visual feature coding in the human brain

 

Tijl Grootswagers, Amanda K. Robinson, Sophia M. Shatek, Thomas A. Carlson

 

Western Sydney University, University of Queensland, University of Sydney

 

Email: t.grootswagers@westernsydney.edu.au

 

The basic computations performed in the human early visual cortex are the foundation for all stages of visual processing. Recent work using human neuroimaging methods with high temporal resolution has revealed the dynamics of visual feature coding in the brain, elucidating how features are coded through progressive stages of processing. A crucial next step is to understand the relative coding dynamics of different features, their interactions and relationship to perceptual experience. In this study, we first measured neural responses using electroencephalography (N=16) to a large set of 256 oriented gratings that varied in orientation, spatial frequency, contrast, and colour, to map the response profiles of the neural coding of basic visual features and their interactions. We then related these to independently obtained behavioural judgements of stimulus similarity. The results reveal that all four features are processed simultaneously but differ in their dynamics, indicating distinct yet important parallel processes. Furthermore, strong relationships between neural coding and behaviour were clear from initial stages of processing, indicating that even the earliest neural responses contribute to perception. Finally, we observe strong interactions between all four features in the neural responses and behaviour, signifying that the integration of these features is crucial for overall form perception.

 

Perception

 

Does categorical memory influence perception? Investigating the role of memory in low-level perception

 

Laura Wang, William J. Harrison

 

University of Queensland

 

Email: uqrwan20@uq.edu.au

 

Memory enriches our perceptual understanding of reality by prioritizing relevant information. For example, content in visual working memory can reach awareness faster in a continuous flash suppression paradigm where the targets are below perceptual thresholds. Nevertheless, most studies used stimuli primarily processed in early cortical areas, revealing little about how memory representations at the higher levels of the visual hierarchy influence perception. Mooney images are stimuli that are difficult to recognise without learning about "hidden" objects, allowing us to compare different states of perception before and after activating memory representations. We reasoned that if memory exerts a top-down influence, the visual processing of features on a Mooney image should depend on whether an observer learns about the hidden object. Accordingly, participants performed a target detection task before and after learning about the hidden objects within Mooney images. Targets were oriented edge features that we presented either aligned with, or orthogonal to, an implied edge within a Mooney image, with the expectation that participants' sensitivity would improve for aligned targets after learning about the hidden objects. However, sensitivity was no different before versus after learning, regardless of target alignment. Therefore, whether higher-level memory representations influence low-level perception remains an open question.

 

Measuring and simulating human perceptual categorisation performance using Signal Detection Theory

 

Samuel Robson, Rachel Searston, Matthew Thompson, Brooklyn Corbett, Jason Tangen

 

University of New South Wales, University of Adelaide, Murdoch University, University of Queensland

 

Email: sam.robson@unsw.edu.au

 

Perceptual categorisation is a cognitive process that involves making binary decisions about stimuli (e.g., is that a bird or not?). Researchers often use measurement models derived from Signal Detection Theory to quantify human performance for these kinds of decisions. In this study, we collected data from experts and novices on a matching task with naturalistic stimuli and simulated how different measurement models of performance (e.g., proportion correct, dʹ, Aʹ, Area Under the Curve) are affected by various experimental conditions. We demonstrated how factors such as response bias, prevalence, ceiling effects, and inconclusive response options can influence performance estimates. Our simulations provide useful insights and recommendations for researchers who want to use Signal Detection Theory in their studies of human perceptual categorisation.
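As a concrete reference for the measures being compared, the sketch below computes proportion correct, dʹ, a criterion, and Aʹ from one illustrative confusion matrix. The counts are invented for illustration, and real analyses typically also correct hit or false-alarm rates of 0 or 1 before taking z-transforms; this is not the authors' simulation code.

import numpy as np
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    h = hits / (hits + misses)               # hit rate
    f = fas / (fas + crs)                    # false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))   # response bias
    prop_correct = (hits + crs) / (hits + misses + fas + crs)
    # A' (a common "nonparametric" index), in its symmetric form:
    a_prime = 0.5 + np.sign(h - f) * ((h - f) ** 2 + abs(h - f)) / (
        4 * max(h, f) - 4 * h * f)
    return dict(d_prime=d_prime, criterion=criterion,
                prop_correct=prop_correct, a_prime=a_prime)

print(sdt_measures(hits=80, misses=20, fas=30, crs=70))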

 

tCFS: A new ‘CFS tracking’ paradigm reveals uniform suppression depth regardless of target complexity or salience

 

Jacob Coorey, David Alais, Randolph Blake, Matthew J. Davidson

 

University of Sydney, Vanderbilt University

 

Email: jcoo2322@uni.sydney.edu.au

 

Presenting a dynamic stimulus to one eye can remove a target presented to the other eye from conscious awareness, a technique known as continuous flash suppression (CFS). Measuring the time needed for a suppressed image to break through CFS (bCFS) has been used to investigate unconscious processing, and has led to controversy regarding the scope of visual processing without awareness. Advocates interpret faster bCFS times for salient stimuli as evidence of unconscious high-level processing, while opponents attribute these differences to varying low-level stimulus features between stimuli. We address this controversy with a new ‘tracking-CFS’ (tCFS) paradigm. In tCFS, a suppressed image steadily increases in contrast until breakthrough, and then decreases until suppression occurs again. This cycle repeats to measure both the threshold of appearance and the threshold of re-suppression. The difference between these thresholds provides a measure of suppression depth. We first replicate the widely reported difference in bCFS thresholds between salient image categories, before demonstrating no difference in true suppression depth as measured with tCFS. After breakthrough contrast is reached, all stimuli show a strikingly uniform reduction in the corresponding suppression threshold. Our findings indicate a single mechanism of CFS suppression occurring early in the visual system, unmodulated by target salience or complexity.
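The tracking logic can be summarised in a few lines of code. In the sketch below, contrast ramps upward until a (simulated) breakthrough report, then downward until a re-suppression report, and suppression depth is the mean gap between the two kinds of reversal contrasts. The thresholds, step size, and noise are invented for illustration and do not come from the study.

import numpy as np

rng = np.random.default_rng(3)

def suppression_depth(n_cycles=10, step=0.05, break_thresh=-1.0,
                      resupp_thresh=-1.6, sd=0.05):
    # Ramp contrast up while suppressed, down while visible; record the
    # contrast at each reversal (report). All values are log10 contrast.
    c, direction, reversals = -2.0, +1, []
    while len(reversals) < 2 * n_cycles:
        c += direction * step
        thresh = break_thresh if direction > 0 else resupp_thresh
        crossed = c >= thresh + rng.normal(0, sd) if direction > 0 \
            else c <= thresh + rng.normal(0, sd)
        if crossed:
            reversals.append(c)
            direction *= -1
    breakthroughs, resuppressions = reversals[0::2], reversals[1::2]
    return np.mean(breakthroughs) - np.mean(resuppressions)

print(round(suppression_depth(), 2))   # ~0.6 log units, by construction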

 

Awareness & Intention

 

A hierarchically structured sample space and its implications in visual awareness

 

Xuan Di, Michael Smithson, Martin Davies, Anne Aimola Davies

 

Australian National University

 

Email: xuan.di@anu.edu.au

 

Sample space refers to the set of all possible outcomes of a process. Recent studies argue that the sample space should be defined not only by the outcomes but also by the possible ways in which the outcomes are perceived (Smithson, 2021), and have stressed the necessity of a hierarchical structure in the sample space (Dominiak & Tserenjigmid, 2018). The current work examined subjectively reported awareness in a static inattentional blindness (IB) paradigm, and concluded that the mental reality of visual perception is better captured by a hierarchically structured sample space than by a flat one. We explored the differences in detection rates between the inhibited and the irrelevant semantic categories under paradigm priming. We recruited 30 participants who were experienced with the IB experiment and 15 participants who were unfamiliar with the paradigm. After the primary semantic category-based selective naming task, no participant detected the unexpected word in the centre of the visual field if the word was an exemplar of the inhibited category. However, experts could detect the irrelevant word, while “naïve” participants could not. The finding suggests that the appropriate sample space for this paradigm is hierarchical, with expertise relevance {yes, no} as the parent node: {yes{irrelevant}, no{inhibited, attended}}.

 

How context and kinematics interact in intention prediction: Insights from a qualitative study

 

Ayeh Alhasan, Michael J. Richardson, Nathan Caruana, Emily S. Cross  

 

Macquarie University, Western Sydney University

 

Email: ayeh.alhasan@hdr.mq.edu.au

 

Intention prediction often plays a crucial role in successful social interaction. Previous studies have attempted to understand this skill by focusing on the role of movement kinematics in isolation. However, this approach is limited, as the same kinematics typically map to multiple action possibilities (affordances) and, as a result, individuals also employ contextual information to predict others' intentions. Here we present preliminary findings from a qualitative study aimed at investigating intention prediction in naturalistic contexts. Participants viewed an individual reaching for a cup with one of two object-directed intentions: to drink or to clear the table. A third, non-object-directed intention was also included, in which the observed individual placed their hand on the table next to the cup. For each intention, the contextual information was varied by changing the environmental scene between (1) cups full of juice, (2) almost empty cups, and (3) half-empty cups. The findings reveal that participants perceived the cup's functional (most salient) affordance, drinking, as the intention so long as the movement kinematics specified an object-directed intention (drink or clear) within a context that clearly afforded it (full and half-empty cups). However, participants were also sensitive to the kinematic differences between the object-directed intentions when the context made the functional affordance seem improbable (almost empty cups).

 

Failure of cue integration may help to explain change blindness

 

Chenyan (Cheyanne) Gu, Reuben Rideaux, William J. Harrison

 

University of Queensland, University of Sydney

 

Email: cheyanne.gu@uq.edu.au

 

Our perception of the world is often based on integrating information from multiple senses, a process known as cue integration. Although cue integration has been shown to improve observers’ performance in various perceptual tasks, people still fail to notice changes in complex scenes, suggesting that the visual system may not always fully exploit multiple sources of information. The present study was therefore designed to test how well observers integrate multiple cues in a change detection task. In two experiments (N = 24 each), we tested whether observers perform optimally in detecting a change in colour and location of a flickering target spot. In three separate blocks, observers reported whether they detected a change in colour, a change in location, or a change in both colour and location of the target. Using signal detection theory, we tested if observers’ sensitivity in the both-change condition was the optimal integration of their sensitivities in the single-cue conditions. We found, however, that observers perform sub-optimally: in the both-change condition, their sensitivity was no better than their sensitivity in the single-cue conditions. Our data therefore show a profound failure of cue integration in change detection, which may help to explain change blindness more generally.
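For context, the optimal-integration benchmark used in such signal-detection analyses is often formalised, for independent Gaussian-noise cues, as the quadratic sum of the single-cue sensitivities. The snippet below illustrates that benchmark with invented numbers; it is a sketch of the standard formula, not the authors' analysis.

import numpy as np

d_colour, d_location = 1.2, 1.0              # single-cue sensitivities (assumed)
d_optimal = np.hypot(d_colour, d_location)   # sqrt(d_c**2 + d_l**2) ~ 1.56

d_observed = 1.15                            # illustrative "no better than single cues"
verdict = "sub-optimal integration" if d_observed < d_optimal else "near-optimal"
print(f"optimal benchmark: {d_optimal:.2f}, observed: {d_observed:.2f} -> {verdict}")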