Pupil Citation List
1
Year Published | Author(s) | Title | Abstract | Fields | URL | PDF | Journal/Conference | Keywords | Products
2
2020 | St-Onge, David; Anguiozar, N. U. | Measuring cognitive load: heart-rate variability and pupillometry assessment
Cognitive load covers a wide field of study that has attracted the interest of many disciplines, such as neuroscience, psychology and computer science, for decades. With the growing impact of human factors in robotics, many more are diving into the topic, looking, namely, for a way to adapt the control of an autonomous system to the cognitive load of its operator. Theoretically, this can be achieved from heart-rate variability measurements, brain-wave monitoring, pupillometry or even skin conductivity. This work introduces some recent algorithms to analyze the data from the first two and assess some of their limitations.
Robotics, HRI
https://www.researchgate.net/publication/337200745_Planetary_Exploration_with_Robot_Teams_Implementing_Higher_Autonomy_With_Swarm_Intelligence
https://www.researchgate.net/publication/337200745_Planetary_Exploration_with_Robot_Teams_Implementing_Higher_Autonomy_With_Swarm_Intelligence
MSECP'20 Workshop, ICMI '20 Companion, October 25–29, 2020, Virtual Event, Netherlands
cognitive load, pupillometry, heart-rate variability
core
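The entry above relies on heart-rate variability (HRV) as one input for estimating cognitive load. For orientation only, the snippet below computes RMSSD, a standard time-domain HRV feature, from a made-up series of R-R intervals; it is not one of the algorithms introduced in the paper.

```python
# Illustrative only: RMSSD, a common time-domain heart-rate-variability metric.
# The paper introduces its own algorithms; this is just a generic example of the
# kind of signal feature such work starts from. `rr_intervals_ms` is hypothetical data.
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                 # successive differences between beats
    return float(np.sqrt(np.mean(diffs ** 2)))

# Example: a short, made-up series of R-R intervals in milliseconds.
print(rmssd([812, 798, 805, 790, 820, 801]))
```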
3
2020Matti Krüger, Tom Driessen, Christiane B. Wiebel-Herboth, Joost C. F. de Winter, Heiko WersingFeeling Uncertain—Effects of a Vibrotactile Belt that Communicates Vehicle Sensor UncertaintyWith the rise of partially automated cars, drivers are more and more required to judge the degree of responsibility that can be delegated to vehicle assistant systems. This can be supported by utilizing interfaces that intuitively convey real-time reliabilities of system functions such as environment sensing. We designed a vibrotactile interface that communicates spatiotemporal information about surrounding vehicles and encodes a representation of spatial uncertainty in a novel way. We evaluated this interface in a driving simulator experiment with high and low levels of human and machine confidence respectively caused by simulated degraded vehicle sensor precision and limited human visibility range. Thereby we were interested in whether drivers (i) could perceive and understand the vibrotactile encoding of spatial uncertainty, (ii) would subjectively benefit from the encoded information, (iii) would be disturbed in cases of information redundancy, and (iv) would gain objective safety benefits from the encoded information. To measure subjective understanding and benefit, a custom questionnaire, Van der Laan acceptance ratings and NASA TLX scores were used. To measure the objective benefit, we computed the minimum time-to-contact as a measure of safety and gaze distributions as an indicator for attention guidance. Results indicate that participants were able to understand the encoded uncertainty and spatiotemporal information and purposefully utilized it when needed. The tactile interface provided meaningful support despite sensory restrictions. By encoding spatial uncertainties, it successfully extended the operating range of the assistance systemTransportation Safety, Automotive, HCIhttps://www.mdpi.com/2078-2489/11/7/353https://www.mdpi.com/2078-2489/11/7/353/pdfInformation 2020spatiotemporal displays; sensory augmentation; reliability display; uncertainty encoding; automotive hmi; human-machine cooperation; cooperative driver assistance; state transparency displaycore
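The Krüger et al. entry above uses the minimum time-to-contact as its objective safety measure. The sketch below shows the generic textbook computation (gap divided by closing speed, taken only while the gap is shrinking) on invented numbers; the paper's driving-simulator pipeline is not reproduced here.

```python
# Minimal sketch: minimum time-to-contact (TTC) from range and closing speed.
# Generic formulation, not the authors' simulator code; `gap_m` and
# `closing_speed_mps` are hypothetical time series.
import numpy as np

def min_ttc(gap_m, closing_speed_mps):
    gap = np.asarray(gap_m, dtype=float)
    closing = np.asarray(closing_speed_mps, dtype=float)
    ttc = np.full_like(gap, np.inf)
    approaching = closing > 0            # TTC is only defined while the gap is shrinking
    ttc[approaching] = gap[approaching] / closing[approaching]
    return float(ttc.min())

print(min_ttc([30.0, 25.0, 21.0, 18.0], [-1.0, 2.5, 3.0, 3.2]))
```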
4
2020Mingardi, Michele, Patrik Pluchino, Davide Bacchin, Chiara Rossato, and Luciano GamberiniAssessment of Implicit and Explicit Measures of Mental Workload in Working Situations: Implications for Industry 4.0Nowadays, in the context of Industry 4.0, advanced working environments aim at achieving a high degree of human–machine collaboration. This phenomenon occurs, on the one hand, through the correct interpretation of operators’ data by machines that can adapt their functioning to support workers, and on the other hand, by ensuring the transparency of the actions of the system itself. This study used an ad hoc system that allowed the co-registration of a set of participants’ implicit and explicit (I/E) data in two experimental conditions that varied in the level of mental workload (MWL). Findings showed that the majority of the considered I/E measures were able to discriminate the different task-related mental demands and some implicit measures were capable of predicting task performance in both tasks. Moreover, self-reported measures showed that participants were aware of such differences in MWL. Finally, the paradigm’s ecology highlights that task and environmental features may affect the reliability of the various I/E measures. Thus, these factors have to be considered in the design and development of advanced adaptive systems within the industrial context.Psychology, Cognitive Sciencehttps://doi.org/10.3390/app10186416Applied Sciences 10, no. 18implicit/explicit measures; human–machine interaction; symbiotic system; mental workload; eye tracking; heart rate; electrodermal activity; NASA-TLX; working taskscore
5
2020Michael A. Cohen, Thomas L. Botch, and Caroline E. RobertsonThe limits of color awareness during active, real-world visionColor ignites visual experience, imbuing the world with meaning, emotion, and richness. As soon as an observer opens their eyes, they have the immediate impression of a rich, colorful experience that encompasses their entire visual world. Here, we show that this impression is surprisingly inaccurate. We used head-mounted virtual reality (VR) to place observers in immersive, dynamic real-world environments, which they naturally explored via saccades and head turns. Meanwhile, we monitored their gaze with in-headset eye tracking and then systematically altered the visual environments such that only the parts of the scene they were looking at were presented in color and the rest of the scene (i.e., the visual periphery) was entirely desaturated. We found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color. This limitation on perceptual awareness could not be explained by retinal neuroanatomy or previous studies of peripheral visual processing using more traditional psychophysical approaches. In a second study, we measured color detection thresholds using a staircase procedure while a set of observers intentionally attended to the periphery. Still, we found that observers were unaware when a large portion of their field of view was desaturated. Together, these results show that during active, naturalistic viewing conditions, our intuitive sense of a rich, colorful visual world is largely incorrect.Psychology, Cognitive Science, Neurosciencehttps://doi.org/10.1073/pnas.1922294117https://www.pnas.org/content/pnas/early/2020/06/02/1922294117.full.pdfPNAS June 16, 2020 117 (24) 13821-13827; first published June 8, 2020vision, scenes, attention, color, virtual reality (VR)vr
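The desaturation manipulation described in the Cohen et al. entry above can be approximated offline in a few lines: keep color inside a window around the tracked gaze point and convert everything else to grayscale. This is a simplified, hedged illustration with made-up frame data, not the authors' VR rendering code.

```python
# Rough sketch of gaze-contingent peripheral desaturation: keep color inside a
# circular window around the current gaze point and convert the rest to gray.
# `image` is an RGB uint8 array; gaze is given in pixel coordinates.
import numpy as np

def desaturate_periphery(image, gaze_xy, radius_px):
    img = image.astype(float)
    luma = img @ np.array([0.299, 0.587, 0.114])          # per-pixel luminance
    gray = np.repeat(luma[..., None], 3, axis=2)
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    mask = (dist <= radius_px)[..., None]                  # True inside the color window
    return np.where(mask, img, gray).astype(np.uint8)

frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)  # fake video frame
out = desaturate_periphery(frame, gaze_xy=(160, 120), radius_px=40)
```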
6
2020 | Pavel Weber, Franca Rupprecht, Stefan Wiesen, Bernd Hamann, Achim Ebert | Assessing Cognitive Load via Pupillometry
A fierce search is underway for a reliable, non-intrusive, and real-time capable method for assessing a person's experienced cognitive load. Software systems capable of adapting their complexity to the mental demand of their users would be beneficial in a variety of domains. The only disclosed algorithm that seems to reliably detect cognitive load in pupillometry signals – the Index of Pupillary Activity (IPA) – has not yet been sufficiently validated. We take a first step in validating the IPA by applying it to a working memory experiment with finely granulated levels of difficulty, and comparing the results to traditional pupillometry metrics analyzed in cognitive research. Our findings confirm the significant positive correlation between task difficulty and IPA reported by the authors.
Cognitive Science
https://web.cs.ucdavis.edu/~hamann/WeberRupprechtWiesenHamannEbertCSCE2020ACC2020PaperAsSubmitted04252020.pdf
Cognitive Load, Pupillometry, Index of Pupillary Activity (IPA), Eye Tracking, Working Memory
core
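The Index of Pupillary Activity discussed above is, broadly, a wavelet-based count of rapid pupil-diameter changes per unit time. The sketch below conveys only that general flavor; the wavelet, decomposition level, and threshold are assumptions for illustration and do not reproduce the published IPA definition.

```python
# Loose sketch only: counting rapid pupil-diameter changes in a wavelet detail
# band, in the spirit of the Index of Pupillary Activity discussed above. The
# wavelet, level, and threshold below are illustrative assumptions, not the
# published IPA definition; consult the original IPA paper for the exact method.
import numpy as np
import pywt

def rapid_pupil_change_rate(pupil_diameter, duration_s, wavelet="sym16", level=2):
    coeffs = pywt.wavedec(np.asarray(pupil_diameter, float), wavelet, level=level)
    detail = coeffs[-1]                                  # finest detail coefficients
    n = detail.size
    thresh = np.std(detail) * np.sqrt(2.0 * np.log(n))   # universal-style threshold
    count = int(np.sum(np.abs(detail) > thresh))         # surviving "activity" events
    return count / duration_s                            # events per second

rate = rapid_pupil_change_rate(np.random.randn(1200) * 0.05 + 3.5, duration_s=10.0)
```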
7
2020Moritz Stolte, Benedikt Gollan, Ulrich AnsorgeTracking visual search demands and memory load through pupil dilationContinuously tracking cognitive demands via pupil dilation is a desirable goal for the monitoring and investigation of cognitive performance in applied settings where the exact time point of mental engagement in a task is often unknown. Yet, hitherto no experimentally validated algorithm exists for continuously estimating cognitive demands based on pupil size. Here, we evaluated the performance of a continuously operating algorithm that is agnostic of the onset of the stimuli and derives them by way of retrospectively modeling attentional pulses (i.e., onsets of processing). We compared the performance of this algorithm to a standard analysis of stimulus-locked pupil data. The pupil data were obtained while participants performed visual search (VS) and visual working memory (VWM) tasks with varying cognitive demands. In Experiment 1, VS was performed during the retention interval of the VWM task to assess interactive effects between search and memory load on pupil dilation. In Experiment 2, the tasks were performed separately. The results of the stimulus-locked pupil data demonstrated reliable increases in pupil dilation due to high VWM load. VS difficulty only affected pupil dilation when simultaneous memory demands were low. In the single task condition, increased VS difficulty resulted in increased pupil dilation. Importantly, online modeling of pupil responses was successful on three points. First, there was good correspondence between the modeled and stimulus locked pupil dilations. Second, stimulus onsets could be approximated from the derived attentional pulses to a reasonable extent. Third, cognitive demands could be classified above chance level from the modeled pupil traces in both tasks.Cognitive Sciencehttps://jov.arvojournals.org/article.aspx?articleid=2770209https://jov.arvojournals.org/article.aspx?articleid=2770209Journal of Vision June 2020, Vol.20, 21pupillometry, cognitive load, visual search, continuouscore
8
2020Cherie Zhou, Monicque M. Lorist, Sebastiaan MathotEye movements in real-life search are guided by task-irrelevant working-memory contentAttention is automatically guided towards stimuli that match the contents of working memory. This has been studied extensively using simplified computer tasks, but it has never been investigated whether (yet often assumed that) memory-driven guidance also affects real-life search. Here we tested this open question in a naturalistic environment that closely resembles real life. In two experiments, participants wore a mobile eye-tracker, and memorized a color, prior to a search task in which they looked for a target word among book covers on a bookshelf. The memory color was irrelevant to the search task. Nevertheless, we found that participants' gaze was strongly guided towards book covers that matched the memory color. Crucially, this memory-driven guidance was evident from the very start of the search period. These findings support that attention is guided towards working-memory content in real-world search, and that this is fast and therefore likely reflecting an automatic process.Cognitive Science
https://doi.org/10.1101/2020.05.18.101410
https://www.biorxiv.org/content/10.1101/2020.05.18.101410v1.full.pdf
bioRxiv preprint
attentional capture; visual search; working memory; real-life search
core
9
2020Lukas Wöhle, Marion GebhardSteadEye-Head—Improving MARG-Sensor Based Head Orientation Measurements Through Eye Tracking DataThis paper presents the use of eye tracking data in Magnetic AngularRate Gravity (MARG)-sensor based head orientation estimation. The approach presented here can be deployed in any motion measurement that includes MARG and eye tracking sensors (e.g., rehabilitation robotics or medical diagnostics). The challenge in these mostly indoor applications is the presence of magnetic field disturbances at the location of the MARG-sensor. In this work, eye tracking data (visual fixations) are used to enable zero orientation change updates in the MARG-sensor data fusion chain. The approach is based on a MARG-sensor data fusion filter, an online visual fixation detection algorithm as well as a dynamic angular rate threshold estimation for low latency and adaptive head motion noise parameterization. In this work we use an adaptation of Madgwicks gradient descent filter for MARG-sensor data fusion, but the approach could be used with any other data fusion process. The presented approach does not rely on additional stationary or local environmental references and is therefore self-contained. The proposed system is benchmarked against a Qualisys motion capture system, a gold standard in human motion analysis, showing improved heading accuracy for the MARG-sensor data fusion up to a factor of 0.5 while magnetic disturbance is present.Medical
https://www.mdpi.com/1424-8220/20/10/2759/htm
Sensors 20, no. 10 (2020)
data fusion; MARG; IMU; eye tracker; self-contained; head motion measurement
core
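The Wöhle and Gebhard entry above feeds visual fixations into a MARG filter as "zero orientation change" evidence. The toy loop below illustrates just that gating idea on a single yaw axis, with hypothetical thresholds and names; it is not the paper's adapted Madgwick filter.

```python
# Illustrative gating only: skip heading integration while the wearer fixates and
# the gyro rate is below a threshold ("zero orientation change" update idea).
# Thresholds, names, and the single-axis simplification are assumptions.
import numpy as np

def integrate_heading(gyro_z_rps, fixating, dt, rate_threshold_rps=0.05):
    """gyro_z_rps: yaw rate samples (rad/s); fixating: bool per sample."""
    heading = 0.0
    trace = []
    for rate, fix in zip(gyro_z_rps, fixating):
        hold = fix and abs(rate) < rate_threshold_rps   # visual fixation + near-zero rate
        if not hold:
            heading += rate * dt                        # normal gyro integration
        trace.append(heading)
    return np.array(trace)

yaw = integrate_heading([0.01, 0.02, 0.8, 0.7, 0.01],
                        [True, True, False, False, True], dt=0.01)
```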
10
2020Diederick C. Niehorster, Thiago Santini, Roy S. Hessels, Ignace TC Hooge, Enkelejda Kasneci, Marcus NyströmThe impact of slippage on the data quality of head-worn eye trackersMobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant’s head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs’ Pupil in 3D mode, and (iv) Pupil-Labs’ Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1∘ increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of its characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.Eye Tracking Algorithms
https://link.springer.com/article/10.3758/s13428-019-01307-0
Behavior Research Methods (2020)
core
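For context on the slippage results above: gaze deviation is typically reported as the angle between recorded gaze directions and a known target direction. A minimal, generic version of that computation is sketched below; it is not the authors' analysis code.

```python
# Minimal sketch: angular deviation (degrees) between gaze direction vectors and
# a known target direction. Generic computation on made-up unit-ish vectors.
import numpy as np

def angular_deviation_deg(gaze_dirs, target_dir):
    g = np.asarray(gaze_dirs, float)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)     # unit gaze vectors
    t = np.asarray(target_dir, float)
    t = t / np.linalg.norm(t)
    cosines = np.clip(g @ t, -1.0, 1.0)
    return np.degrees(np.arccos(cosines))                # per-sample deviation

dev = angular_deviation_deg([[0.02, 0.01, 1.0], [0.0, 0.03, 1.0]], [0.0, 0.0, 1.0])
print(dev.mean())   # mean offset ~ "accuracy"; its spread relates to precision
```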
11
2020 | David Vetturi, Michela Tiboni, Giulio Maternini, Michela Bonera | Use of eye tracking device to evaluate the driver's behaviour and the infrastructures quality in relation to road safety
Eye tracking makes it possible to obtain important elements regarding drivers' behaviour during their driving activity, by employing a device that monitors the movements of the eye and therefore of the user's observation point. This paper explains how analysing driver behaviour through eye movements permits an evaluation of infrastructure quality in terms of road safety.

Driver behaviour analyses have been conducted in urban areas, examining the observation targets (cars, pedestrians, road signs, distraction elements) in quantitative terms (time spent fixating each individual target). In particular, roundabout intersections and rectilinear segments of urban arterials were examined, and records of seven drivers' behaviour were collected in order to have significant statistical variability. Only young people were considered in this study.

The analyses carried out have made it possible to assess how different types of infrastructure influence the behaviour of road users, in terms of safety performance given by their design.

In particular, quantitative analyses were carried out on driving time dedicated to observing attention targets rather than distraction targets. From a statistical point of view, the relationship between driver characteristics, weather conditions and infrastructure on the one hand, and driving behaviour (travelling speed and attention/inattention time) on the other, was analysed using the ANOVA method.
Transportation Safety, Automotive
https://doi.org/10.1016/j.trpro.2020.03.053
https://www.sciencedirect.com/science/article/pii/S2352146520302155/pdf?md5=1f493572aa49a927a890377ad1ac1ce5&pid=1-s2.0-S2352146520302155-main.pdf
Transportation Research Procedia, Volume 45, 2020, Pages 587-595
eye tracking, driver behaviour, road safety, infrastructure quality
core
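The statistical step named in the abstract above is an ANOVA relating driver, weather, and infrastructure factors to driving behaviour. As a hedged illustration, the snippet below runs a one-way ANOVA on an invented "share of time on attention targets" measure for two road types; the study's actual design has more factors and responses.

```python
# Hedged illustration of the ANOVA step: comparing a made-up "share of driving
# time on attention targets" across two infrastructure types. All numbers are
# invented; the study's real analysis also includes driver and weather factors.
from scipy import stats

roundabout = [0.61, 0.58, 0.66, 0.63, 0.60, 0.64, 0.59]
straight   = [0.52, 0.55, 0.49, 0.57, 0.50, 0.54, 0.53]

f_stat, p_value = stats.f_oneway(roundabout, straight)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```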
12
2020 | Aayush K. Chaudhary, Jeff B. Pelz | Privacy-Preserving Eye Videos using Rubber Sheet Model
Video-based eye trackers estimate gaze based on eye images/videos. As security and privacy concerns loom over technological advancements, tackling such challenges is crucial. We present a new approach to handle privacy issues in eye videos by replacing the current identifiable iris texture with a different iris template in the video capture pipeline based on the Rubber Sheet Model. We extend to image blending and median-value representations to demonstrate that videos can be manipulated without significantly degrading segmentation and pupil detection accuracy.
HCI, Privacy
https://arxiv.org/abs/2004.01792
https://arxiv.org/pdf/2004.01792.pdf
ETRA 2020 Preprint: June 2-5, 2020
Privacy, Security, Eye tracking, Iris Recognition, Rubber Sheet Model, Eye Segmentation
core
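The Rubber Sheet Model named above is the standard polar normalization of the iris annulus between the pupil and limbus boundaries. The sketch below is a generic unwrapping of that kind, assuming the two circles are already known; the paper's in-pipeline iris-texture replacement is not attempted here, and the function and parameters are illustrative.

```python
# Generic "rubber sheet" (polar) unwrapping of the iris annulus between the pupil
# and iris boundaries. Circle centers/radii are assumed to be known; this is an
# illustration of the normalization, not the authors' privacy pipeline.
import numpy as np

def unwrap_iris(eye_img, center, r_pupil, r_iris, out_h=32, out_w=256):
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0, 1, out_h)
    out = np.zeros((out_h, out_w), dtype=eye_img.dtype)
    for i, r in enumerate(radii):
        rad = r_pupil + r * (r_iris - r_pupil)          # interpolate between boundaries
        xs = np.clip((cx + rad * np.cos(thetas)).astype(int), 0, eye_img.shape[1] - 1)
        ys = np.clip((cy + rad * np.sin(thetas)).astype(int), 0, eye_img.shape[0] - 1)
        out[i] = eye_img[ys, xs]                         # nearest-neighbour sampling
    return out

strip = unwrap_iris(np.random.randint(0, 255, (480, 640), dtype=np.uint8),
                    center=(320, 240), r_pupil=40, r_iris=110)
```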
13
2020 | Aunnoy K Mutasim, Anil Ufuk Batmaz, Wolfgang Stuerzlinger | Gaze Tracking for Eye-Hand Coordination Training Systems in Virtual Reality
Eye-hand coordination training systems are used to improve user performance during fast movements in sports training. In this work, we explored gaze tracking in a Virtual Reality (VR) sports training system with a VR headset. Twelve subjects performed a pointing study with or without passive haptic feedback. Results showed that subjects spent an average of 0.55 s to visually find and another 0.25 s before their finger selected a target. We also identified that passive haptic feedback did not increase the performance of the user. Moreover, gaze tracker accuracy significantly deteriorated when subjects looked below their eye level. Our results also point out that practitioners/trainers should focus on reducing the time spent on searching for the next target to improve their performance through VR eye-hand coordination training systems. We believe that current VR eye-hand coordination training systems are ready to be evaluated with athletes.
HCI
https://dl.acm.org/doi/abs/10.1145/3334480.3382924
https://dl.acm.org/doi/pdf/10.1145/3334480.3382924
CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, April 2020, Pages 1–8
Eye-Hand Coordination Training System; Virtual Reality; Gaze Tracking; Hand Tracking; Speed and Precision
vr
14
2020 | Lisa-Marie Vortmann, Felix Putze | Attention-Aware Brain Computer Interface to avoid Distractions in Augmented Reality
Recently, the idea of using BCIs in Augmented Reality settings to operate systems has emerged. One problem of such head-mounted displays is the distraction caused by an unavoidable display of control elements even when focused on internal thoughts. In this project, we reduced this distraction by including information about the current attentional state. A multimodal smart-home environment was altered to adapt to the user's state of attention. The system only responded if the attentional orientation was classified as "external". The classification was based on multimodal EEG and eye tracking data. Seven users tested the attention-aware system in comparison to the unaware system. We show that the adaptation of the interface improved the usability of the system. We conclude that more systems would benefit from awareness of the user's ongoing attentional state.
HCI
https://dl.acm.org/doi/abs/10.1145/3334480.3382889
https://dl.acm.org/doi/pdf/10.1145/3334480.3382889
CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, April 2020, Pages 1–8
BCI; EEG; Eye-Tracking; Augmented Reality
ar
15
2020 | Ulrik Günther, Kyle IS Harrington, Raimund Dachselt, Ivo F. Sbalzarini | Bionic Tracking: Using Eye Tracking to Track Biological Cells in Virtual Reality
We present Bionic Tracking, a novel method for solving biological cell tracking problems with eye tracking in virtual reality
using commodity hardware. Using gaze data, and especially smooth pursuit eye movements, we are able to track cells in time series
of 3D volumetric datasets. The problem of tracking cells is ubiquitous in developmental biology, where large volumetric microscopy
datasets are acquired on a daily basis, often comprising hundreds or thousands of time points that span hours or days. The image
data, however, is only a means to an end, and scientists are often interested in the reconstruction of cell trajectories and cell lineage
trees. Reliably tracking cells in crowded three-dimensional space over many timepoints remains an open problem, and many current
approaches rely on tedious manual annotation and curation. In our Bionic Tracking approach, we substitute the usual 2D point-and-click
annotation to track cells with eye tracking in a virtual reality headset, where users simply have to follow a cell with their eyes in 3D space
in order to track it. We detail the interaction design of our approach and explain the graph-based algorithm used to connect different
time points, also taking occlusion and user distraction into account. We demonstrate our cell tracking method using the example of
two different biological datasets. Finally, we report on a user study with seven cell tracking experts, demonstrating the benefits of our
approach over manual point-and-click tracking, with an estimated 2- to 10-fold speedup.
Biology, UI/UX
https://arxiv.org/abs/2005.00387
https://arxiv.org/pdf/2005.00387.pdf
arXiv preprint arXiv:2005.00387
virtual reality, eye tracking, cell tracking, visualization
vr
16
2020 | Ayush Kumar, Debesh Mohanty, Kuno Kurzhals, Fabian Beck, Daniel Weiskopf, Klaus Mueller | Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data
Eye movement data analysis plays an important role in examining human cognitive processes and perceptions. Such analysis at times needs data recording from additional sources too during experiments. In this paper, we study a pair programming based collaboration using two eye trackers, stimulus recording, and an external camera recording. To analyze the collected data, we introduce the EyeSAC system that synchronizes the data from different sources and that removes the noisy and missing gazes from eye tracking data with the help of visual feedback from the external recording. The synchronized and cleaned data is further annotated using our system and then exported for further analysis.
Eye tracking Algorithms
https://www.researchgate.net/publication/340493435_Demo_of_the_EyeSAC_System_for_Visual_Synchronization_Cleaning_and_Annotation_of_Eye_Movement_Data
12th ACM Symposium on Eye Tracking Research & Applications (ETRA'20 Adjunct)
Annotation, eye tracking, visualization, altering, synchronization, denoising
core
17
2020Sang Yoon Han, Hyuk Jin Kwon, Yoonsik Kim, and Nam Ik ChoNoise-Robust Pupil Center Detection Through CNN-Based Segmentation With Shape-Prior LossDetecting the pupil center plays a key role in human-computer interaction, especially for gaze tracking. The conventional deep learning-based method for this problem is to train a convolutional neural network (CNN), which takes the eye image as the input and gives the pupil center as a regression result. In this paper, we propose an indirect use of the CNN for the task, which first segments the pupil region by a CNN as a classification problem, and then finds the center of the segmented region. This is based on the observation that CNN works more robustly for the pupil segmentation than for the pupil center-point regression when the inputs are noisy IR images. Specifically, we use the UNet model for the segmentation of pupil regions in IR images and then find the pupil center as the center of mass of the segment. In designing the loss function for the segmentation, we propose a new loss term that encodes the convex shape-prior for enhancing the robustness to noise. Precisely, we penalize not only the deviation of each predicted pixel from the ground truth label but also the non-convex shape of pupils caused by the noise and reflection. For the training, we make a new dataset of 111,581 images with hand-labeled pupil regions from 29 IR eye video sequences. We also label commonly used datasets ( ExCuSe and ElSe dataset) that are considered real-world noisy ones to validate our method. Experiments show that the proposed method performs better than the conventional methods that directly find the pupil center as a regression result.Eye tracking Algorithms
https://ieeexplore.ieee.org/abstract/document/9055424
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9055424
IEEE Access 8 (2020)
Convex shape prior, deep learning, pupil segmentation, U-Net
core
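The second stage of the method above, taking the pupil centre as the centre of mass of the segmented region, is simple to state in code. The sketch below does exactly that on a stand-in probability map; the U-Net itself and the shape-prior loss are not reproduced.

```python
# Small sketch: pupil centre as the centre of mass of a thresholded segmentation.
# `prob_map` stands in for the output of a segmentation network.
import numpy as np

def pupil_center_from_mask(prob_map, threshold=0.5):
    mask = prob_map > threshold
    if not mask.any():
        return None                                     # no pupil detected
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())           # (x, y) centre of mass

prob_map = np.zeros((120, 160))
prob_map[50:70, 80:100] = 0.9                           # fake segmented pupil blob
print(pupil_center_from_mask(prob_map))                 # ~ (89.5, 59.5)
```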
18
2020 | Katharina Krösl, Carmine Elvezio, Matthias Hürbe, Sonja Karst, Steven Feiner, Michael Wimmer | XREye: Simulating Visual Impairments in Eye-Tracked XR
Many people suffer from visual impairments, which can be difficult for patients to describe and others to visualize. To aid in understanding what people with visual impairments experience, we demonstrate a set of medically informed simulations in eye-tracked XR of several common conditions that affect visual perception: refractive errors (myopia, hyperopia, and presbyopia), cornea disease, and age-related macular degeneration (wet and dry).
Medical, Computer Graphics
https://doi.org/10.1109/VRW50115.2020.00266
https://www.cg.tuwien.ac.at/research/publications/2020/kroesl-2020-XREye/kroesl-2020-XREye-extended%20abstract.pdf
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Computing methodologies, Computer Graphics, Graphics systems and interfaces, Perception, Applied computing, Life and medical sciences, Health informatics
vr
19
2020 | Nicholas DA Thomas, James D. Gardiner, Robin H. Crompton, Rebecca Lawson | Physical and perceptual measures of walking surface complexity strongly predict gait and gaze behaviour
Background
Walking surfaces vary in complexity and are known to affect stability and fall risk whilst walking. However, existing studies define surfaces through descriptions only.

Objective
This study used a multimethod approach to measure surface complexity in order to try to characterise surfaces with respect to locomotor stability.

Methods
We assessed how physical measurements of walking surface complexity compared to participant's perceptual ratings of the effect of complexity on stability. Physical measurements included local slope measures from the surfaces themselves and shape complexity measured using generated surface models. Perceptual measurements assessed participants' perceived stability and surface roughness using Likert scales. We then determined whether these measurements were indicative of changes to stability as assessed by behavioural changes including eye angle, head pitch angle, muscle coactivation, walking speed and walking smoothness.

Results
Physical and perceptual measures were highly correlated, with more complex surfaces being perceived as more challenging to stability. Furthermore, complex surfaces, as defined from both these measurements, were associated with lowered head pitch, increased muscle coactivation and reduced walking smoothness.

Significance
Our findings show that walking surfaces defined as complex, based on physical measurements, are perceived as more challenging to our stability. Furthermore, certain behavioural measures relate better to these perceptual and physical measures than others. Crucially, for the first time this study defined walking surfaces objectively rather than just based on subjective descriptions. This approach could enable future researchers to compare results across walking surface studies. Moreover, perceptual measurements, which can be collected easily and efficiently, could be used as a proxy for estimating behavioural responses to different surfaces. This could be particularly valuable when determining risk of instability when walking for individuals with compromised stability.

Health and Safety
https://www.sciencedirect.com/science/article/pii/S0167945719306232
Human Movement Science 71 (2020)
Stability; Surface complexity; Gait; Measures
core
20
2020 | Adithya Balasubramanyam, Ashok Kumar Patil, Young Ho Chai | GazeGuide: An Eye-Gaze-Guided Active Immersive UAV Camera
Over the years, gaze input modality has been an easy and demanding human–computer
interaction (HCI) method for various applications. The research of gaze-based interactive applications
has advanced considerably, as HCIs are no longer constrained to traditional input devices. In this
paper, we propose a novel immersive eye-gaze-guided camera (called GazeGuide) that can seamlessly
control the movements of a camera mounted on an unmanned aerial vehicle (UAV) from the eye-gaze
of a remote user. The video stream captured by the camera is fed into a head-mounted display
(HMD) with a binocular eye tracker. The user’s eye-gaze is the sole input modality to maneuver
the camera. A user study was conducted considering the static and moving targets of interest in a
three-dimensional (3D) space to evaluate the proposed framework. GazeGuide was compared with a
state-of-the-art input modality remote controller. The qualitative and quantitative results showed
that the proposed GazeGuide performed significantly better than the remote controller.
HCI
https://www.mdpi.com/2076-3417/10/5/1668/pdf
Applied Sciences 10, no. 5 (2020)
eye tracking; HRI; eye-gaze; gaze-based interaction; HMD; robotics; gaze input; virtual reality; surveillance and monitoring
core
21
2020 | Jakub Krukar, Antonia van Eek | The Impact of Indoor/Outdoor Context on Smartphone Interaction During Walking
It is unclear how users' interaction patterns with personal technology change as they move between indoor and outdoor spaces. Understanding the impact of indoor/outdoor context could help to improve adaptive user interfaces of location-based services. We present a field experiment in which participants were asked to complete a cognitive task appearing on a smartphone while walking subsequent indoor and outdoor route segments.
HCI, Cognitive Science
https://www.researchgate.net/publication/336055767_The_Impact_of_IndoorOutdoor_Context_on_Smartphone_Interaction_During_Walking
https://www.researchgate.net/profile/Jakub_Krukar/publication/336055767_The_Impact_of_IndoorOutdoor_Context_on_Smartphone_Interaction_During_Walking/links/5d8c264fa6fdcc25549a4bf6/The-Impact-of-Indoor-Outdoor-Context-on-Smartphone-Interaction-During-Walking.pdf
22nd AGILE Conference on Geo-information Science
user context, smartphone interaction, eye movement
core
22
2020 | Brendan John, Sophie Jorg, Sanjeev Koppal, Eakta Jain | The Security-Utility Trade-off for Iris Authentication and Eye Animation for Social Virtual Avatars
The gaze behavior of virtual avatars is critical to social presence and perceived eye contact during social interactions in
Virtual Reality. Virtual Reality headsets are being designed with integrated eye tracking to enable compelling virtual social interactions.
This paper shows that the near infra-red cameras used in eye tracking capture eye images that contain iris patterns of the user. Because
iris patterns are a gold standard biometric, the current technology places the user’s biometric identity at risk. Our first contribution
is an optical defocus based hardware solution to remove the iris biometric from the stream of eye tracking images. We characterize
the performance of this solution with different internal parameters. Our second contribution is a psychophysical experiment with a
same-different task that investigates the sensitivity of users to a virtual avatar’s eye movements when this solution is applied. By
deriving detection threshold values, our findings provide a range of defocus parameters where the change in eye movements would go
unnoticed in a conversational setting. Our third contribution is a perceptual study to determine the impact of defocus parameters on the
perceived eye contact, attentiveness, naturalness, and truthfulness of the avatar. Thus, if a user wishes to protect their iris biometric,
our approach provides a solution that balances biometric protection while preventing their conversation partner from perceiving a
difference in the user’s virtual avatar. This work is the first to develop secure eye tracking configurations for VR/AR/XR applications and
motivates future work in the area.
HCI, Privacy
https://ieeexplore.ieee.org/abstract/document/8998133
https://arxiv.org/pdf/2003.04250.pdf
IEEE 2020 Report
Security, Eye Tracking, Iris Recognition, Animated Avatars, Eye Movements
core
23
2020Christian Hirt, Marcel Eckard, Andreas KunzStress generation and non-intrusive measurement in virtual environments using eye trackingIn real life, it is well understood how stress can be induced and how it is measured. While virtual reality (VR) applications can resemble such stress inducers, it is still an open question if and how stress can be measured in a non-intrusive way during VR exposure. Usually, the quality of VR applications is estimated by user acceptance in the form of presence. Presence itself describes the individual’s acceptance of a virtual environment as real and is measured by specific questionnaires. Accordingly, it is expected that stress strongly affects this presence and thus also the quality assessment. Consequently, identifying the stress level of a VR user may enable content creators to engage users more immersively by adjusting the virtual environment to the measured stress. In this paper, we thus propose to use a commercially available eye tracking device to detect stress while users are exploring a virtual environment. We describe a user study in which a VR task was implemented to induce stress, while users’ pupil diameter and pulse were measured and evaluated against a self-reported stress level. The results show a statistically significant correlation between self-reported stress and users’ pupil dilation and pulse, indicating that stress measurements can indeed be conducted during the use of a head-mounted display. If this indication can be successfully proven in a larger scope, it will open up a new era of affective VR applications using individual and dynamic adjustments in the virtual environment.Professional Performance, Pupillometry
https://doi.org/10.1007/s12652-020-01845-y
https://link.springer.com/content/pdf/10.1007/s12652-020-01845-y.pdf
Journal of Ambient Intelligence and Humanized Computing, 2020
Virtual reality, Stress generation, Stress measurement, Virtual stressors, Eye tracking, VR application
vr
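The Hirt et al. entry above reports a significant correlation between self-reported stress and pupil dilation/pulse. Purely as an illustration of that kind of check, the snippet below correlates invented per-condition pupil diameters with invented stress ratings; the paper's statistics are more involved.

```python
# Illustrative correlation check: relating per-condition pupil diameter (or pulse)
# to self-reported stress ratings. All numbers are invented.
from scipy import stats

mean_pupil_mm = [3.1, 3.4, 3.9, 4.2, 3.6, 4.4]
self_report   = [2, 3, 5, 6, 4, 7]          # e.g. Likert-style stress ratings

r, p = stats.pearsonr(mean_pupil_mm, self_report)
print(f"r = {r:.2f}, p = {p:.3f}")
```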
24
2020 | Tim Fischer, Christoph Schmid, Martin Kompis, Georgios Mantokoudis, Marco Caversaccio, Wilhelm Wimmer | Pinna-Imitating Microphone Directionality Improves Sound Localization and Discrimination in Bilateral Cochlear Implant Users
To compare the sound-source localization, discrimination and tracking performance of bilateral cochlear implant users with omnidirectional (OMNI) and pinna-imitating (PI) microphone directionality modes.
Medical
https://doi.org/10.1101/2020.03.05.20023937
https://www.medrxiv.org/content/medrxiv/early/2020/03/06/2020.03.05.20023937.full.pdf
medRxiv preprint (2020)
cone of confusion; auricular cues; pinna effect; auditory motion; binaural cues; sound localization; sound source tracking; bilateral cochlear implants
core
25
2020 | Indu P Bodala, Bing Cai Kok, Weicong Sng, Harold Soh | Modeling the Interplay of Trust and Attention in HRI: an Autonomous Vehicle Study
In this work, we study and model how two factors of human cognition, trust and attention, affect the way humans interact with autonomous vehicles. We develop a probabilistic model that succinctly captures how trust and attention evolve across time to drive behavior, and present results from a human-subjects experiment where participants interacted with a simulated autonomous vehicle while engaging with a secondary task. Our main findings suggest that trust affects attention, which in turn affects the human's decision to intervene with the autonomous vehicle.
Robotics, Computer Science, HRI
https://haroldsoh.com/wp-content/uploads/2020/02/Trust_SitAware_HRI20.pdf
probabilistic models; trust; attention; autonomous vehicles
core
26
2020Wen-Chin Li, Andreas Horn, Zhen Sun, Jingyi Zhang Graham BraithwaiteAugmented visualization cues on primary flight display facilitating pilot's monitoring performanceThere have been many aviation accidents and incidents related to mode confusion on the flight deck. The aim of this research is to evaluate human-computer interactions on a newly designed augmented visualization Primary Flight Display (PFD) compared with the traditional design of PFD. Based on statistical analysis of 20 participants interaction with the system, there are significant differences on pilots’ pupil dilation, fixation duration, fixation counts and mental demand between the traditional PFD design and augmented PFD. The results demonstrated that augmented visualisation PFD, which uses a green border around the “raw data” of airspeed, altitude or heading indications to highlight activated mode changes, can significantly enhance pilots’ situation awareness and decrease perceived workload. Pilots can identify the status of flight modes more easily, rapidly and accurately compared to the traditional PFD, thus shortening the response time on cognitive information processing. This could also be the reason why fixation durations on augmented PFDs were significantly shorter than traditional PFDs. The augmented visualization in the flight deck improves pilots’ situation awareness as indicated by increased fixation counts related to attention distribution. Simply highlighting the parameters on the PFD with a green border in association with relevant flight mode changes will greatly reduce pilots’ perceived workload and increase situation awareness. Flight deck design must focus on methods to provide pilots with enhanced situation awareness, thus decreasing cognitive processing requirements by providing intuitive understanding in time limited situations.HCI, Aviation, Professional Performance, Transportation Safety
https://www.researchgate.net/profile/Wen-Chin_Li/publication/337248475_Augmented_Visualization_Cues_on_Primary_Flight_Display_Facilitating_Pilot's_Monitoring_Performance/links/5de4cb86299bf10bc337702d/Augmented-Visualization-Cues-on-Primary-Flight-Display-Facilitating-Pilots-Monitoring-Performance
International Journal of Human-Computer Studies 135 (2020): 102377
Augmented Visualization; Attention Distribution; Flight Deck Design; Human-Computer Interaction; Situation Awareness
core
27
2020Marjan P. Hagenzieker, Sander van der Kint, Luuk Vissers, Ingrid NL G. van Schagen, Jonathan de Bruin, Paul van Gent, Jacques JF CommandeurInteractions between cyclists and automated vehicles: Results of a photo experimentCyclists may have incorrect expectations of the behaviour of automated vehicles in interactions with them, which could bring extra risks in traffic. This study investigated whether expectations and behavioural intentions of cyclists when interacting with automated cars differed from those with manually driven cars. A photo experiment was conducted with 35 participants who judged bicycle–car interactions from the perspective of the cyclist. Thirty photos were presented. An experimental design was used with between-subjects factor instruction (two levels: positive, neutral), and two within-subjects factors: car type (three levels: roof name plate, sticker – these two external features indicated automated cars; and traditional car), and series (two levels: first, second). Participants were asked how sure they were to be noticed by the car shown in the photos, whether the car would stop, and how they would behave themselves. A subset of nine participants was equipped with an eye-tracker. Findings generally point to cautious dispositions towards automated cars: participants were not more confident to be noticed when interacting with both types of automated cars than with manually driven cars. Participants were more confident that automated cars would stop for them during the second series and looked significantly longer at automated cars during the first.

Health and Safety, Transportation Safety
https://www.tandfonline.com/doi/full/10.1080/19439962.2019.1591556?af=R
https://www.tandfonline.com/doi/abs/10.1080/19439962.2019.1591556?needAccess=true#aHR0cHM6Ly93d3cudGFuZGZvbmxpbmUuY29tL2RvaS9wZGYvMTAuMTA4MC8xOTQzOTk2Mi4yMDE5LjE1OTE1NTY/bmVlZEFjY2Vzcz10cnVlQEBAMA==
Journal of Transportation Safety & Security 12, no. 1 (2020): 94-115
cyclists, automated driving, autonomous vehicles, road safety, interaction, expectations, road user behaviour, external features
core
28
2020Efe Bozkir, Onur Günlü, Wolfgang Fuhl, Rafael F. Schaefer, Enkelejda KasneciDifferential Privacy for Eye Tracking with Temporal CorrelationsHead mounted displays bring eye tracking into daily use and this raises privacy concerns for users. Privacy-preservation techniques such as differential privacy mechanisms are recently applied to the eye tracking data obtained from such displays; however, standard differential privacy mechanisms are vulnerable to temporal correlations in the eye movement features. In this work, a transform coding based differential privacy mechanism is proposed for the first time in the eye tracking literature to further adapt it to statistics of eye movement feature data by comparing various low-complexity methods. Fourier Perturbation Algorithm, which is a differential privacy mechanism, is extended and a scaling mistake in its proof is corrected. Significant reductions in correlations in addition to query sensitivities are illustrated, which provide the best utility-privacy trade-off in the literature for the eye tracking dataset used. The differentially private eye movement data are evaluated also for classification accuracies for gender and document-type predictions to show that higher privacy is obtained without a reduction in the classification accuracies by using proposed methods.HCI
https://arxiv.org/abs/2002.08972
https://arxiv.org/pdf/2002.08972.pdf
arXiv preprint arXiv:2002.08972 (2020)
Eye tracking, Differential Privacy, Eye movements, Privacy protection, Virtual reality
vr
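The mechanism family discussed above perturbs a small number of Fourier coefficients of an eye-movement feature series and reconstructs a noisy signal. The sketch below shows only where Laplace noise enters in such a scheme; the noise scale is a placeholder rather than the calibrated sensitivity derived in the paper, so treat it as an illustration, not a privacy guarantee.

```python
# Very rough sketch of the Fourier-perturbation idea: keep a few DFT coefficients
# of a feature time series, add Laplace noise, and reconstruct. The noise scale is
# a placeholder, NOT the paper's calibrated sensitivity analysis.
import numpy as np

def noisy_fourier_reconstruction(series, k=8, noise_scale=1.0, rng=None):
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(np.asarray(series, float))
    kept = spectrum[:k]
    noise = rng.laplace(scale=noise_scale, size=k) + 1j * rng.laplace(scale=noise_scale, size=k)
    perturbed = np.zeros_like(spectrum)
    perturbed[:k] = kept + noise                         # noise only on retained coefficients
    return np.fft.irfft(perturbed, n=len(series))

private_series = noisy_fourier_reconstruction(np.sin(np.linspace(0, 6, 200)), k=8, noise_scale=0.5)
```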
29
2020Rakshit Kothari, Zhizhuo Yang, Christopher Kanan,Reynold Bailey,Jeff B. pelz,Gabriel J. DiazGaze-in-wild: A dataset for studying eye and head coordination in everyday activitiesThe study of gaze behavior has primarily been constrained to controlled environments in which the head is fixed. Consequently, little effort has been invested in the development of algorithms for the categorization of gaze events (e.g. fixations, pursuits, saccade, gaze shifts) while the head is free, and thus contributes to the velocity signals upon which classification algorithms typically operate. Our approach was to collect a novel, naturalistic, and multimodal dataset of eye + head movements when subjects performed everyday tasks while wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye + head rotational velocities (deg/s), infrared eye images and scene imagery (RGB + D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.74 sample based Cohen’s κ. This labelled data was used to train and evaluate two machine learning algorithms, Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event based performance metrics. Classifiers achieve ~87% human performance in detecting fixations and saccades but fall short (50%) on detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best performing model revealed that classification can be done using only the magnitudes of eye and head movements, potentially removing the need for calibration between the head and eye tracking systems. The GW dataset, trained classifiers and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.Eye Tracking Algorithms, Computer Vision
https://www.nature.com/articles/s41598-020-59251-5
https://www.nature.com/articles/s41598-020-59251-5.pdf
Scientific Reports volume 10, Article number: 2539 (2020)
eye movement, IMU
core
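One of the two classifiers evaluated in the Gaze-in-Wild entry above is a random forest, and the reported feature analysis suggests eye and head speed magnitudes carry most of the signal. The sketch below sets up that kind of classifier on synthetic features and labels; it is not the published GW pipeline or dataset.

```python
# Sketch of the classification setup: a random forest fed per-sample eye and head
# rotational speed magnitudes, trained to label gaze events. Features, labels, and
# hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),      # |eye velocity| (deg/s), fake
    rng.gamma(2.0, 10.0, n),      # |head velocity| (deg/s), fake
])
y = rng.integers(0, 3, n)         # fake labels: 0=fixation, 1=saccade, 2=pursuit

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```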
30
2020Yuki Kishita1, Hiroshi Ueda, Makio KashinoEye and Head Movements of Elite Baseball Players in Real BattingIn baseball, batters swing in response to a ball moving at high speed within a limited amount of time—about 0. 5 s. In order to make such movement possible, quick and accurate trajectory prediction followed by accurate swing motion with optimal body-eye coordination is considered essential, but the mechanisms involved are not clearly understood. The present study aims to clarify the strategies of eye and head movements adopted by elite baseball batters in actual game situations. In our experiment, six current professional baseball batters faced former professional baseball pitchers in a scenario close to a real game (i.e., without the batters informed about pitch type in advance). We measured eye movements with a wearable eye-tracker and head movements and bat trajectories with an optical motion capture system while the batters hit. In the eye movement measurements, contrary to previous studies, we found distinctive predictive saccades directed toward the predicted trajectory, of which the first saccades were initiated approximately 80–220 ms before impact for all participants. Predictive saccades were initiated significantly later when batters knew the types of pitch in advance compared to when they did not. We also found that the best three batters started predictive saccades significantly later and tended to have fewer gaze-ball errors than the other three batters. This result suggests that top batters spend slightly more time obtaining visual information by delaying the initiation of saccades. Furthermore, although all batters showed positive correlations between bat location and head direction at the time of impact, the better batters showed no correlation between bat location and gaze direction at that time. These results raise the possibility of differences in the coding process for the location of bat-ball contact; namely, that top batters might utilize head direction to encode impact locations.
Sports Performance
https://www.frontiersin.org/articles/10.3389/fspor.2020.00003/full
https://www.frontiersin.org/articles/10.3389/fspor.2020.00003/pdf
Front. Sports Act. Living, 29 January 2020
baseball batting, eye movements, hand-eye coordination, head movements, predictive saccades
core
31
2020 | Sabine U. König, Ashima Keshava, Viviane Clay, Kirsten Rittershofer, Nicolas Kuske, Peter König | Embodied Spatial Knowledge Acquisition in Immersive Virtual Reality: Comparison of Direct Experience and Map Exploration
Investigating spatial navigation in virtual environments enables the study of spatial learning with different sources of information. Therefore, we designed a large virtual city and investigated spatial knowledge acquisition after direct experience in the virtual environment and compared this with results after exploration with an interactive map (König et al., 2019). Our results suggest that survey knowledge measured in a straight-line pointing task between houses resulted in better accuracy after direct experience in VR than tasks directly based on cardinal directions and relative orientations. In contrast, after map exploration, the opposite pattern emerged. Taken together, our results suggest that the source of spatial exploration influenced spatial knowledge acquisition.
Cognitive Science
https://doi.org/10.1101/2020.01.12.903096
https://www.biorxiv.org/content/biorxiv/early/2020/01/14/2020.01.12.903096.full.pdf
Preprint
vr, navigation, cognitive science
vr
32
2020 | Taeha Yi, Mi Chang, Sukjoo Hong, Meereh Kim, Ji-Hyun Lee | A Study on Understanding of Visitor Needs in Art Museum: Based on Analysis of Visual Perception Through Eye-Tracking
This study aims to examine the art museum experience of visitors in detail through eye tracking, from the perspective of the visitor-centered approach that is important in the contemporary art museum. To achieve this goal, we conducted an eye-tracking experiment and in-depth interviews to grasp the interests and needs of visitors. We suggest the possibility of deriving their interests and needs by studying the gaze data (e.g. duration and the number of fixations) of the visitors in the museum.
Art Culture and Technology, Museology, Cognitive Science
https://link.springer.com/chapter/10.1007/978-3-030-39512-4_172
https://www.dropbox.com/s/phlca7bycwk4wd1/iHSI2020%28Yi%29.pdf?dl=0
The 3rd International Conference on Intelligent Human Systems Integration (IHSI 2020)
Art museum; Visitor studies; Eye-tracking; Museum experience
core
33
2020Robert Konrad, Anastasios Angelopoulos, Gordon WetzsteinGaze-Contingent Ocular Parallax Rendering for Virtual RealityImmersive computer graphics systems strive to generate perceptually realistic user experiences. Current-generation virtual reality (VR) displays are successful in accurately rendering many perceptually important effects, including perspective, disparity, motion parallax, and other depth cues. In this article, we introduce ocular parallax rendering, a technology that accurately renders small amounts of gaze-contingent parallax capable of improving depth perception and realism in VR. Ocular parallax describes the small amounts of depth-dependent image shifts on the retina that are created as the eye rotates. The effect occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. Specifically, we estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. Additionally, we show that ocular parallax rendering provides an effective ordinal depth cue and it improves the impression of realistic depth in VR.Computer Graphics, Computer Science
https://doi.org/10.1145/3361330
https://arxiv.org/pdf/1906.09740.pdf
ACM Transactions on Graphics, Vol. 39, 2
VR
vr
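A back-of-envelope version of the effect described above: because the eye's centers of rotation and projection are offset, a gaze rotation translates the projection center slightly, which shifts near and far content by different amounts. The function below estimates that angular difference under a simple small-offset approximation; the offset value and the formula are assumptions for illustration, not the paper's model.

```python
# Back-of-envelope approximation (not the paper's model): rotating the eye by
# `theta` moves the projection centre by roughly d*sin(theta), because the centres
# of rotation and projection are offset by d. That small translation produces a
# depth-dependent image shift of about t*(1/z_near - 1/z_far) radians. The offset
# value is an assumption for illustration.
import numpy as np

def ocular_parallax_deg(theta_deg, z_near_m, z_far_m, offset_m=0.006):
    t = offset_m * np.sin(np.radians(theta_deg))         # lateral shift of projection centre
    parallax_rad = t * (1.0 / z_near_m - 1.0 / z_far_m)
    return np.degrees(parallax_rad)

# ~10 deg gaze shift, objects at 0.5 m vs 2 m; prints the order of magnitude only.
print(f"{ocular_parallax_deg(10, 0.5, 2.0):.3f} deg")
```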
34
2020YJ Jung, HT Zimmerman, K Perez-EdgarMobile Eye-tracking for Research in Diverse Educational SettingsMobile eye-tracking is a technology that captures visual information, such as gaze, eye-movements, and pupil dilations, when learners are mobile. Traditional eye-tracking helps researchers to obtain precise, moment-by-moment information about learners’ engagement, interactions, and learning processes, but it has some weaknesses due to its structural and stationary nature. Mobile eye-tracking can complement such weaknesses by allowing researchers to collect eye-tracking data when learners move around and interact with multiple targets. This chapter demonstrates how mobile eye-tracking can add more authenticity and nuanced information into Learning Design and Technology research, and then introduces potential research themes that can use mobile eye-tracking. This chapter also overviews the overall processes of applying mobile eye-tracking in a research study and provides an example analysis.
Learning Design and Technology
https://www.researchgate.net/publication/338676483_Mobile_Eye-tracking_for_Research_in_Diverse_Educational_Settings
https://static1.squarespace.com/static/52812781e4b0bfa86bc3c12f/t/5e23439407d67a4dac7b53a5/1579369366201/Jung%2C+Zimmerman%2C+%26+Pe%CC%81rez-Edgar+%28in+press%29.pdf
Research Methods in Learning Design and Technology. In E. Romero-Hall (Ed.), Routledge.
education, teaching, learning
core
35
2019Eira Friström, Elias Lius, Niki Ulmanen, Paavo Hietala, Pauliina Kärkkäinen, Tommi Mäkinen, Stephan Sigg, Rainhard Dieter FindlingFree-Form Gaze Passwords from Cameras Embedded in Smart GlassesContemporary personal mobile devices support a variety of authentication approaches, featuring different levels of security and usability. With cameras embedded in smart glasses, seamless, hands-free mobile authentication based on gaze is possible. Gaze authentication relies on knowledge as a secret, and gaze passwords are composed from a series of gaze points or gaze gestures. This paper investigates the concept of free-form mobile gaze passwords. Instead of relying on gaze gestures or points, free-form gaze gestures exploit the trajectory of the gaze over time. We collect and investigate a set of 29 different free-form gaze passwords from 19 subjects. In addition, the practical security of the approach is investigated in a study with 6 attackers observing eye movements during password input to subsequently perform spoofing. Our investigation indicates that most free-form gaze passwords can be expressed as a set of common geometrical shapes. Further, our free-form gaze authentication yields a true positive rate of 81% and a false positive rate with other gaze passwords of 12%, while targeted observation and spoofing is successful in 17.5% of all cases. Our usability study reveals that further work on the usability of gaze input is required as subjects reported that they felt uncomfortable creating and performing free-form passwords.

HCI, Privacy
https://dl.acm.org/doi/10.1145/3365921.3365928
https://www.researchgate.net/publication/337720951_Free-Form_Gaze_Passwords_from_Cameras_Embedded_in_Smart_Glasses
MoMM2019: Proceedings of the 17th International Conference on Advances in Mobile Computing & Multimedia
Ubiquitous and mobile computing, Security and privacy, Human-centered computing
core
36
2019Mark Zolotas, Yiannis DemirisTowards Explainable Shared Control using Augmented RealityShared control plays a pivotal role in establishing effective human-robot interactions. Traditional control-sharing methods strive to complement a human's capabilities at safely completing a task, and thereby rely on users forming a mental model of the expected robot behaviour. However, these methods can often bewilder or frustrate users whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. To resolve this model misalignment, we introduce Explainable Shared Control as a paradigm in which assistance and information feedback are jointly considered. Augmented reality is presented as an integral component of this paradigm, by visually unveiling the robot's inner workings to human operators. Explainable Shared Control is instantiated and tested for assistive navigation in a setup involving a robotic wheelchair and a Microsoft HoloLens with add-on eye tracking. Experimental results indicate that the introduced paradigm facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment.
Robotics, HRI
https://ieeexplore.ieee.org/document/8968117
https://www.researchgate.net/publication/336253753_Towards_Explainable_Shared_Control_using_Augmented_Reality
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems
augmented reality, feedback, handicapped aids, human-robot interaction, mobile robots, wheelchairs
ar
37
2019Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew RabinovichEyeNet: A Multi-Task Network for Off-Axis Eye Gaze Estimation and User UnderstandingEye gaze estimation and simultaneous semantic understanding of a user through eye images is a crucial component in Virtual and Mixed Reality; enabling energy efficient rendering, multi-focal displays and effective interaction with 3D content. In head-mounted VR/MR devices the eyes are imaged off-axis to avoid blocking the user's gaze, this view-point makes drawing eye related inferences very challenging. In this work, we present EyeNet, the first single deep neural network which solves multiple heterogeneous tasks related to eye gaze estimation and semantic user understanding for an off-axis camera setting. The tasks include eye segmentation, blink detection, emotive expression classification, IR LED glints detection, pupil and cornea center estimation. To train EyeNet end-to-end we employ both hand labelled supervision and model based supervision. We benchmark all tasks on MagicEyes, a large and new dataset of 587 subjects with varying morphology, gender, skin-color, make-up and imaging conditions.
Computer Science, Computer Vision
https://arxiv.org/abs/1908.09060
https://arxiv.org/pdf/1908.09060.pdf2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)core
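Note: EyeNet's exact architecture is not given in the abstract; the PyTorch sketch below only illustrates the general shape of a multi-task eye network (a shared convolutional encoder feeding a segmentation head, a blink classifier, and a pupil-centre regressor). All layer sizes and head choices are invented for illustration and are not the published model.
```python
import torch
import torch.nn as nn

class MultiTaskEyeNet(nn.Module):
    """Illustrative multi-task eye network, not the published EyeNet."""

    def __init__(self, n_seg_classes: int = 4):
        super().__init__()
        # Shared encoder: three strided convolutions (downsample by 8).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Per-pixel eye-region segmentation, upsampled back to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, n_seg_classes, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        # Global heads operate on pooled features.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.blink_head = nn.Linear(64, 1)   # blink logit
        self.pupil_head = nn.Linear(64, 2)   # (x, y) pupil centre

    def forward(self, x):
        feat = self.encoder(x)
        pooled = self.pool(feat).flatten(1)
        return {
            "segmentation": self.seg_head(feat),
            "blink_logit": self.blink_head(pooled),
            "pupil_center": self.pupil_head(pooled),
        }

# Example: a batch of 8 grayscale 240x320 off-axis eye images.
model = MultiTaskEyeNet()
out = model(torch.randn(8, 1, 240, 320))
```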
38
2019Clebeson Canuto dos Santos, Plinio Moreno, Jorge Leonide Aching Samatelo, Raquel Frizera Vassallo, José Santos-VictorAction Anticipation for Collaborative Environments: The Impact of Contextual Information and Uncertainty-Based PredictionFor effectively interacting with humans in collaborative environments, machines need to be able to predict (i.e. anticipate) future events, in order to execute actions in a timely manner. However, the observation of the human limb movements may not be sufficient to anticipate their actions in an unambiguous manner. In this work we consider two additional sources of information (i.e. context) over time, gaze movements and object information, and study how these additional contextual cues improve the action anticipation performance. We address action anticipation as a classification task, where the model takes the available information as the input, and predicts the most likely action. We propose to use the uncertainty about each prediction as an online decision-making criterion for action anticipation. Uncertainty is modeled as a stochastic process applied to a time-based neural network architecture, which improves the conventional class-likelihood (i.e. deterministic) criterion. The main contributions of this paper are three-fold: (i) we propose a deep architecture that outperforms previous results in the action anticipation task, when using the Acticipate collaborative dataset; (ii) we show that contextual information is important to disambiguate the interpretation of similar actions; (iii) we propose the minimization of uncertainty as a more effective criterion for action anticipation, when compared with the maximization of class probability. Our results on the Acticipate dataset showed the importance of contextual information and the uncertainty criterion for action anticipation. We achieve an average accuracy of 98.75% in the anticipation task using only an average of 25% of observations. In addition, considering that a good anticipation model should also perform well in the action recognition task, we achieve an average accuracy of 100% in action recognition on the Acticipate dataset, when the entire observation set is used.Computer Science
https://arxiv.org/abs/1910.00714
https://arxiv.org/pdf/1910.00714.pdfeprint arXiv:1910.00714Action Anticipation, Context Information, Bayesian Deep Learning, Uncertaintycore
39
2019Jue Li, Heng Li, Waleed Umer, Hongwei Wang, Xuejiao Xing, Shukai Zhao, Jun HouIdentification and classification of construction equipment operators' mental fatigue using wearable eye-tracking technologyIn the construction industry, the operator's mental fatigue is one of the most important causes of construction equipment-related accidents. Mental fatigue can easily lead to poor performance of construction equipment operations and accidents in the worst case scenario. Hence, it is necessary to propose an objective method that can accurately detect multiple levels of mental fatigue of construction equipment operators. To address such issue, this paper develops a novel method to identify and classify operator's multi-level mental fatigue using wearable eye-tracking technology. For the purpose, six participants were recruited to perform a simulated excavator operation experiment to obtain relevant data. First, a Toeplitz Inverse Covariance-Based Clustering (TICC) method was used to determine the number of levels of mental fatigue using relevant subjective and objective data collected during the experiments. The results revealed the number of mental fatigue levels to be 3 using TICC-based method. Second, four eye movement feature-sets suitable for different construction scenarios were extracted and supervised learning algorithms were used to classify multi-level mental fatigue of the operator. The classification performance analysis of the supervised learning algorithms showed Support Vector Machine (SVM) was the most suitable algorithm to classify mental fatigue in the face of various construction scenarios and subject bias (accuracy between 79.5% and 85.0%). Overall, this study demonstrates the feasibility of applying wearable eye-tracking technology to identify and classify the mental fatigue of construction equipment operators.Health and Safety, Construction
https://doi.org/10.1016/j.autcon.2019.103000
https://www.researchgate.net/profile/Jue_Li4/publication/336922024_Identification_and_classification_of_construction_equipment_operators'_mental_fatigue_using_wearable_eye-tracking_technology/links/5e63125392851c7ce04d2402/Identification-and-classification-of-construction-equipment-operators-mental-fatigue-using-wearable-eye-tracking-technology.pdfAutomation in Construction Vol. 109, 2020Mental fatigue identification and classificationConstruction equipment operatorEye-trackingMachine learningToeplitz Inverse Covariance-Based Clusteringcore
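Note: As a rough illustration of the paper's final classification stage (an SVM over eye-movement features for three fatigue levels), here is a scikit-learn sketch. The feature set and the random data are placeholders, and the TICC step the authors used to derive the fatigue labels is not shown.
```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per time window, columns such as
# mean fixation duration, saccade amplitude, blink rate, pupil diameter.
X = np.random.rand(300, 4)
# Labels 0/1/2 stand in for the three fatigue levels suggested by TICC.
y = np.random.randint(0, 3, size=300)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```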
40
2019Daniel Backhaus, Ralf Engbert, Lars Oliver Martin Rothkegel, Hans Arne TrukenbrodTask-dependence in scene perception: Head unrestrained viewing using mobile eye-tracking.Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data was transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
Psychology
https://arxiv.org/abs/1911.06085
https://arxiv.org/pdf/1911.06085.pdfJournal of Visionscene viewing, real-world scenarios, mobile eye-tracking, task influence, central fixation biascore
41
2019Sebastian Marwecki, Andrew D. Wilson, Eyal Ofek, Mar Gonzalez Franco, Christian HolzMise-Unseen: Using Eye-Tracking to Hide Virtual Reality Scene Changes in Plain SightCreating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeably by using gaze in combination with common masking techniques.
HCI, UI/UX
https://doi.org/10.1145/3332165.3347919
https://www.microsoft.com/en-us/research/uploads/prod/2019/10/uist19a-MiseUnseen-cam.pdfUIST '19 Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
Pages 777-789
Eye-tracking; virtual reality; change blindness; inattentional blindness, staging.vr
42
2019Butler R, Mierzwinski GW, Bernier PM, Descoteaux M, Gilbert G, Whittingstall K
Neurophysiological basis of contrast dependent BOLD orientation tuningRecent work in early visual cortex of humans has shown that the BOLD signal exhibits contrast dependent orientation tuning, with an inverse oblique effect (oblique > cardinal) at high contrast and a horizontal effect (vertical > horizontal) at low contrast. This finding is at odds with decades of neurophysiological research demonstrating contrast invariant orientation tuning in primate visual cortex, yet the source of this discrepancy is unclear. We hypothesized that contrast dependent BOLD orientation tuning may arise due to contrast dependent influences of feedforward (FF) and feedback (FB) synaptic activity, indexed through gamma and alpha rhythms, respectively. To quantify this, we acquired EEG and BOLD in healthy humans to generate and compare orientation tuning curves across all neural frequency bands with BOLD. As expected, BOLD orientation selectivity in V1 was contrast dependent, preferring oblique orientations at high contrast and vertical at low contrast. On the other hand, EEG orientation tuning was contrast invariant, though frequency-specific, with an inverse-oblique effect in the gamma band (FF) and a horizontal effect in the alpha band (FB). Therefore, high-contrast BOLD orientation tuning closely matched FF activity, while at low contrast, BOLD best resembled FB orientation tuning. These results suggest that contrast dependent BOLD orientation tuning arises due to the reduced contribution of FF input to overall neurophysiological activity at low contrast, shifting BOLD orientation tuning towards the orientation preferences of FB at low contrast.Neuroscience
https://doi.org/10.1016/j.neuroimage.2019.116323
https://reader.elsevier.com/reader/sd/pii/S1053811919309140?token=E391AB0F6485CF6AE70093E1A41DEFABBDD014F4AAA4F9F08EB9689C49CBB938DC877FFD2030BCCDB94BF50B493F5F0ENeuroImage, 31 October 2019, 11632core
43
2019John Tyson-Carr, Vicente Soto, Katerina Kokmotou, Hannah Roberts, Nicholas Fallon, Adam Byrne, Timo Giesbrecht, Andrej StancakNeural underpinnings of value-guided choice during auction tasks: An eye-fixation related potentials studyValues are attributed to goods during free viewing of objects which entails multi- and trans-saccadic cognitive processes. Using electroencephalographic eye-fixation related potentials, the present study investigated how neural signals related to value-guided choice evolved over time when viewing household and office products during an auction task. Participants completed a Becker-DeGroot-Marschak auction task whereby half of the stimuli were presented in either a free or forced bid protocol to obtain willingness-to-pay. Stimuli were assigned to three value categories of low, medium and high value based on subjective willingness-to-pay. Eye fixations were organised into five 800 ms time-bins spanning the objects total viewing time. Independent component analysis was applied to eye-fixation related potentials. One independent component (IC) was found to represent fixations for high value products with increased activation over the left parietal region of the scalp. An IC with a spatial maximum over a fronto-central region of the scalp coded the intermediate values. Finally, one IC displaying activity that extends over the right frontal scalp region responded to intermediate- and low-value items. Each of these components responded early on during viewing an object and remained active over the entire viewing period, both during free and forced bid trials. Results suggest that the subjective value of goods are encoded using sets of brain activation patterns which are tuned to respond uniquely to either low, medium, or high values. Data indicates that the right frontal region of the brain responds to low and the left frontal region to high values. Values of goods are determined at an early point in the decision making process and carried for the duration of the decision period via trans-saccadic processes.
Neuroscience
https://doi.org/10.1016/j.neuroimage.2019.116213
https://www.sciencedirect.com/science/article/pii/S1053811919308043NeuroImage, 204Becker-DeGroot-Marschack auction; Independent component analysis; Value-based decision making; Willingness to pay; Evoked potentialscore
44
2019St-Onge David, Kaufmann Marcel, Panerati Jacopo, Ramtoula Benjamin, Cao Yanjun, Coffey Emily, Beltrame GiovanniPlanetary exploration with robot teamsSince the beginning of space exploration, Mars and the moon have been examined via orbiters, landers, and rovers. More than 40 missions have targeted Mars, and over 100 have been sent to the moon. Space agencies continue to focus on developing novel strategies and technologies for probing celestial bodies. Multirobot systems are particularly promising for planetary exploration, as they are more robust to individual failure and have the potential to examine larger areas; however, there are limits to how many robots an operator can control individually. We recently took part in the European Space Agency’s (ESA’s) interdisciplinary equipment test campaign (PANGAEA-X) at a lunar/Mars analog site in Lanzarote, Spain. We used a heterogeneous fleet of unmanned aerial vehicles (UAVs)—a swarm—to study the interplay of systems operations and human factors. Human operators directed the swarm via ad-hoc networks and data-sharing protocols to explore unknown areas under two control modes: in one, the operator instructed each robot separately; in the other, the operator provided general guidance to the swarm, which self-organized via a combination of distributed decision making and consensus building. We assessed cognitive load via pupillometry for each condition and perceived task demand and intuitiveness via self-report. Our results show that implementing higher autonomy with swarm intelligence can reduce workload, freeing the operator for other tasks such as overseeing strategy and communication. Future work will further leverage advances in swarm intelligence for exploration missions.Robotics, HRI
https://www.researchgate.net/publication/337200745_Planetary_Exploration_with_Robot_Teams_Implementing_Higher_Autonomy_With_Swarm_Intelligence
http://espace2.etsmtl.ca/id/eprint/19351/1/St-Onge D 2019 19351.pdfIEEE Robotics and Automation Magazine 2019space exploration, decentralized robotics, unmanned aerial vehicles, human-swarm interaction, cognitive loadcore
45
2019Surjeet Singh, Alexei Mandziak, Kalob Barr, Ashley A Blackwell, Majid H Mohajerani, Douglas G Wallace, Ian Q WhishawReach and Grasp Altered in Pantomime String-Pulling: A Test of the Action/Perception Theory in a Bilateral Reaching TaskThe action/perception theory of cortical organization is supported by the finding that pantomime hand movements of reaching and grasping are different from real movements. Frame-by-frame video analysis and MATLAB® based tracking examined real/pantomime differences in a bilateral movement, string-pulling, pulling down a rope with hand-over-hand movements. Sensory control of string-pulling varied from visually-direct when cued, visually-indirect when non-cued, and somatosensory controlled in the absence of vision. Cued grasping points were visually tracked and the pupils dilated in anticipation of the grasp, but when non-cued, visual tracking and pupil responses were absent. In real string-pulling, grasping and releasing the string featured an arpeggio movement in which the fingers close and open in the sequence 5 through 1 (pinky first, thumb last); in pantomime, finger order was reversed, 1 through 5. In real string-pulling, the hand is fully opened and closed to grasp and release; in pantomime, hand opening was attenuated and featured a gradual opening centered on the grasp. The temporal structure of arm movements in real string-pulling featured up-arm movements that were faster than down-arm movements. In pantomime, up/down movements had similar speed. In real string-pulling, up/down arm movements were direct and symmetric; in pantomime, they were more circular and asymmetric. That pantomime string-pulling featured less motoric and temporal complexity than real string-pulling is discussed in relation to the action/perception theory and in relation to the idea that pantomimed string-pulling may feature the substitution of gestures for real movement.Neuroscience
https://doi.org/10.1101/679811
https://www.biorxiv.org/content/biorxiv/early/2019/06/24/679811.full.pdfcore
46
2019Philipp Müller, Daniel Buschek, Michael Xuelin Huang, Andreas BullingReducing Calibration Drift in Mobile Eye Trackers by Exploiting Mobile Phone UsageAutomatic saliency-based recalibration is promising for addressing calibration drift in mobile eye trackers but existing bottom-up saliency methods neglect user's goal-directed visual attention in natural behaviour. By inspecting real-life recordings of egocentric eye tracker cameras, we reveal that users are likely to look at their phones once these appear in view. We propose two novel automatic recalibration methods that exploit mobile phone usage: The first builds saliency maps using the phone location in the egocentric view to identify likely gaze locations. The second uses the occurrence of touch events to recalibrate the eye tracker, thereby enabling privacy-preserving recalibration. Through in-depth evaluations on a recent mobile eye tracking dataset (N=17, 65 hours) we show that our approaches outperform a state-of-the-art saliency approach for automatic recalibration. As such, our approach improves mobile eye tracking and gaze-based interaction, particularly for long-term use.HCI
https://doi.org/10.1145/3314111.3319918
https://perceptual.mpi-inf.mpg.de/files/2019/04/mueller19_etra.pdfSymposium on Eye Tracking Research & Applications 2019HCI, Ubiquitous and mobile computing; Mobile eye tracking; Eye tracker recalibrationcore
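Note: The abstract describes recalibrating a mobile eye tracker from touch events, on the assumption that the user looks at the touch location when tapping. A minimal version of that idea — not the authors' algorithm — is a least-squares affine correction fitted from (estimated gaze, touch location) pairs:
```python
import numpy as np

def fit_affine(gaze_xy: np.ndarray, touch_xy: np.ndarray) -> np.ndarray:
    """Least-squares affine correction mapping drifted gaze estimates onto
    touch locations collected at (assumed) gaze-on-touch moments.

    gaze_xy, touch_xy: arrays of shape (N, 2) in the same coordinate frame.
    Returns a 2x3 matrix A such that corrected = A @ [x, y, 1].
    """
    ones = np.ones((len(gaze_xy), 1))
    G = np.hstack([gaze_xy, ones])              # (N, 3)
    A, *_ = np.linalg.lstsq(G, touch_xy, rcond=None)
    return A.T                                   # (2, 3)

def apply_affine(A: np.ndarray, gaze_xy: np.ndarray) -> np.ndarray:
    """Apply the fitted correction to new gaze estimates."""
    ones = np.ones((len(gaze_xy), 1))
    return np.hstack([gaze_xy, ones]) @ A.T
```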
47
2019Xiaoxue Fu, Eric E. Nelson, Marcela Borge, Kristin A. Buss, Koraly Pérez-EdgarStationary and ambulatory attention patterns are differentially associated with early temperamental risk for socioemotional problems: Preliminary evidence from a multimodal eye-tracking investigationBehavioral Inhibition (BI) is a temperament type that predicts social withdrawal in childhood and anxiety disorders later in life. However, not all BI children develop anxiety. Attention bias (AB) may enhance the vulnerability for anxiety in BI children, and interfere with their development of effective emotion regulation. In order to fully probe attention patterns, we used traditional measures of reaction time (RT), stationary eye-tracking, and recently emerging mobile eye-tracking measures of attention in a sample of 5- to 7-year-olds characterized as BI (N = 23) or non-BI (N = 58) using parent reports. There were no BI-related differences in RT or stationary eye-tracking indices of AB in a dot-probe task. However, findings in a subsample from whom eye-tracking data were collected during a live social interaction indicated that BI children (N = 12) directed fewer gaze shifts to the stranger than non-BI children (N = 25). Moreover, the frequency of gazes toward the stranger was positively associated with stationary AB only in BI, but not in non-BI, children. Hence, BI was characterized by a consistent pattern of attention across stationary and ambulatory measures. We demonstrate the utility of mobile eye-tracking as an effective tool to extend the assessment of attention and regulation to social interactive contexts.Psychology
doi:10.1017/S0954579419000427
https://static1.squarespace.com/static/52812781e4b0bfa86bc3c12f/t/5cd861b2eef1a15db85b4fdf/1557684660600/Fu+et+al+%28in+press%29+Dev.+%26+Psychopathology.pdfDevelopment and Psychopathology (2019), 1–18attention bias, behavioral inhibition, dot-probe task, eye-tracking, mobile eye-trackingcore
48
2019Ravi Teja Chadalavada, Henrik Andreasson, Maike Schindler, Rainer Palm, Achim J.LilienthalBi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human–robot interactionSafety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the direction robot-to-human, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot’s intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for projection of robot intentions. In the direction human-to-robot, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people’s motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye gaze data and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye gaze patterns of humans while interacting with an autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed on that side of the robot they ultimately decided to pass by. We discuss implications of these results and relate to a control approach that uses human gaze for early obstacle avoidance.Robotics, HRI
https://doi.org/10.1016/j.rcim.2019.101830
https://reader.elsevier.com/reader/sd/pii/S0736584518303351?token=548BD2E4958B1D78F7525AC3A626A2B30A9AAF167B0A93C09D710B977F6BC00B4769F61CD2C4C4B53F76F6ED0DB61B50Robotics and Computer-Integrated Manufacturing, volume 61, Feb 2020Human–robot interaction (HRI), Mobile robots, Intention communication, Eye-tracking, Intention recognition, Spatial augmented reality, Stimulated recall interview, Obstacle avoidance, Safety, Logisticscore
49
2019Tom Arthur, Sam Vine, Mark Brosnan, Gavin BuckinghamExploring how material cues drive sensorimotor prediction across different levels of autistic-like traitsRecent research proposes that sensorimotor difficulties, such as those experienced by many autistic people, may arise from atypicalities in prediction. Accordingly, we examined the relationship between non-clinical autistic-like traits and sensorimotor prediction in the material-weight illusion, where prior expectations derived from material cues typically bias one’s perception and action. Specifically, prediction-related tendencies in perception of weight, gaze patterns, and lifting actions were probed using a combination of self-report, eye-tracking, motion-capture, and force-based measures. No prediction-related associations between autistic-like traits and sensorimotor control emerged for any of these variables. Follow-up analyses, however, revealed that greater autistic-like traits were correlated with reduced adaptation of gaze with changes in environmental uncertainty. These findings challenge proposals of gross predictive atypicalities in autistic people, but suggest that the dynamic integration of prior information and environmental statistics may be related to autistic-like traits. Further research into this relationship is warranted in autistic populations, to assist the development of future movement-based coaching methods.Psychology, Neurobiology
https://link.springer.com/article/10.1007/s00221-019-05586-z
https://link.springer.com/content/pdf/10.1007/s00221-019-05586-z.pdfExperimental Brain Research, September 2019, V 237, Issue 9, pp 2255-2267Autism, Movement, Object lifting, Weight illusion, Grip forcecore
50
2019Richard Wilkie, Callum Mole, Oscar Giles, Natasha Merat, Richard Romano, Gustav MarkkulaCognitive Load During Automation Affects Gaze Behaviours And Transitions to Manual Steering ControlAutomated vehicles (AVs) are being tested on-road with plans for imminent large-scale deployment. Many AVs are being designed to control vehicles without human input, whilst still relying on a human driver to remain vigilant and responsible for taking control in case of failure. Drivers are likely to use AV control periods to perform additional non-driving related tasks, however the impact of this load on successful steering control transitions (from AV to the human) remains unclear. Here, we used a driving simulator to examine the effect of an additional cognitive load on gaze behavior during automated driving, and on subsequent manual steering control. Drivers were asked to take-over control after a short period of automation caused trajectories to drift towards the outside edge of a bending road. Drivers needed to correct lane position when there was no additional task (“NoLoad”), or whilst also performing an auditory detection task (“Load”). Load might have affected gaze patterns, so to control for this we used either: i) Free gaze, or ii) Fixed gaze (to the road center). Results showed that Load impaired steering, causing insufficient corrections for lane drift. Free gaze patterns were influenced by the added cognitive load, but impaired steering was also observed when gaze was fixed. It seems then that the driver state (cognitive load and gaze direction) during automation may have important consequences for whether the takeover of manual vehicle control is successful.Psychology, Human Factors, Transportation Safety
https://www.researchgate.net/publication/332727693_Cognitive_Load_During_Automation_Affects_Gaze_Behaviours_and_Transitions_to_Manual_Steering_Control
https://www.researchgate.net/profile/Callum_Mole/publication/332727693_Cognitive_Load_During_Automation_Affects_Gaze_Behaviours_and_Transitions_to_Manual_Steering_Control/links/5cc6bc1792851c8d220c7e1b/Cognitive-Load-During-Automation-Affects-Gaze-Behaviours-and-Transitions-to-Manual-Steering-Control.pdf10th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle DesignCognitive Load; Driver Distraction; Eye movements; Automationvr
51
2019Steve Grogorick, Matthias Ueberheide, Jan-Philipp Tauscher, Paul Maximilian BittnerGaze and Motion-aware Real-Time Dome Projection SystemWe present the ICG Dome, a research facility to explore human visual perception in a high-resolution virtual environment. Current state-of-the-art VR devices still suffer from some technical limitations, like limited field of view, screen-door effect or so-called god-rays. These issues are not present or at least strongly reduced in our system, by design. Latest technology for real-time motion capture and eye tracking open up a wide range of applications.Computer Graphics
https://ieeexplore.ieee.org/document/8797902
https://graphics.tu-bs.de/upload/publications/grogorick2019vr.pdf2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)Human-centered computing—Human computer interaction (HCI)—Interactive systems and tools; Human-centered computing—Visualization—Visualization systems and tools; Computer systems organization—Real-time systemcore
52
2019Maike Schindler, Achim J. LilienthalDomain-specific interpretation of eye tracking data: towards a refined use of the eye-mind hypothesis for the field of geometryEye tracking is getting increasingly popular in mathematics education research. Studies predominantly rely on the so-called eye-mind hypothesis (EMH), which posits that what persons fixate on closely relates to what they process. Given that the EMH was developed in reading research, we see the risk that implicit assumptions are tacitly adopted in mathematics even though they may not apply in this domain. This article investigates to what extent the EMH applies in mathematics—geometry in particular—and aims to lift the discussion of what inferences can be validly made from eye-tracking data. We use a case study to investigate the need for a refinement of the use of the EMH. In a stimulated recall interview, a student described his original thoughts perusing a gaze-overlaid video recorded when he was working on a geometry problem. Our findings contribute to a better understanding of when and how the EMH applies in the subdomain of geometry. In particular, we identify patterns of eye movements that provide valuable information on students’ geometry problem solving: certain patterns where the eye fixates on what the student is processing and others where the EMH does not hold. Identifying such patterns may contribute to an interpretation theory for students’ eye movements in geometry—exemplifying a domain-specific theory that may reduce the inherent ambiguity and uncertainty that eye tracking data analysis has.Learning Design and Technology
https://link.springer.com/article/10.1007/s10649-019-9878-z
https://link.springer.com/content/pdf/10.1007/s10649-019-9878-z.pdfEducational Studies in MathematicsEye tracking, Eye movements, Eye-mind hypothesis, Geometrycore
53
2019Georg Simhandl, Philipp Paulweber, Uwe ZdunDesign of an Executable Specification Language Using Eye TrackingIncreasingly complex systems require powerful and easy to understand specification languages. In the course of the design of an executable specification language based on the Abstract State Machines formalism we performed eye-tracking experiments to understand how newly introduced language features are comprehended by language users. In this preliminary study we carefully recruited nine engineers representing a broad range of potential users. For recording eye-gaze behavior we used a Pupil Labs eye-tracking headset. An example specification and simple comprehension tasks were used as stimuli. The preliminary results of the eye-gaze behavior analysis reveal that the new language feature was understood well, but the new abstractions were frequently confused by participants. The foreknowledge of specific programming concepts is crucial to how these abstractions are comprehended. More research is needed to infer this knowledge from viewing patterns.Computer Science
https://ieeexplore.ieee.org/document/8834704
https://eprints.cs.univie.ac.at/6022/1/simhandl2019emip.pdfEMIP '19 Proceedings of the 6th International Workshop on Eye Movements in Programming
Gaze Behavior, Effects of Language Features, Executable Specification Language, Abstract State Machinescore
54
2019Didier M Valdés Díaz, Michael Knodler, Benjamin Colucci Ríos, Alberto M Figueroa Medina, María Rojas Ibarra, Enid Colón Torres, Ricardo García Rosario, Nicholas Campbell, Francis TainterEvaluation of Safety Enhancements in School Zones with Familiar and Unfamiliar DriversTraffic crashes in suburban school zones pose a serious safety concern due to a higher presence of school-age pedestrians and cyclists as well as potential speeding issues. A study that investigated speed selection and driver behavior in school zones was carried out using two populations from different topographical and cultural settings: Puerto Rico and Massachusetts. A school zone from Puerto Rico was recreated in driver simulation scenarios, and local drivers who were familiar with the environment were used as subjects. The Puerto Rico school simulation scenarios were replicated with subjects from Massachusetts to analyze the impact of drivers’ familiarity on the school-roadway environment. Twenty-four scenarios were built with pedestrians, on-street parked vehicles, and traffic flow used as simulation variables in the experiment. Results are presented in terms of speed behavior, reaction to the presence of pedestrians, speed compliance, mean reduction in speeds, and eye tracker analysis for both familiar and unfamiliar drivers.Transportation Safety, Automotive
https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/HTNPCP
http://safersim.nads-sc.uiowa.edu/final_reports/C%201%20Y2%20report_Final.pdftraffic safety, driving simulator, driving, eye-trackingcore
55
2019Nitish Padmanaban, Robert Konrad, Gordon WetzsteinAutofocals: Evaluating gaze-contingent eyeglasses for presbyopesAs humans age, they gradually lose the ability to accommodate, or refocus, to near distances because of the stiffening of the crystalline lens. This condition, known as presbyopia, affects nearly 20% of people worldwide. We design and build a new presbyopia correction, autofocals, to externally mimic the natural accommodation response, combining eye tracker and depth sensor data to automatically drive focus-tunable lenses. We evaluated 19 users on visual acuity, contrast sensitivity, and a refocusing task. Autofocals exhibit better visual acuity when compared to monovision and progressive lenses while maintaining similar contrast sensitivity. On the refocusing task, autofocals are faster and, compared to progressives, also significantly more accurate. In a separate study, a majority of 23 of 37 users ranked autofocals as the best correction in terms of ease of refocusing. Our work demonstrates the superiority of autofocals over current forms of presbyopia correction and could affect the lives of millions.HCI, Electrical Engineering
https://advances.sciencemag.org/content/5/6/eaav6187.full
https://advances.sciencemag.org/content/5/6/eaav6187.full.pdfScience Advances, Vol 5, No. 6, 05 June 2019human vision, hardware, focus tunable lensesar, core
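Note: Autofocals combine eye tracking and a depth sensor to set the power of focus-tunable lenses. As a toy illustration of just the gaze side, the sketch below converts a binocular vergence angle into an approximate fixation distance and a corresponding lens power; the geometry is simplified and the parameter values (IPD, vergence angle) are hypothetical, not taken from the paper.
```python
import numpy as np

def fixation_depth_from_vergence(vergence_deg: float, ipd_m: float = 0.063) -> float:
    """Approximate fixation distance (metres) from the binocular vergence
    angle, assuming symmetric fixation straight ahead:
        depth ~= (IPD / 2) / tan(vergence / 2)
    """
    half = np.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / np.tan(half)

def lens_power_diopters(depth_m: float) -> float:
    """Optical power a focus-tunable lens would add so an eye focused at
    infinity sees the fixated depth sharply (simplified thin-lens view)."""
    return 1.0 / depth_m

d = fixation_depth_from_vergence(4.0)    # ~0.9 m for a 4 degree vergence
print(d, lens_power_diopters(d))
```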
56
2019Joohwan Kim , Michael Stengel , Alexander Majercik (Nvidia) , Shalini De Mello , David Dunn (UNC) , Samuli Laine , Morgan McGuire , David LuebkeNVGaze: An Anatomically-Informed Dataset for Low-Latency, Near-Eye Gaze EstimationQuality, diversity, and size of training dataset are critical factors for learning-based gaze estimators. We create two datasets satisfying these criteria for near-eye gaze estimation under infrared illumination: a synthetic dataset using anatomically-informed eye and face models with variations in face shape, gaze direction, pupil and iris, skin tone, and external conditions (two million images at 1280x960), and a real-world dataset collected with 35 subjects (2.5 million images at 640x480). Using our datasets, we train a neural network for gaze estimation, achieving 2.06 (+/- 0.44) degrees of accuracy across a wide 30 x 40 degrees field of view on real subjects excluded from training and 0.5 degrees best-case accuracy (across the same field of view) when explicitly trained for one real subject. We also train a variant of our network to perform pupil estimation, showing higher robustness than previous methods. Our network requires fewer convolutional layers than previous networks, achieving sub-millisecond latency.Eye tracking Algorithms, Computer Science, Eye tracking dataset
https://research.nvidia.com/publication/2019-05_NVGaze%3A-An-Anatomically-Informed
https://users.aalto.fi/~laines9/publications/kim2019sigchi_paper.pdfACM Conference on Human-Computer-Interaction (CHI) 2019eye tracking, virtual reality, VR, NeuralNetworks, gaze estimationcore
57
2019Thiago Santini,Diederick C. Niehorster, Enkelejda KasneciGet a Grip : Slippage-Robust and Glint-Free Gaze Estimation for Real-Time Pervasive Head-Mounted Eye TrackingA key assumption conventionally made by flexible head-mounted eye-tracking systems is often invalid: The eye center does not remain stationary w.r.t. the eye camera due to slippage. For instance, eye-tracker slippage might happen due to head acceleration or explicit adjustments by the user. As a result, gaze estimation accuracy can be significantly reduced. In this work, we propose Grip, a novel gaze estimation method capable of instantaneously compensating for eye-tracker slippage without additional hardware requirements such as glints or stereo eye camera setups. Grip was evaluated using previously collected data from a large scale unconstrained pervasive eye-tracking study. Our results indicate significant slippage compensation potential, decreasing average participant median angular offset by more than 43% w.r.t. a non-slippage-robust gaze estimation method. A reference implementation of Grip was integrated into EyeRecToo, an open-source hardware-agnostic eye-tracking software, thus making it readily accessible for multiple eye trackers (Available at: www.ti.uni-tuebingen.de/perception).
Eye tracking Algorithms
https://www.researchgate.net/publication/333491983_Get_a_grip_slippage-robust_and_glint-free_gaze_estimation_for_real-time_pervasive_head-mounted_eye_tracking
https://pdfs.semanticscholar.org/b12f/8d8e23200dbb22ac64a1bc53f615ea3d07c7.pdfETRA 2019calibration; drift; embedded; eye; gaze estimation; open source; pervasive; pupil tracking; real-time; slippage; trackingcore
58
2019Almoctar Hassoumi, Christophe HurterEye Gesture in a Mixed Reality EnvironmentUsing a simple approach, we demonstrate that eye gestures could provide a highly accurate interaction modality in a mixed reality environment. Such interaction has been proposed for desktop and mobile devices. Recently, Gaze gesture has gained a special interest in Human-Computer Interaction and granted new interaction possibilities, particularly for accessibility. We introduce a new approach to investigate how gaze tracking technologies could help people with ALS or other motor impairments to interact with computing devices. In this paper, we propose a touch-free, eye movement based entry mechanism for mixed reality environments that can be used without any prior calibration. We evaluate the usability of the system with 7 participants, describe the implementation of the method and discuss its advantages over traditional input modalities.UI/UX, HCI
https://hal-enac.archives-ouvertes.fr/hal-02073441
https://hal-enac.archives-ouvertes.fr/hal-02073441/documentHUCAPP 2019 : 3rd International Conference on Human Computer Interaction Theory and Applications, Feb 2019, Prague, Czech Republic. pp 183 - 187Eye-movement, Interaction, Eye Tracking, Smooth Pursuit, Mixed Reality, Accessibilityar, core
59
2019Alexiou, Evangelos ; Xu, Peisen ; Ebrahimi, TouradjTowards modelling of visual saliency in point clouds for immersive applicationsModelling human visual attention is of great importance in the field of computer vision and has been widely explored for 3D imaging. Yet, in the absence of ground truth data, it is unclear whether such predictions are in alignment with the actual human viewing behavior in virtual reality environments. In this study, we work towards solving this problem by conducting an eye-tracking experiment in an immersive 3D scene that offers 6 degrees of freedom. A wide range of static point cloud models is inspected by human subjects, while their gaze is captured in real-time. The visual attention information is used to extract fixation density maps, that can be further exploited for saliency modelling. To obtain high quality fixation points, we devise a scheme that utilizes every recorded gaze measurement from the two eye-cameras of our set-up. The obtained fixation density maps together with the recorded gaze and head trajectories are made publicly available, to enrich visual saliency datasets for 3D models.Computer Vision
https://infoscience.epfl.ch/record/265790
https://infoscience.epfl.ch/record/265790/files/ICIP2019.pdf26th IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, September 22-25, 2019visual saliency ; immersive environments ; point clouds ; virtual reality ; eye-trackingvr
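Note: One common way to turn recorded fixations into the fixation density maps mentioned above is a kernel density estimate over the viewport plane. The SciPy sketch below shows that generic step only; the grid size and the synthetic fixation data are placeholders, not the authors' pipeline.
```python
import numpy as np
from scipy.stats import gaussian_kde

def fixation_density_map(fix_xy: np.ndarray, width: int, height: int,
                         grid: tuple = (160, 120)) -> np.ndarray:
    """KDE of fixation locations, evaluated on a coarse grid covering a
    width x height viewport. Returns a (grid[1], grid[0]) map summing to 1."""
    kde = gaussian_kde(fix_xy.T)                      # expects shape (dims, N)
    xs = np.linspace(0, width, grid[0])
    ys = np.linspace(0, height, grid[1])
    xx, yy = np.meshgrid(xs, ys)
    density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(grid[1], grid[0])
    return density / density.sum()

# 500 synthetic fixations on a 640x480 projection of a point cloud.
fixations = np.random.rand(500, 2) * [640, 480]
dmap = fixation_density_map(fixations, 640, 480)
```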
60
2019Viviane Clay, Peter König, Sabine U. KoenigEye Tracking in Virtual RealityThe intent of this paper is to provide an introduction into the burgeoning field of eye tracking in Virtual Reality (VR). VR itself is an emerging technology on the consumer market, which will create many new opportunities in research. It offers a lab environment with high immersion and close alignment with reality. An experiment which is using VR takes place in a highly controlled environment and allows for a more in-depth amount of information to be gathered about the actions of a subject. Techniques for eye tracking were introduced more than a century ago and are now an established technique in psychological experiments, yet recent development makes it versatile and affordable. In combination, these two techniques allow unprecedented monitoring and control of human behavior in semi-realistic conditions. This paper will explore the methods and tools which can be applied in the implementation of experiments using eye tracking in VR following the example of one case study. Accompanying the technical descriptions, we present research that displays the effectiveness of the technology and show what kind of results can be obtained when using eye tracking in VR. It is meant to guide the reader through the process of bringing VR in combination with eye tracking into the lab and to inspire ideas for new experiments.Cognitive Science
https://www.researchgate.net/publication/332780872_Eye_Tracking_in_Virtual_Reality
https://www.researchgate.net/profile/Viviane_Clay/publication/332780872_Eye_Tracking_in_Virtual_Reality/links/5cc95b74299bf120978bd0f6/Eye-Tracking-in-Virtual-Reality.pdf?origin=publication_detailJournal of Eye Movement Research. 12. 10.16910/jemr.12.1.3.Eye movement, eye tracking, virtual reality, VR, smooth pursuit, region of interest, gazevr
61
2019M. Kraus, T. Kilian, J. FuchsReal-Time Gaze Mapping in Virtual EnvironmentsIn order to analyze an analyst's behavior in an immersive environment, his or her eye movements can be monitored using eye trackers. Hereby, points of individual interest can be objectively identified, for instance, to assess the usability and intuitiveness of a framework. However, this technique can be used not only as a post-event analysis tool but also to assist an ongoing exploration of a virtual environment. With this poster, we present a technique that allows a real-time gaze map creation which supports the immersed analyst by providing real-time feedback on the user's own activity. In our approach, all surfaces in the virtual environment are enwrapped with a mesh structure. The grid structure recognizes when a user drifts with his or her eyes above it and increments weights of activated node points. This allows highlighting areas that have been observed, but also those that have not been observed - also when they are occluded by other objects or surfaces. We tested our technique in a preliminary qualitative expert study and received helpful feedback for further improvements.Computer Graphics
https://dx.doi.org/10.2312/eurp.20191135
http://kops.uni-konstanz.de/bitstream/handle/123456789/46434/Kraus_2-rrvzhdtz0wt93.pdf?sequence=1&isAllowed=yEUROVIS 2019 Posters / Madeiras Pereira, João; Raidou, Renata Georgia (Hrsg.). - Genf : The Eurographics Association, 2019augmented reality, mixed realityvr
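Note: A minimal sketch of the idea described above — wrapping a surface in a node grid and incrementing the weights of nodes near each gaze hit — might look as follows. The grid resolution, influence radius, and coverage threshold are invented parameters, not those of the poster.
```python
import numpy as np

class GazeHeatGrid:
    """Per-surface gaze grid: nodes near the current gaze hit point
    accumulate dwell time every frame (illustrative, hypothetical values)."""

    def __init__(self, cols: int, rows: int, radius: float = 0.05):
        self.weights = np.zeros((rows, cols))
        self.radius = radius          # influence radius in UV units (0..1)

    def add_gaze_hit(self, u: float, v: float, dt: float):
        """u, v: gaze hit point in the surface's normalized UV coordinates."""
        rows, cols = self.weights.shape
        us = (np.arange(cols) + 0.5) / cols
        vs = (np.arange(rows) + 0.5) / rows
        uu, vv = np.meshgrid(us, vs)
        mask = (uu - u) ** 2 + (vv - v) ** 2 <= self.radius ** 2
        self.weights[mask] += dt      # dwell time accumulates on nearby nodes

    def coverage(self, threshold: float = 0.2) -> float:
        """Fraction of the surface that has been looked at long enough."""
        return float((self.weights >= threshold).mean())

grid = GazeHeatGrid(cols=32, rows=32)
grid.add_gaze_hit(0.4, 0.6, dt=1 / 90)   # one VR frame at 90 Hz
```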
62
2019Haase, H.How People with a Visual Field Defect Scan their Environment: An Eye-Tracking StudyThe scanning behavior of people with a Visual Field Defect (VFD) is still relatively unexplored, although a VFD can have great impact on the person concerned with it. This study examined the scanning behavior of people with a VFD in a mobility situation by the means of an Eye-Tracker. Participants were asked to watch a video of everyday mobility situations, showing both walking and biking sequences from first person perspective. They were asked to imagine being that person in the video and look around naturally. In the video, 33 objects were defined as regions of interest (ROIs), as they are important to detect in order to walk or bike safely. The Eye-Tracker recorded when and where the participants looked. It was expected that people with a visual field defect (VIPs) need more time to fixate on relevant ROIs for the first time, detect less ROIs in total, do not look at them as long as sighted people do and fixate more often on the ROIs. The results confirmed that VIPs detect less ROIs and spend less time looking at them. Contrary to what was expected, the results also showed that sighted people fixate more on the ROIs than VIPs. An additional analysis showed that the time until an ROI is first looked at is significantly shorter for sighted people. These findings showed that there are indeed differences in scanning behavior and they could be used to help VIPs counterbalance their decreased visual field.Psychology
https://dspace.library.uu.nl/handle/1874/382794
https://dspace.library.uu.nl/bitstream/handle/1874/382794/Haase%20%286545424%29%20thesis.pdf?sequence=2&isAllowed=yMasters thesisCognitive Psychologycore
63
2019Björn Jörges, Joan López-MolinerEarth-Gravity Congruent Motion Benefits Visual Gain For Parabolic Trajectories
There is evidence that humans rely on an earth gravity (9.81 m/s²) prior for a series of tasks involving perception and action, the reason being that gravity helps predict future positions of moving objects. Eye-movements in turn are partially guided by predictions about observed motion. Thus, the question arises whether knowledge about gravity is also used to guide eye-movements: If humans rely on a representation of earth gravity for the control of eye movements, earth-gravity-congruent motion should elicit improved visual pursuit. In a pre-registered experiment, we presented participants (n = 10) with parabolic motion governed by six different gravities (−1/0.7/0.85/1/1.15/1.3 g), two initial vertical velocities and two initial horizontal velocities in a 3D environment. Participants were instructed to follow the target with their eyes. We tracked their gaze and computed the visual gain (velocity of the eyes divided by velocity of the target) as proxy for the quality of pursuit. An LMM analysis with gravity condition as fixed effect and intercepts varying per subject showed that the gain was lower for −1 g than for 1 g (by −0.13, SE = 0.005). This model was significantly better than a null model without gravity as fixed effect (p < 0.001), supporting our hypothesis. A comparison of 1 g and the remaining gravity conditions revealed that 1.15 g (by 0.043, SE = 0.005) and 1.3 g (by 0.065, SE = 0.005) were associated with lower gains, while 0.7 g (by 0.054, SE = 0.005) and 0.85 g (by 0.029, SE = 0.005) were associated with higher gains. This model was again significantly better than a null model (p < 0.001), contradicting our hypothesis. Post-hoc analyses reveal that confounds in the 0.7/0.85/1/1.15/1.3 g condition may be responsible for these contradicting results. Despite these discrepancies, our data thus provide some support for the hypothesis that internalized knowledge about earth gravity guides eye movements.Psychology, Neuroscience
https://doi.org/10.1101/547497
https://www.biorxiv.org/content/biorxiv/early/2019/02/12/547497.full.pdfScientific Reports 9core
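Note: The analysis described above has two concrete ingredients: a pursuit gain (eye velocity divided by target velocity) and a linear mixed model with gravity condition as fixed effect and by-subject intercepts. The sketch below reproduces both in generic form on synthetic data using statsmodels; the column names, condition labels, and numbers are placeholders, not the study's data or exact model specification.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def visual_gain(eye_xy: np.ndarray, target_xy: np.ndarray, dt: float) -> np.ndarray:
    """Pursuit gain per sample: eye speed divided by target speed.
    eye_xy, target_xy: (N, 2) position traces sampled every dt seconds."""
    eye_speed = np.linalg.norm(np.diff(eye_xy, axis=0), axis=1) / dt
    target_speed = np.linalg.norm(np.diff(target_xy, axis=0), axis=1) / dt
    return eye_speed / target_speed          # illustrative; no zero-speed guard

# Hypothetical per-trial data frame: one mean gain per trial, with the
# gravity condition and subject id.
df = pd.DataFrame({
    "gain": np.random.normal(0.9, 0.1, 240),
    "gravity": np.tile(["-1g", "0.7g", "0.85g", "1g", "1.15g", "1.3g"], 40),
    "subject": np.repeat(np.arange(10), 24),
})
# Gravity as fixed effect, random intercept per subject.
model = smf.mixedlm("gain ~ C(gravity)", df, groups=df["subject"]).fit()
print(model.summary())
```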
64
2019Xi Wang, Andreas Ley, Sebastian Koch, David Lindlbauer, James Hays, Kenneth Holmqvist, Marc AlexaThe Mental Image Revealed by Gaze TrackingHumans involuntarily move their eyes when retrieving an image from memory. This motion is often similar to actually observing the image. We suggest to exploit this behavior as a new modality in human computer interaction, using the motion of the eyes as a descriptor of the image. Interaction requires the user's eyes to be tracked but no voluntary physical activity. We perform a controlled experiment and develop matching techniques using machine learning to investigate if images can be discriminated based on the gaze patterns recorded while users merely think about image. Our results indicate that image retrieval is possible with an accuracy significantly above chance. We also show that this result generalizes to images not used during training of the classifier and extends to uncontrolled settings in a realistic scenario.

HCI
https://doi.org/10.1145/3290605.3300839
http://cybertron.cg.tu-berlin.de/xiwang/files/mi.pdfCHI 2019, May 4–9, 2019, Glasgow, Scotland UKgaze pattern, mental imagery, eye trackingcore
65
2019Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, Andreas Bulling, Enrico RukzioA Design Space for Gaze Interaction on Head-Mounted DisplaysAugmented and virtual reality (AR/VR) has entered the mass market and, with it, eye tracking will soon follow as a core technology for next generation head-mounted displays (HMDs). In contrast to existing gaze interfaces, the 3D nature of AR and VR requires estimating a user's gaze in 3D. While first applications, such as foveated rendering, hint at the compelling potential of combining HMDs and gaze, a systematic analysis is missing. To fill this gap, we present the first design space for gaze interaction on HMDs. Our design space covers human depth perception and technical requirements in two dimensions aiming to identify challenges and opportunities for interaction design. As such, our design space provides a comprehensive overview and serves as an important guideline for researchers and practitioners working on gaze interaction on HMDs. We further demonstrate how our design space is used in practice by presenting two interactive applications: EyeHealth and XRay-Vision.HCI
https://dl.acm.org/doi/10.1145/3290605.3300855
https://www.perceptualui.org/publications/hirzle19_chi.pdfProceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12. 2019.design space, 3D gaze, gaze interaction, head-mounted displays, interaction design, augmented reality, virtual realityar
66
2019Julian Steil, Inken Hagestedt, Michael Xuelin Huang, Andreas BullingPrivacy-aware eye tracking using differential privacyWith eye tracking being increasingly integrated into virtual and augmented reality (VR/AR) head-mounted displays, preserving users' privacy is an ever more important, yet under-explored, topic in the eye tracking community. We report a large-scale online survey (N=124) on privacy aspects of eye tracking that provides the first comprehensive account of with whom, for which services, and to what extent users are willing to share their gaze data. Using these insights, we design a privacy-aware VR interface that uses differential privacy, which we evaluate on a new 20-participant dataset for two privacy sensitive tasks: We show that our method can prevent user re-identification and protect gender information while maintaining high performance for gaze-based document type classification. Our results highlight the privacy challenges particular to gaze data and demonstrate that differential privacy is a potential means to address them. Thus, this paper lays important foundations for future research on privacy-aware gaze interfaces.HCI, Privacy
https://arxiv.org/abs/1812.08000
https://arxiv.org/pdf/1812.08000.pdfProceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1-9. 2019.Online Survey; Data Sharing; Privacy Protection; Gaze Behaviour; Eye Movements; User Modelingcore
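Note: The abstract does not specify the paper's exact mechanism or parameters, but the standard way to release an aggregated gaze feature under differential privacy is to add Laplace noise scaled by sensitivity/epsilon. A minimal sketch, with hypothetical sensitivity and epsilon values:
```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy aggregate under epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise (generic Laplace mechanism)."""
    scale = sensitivity / epsilon
    return value + np.random.laplace(loc=0.0, scale=scale)

# Example: mean fixation duration (seconds) aggregated over one recording.
true_mean_fixation = 0.284
# Assume one individual's data can change the aggregate by at most 0.05 s.
noisy = laplace_mechanism(true_mean_fixation, sensitivity=0.05, epsilon=1.0)
print(noisy)
```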
67
2019André KlostermannEspecial skill vs. quiet eye duration in basketball free throw: evidence for the inhibition of competing task solutionsThe quiet eye (QE) is a gaze phenomenon that has been studied over more than two decades. However, the underlying mechanisms of the well-known expertise effect, namely, longer QE durations in experts when compared to less-skilled athletes remain unclear. Therefore, from a functional perspective, an inhibition hypothesis was proposed that explains long QE durations in experts with increased inhibition requirements over movement parametrisation. This hypothesis was tested by making use of the especial-skill effect in basketball free throw which refers to the observation of higher actual performance than would be predicted on the basis of performance at the nearby locations. In line with the expectations, from the distance of the free-throw line, higher actual than predicted shooting accuracy and longer actual than predicted QE duration were revealed. This suggests that when performing free throws prolonged QE durations are required to shield the optimal against alternative task solutions within the very dense sub-space of this especial skill. These findings suggest an inhibition function of long QE durations in expert athletes.Sports Performance
https://www.ncbi.nlm.nih.gov/pubmed/30724705
https://boris.unibe.ch/133047/1/Klostermann_Especial_postprint.pdfEuropean journal of sport science 19, no. 7 (2019): 964-971.Skill, Motor Control, Team Sport, Theorycore
68
2019Markus Wallmyr, Taufik Akbar Sitompul, Tobias Holstein, Rikard LindellEvaluating Mixed Reality Notifications to Support Excavator Operator AwarenessOperating heavy vehicles, for instance an excavator, requires a high level of attention to the operation done using the vehicle and awareness of the surroundings. Digital transformation in heavy vehicles aims to improve productivity and user experience, but it can also increase the operators mental load because of a higher demand of attention to instrumentation and controls, subsequently leading to reduced situation awareness. One way to mitigate this, is to display information within the operators’ field of view, which enhances information detectability through quick glances, using mixed reality interfaces. This work explores two types of mixed reality visualizations and compares them to a traditional display setup in a simulated excavator environment. We have utilized eye-tracking glasses to study users’ attention to the task, surrounding awareness, and interfaces, followed by a NASA-RTLX questionnaire to evaluate the users’ reported mental workload. The results indicate benefits for the mixed reality approaches, with lower workload ratings together with an improved rate in detection of presented information.HCI, Construction
https://link.springer.com/chapter/10.1007/978-3-030-29381-9_44
http://www.es.mdh.se/pdf_publications/5590.pdfIFIP Conference on Human-Computer Interaction, pp. 743-762. Springer, Cham, 2019.Mixed reality, Human-vehicle interaction, Situational awareness, Head-up display, Excavator, Heavy-vehicles.core
69
2019Adithya B., Pavan Kumar B. N., Young Ho Chai, Ashok Kumar PatilInspired by Human Eye: Vestibular Ocular Reflex Based Gimbal Camera Movement to Minimize Viewpoint ChangesHuman eyeballs move relative to the head, resulting in optimal changes in the viewpoint. We tested similar vestibular ocular reflex (VOR)-based movement on Zenmuse-X3 gimbal camera relative to pre-defined YAW movements of the DJI Matrice-100 unmanned aerial vehicle (UAV). Changes in viewpoint have various consequences for visual and graphical rendering. Therefore, this study investigated how to minimize these changes. OpenGL visualization was performed to simulate and measure viewpoint changes using the proposed VOR-based eyeball movement algorithm and compared with results of VOR based gimbal movement. The gimbal camera was setup to render images (scenes) on flat monitors. Positions of pre-fixed targets in the images were used to measure the viewpoint changes. The proposed approach could successfully control and significantly reduce the viewpoint changes and stabilize the image to improve visual tracking of targets on flat monitors. The proposed method can also be used to render real-time camera feed to a head-mounted display (HMD) in an ergonomically pleasing way.Robotics, HRI
https://www.mdpi.com/2073-8994/11/1/101
https://www.mdpi.com/2073-8994/11/1/101/pdfSymmetry 11, no. 1 (2019): 101.virtual environment; eye tracker; viewpoint; visual scene; foveal range; vestibular-ocular reflex; unmanned aerial vehicle; gimbal cameracore
70
2019Christos Fidas, Marios Belk, George Hadjidemetriou, Andreas PitsillidesInfluences of mixed reality and human cognition on picture passwords: An eye tracking studyRecent research revealed that individual cognitive differences affect visual behavior and task performance of picture passwords within conventional interaction realms such as desktops and tablets. Bearing in mind that mixed reality environments necessitate from end-users to perceive, process and comprehend visually-enriched content, this paper further investigates whether this new interaction realm amplifies existing observed effects of individual cognitive differences towards user interactions in picture passwords. For this purpose, we conducted a comparative eye tracking study (N = 50) in which users performed a picture password composition task within a conventional interaction context vs. a mixed reality context. For interpreting the derived results, we adopted an accredited human cognition theory that highlights cognitive differences in visual perception and search. Analysis of results revealed that new technology realms like mixed reality extend and, in some cases, amplify the effect of human cognitive differences towards users’ visual and interaction behavior in picture passwords. Findings can be of value for improving future implementations of picture passwords by considering human cognitive differences as a personalization factor for the design of user-adaptive graphical passwords in mixed reality.HCI, Privacy
https://link.springer.com/chapter/10.1007/978-3-030-29384-0_19
http://www.ghadji.info/assets/files/publications/INTERACT%202019.pdfIFIP Conference on Human-Computer Interaction, pp. 304-313. Springer, Cham, 2019.Picture Passwords; Human Cognition; Mixed Reality; Eye Tracking; Visual Behavior; Usability; Securityar
71
2019Benedikt V. Ehinger, Katharina Groß, Inga Ibs, Peter KönigA new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000Eye-tracking experiments rely heavily on good data quality of eye-trackers. Unfortunately, it is often the case that only the spatial accuracy and precision values are available from the manufacturers. These two values alone are not sufficient to serve as a benchmark for an eye-tracker: Eye-tracking quality deteriorates during an experimental session due to head movements, changing illumination or calibration decay. Additionally, different experimental paradigms require the analysis of different types of eye movements; for instance, smooth pursuit movements, blinks or microsaccades, which themselves cannot readily be evaluated by using spatial accuracy or precision alone. To obtain a more comprehensive description of properties, we developed an extensive eye-tracking test battery. In 10 different tasks, we evaluated eye-tracking related measures such as: the decay of accuracy, fixation durations, pupil dilation, smooth pursuit movement, microsaccade classification, blink classification, or the influence of head motion. For some measures, true theoretical values exist. For others, a relative comparison to a reference eye-tracker is needed. Therefore, we collected our gaze data simultaneously from a remote EyeLink 1000 eye-tracker as the reference and compared it with the mobile Pupil Labs glasses. As expected, the average spatial accuracy of 0.57° for the EyeLink 1000 eye-tracker was better than the 0.82° for the Pupil Labs glasses (N = 15). Furthermore, we classified less fixations and shorter saccade durations for the Pupil Labs glasses. Similarly, we found fewer microsaccades using the Pupil Labs glasses. The accuracy over time decayed only slightly for the EyeLink 1000, but strongly for the Pupil Labs glasses. Finally, we observed that the measured pupil diameters differed between eye-trackers on the individual subject level but not on the group level. To conclude, our eye-tracking test battery offers 10 tasks that allow us to benchmark the many parameters of interest in stereotypical eye-tracking situations and addresses a common source of confounds in measurement errors (e.g., yaw and roll head movements). All recorded eye-tracking data (including Pupil Labs’ eye videos), the stimulus code for the test battery, and the modular analysis pipeline are freely available (https://github.com/behinger/etcomp).Cognitive Science, Eye tracker testing
https://peerj.com/articles/7086/?utm_source=TrendMD&utm_campaign=PeerJ_TrendMD_1&utm_medium=TrendMD
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6625505/pdf/peerj-07-7086.pdfPeerJ 7 (2019): e7086.Pupil dilation, Smooth pursuit, Microsaccades, Blinks, Eye-tracker benchmark, Accuracy and precision, Head movements, EyeLink 1000, Pupil Labs glasses, Calibration decaycore
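Spatial accuracy and precision, the two benchmark values discussed in the entry above, are commonly computed as the mean angular offset from a fixation target and the root-mean-square of sample-to-sample angular distances. A minimal numpy sketch under those common definitions (not the authors' published pipeline):

```python
import numpy as np

def accuracy_deg(gaze_deg, target_deg):
    """Mean angular offset (deg) between gaze samples and a known target,
    with gaze and target given as (x, y) visual angles in degrees."""
    offsets = np.linalg.norm(np.asarray(gaze_deg) - np.asarray(target_deg), axis=1)
    return offsets.mean()

def precision_rms_deg(gaze_deg):
    """RMS of sample-to-sample angular distances (deg), a common precision measure."""
    diffs = np.diff(np.asarray(gaze_deg), axis=0)
    return np.sqrt((np.linalg.norm(diffs, axis=1) ** 2).mean())

# Hypothetical fixation data: gaze samples around a target at (5.0, 0.0) deg.
gaze = np.array([[5.2, 0.1], [5.1, -0.2], [4.9, 0.0], [5.3, 0.2]])
print(accuracy_deg(gaze, [5.0, 0.0]), precision_rms_deg(gaze))
```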
72
2019Niveta Ramkumar, Nadia Fereydooni, Orit Shaer, Andrew L. KunVisual behavior during engagement with tangible and virtual representations of archaeological artifactsIn this paper, we present results from a study of users' visual behavior while engaging with tangible and virtual representations of archaeological artifacts. We replicated and extended a recent study that introduced an augmented reality system implemented using HoloLens, for engaging with the artifacts. Our study goes beyond the original study to estimate the distribution of users' visual attention for both tangible and virtual representations of the artifacts. Our study confirmed the results of the original study in various aspects. Specifically, participants in both studies confirmed the immersive nature of the HoloLens condition and showed similar learning outcomes in terms of post-task open questions. Additionally, our findings indicate that users allocate their visual attention in similar ways when interacting with virtual and tangible learning material, in terms of total gaze duration, gaze on object duration, and object fixation duration.UI/UX, Archaeology, HCI
https://dl.acm.org/doi/10.1145/3321335.3324930
https://cs.wellesley.edu/~oshaer/PerDis19.pdfProceedings of the 8th ACM International Symposium on Pervasive Displays, pp. 1-7. 2019.Human-Centered Computing, Mixed/Augmented reality, Gesture input, Object recognition, Eye tracking, Object-based learningAR
73
2019Pranav Venuprasad, Tushal Dobhal, Anurag Paul, Tu NM Nguyen, Andrew Gilman, Pamela Cosman, Leanne ChukoskieCharacterizing joint attention behavior during real world interactions using automated object and gaze detectionJoint attention is an essential part of the development process of children, and impairments in joint attention are considered as one of the first symptoms of autism. In this paper, we develop a novel technique to characterize joint attention in real time, by studying the interaction of two human subjects with each other and with multiple objects present in the room. This is done by capturing the subjects' gaze through eye-tracking glasses and detecting their looks on predefined indicator objects. A deep learning network is trained and deployed to detect the objects in the field of vision of the subject by processing the video feed of the world view camera mounted on the eye-tracking glasses. The looking patterns of the subjects are determined and a real-time audio response is provided when a joint attention is detected, i.e., when their looks coincide. Our findings suggest a trade-off between the accuracy measure (Look Positive Predictive Value) and the latency of joint look detection for various system parameters. For more accurate joint look detection, the system has higher latency, and for faster detection, the detection accuracy goes down.Psychology, Computer Science, Computer Vision
https://dl.acm.org/doi/10.1145/3314111.3319843
http://code.ucsd.edu/pcosman/ETRA_paper.pdfProceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1-8. 2019.Joint Attention, Object Detection, Deep Learning, Computer Vision, Autism, Eye-Tracking, Gaze Behaviorcore
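The entry above flags a joint look when the two subjects' gazes land on the same object within a short time window. A minimal sketch of that coincidence test over per-frame object labels, assuming the object detector already yields one looked-at label (or None) per frame; all names and values are illustrative:

```python
def detect_joint_looks(labels_a, labels_b, fps=30, window_s=0.5):
    """Return frame indices where both subjects look at the same object
    within +/- window_s seconds of each other."""
    window = int(window_s * fps)
    joint_frames = []
    for t, obj in enumerate(labels_a):
        if obj is None:
            continue
        lo, hi = max(0, t - window), min(len(labels_b), t + window + 1)
        if any(other == obj for other in labels_b[lo:hi]):
            joint_frames.append(t)
    return joint_frames

# Hypothetical per-frame labels from the two eye trackers.
a = [None, "ball", "ball", "cup", None, "cup"]
b = ["ball", None, "ball", None, "cup", None]
print(detect_joint_looks(a, b, fps=2, window_s=1.0))  # frames where looks coincide
```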
74
2019Abiodun Brimmo Yusuf, Ah-Lian Kor, Hissam TawfikA Simulation Environment for investigating In-Flight Startle in General AviationLoss of control in-flight (LOC-I), precipitated by a loss of situational awareness and the presence of startled responses, has been identified as a leading cause of aviation-based fatalities in recent decades. This has led to significant effort toward improving safety records, particularly in the fields of flight crew training and in-flight support technologies that aid better decision-making, as well as for training the management of reactions to a startling occurrence. One way to achieve quality decision-making in the cockpit is by providing adequate cueing and response-activating mechanisms carefully designed to aid human information processing. Furthermore, these response performances, especially in the context of reactionary management of startle in-flight, could be honed through simulator-based training. This paper describes the simulation environment developed as well as its key characteristics which enable such a simulation experiment in the general aviation domain. The flight simulation platform, as an invaluable component in the endeavour to create a viable avenue for investigating unexpected error input, is also discussed, as well as some key elements of current methods driving research into startle-imparted loss of control.Transportation Safety, Aviation
https://www.researchgate.net/publication/334698822_A_Simulation_Environment_for_investigating_In-Flight_Startle_in_General_Aviation
https://www.researchgate.net/profile/Abiodun_Yusuf2/publication/334698822_A_Simulation_Environment_for_investigating_In-Flight_Startle_in_General_Aviation/links/5d6f9411a6fdcc9961acad0e/A-Simulation-Environment-for-investigating-In-Flight-Startle-in-General-Aviation.pdf2019 10th International Conference on Dependable Systems, Services and Technologies (DESSERT), pp. 180-185. IEEE, 2019.Flight Simulation, General Aviation Safety, Loss of Control, Eye Tracking, Task Performance, Human Factorscore
75
2019Lei Shi, Cosmin Copot, Steve VanlanduitWhat Are You Looking at? Detecting Human Intention in Gaze based Human-Robot InteractionIn gaze-based Human-Robot Interaction (HRI), it is important to determine the human intention for further interaction. The gaze intention is often modelled as fixation. However, when looking at an object, it is not natural and it is difficult to maintain the gaze fixating on one point for a long time. Saccades may happen while a human is still focusing on the object. The prediction of human intention will be lost during saccades. In addition, while the human intention is on the object, the gazes may be located outside of the object bounding box due to different noise sources, which would cause false negative predictions. In this work, we propose a novel approach to detect whether a human is focusing on an object in an HRI application. We determine the gaze intention by comparing the similarity between the hypothetical gazes on objects and the actual gazes. We use Earth Mover's Distance (EMD) to measure the similarity and 1 Nearest Neighbour to classify which object a human is looking at. Our experimental results indicate that, compared to fixation, our method can successfully determine the human intention even during saccadic eye movements and increases the classification accuracy with noisy gaze data. We also demonstrate that, in the interaction with a robot, the proposed approach can obtain a high accuracy of object selection within successful predictions.HRI, Computer Vision
https://arxiv.org/abs/1909.07953
https://arxiv.org/pdf/1909.07953arXiv preprint arXiv:1909.07953 (2019).Human-Robot Interaction, fixation, saccade, gaze, EMDcore
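The entry above compares hypothetical on-object gazes with the actual gazes using Earth Mover's Distance and a 1-Nearest-Neighbour decision. A simplified sketch, assuming that summing per-axis 1D Wasserstein distances is an acceptable stand-in for a full 2D EMD (the paper's exact formulation may differ):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def gaze_object_distance(gaze_xy, hypo_xy):
    """Approximate EMD between actual gaze samples and hypothetical on-object
    gaze samples by summing the 1D Wasserstein distances of the x and y axes."""
    gaze_xy, hypo_xy = np.asarray(gaze_xy), np.asarray(hypo_xy)
    return (wasserstein_distance(gaze_xy[:, 0], hypo_xy[:, 0])
            + wasserstein_distance(gaze_xy[:, 1], hypo_xy[:, 1]))

def predict_attended_object(gaze_xy, hypothetical_gazes):
    """1-nearest-neighbour decision: pick the object whose hypothetical gaze
    distribution is closest to the observed gaze."""
    return min(hypothetical_gazes,
               key=lambda name: gaze_object_distance(gaze_xy, hypothetical_gazes[name]))

# Hypothetical example with two candidate objects (pixel coordinates).
rng = np.random.default_rng(0)
objects = {"mug": rng.normal([200, 150], 10, (50, 2)),
           "phone": rng.normal([400, 300], 10, (50, 2))}
observed = rng.normal([205, 148], 25, (60, 2))  # noisy gaze near the mug
print(predict_attended_object(observed, objects))  # -> "mug"
```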
76
2019Peipei LiuStudy of car-bicycle safety at signalized intersections from multi-aspectsCycling is increasing in popularity in urban areas due to its individual, social and environmental benefits. However, cyclists are among the most vulnerable road users. Especially at intersections, many bicycles collide with passenger cars despite the control of traffic signals. This thesis focuses on car-bicycle safety at signalized intersections. Various factors contribute to the occurrence of road accidents, namely factors from the human, vehicle and environment aspects. The aim of this thesis is to explore potential measures to reduce car-bicycle accidents at signalized intersections from multiple aspects.

In accident prevention, one must know the world of accidents. Given that few accident analyses have specifically focused on car-bicycle accidents at signalized intersections, this thesis analyzes two accident databases. The analysis deals with the characteristics of car-bicycle accidents at signalized intersections. It reveals possible accident scenarios, the frequency of each scenario and common causes of accidents.

Naturalistic Driving Observation (NDO) is a fast-growing method for traffic safety studies. Despite many NDO studies, few have particularly investigated interactions between car drivers and bicyclists at signalized intersections. This thesis carries out a Quasi-NDO study in order to investigate the interactions between car drivers and cyclists at signalized intersections from the perspective of car drivers. A car is instrumented with various sensors. The instrumented car is used to collect data on driving behaviors and the environment, while twenty-two participants separately drive this car in real traffic. The collected driving behaviors include dynamic driving data, drivers’ body movements and eye movements. A self-programmed Graphical User Interface and a video annotation tool are used for data analysis in order to detect car-bicycle conflicts. In addition, the collected eye movement data is analyzed in the right-hook scenario, where the car turns right and the bicycle goes straight through. With 146 detected right-hook events, the analysis reveals the bicycle-scanning strategies of car drivers. Based on these bicycle-scanning strategies, potential suggestions are proposed to mitigate the risk of this scenario.

Road users’ perceived risk influences individual behaviors, and therefore it plays an important role in traffic safety. Through an online survey, the perceived risk of car drivers and cyclists (regardless of the consequences of a crash) is investigated for seventeen common car-bicycle scenarios at signalized intersections. A comparison between the subjective perceived risk and the objective risk shows that a discrepancy exists among both car drivers and cyclists. Moreover, it shows that cyclists tend to perceive less risk than car drivers do. The implications of these results are vital for the improvement of car-bicycle safety at signalized intersections.

Safety effects of bicycle facilities are controversial, especially at intersections. Using the Negative Binomial model, safety effects of bicycle facilities on car-bicycle crash risk at signalized intersections are estimated. The effects of other intersection factors are simultaneously considered. The estimation results show that bicycle lanes have positive safety effects, while bicycle paths have negative safety effects.
Transportation Safety
https://depositonce.tu-berlin.de/handle/11303/8720
https://depositonce.tu-berlin.de/bitstream/11303/8720/4/liu_peipei.pdfPHD Dissertation 2019Transportation Safety, Cycling, Automotivecore
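The thesis above estimates safety effects of bicycle facilities on crash counts with a Negative Binomial model. A minimal sketch of such a crash-count regression using statsmodels, with entirely hypothetical intersection-level data; none of the variables or coefficients come from the thesis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical intersection-level data: crash counts plus facility indicators.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "bicycle_lane": rng.integers(0, 2, n),      # 1 if a bicycle lane is present
    "bicycle_path": rng.integers(0, 2, n),      # 1 if a separated bicycle path is present
    "log_bike_volume": np.log(rng.uniform(100, 2000, n)),  # exposure proxy
})
mu = np.exp(-2.0 - 0.3 * df.bicycle_lane + 0.2 * df.bicycle_path + 0.5 * df.log_bike_volume)
df["crashes"] = rng.poisson(mu)  # stand-in outcome, for illustration only

X = sm.add_constant(df[["bicycle_lane", "bicycle_path", "log_bike_volume"]])
model = sm.GLM(df["crashes"], X, family=sm.families.NegativeBinomial()).fit()
print(model.summary())  # negative coefficients indicate a crash-reducing association
```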
77
2019Callum Mole, Oscar Giles, Natasha Merat, Richard Romano, Gustav Markkula, Richard WilkieWhere you look during automation influences where you steer after take-overWhen driving a vehicle, gaze direction (where the driver is looking) is tightly coupled with steering actions. For example, previous research has shown that gaze direction directly influences steering behavior. In the context of transitions of control from automated to manual driving, a new question arises: Does gaze direction before a transition influence the manual steering after it? Here we addressed this question in a simplified simulated driving scenario, for maximum experimental control. Participants (N=26) were driven around a constant curvature bend by an automated vehicle, which gradually drifted toward the outside of the bend. An auditory tone cued manual take-over of steering control and participants were required to correct the drift and return to the lane center. Gaze direction was controlled using an onscreen fixation point with a position that varied from trial to trial horizontally and/or vertically. The results showed that steering during manual control was systematically biased by gaze direction during the automated period, but notably in the opposite direction to what might have been expected based on previous research. Whilst further research is needed to understand the causal mechanisms, these findings do suggest that where a driver looks during the seconds preceding a transition to manual control may be critical in determining whether the subsequent steering actions are successful.Transportation Safety
http://eprints.whiterose.ac.uk/144671/
http://eprints.whiterose.ac.uk/144671/1/Where_you_look_DA19_Revision.pdfProceedings of the 10th International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design (2019 Driving Assessment Conference). University of Iowa, 2019.core
78
2019Nuno Ferreira Duarte, Mirko Raković, José Santos-VictorBiologically Inspired Controller of Human Action Behaviour for a Humanoid Robot in a Dyadic ScenarioHumans have a particular way of moving their body when interacting with the environment and with other humans. The movement of the body is commonly known and expresses the intention of the action. This expression of intent through movement constitutes a non-verbal cue, and from such cues it is possible to understand and anticipate the actions of humans. In robotics, humans need to understand the intention of the robot in order to efficiently and safely interact in a dyadic activity. If robots could possess the same non-verbal cues when executing the same actions, then humans would be capable of interacting with robots the way they interact with other humans. We propose a robotic controller capable of executing actions of moving objects on a table (placing) and handing over objects to humans (giving) with human-like behaviour. Our first contribution is to model the behaviour of the non-verbal cues of a human interacting with other humans while performing placing and giving actions. From the recordings of the motion of the human, we build a computational model of the trajectory of the head, torso, and arm for the different actions. Additionally, the human motion model was consolidated with the integration of a previously developed human gaze behaviour model. As a second contribution, we embedded this model in the controller of an iCub humanoid robot, compared the generated trajectories to the real human model, and additionally compared them with the existing minimum-jerk controller for the iCub (iKin). Our results show that it is possible to model the complete upper body human behaviour during placing and giving interactions, and the generated trajectories from the model give a better approximation of the human-like behaviour in a humanoid robot than the existing inverse kinematics solver. From this work, we can conclude that our controller is capable of achieving a human-like behaviour for the robot, which is a step towards robots capable of understanding and being understood by humans.Robotics, HRI
https://ieeexplore.ieee.org/abstract/document/8861629
http://vislab.isr.ist.utl.pt/wp-content/uploads/2020/01/nduarte-eurocon2019.pdfIEEE EUROCON 2019-18th International Conference on Smart Technologies, pp. 1-6. IEEE, 2019.Human Motion, Humanoid Robots, Human-like Behaviour, Motion Controllercore
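The baseline against which the entry above compares its controller is a minimum-jerk trajectory. The standard minimum-jerk position profile between two points is a quintic polynomial in normalized time; a small illustrative sketch (not the iKin implementation):

```python
import numpy as np

def minimum_jerk(x0, xf, duration, t):
    """Minimum-jerk trajectory from x0 to xf over `duration` seconds, evaluated
    at time(s) t: x(t) = x0 + (xf - x0) * (10 tau^3 - 15 tau^4 + 6 tau^5)."""
    tau = np.clip(np.asarray(t, dtype=float) / duration, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Example: a 1-second reach of the end-effector from 0.0 m to 0.3 m.
ts = np.linspace(0.0, 1.0, 5)
print(minimum_jerk(0.0, 0.3, 1.0, ts))  # smooth, bell-shaped velocity profile
```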
79
2019Stefanie MuellerInferring target locations from gaze data: A smartphone studyAlthough smartphones are widely used in everyday life, studies of viewing behavior mainly employ desktop computers. This study examines whether closely spaced target locations on a smartphone can be decoded from gaze. Subjects wore a head-mounted eye tracker and fixated a target that successively appeared at 30 positions spaced by 10.0 × 9.0 mm. A "hand-held" (phone in subject's hand) and a "mounted" (phone on surface) condition were conducted. Linear-mixed-models were fitted to examine whether gaze differed between targets. T-tests on root-mean-squared errors were calculated to evaluate the deviation between gaze and targets. To decode target positions from gaze data we trained a classifier and assessed its performance for every subject/condition. While gaze positions differed between targets (main effect "target"), gaze deviated from the real positions. The classifier's performance for the 30 locations ranged considerably between subjects ("mounted": 30 to 93 % accuracy; "hand-held": 8 to 100 % accuracy).HCI
https://dl.acm.org/doi/10.1145/3314111.3319847
https://psycharchives.org/bitstream/20.500.12034/2115/1/ETRA_Mueller_2019.pdfProceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1-4. 2019.fixations, mobile devices, accuracy, gaze positionscore
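The entry above decodes which of the 30 closely spaced targets was fixated from gaze data. A minimal sketch of such a decoding step with scikit-learn, using hypothetical per-trial gaze features (mean x/y position); the study's actual features and classifier are not specified here:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: mean gaze position per trial and target labels 0..29,
# for a 6 x 5 grid of targets spaced 10.0 x 9.0 mm.
rng = np.random.default_rng(0)
targets = np.stack(np.meshgrid(np.arange(6) * 10.0, np.arange(5) * 9.0), -1).reshape(-1, 2)
labels = rng.integers(0, 30, 600)
gaze = targets[labels] + rng.normal(0, 3.0, (600, 2))  # noisy fixations on the targets

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
scores = cross_val_score(clf, gaze, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")  # chance level would be 1/30
```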
80
2019Caporusso, Nicholas, Angela Walters, Meng Ding, Devon Patchin, Noah Vaughn, Daniel Jachetta, and Spencer RomeiserComparative User Experience Analysis of Pervasive Wearable TechnologyAlthough the growing market of wearable devices primarily consists of smartwatches, fitness bands, and connected gadgets, its long tail includes a variety of diverse technologies based on novel types of input and interaction paradigms, such as gaze, brain signals, and gestures. As the offer of innovative wearable devices will increase, users will be presented with more sophisticated alternatives: among the several factors that influence product adoption, perceived user experience has a significant role. In this paper, we focus on human factors dynamics involved in the pre- and post-adoption phase, that is, before and after customers buy or use products. Specifically, objective of our research is to evaluate aspects that influence the perceived value of particularly innovative products and lead to purchasing them. To this end, we present the results of a pilot study that compared performance expectancy, effort expectancy, social influence, and facilitating conditions, in the pre- and post-adoption stages of three types of wearable technology, i.e., brain-computer interface, gesture controllers, and eye-tracking systems.Human Factors, Eye tracker testing
https://www.researchgate.net/publication/333784560_Comparative_User_Experience_Analysis_of_Pervasive_Wearable_Technology
https://www.researchgate.net/profile/Nicholas_Caporusso3/publication/333784560_Comparative_User_Experience_Analysis_of_Pervasive_Wearable_Technology/links/5d41f2a3a6fdcc370a713596/Comparative-User-Experience-Analysis-of-Pervasive-Wearable-Technology.pdfInternational Conference on Applied Human Factors and Ergonomics, pp. 3-13. Springer, Cham, 2019.Technology Acceptance Model, Unified Theory of Acceptance and Use of Technology, Performance expectancy, Effort expectancy, Social influence, Eye tracking, Gaze tracking, Gesture controllers, Brain-Computer Interfacecore
81
2019Maike Schindler and Achim J. LilienthalStudents’ Creative Process in Mathematics: Insights from Eye-Tracking-Stimulated Recall Interview on Students’ Work on Multiple Solution TasksStudents’ creative process in mathematics is increasingly gaining significance in mathematics education research. Researchers often use Multiple Solution Tasks (MSTs) to foster and evaluate students’ mathematical creativity. Yet, research so far has predominantly taken a product view and focused on solutions rather than the process leading to creative insights. It remains unclear what students’ process of solving MSTs looks like, and whether existing models describing (creative) problem solving can capture this process adequately. This article presents an explorative, qualitative case study, which investigates the creative process of a school student, David. Using eye-tracking technology and a stimulated recall interview, we trace David’s creative process. Our findings indicate what phases his creative process in the MST involves, how new ideas emerge, and in particular where illumination is situated in this process. Our case study illustrates that neither existing models of the creative process nor models of problem solving capture David’s creative process fully, indicating the need to partially rethink students’ creative process in MSTs.Learning Design and Technology, Mathematics
https://link.springer.com/article/10.1007/s10763-019-10033-0
https://link.springer.com/article/10.1007/s10763-019-10033-0International Journal of Science and Mathematics Education (2019): 1-22.core
82
2019Michael Haslgrübler, Benedikt Gollan, Christian Thomay, Alois Ferscha, Josef HeftbergerTowards skill recognition using eye-hand coordination in industrial productionCompanies are re-focusing and making use of human labor [4, 21] in order to create individualized lot-size-1 products and not produce the exact same mass product again and again. While human workers can produce with at least the same quality as machines do, they are not as consistent, so it is better to combine the strengths of both humans and machines [14]. In this work, we investigate how humans behave in relation to task-required skill levels. To do so, we investigate hand-eye coordination on precision tasks and its relation to fine and gross motor skills in an unconstrained industrial setting. This setting consists of assembly processes of up to 22 tasks for two variants of a high-quality product. We establish that there is a high correlation between the expected task-required skill level and the captured hand-eye coordination of expert factory workers, and that hand-eye coordination can be used to distinguish between fine and gross motor skills. In addition, we provide insights into how this can be exploited in future work.Professional Performance
https://www.researchgate.net/publication/333366265_Towards_skill_recognition_using_eye-hand_coordination_in_industrial_production
https://www.researchgate.net/profile/Michael_Haslgruebler/publication/333366265_Towards_skill_recognition_using_eye-hand_coordination_in_industrial_production/links/5cfa2ed4a6fdccd1308851f9/Towards-skill-recognition-using-eye-hand-coordination-in-industrial-production.pdfProceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, pp. 11-20. 2019.Skill Recognition; Eye tracking; learning; wearable and pervasive computingcore
83
2019Eduardo Machado, Ivan Carrillo, David Saldana, Liming ChenAn Assistive Augmented Reality-based Smartglasses Solution for Individuals with Autism Spectrum DisorderAcquiring daily living skills can be difficult for children and adults with autism spectrum disorders (ASD). Increasing evidence indicates that classic occupational intervention approaches such as Discrete Trial Teaching (DTT) tend to be boring and frustrating for individuals with ASD. As a consequence, they spend most of the time off task and find it difficult to sustain their selective attention. Moreover, in-person interventions are both costly and difficult to access. Evidence-based research shows that the use of augmented reality (AR) strengthens and attracts the attention of individuals with ASD, enhancing their engagement and task performance. However, despite the benefits, the use of AR as an assistive technology by this segment of the population still shows low rates of adoption. Platforms such as smartphones and tablets, used to run AR technologies, provoke a head-down posture that decreases users' awareness of the physical environment, putting them at risk of injury. Moreover, users are forced to keep their hands occupied, and most existing applications are not personalized to different users' needs. This paper introduces a conceptual framework for developing a real-time, personalized, assistive AR-based smartglasses system. The solution aims to address issues related to in-person occupational interventions, i.e., the constant need for professional supervision during intervention as well as limited intervention duration and frequency. In addition, we also target issues related to classic AR-based platforms, i.e., head-down postures and task-specific design.HCI, Medical
https://www.researchgate.net/publication/337038793_An_Assistive_Augmented_Reality-based_Smartglasses_Solution_for_Individuals_with_Autism_Spectrum_Disorder
https://www.researchgate.net/profile/Eduardo_Machado19/publication/337038793_An_Assistive_Augmented_Reality-based_Smartglasses_Solution_for_Individuals_with_Autism_Spectrum_Disorder/links/5e3069daa6fdccd96573248c/An-Assistive-Augmented-Reality-based-Smartglasses-Solution-for-Individuals-with-Autism-Spectrum-Disorder.pdf2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), pp. 245-249. IEEE, 2019.Autism, Augmented Reality (AR)core
84
2019Kai Dierkes, Moritz Kassner, Andreas BullingA fast approach to refraction-aware eye-model fitting and gaze predictionBy temporally integrating information about pupil contours extracted from eye images, model-based methods for glint-free gaze estimation can mitigate pupil detection noise. However, current approaches require time-consuming iterative solving of a nonlinear minimization problem to estimate key parameters, such as eyeball position. Based on the method presented by [Swirski and Dodgson 2013], we propose a novel approach to glint-free 3D eye-model fitting and gaze prediction using a single near-eye camera. By recasting model optimization as a least-squares intersection of lines, we make it amenable to a fast non-iterative solution. We further present a method for estimating deterministic refraction-correction functions from synthetic eye images and validate them on both synthetic and real eye images. We demonstrate the robustness of our method in the presence of pupil detection noise and show the benefit of temporal integration of pupil contour information on eyeball position and gaze estimation accuracy.Eye tracking Algorithms
https://www.researchgate.net/publication/333490770_A_fast_approach_to_refraction-aware_eye-model_fitting_and_gaze_prediction
https://perceptualui.org/publications/dierkes19_etra.pdfProceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1-9. 2019.Eye tracking, refraction, 3D eye model, pupil detection, contour-based, glint-freecore
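The key step the entry above describes is recasting eyeball-position estimation as a least-squares intersection of lines, where each pupil observation defines a 3D line. A minimal numpy sketch of that generic least-squares intersection (not the authors' full refraction-aware pipeline):

```python
import numpy as np

def least_squares_line_intersection(origins, directions):
    """Point minimizing the summed squared distances to a set of 3D lines,
    each given by an origin o_i and a unit direction d_i.
    Solves  sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i  for p."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Hypothetical example: three lines that pass through the point (1, 2, 3).
origins = np.array([[1.0, 2.0, 0.0], [1.0, 0.0, 3.0], [0.0, 2.0, 3.0]])
directions = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
print(least_squares_line_intersection(origins, directions))  # -> approx [1. 2. 3.]
```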
85
2019Noah J. Steinberg, Alexander A. Brown, Luis F. SchettinoTarget capture strategy selection in a simulated marksmanship taskThis paper examines how individuals track targets that move in relatively unpredictable trajectories. Gaze and behavioural data were captured as twenty two participants learned a simulated competitive marksmanship task known colloquially as the Death Star over six training days. Participants spontaneously selected one of two consistent target-tracking strategies with approximately equal probability. Participants employed either chasing behaviour, in which gaze follows a target’s trajectory before a shot, or ambushing behaviour, wherein gaze anticipates the trajectory and the participant intercepts a moving target predictively. All participants improved in task performance measures (completion time and number of shots), but did so at the expense of accuracy in missed shot attempts. Surprisingly, neither behavioural strategy offered a significant advantage in task performance measures, indicating that either may be equally effective in tackling a hand-eye coordination task with complex target motion such as the Death Star.Psychology, Sports Performance, Professional Performance
https://www.nature.com/articles/s41598-019-50551-z
https://www.nature.com/articles/s41598-019-50551-zScientific Reports 9, no. 1 (2019): 1-11.Marksmanshipcore
86
2019Isayas Berhe AdhanomEye Gaze Based Hands-Free Navigation in Virtual EnvironmentsEye tracking has shown great potential in supporting interaction tasks in VR, such as selection, manipulation and navigation. Previous studies have shown that eye gaze based interaction performs poorly for selection and manipulation tasks due to the low accuracy of eye gaze data. Navigation, however, requires less accuracy than selection and manipulation. In this thesis, a hands-free virtual navigation technique that utilizes eye gaze data is proposed. The technique allows users to use eye gaze for virtual navigation while leaving their hands free for other more important interaction tasks. The proposed system requires minimal physical interaction; therefore, it could also be beneficial to VR users with severe motor disabilities. A user study with 11 participants compared the proposed technique to head tilt based navigation using an obstacle avoidance navigation task. The results of the study indicate that eye gaze and head tilt have comparable performance, which is promising in comparison to previous studies. This shows that the proposed eye gaze based virtual navigation technique may be a viable navigation technique in VR.Computer Science, UI/UX
https://scholarworks.unr.edu/handle/11714/5702
http://scholarworks.unr.edu:8080/bitstream/handle/11714/5702/Adhanom_unr_0139M_12810.pdf?sequence=1&isAllowed=yPHD Dissertation 2019Virtual Reality (VR), navigation, hands-free navigationvr
87
2019Daniel Backhaus, Ralf Engbert, Lars Oliver Martin Rothkegel, Hans Arne TrukenbrodTask-dependence in scene perception: Head unrestrained viewing using mobile eye-trackingReal-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data was transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.Psychology
https://arxiv.org/abs/1911.06085
https://arxiv.org/pdf/1911.06085arXiv preprint arXiv:1911.06085 (2019).scene viewing, real-world scenarios, mobile eye-tracking, task influence, central fixation biascore
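The central processing step in the entry above is transforming raw gaze from head-centered (scene-camera) coordinates into image-centered coordinates on the projection screen. Assuming the four screen corners have already been located in each scene-camera frame (e.g., via markers), a minimal OpenCV sketch of that mapping; names and values are illustrative:

```python
import numpy as np
import cv2

def gaze_to_image_coords(gaze_px, screen_corners_px, image_size=(1920, 1080)):
    """Map a gaze point from scene-camera pixels to image pixels using the
    homography defined by the four detected screen corners (TL, TR, BR, BL)."""
    w, h = image_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(np.float32(screen_corners_px), dst)
    pt = np.float32([[gaze_px]])                      # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(pt, H)[0, 0]

# Hypothetical frame: the screen appears as a slightly skewed quadrilateral.
corners = [(400, 200), (1500, 230), (1480, 850), (420, 830)]
print(gaze_to_image_coords((950, 520), corners))  # gaze in image coordinates
```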
88
2019Lars Lischke, Valentin Schwind, Robin Schweigert, Paweł W. Woźniak, Niels HenzeUnderstanding pointing for workspace tasks on large high-resolution displaysNavigating on large high-resolution displays (LHRDs) using devices built for traditional desktop computers can be strenuous and negatively impact user experience. As LHRDs transition to everyday use, new user-friendly interaction techniques need to be designed to capitalise on the potential offered by the abundant screen space on LHRDs. We conducted a study which compared mouse pointing and eye-tracker assisted pointing (MAGIC pointing) on LHRDs. In a controlled experiment with 35 participants, we investigated user performance in a one-dimensional pointing task and a map-based search task. We determined that MAGIC pointing had a lower throughput, but participants had the perception of higher performance. Our work contributes insights for the design of pointing techniques for LHRDs. The results indicate that the choice of technique is scenario-dependent which contrasts with desktop computers.UI/UX, Computer Science, Navigation
https://dl.acm.org/doi/10.1145/3365610.3365636
http://lars-lischke.de/wp-content/uploads/pub/lischke2019MAGIC.pdfProceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, pp. 1-9. 2019.Large High-Resolution Displays, Pointing, Eye-Tracking, MAGIC pointing.core
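Throughput, the measure on which MAGIC pointing scored lower in the entry above, is conventionally computed from Fitts'-law style trials as the effective index of difficulty divided by movement time. A minimal sketch of that conventional computation (not the authors' analysis code):

```python
import numpy as np

def fitts_throughput(amplitudes, endpoint_errors, movement_times):
    """Throughput (bits/s) = mean of ID_e / MT, with the effective index of
    difficulty ID_e = log2(A / W_e + 1) and W_e = 4.133 * SD of endpoint error."""
    amplitudes = np.asarray(amplitudes, dtype=float)     # movement amplitude (px)
    errors = np.asarray(endpoint_errors, dtype=float)    # signed deviation from target centre (px)
    mts = np.asarray(movement_times, dtype=float)        # movement time (s)
    w_e = 4.133 * errors.std(ddof=1)                     # effective target width
    id_e = np.log2(amplitudes / w_e + 1.0)
    return (id_e / mts).mean()

# Hypothetical 1D pointing trials.
print(fitts_throughput([512, 512, 256, 256], [4.0, -6.0, 2.5, -3.0], [0.9, 1.0, 0.7, 0.75]))
```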
89
2018Yinheng Zhu, Wanli Chen, Xun Zhan, Zonglin GuoHead Mounted Pupil Tracking Using Convolutional Neural NetworkPupil tracking is an important branch of object tracking which requires high precision. We investigate head-mounted pupil tracking, which is often more convenient and precise than remote pupil tracking, but also more challenging. When pupil tracking suffers from noise like bad illumination, detection precision dramatically decreases. Due to the appearance of head-mounted recording devices and public benchmark image datasets, head-mounted tracking algorithms have become easier to design and evaluate. In this paper, we propose a robust head-mounted pupil detection algorithm which uses a Convolutional Neural Network (CNN) to combine different features of the pupil. Here we consider three features of the pupil. Firstly, we use three pupil feature-based algorithms to find the pupil center independently. Secondly, we use a CNN to evaluate the quality of each result. Finally, we select the best result as output. The experimental results show that our proposed algorithm performs better than the present state of the art.Computer Science, Eye tracking Algorithms
https://www.researchgate.net/publication/324887392_Head_Mounted_Pupil_Tracking_Using_Convolutional_Neural_Network
https://www.researchgate.net/publication/324887392_Head_Mounted_Pupil_Tracking_Using_Convolutional_Neural_Network/fulltext/5ae932a8aca2725dabb51f2f/Head-Mounted-Pupil-Tracking-Using-Convolutional-Neural-Network.pdfeprint arXiv:1805.00311core
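The approach in the entry above runs three feature-based pupil detectors and uses a small CNN to score the quality of each candidate, keeping the best one. A minimal PyTorch sketch of such a quality-scoring network over eye-image patches centred on each candidate; the architecture and sizes are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class PupilQualityNet(nn.Module):
    """Scores how plausible a candidate pupil centre is, given a grayscale
    patch cropped around that candidate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, patch):              # patch: (N, 1, 64, 64)
        return self.head(self.features(patch)).squeeze(-1)  # one quality score per patch

# Pick the best of three candidate centres by scoring their patches
# (weights are untrained here; in practice the network is trained on labelled detections).
net = PupilQualityNet().eval()
patches = torch.rand(3, 1, 64, 64)         # hypothetical crops around 3 candidates
with torch.no_grad():
    best = torch.argmax(net(patches)).item()
print(f"selected candidate: {best}")
```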
90
2018Sebastiaan Mathôt, Jasper Fabius, Elle Van Heusden, Stefan Van der StigchelSafe and sensible preprocessing and baseline correction of pupil-size dataMeasurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size − baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).Psychology, Pupillometry
https://link.springer.com/article/10.3758/s13428-017-1007-2
https://link.springer.com/content/pdf/10.3758%2Fs13428-017-1007-2.pdfBehavior Research Methods (2018) 50:94-106Pupillometry, Pupil size, Baseline correction, Research methodscore
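The recommendations in the entry above translate directly into a few lines of preprocessing. A minimal numpy sketch of subtractive (recommended) versus divisive baseline correction, including the suggested exclusion of trials with unrealistically small baselines; the threshold value is illustrative:

```python
import numpy as np

def baseline_correct(trials, baseline_window, method="subtractive", min_baseline=1.5):
    """Baseline-correct pupil traces (n_trials x n_samples).
    Subtractive: size - baseline; divisive: size / baseline.
    Trials whose baseline is implausibly small (blinks, data loss) are dropped."""
    trials = np.asarray(trials, dtype=float)
    start, stop = baseline_window
    baseline = np.nanmedian(trials[:, start:stop], axis=1)
    keep = baseline >= min_baseline                    # e.g., 1.5 mm as a plausibility bound
    kept = trials[keep]
    if method == "subtractive":
        return kept - baseline[keep, None], keep
    return kept / baseline[keep, None], keep

# Hypothetical traces in mm; the second trial has a blink-distorted baseline.
traces = np.array([[3.0, 3.1, 3.3, 3.6], [0.2, 0.3, 3.2, 3.5]])
corrected, keep = baseline_correct(traces, baseline_window=(0, 2))
print(corrected, keep)
```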
91
2018Roser Cañigueral, Antonia F Hamilton, Jamie A WardDon't Look at Me, I'm Wearing an Eyetracker!Looking is a two-way process: we use our eyes to perceive the world around us, but we also use our eyes to signal to others. Eye contact in particular reveals much about our social interactions, and as such can be a rich source of information for context-aware wearable applications. But when designing these applications, it is useful to understand the effects that the head-worn eye-trackers might have on our looking behavior. Previous studies have shown that we moderate our gaze when we know our eyes are being tracked, but what happens to our gaze when we see others wearing eye trackers? Using gaze recordings from 30 dyads, we investigate what happens to a person's looking behavior when the person with whom they are speaking is also wearing an eye-tracker. In the preliminary findings reported here, we show that people tend to look less to the eyes of people who are wearing a tracker, than they do to the eyes of those who are not. We discuss possible reasons for this and suggest future directions of study.Neuroscience
https://www.researchgate.net/publication/328681758_Don't_Look_at_Me_I'm_Wearing_an_Eyetracker/references
https://www.researchgate.net/profile/Roser_Canigueral/publication/328681758_Don't_Look_at_Me_I'm_Wearing_an_Eyetracker/links/5bdefd2c4585150b2b9e3c21/Dont-Look-at-Me-Im-Wearing-an-Eyetracker.pdf?_sg%5B0%5D=CTKEXMltkckqgNwTWDrxi7uzQZEZeivGCZpysX8cHTnax3yOCG99Cz9bbxmX5Dlp5FPf0L1IkQ2h5h-B6I0FkQ.HLJEToAiEg41evAotd6BGlRB8BfH9CYCxt0_r_HOvn07dxgidFaO-lSTIh8oY7kbfghs9TElCFvha3ixpe4HBA&_sg%5B1%5D=LYrtOMrGwXVn1QEv8U0oKrAOYjedB0dB-aqsTg6E8rHjshxGAtv0Ng4yjgEV1sZy95L7bl59RsZmvwZZXWuKFYFprbfUfh0dLwMctH_1dEYT.HLJEToAiEg41evAotd6BGlRB8BfH9CYCxt0_r_HOvn07dxgidFaO-lSTIh8oY7kbfghs9TElCFvha3ixpe4HBA&_iepl=UbiComp/ISWC'18 Adjunct, October 8–12, 2018, Singapore, SingaporeEye tracking, interaction, gaze contingency, social
behavior, eye-based computing, wearablescore
92
2018Tobias Fischer, Hyung Jin Chang, and Yiannis DemirisRT-GENE: Real-Time Eye Gaze Estimation in Natural EnvironmentsIn this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eyetracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower resolution images.Computer Vision, Eye tracking Algorithms
https://link.springer.com/chapter/10.1007/978-3-030-01249-6_21
http://openaccess.thecvf.com/content_ECCV_2018/papers/Tobias_Fischer_RT-GENE_Real-Time_Eye_ECCV_2018_paper.pdfECCV 2018Gaze estimation, Gaze dataset, Convolutional Neural Network, Semantic inpainting, Eyetracking glassescore
93
2018Stelling-Konczak, A., Vlakveld, W.P., Van Gent, P., Commandeur, J.J.F., Van Wee, G.P., Hagenzieker, M.
A study in real traffic examining glance behaviour of teenage cyclists when listening to music: Results and ethical considerationsListening to music while cycling impairs cyclists’ auditory perception and may decrease their awareness of approaching vehicles. If the impaired auditory perception is not compensated by the cyclist himself or other road users involved, crashes may occur. The first aim of this study was to investigate in real traffic whether teenage cyclists (aged 16–18) compensate for listening to music by increasing their visual performance. Research in real traffic may pose a risk for participants. Although no standard ethical codes exist for road safety research, we took a number of ethical considerations into account to protect participants. Our second aim was to present this study as a case study demonstrating ethical dilemmas related to performing research in real traffic. The third aim was to examine to what extent the applied experimental set-up is suitable to examine bicyclists’ visual behaviour in situations crucial for their safety. Semi-naturalistic data was gathered. Participants’ eye movements were recorded by a head-mounted eye-tracker during two of their regular trips in urban environments. During one of the trips, cyclists were listening to music (music condition); during the other trip they were ‘just’ cycling (the baseline condition). As for cyclists’ visual behaviour, overall results show that it was not affected by listening to music. Descriptive statistics showed that 21–36% of participants increased their visual performance in the music condition, while 43–64% decreased their visual performance while listening to music. Due to ethical considerations, the study was therefore terminated after fourteen cyclists had participated. Potential implications of these results for cycling safety and cycling safety research are discussed. The methodology used in this study did not allow us to investigate cyclists’ behaviour in demanding traffic environment. However, for now, no other research method seems suitable to address this research gap.Transportation Safety
https://doi.org/10.1016/j.trf.2018.02.031
https://www.researchgate.net/profile/Marjan_Hagenzieker/publication/320563592_A_study_in_real_traffic_examining_glance_behaviour_of_teenage_cyclists_when_listening_to_music_Results_and_ethical_considerations/links/5ba923ab92851ca9ed225474/A-study-in-real-traffic-examining-glance-behaviour-of-teenage-cyclists-when-listening-to-music-Results-and-ethical-considerations.pdfTransportation Research Part F: Traffic Psychology and Behaviour, Volume 55, May 2018, Pages 47-57Cycling safety, Music, Auditory perception, Visual attention, Visual performance, Research ethicscore
94
2018Trent Koessler, Harold HillFocusing on an illusion: Accommodating to perceived depth?Ocular accommodation potentially provides information about depth but there is little evidence that this information is used by the human visual system. We use the hollow-face illusion, an illusion of depth reversal, to investigate whether accommodation is linked to perceived depth. In Experiment 1 accommodation, like vergence, was in front of the physical surface of the mask when the mask was upright and people reported experiencing the illusion. Accommodation to the illusory face did not differ significantly from accommodation to the physically convex back surface of the same mask. Only accommodation to the inverted mask seen as hollow was significantly less and, like the physical surface, beyond the mid-plane of the mask. The effect on accommodation was the same for monocular as binocular viewing, showing that accommodation is not driven by binocular disparities through vergence, although voluntary vergence remains a possibility. In Experiment 2 a projected random dot pattern was used to flip perception between convex and concave in all presentation conditions. Accommodation was again in front of the physical surface when the illusion was experienced. Experiment 3 showed that projected dots are more effective in disambiguating the illusion as concave when they are sharp and provide a good accommodative stimulus than when they are objectively blurred. We interpret Experiments 1 and 2 as showing that accommodation is tied to perceived depth, directly or indirectly, even in a situation where multiple depth cues are available and feedback is not artificially open-looped. Experiment 3 is consistent with accommodation helping to disambiguate depth while not ruling out alternative explanations.Psychology
https://doi.org/10.1016/j.visres.2018.11.001
https://reader.elsevier.com/reader/sd/pii/S0042698918302360?token=6C8E97AA7D2FC8F2FC32FC77EC3E6A7A9F22CE57F070F75B86550644FE991309A2D235248384C4C9989E47025A995919Vision Research, Volume 154, January 2019, Pages 131-141Accommodation Convergence, Depth perception, Hollow-face illusioncore
95
2018Vicente Soto, John Tyson-Carr, Katerina Kokmotou, Hannah Roberts, Stephanie Cook, Nicholas Fallon, Timo Giesbrecht, Andrej StancakBrain Responses to Emotional Faces in Natural Settings: A Wireless Mobile EEG Recording StudyThe detection of a human face in a visual field and correct reading of emotional expression of faces are important elements in everyday social interactions, decision making and emotional responses. Although brain correlates of face processing have been established in previous fMRI and electroencephalography (EEG)/MEG studies, little is known about the brain representation of faces and emotional expressions of faces in freely moving humans. The present study aimed to detect brain electrical potentials that occur during the viewing of human faces in natural settings. 64-channel wireless EEG and eye-tracking data were recorded in 19 participants while they moved in a mock art gallery and stopped at times to evaluate pictures hung on the walls. Positive, negative and neutral valence pictures of objects and human faces were displayed. The time instants in which pictures first occurred in the visual field were identified in eye-tracking data and used to reconstruct the triggers in continuous EEG data after synchronizing the time axes of the EEG and eye-tracking device. EEG data showed a clear face-related event-related potential (ERP) in the latency interval ranging from 165 to 210 ms (N170); this component was not seen whilst participants were viewing non-living objects. The face ERP component was stronger when viewing disgusted compared to neutral faces. Source dipole analysis revealed an equivalent current dipole in the right fusiform gyrus (BA37) accounting for the N170 potential. Our study demonstrates for the first time the possibility of recording brain responses to human faces and emotional expressions in natural settings. This finding opens new possibilities for clinical, developmental, social, forensic, or marketing research in which information about face processing is of importance.Psychology
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6209651/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6209651/pdf/fpsyg-09-02003.pdfFront Psychology; 2018; 9: 2003.EEG, eye-movement related potentials, N170 component, source dipole analysis, MoBI, mobile brain imaging, visual evoked potential (VEP)core
96
2018Newman, Benjamin A., Aronson, Reuben M., Srinivasa, Siddhartha S., Kitani, Kris, and Admoni, HennyHARMONIC: A Multimodal Dataset of Assistive Human-Robot Collaboration
We present HARMONIC, a large multi-modal dataset of human interactions in a shared autonomy setting. The dataset provides human, robot, and environment data streams from twenty-four people engaged in an assistive eating task with a 6 degree-of-freedom (DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the participant's forearm used to operate the joystick, third person stereo video, and the joint positions of the 6 DOF robot arm. Also included are several data streams that come as a direct result of these recordings, namely eye gaze fixations in the egocentric camera frame and body position skeletons. This dataset could be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats such as video and human-readable csv or yaml files.HRI, Robotics
http://harp.ri.cmu.edu/harmonic/
https://arxiv.org/pdf/1807.11154.pdfArXiv 2018Computer Science - Robotics, Computer Science - Human-Computer Interactioncore
97
2018Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, Henny AdmoniEye-Hand Behavior in Human-Robot Shared ManipulationShared autonomy systems enhance people's abilities to perform activities of daily living using robotic manipulators. Recent systems succeed by first identifying their operators' intentions, typically by analyzing the user's joystick input. To enhance this recognition, it is useful to characterize people's behavior while performing such a task. Furthermore, eye gaze is a rich source of information for understanding operator intention. The goal of this paper is to provide novel insights into the dynamics of control behavior and eye gaze in human-robot shared manipulation tasks. To achieve this goal, we conduct a data collection study that uses an eye tracker to record eye gaze during a human-robot shared manipulation activity, both with and without shared autonomy assistance. We process the gaze signals from the study to extract gaze features like saccades, fixations, smooth pursuits, and scan paths. We analyze those features to identify novel patterns of gaze behaviors and highlight where these patterns are similar to and different from previous findings about eye gaze in human-only manipulation tasks. The work described in this paper lays a foundation for a model of natural human eye gaze in human-robot shared manipulation.HRI, Robotics
https://dl.acm.org/citation.cfm?id=3171287
https://www.ri.cmu.edu/wp-content/uploads/2018/01/hri2018_aronson.pdfConference Paper, Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, March, 2018human-robot interaction, eye gaze, eye tracking, shared autonomy, nonverbal communicationcore
98
2018Reuben M. Aronson and Henny AdmoniGaze for Error Detection During Human-Robot Shared Manipulation.Human-robot collaboration systems benefit from the ability of the robot to recognize people’s intentions. People’s nonverbal behavior while performing tasks, especially gaze, has shown to be a reliable signal to recognize what people intend to do. We propose an additional usage of this signal: to recognize when something unexpected has occurred during the task. Case studies from a dataset of gaze behavior when controlling a robot indicate that people’s gaze deviates from ordinary patterns when unexpected conditions occur. By using such a system, robot collaborators can identify unexpected behaviors and smoothly take corrective action.HRI, Robotics
https://www.ri.cmu.edu/publications/gaze-for-error-detection-during-human-robot-shared-manipulation/
http://harp.ri.cmu.edu/assets/pubs/fja_rss2018_aronson.pdfJoint Action Workshop at RSS 2018Computer Science - Robotics, Computer Science - Human-Computer Interactioncore
99
2018Jeff J MacInnes, Shariq Iqbal, John Pearson, Elizabeth N JohnsonWearable Eye-tracking for Research: Automated dynamic gaze mapping and accuracy/precision comparisons across devicesWearable eye-trackers offer exciting advantages over screen-based systems, but their use in research settings has been hindered by significant analytic challenges as well as a lack of published performance measures among competing devices on the market. In this article, we address both of these limitations. We describe (and make freely available) an automated analysis pipeline for mapping gaze data from an egocentric coordinate system (i.e. the wearable eye-tracker) to a fixed reference coordinate system (i.e. a target stimulus in the environment). This pipeline allows researchers to study aggregate viewing behavior on a 2D planar target stimulus without restricting the mobility of participants. We also designed a task to directly compare calibration accuracy and precision across 3 popular models of wearable eye-trackers: Pupil Labs 120Hz Binocular glasses, SMI ETG 2 glasses, and the Tobii Pro Glasses 2. Our task encompassed multiple viewing conditions selected to approximate distances and gaze angles typical for short- to mid-range viewing experiments. This work will promote and facilitate the use of wearable eye-trackers for research in naturalistic viewing experiments.Eye tracker testing
https://www.biorxiv.org/content/early/2018/06/28/299925.abstract
https://www.biorxiv.org/content/biorxiv/early/2018/06/28/299925.full.pdfPreprintEye-tracking, wearable eye-tracking, mobile eye-tracking, feature matching, calibration accuracy, calibration precision, gaze mapping, computer visioncore
100
2018Mohamed Khamis, Malin Eiband, Martin Zürn, Heinrich HussmannEyeSpot: Leveraging Gaze to Protect Private Text Content on Mobile Devices from Shoulder SurfingAs mobile devices allow access to an increasing amount of private data, using them in public can potentially leak sensitive information through shoulder surfing. This includes personal private data (e.g., in chat conversations) and business-related content (e.g., in emails). Leaking the former might infringe on users’ privacy, while leaking the latter is considered a breach of the EU’s General Data Protection Regulation as of May 2018. This creates a need for systems that protect sensitive data in public. We introduce EyeSpot, a technique that displays content through a spot that follows the user’s gaze while hiding the rest of the screen from an observer’s view through overlaid masks. We explore different configurations for EyeSpot in a user study in terms of users’ reading speed, text comprehension, and perceived workload. While our system is a proof of concept, we identify crystallized masks as a promising design candidate for further evaluation with regard to the security of the system in a shoulder surfing scenario.Privacy, HCI
https://doi.org/10.3390/mti2030045
https://www.mdpi.com/2414-4088/2/3/45/pdfMultimodal Technologies Interact. 2018, 2(3), 45;mobile devices; privacy; gaze; eye tracking; securitycore