Conference, Year, Title, DOI, Link, FirstPage, LastPage, PaperType, Abstract, AuthorNames-Deduped, AuthorNames, AuthorAffiliation, InternalReferences, AuthorKeywords, AminerCitationCount, CitationCount_CrossRef, PubsCited_CrossRef, Award
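The records below follow this column order; multiple authors, affiliations, and internal references within a field are separated by semicolons, and author keywords are separated by commas. A minimal loading sketch, assuming the table has been exported as a CSV file with exactly these column names (the file name vis2022.csv, the use of pandas, and the load_records helper are illustrative assumptions, not part of the dataset):

# Minimal sketch: load the paper-metadata export and split its multi-valued columns.
# Assumes a CSV file "vis2022.csv" whose header matches the column list above.
import pandas as pd

COLUMNS = [
    "Conference", "Year", "Title", "DOI", "Link", "FirstPage", "LastPage",
    "PaperType", "Abstract", "AuthorNames-Deduped", "AuthorNames",
    "AuthorAffiliation", "InternalReferences", "AuthorKeywords",
    "AminerCitationCount", "CitationCount_CrossRef", "PubsCited_CrossRef",
    "Award",
]

def load_records(path="vis2022.csv"):
    df = pd.read_csv(path)
    missing = [c for c in COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"export is missing expected columns: {missing}")
    # Authors, affiliations, and internal references are ';'-separated lists;
    # author keywords are ','-separated.
    for col in ["AuthorNames-Deduped", "AuthorNames", "AuthorAffiliation", "InternalReferences"]:
        df[col] = df[col].fillna("").str.split(";")
    df["AuthorKeywords"] = df["AuthorKeywords"].fillna("").str.split(",")
    return df

records = load_records()
print(records[["Title", "FirstPage", "LastPage", "PaperType"]].head())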
Vis2022
Photosensitive Accessibility for Interactive Data Visualizations
10.1109/TVCG.2022.3209359
http://dx.doi.org/10.1109/TVCG.2022.3209359
374
384
J
Accessibility guidelines place restrictions on the use of animations and interactivity on webpages to lessen the likelihood of webpages inadvertently producing sequences with flashes, patterns, or color changes that may trigger seizures for individuals with photosensitive epilepsy. Online data visualizations often incorporate elements of animation and interactivity to create a narrative, engage users, or encourage exploration. These design guidelines have been empirically validated by perceptual studies in visualization literature, but the impact of animation and interaction in visualizations on users with photosensitivity, who may experience seizures in response to certain visual stimuli, has not been considered. We systematically gathered and tested 1,132 interactive and animated visualizations for seizure-inducing risk using established methods and found that currently available methods for determining photosensitive risk are not reliable when evaluating interactive visualizations, as risk scores varied significantly based on the individual interacting with the visualization. To address this issue, we introduce a theoretical model defining the degree of control visualization designers have over three determinants of photosensitive risk in potentially seizure-inducing sequences: the size, frequency, and color of flashing content. Using an analysis of 375 visualizations hosted on bl.ocks.org, we created a theoretical model of photosensitive risk in visualizations by arranging the photosensitive risk determinants according to the degree of control visualization authors have over whether content exceeds photosensitive accessibility thresholds. We then use this model to propose a new method of testing for photosensitive risk that focuses on elements of visualizations that are subject to greater authorial control - and are therefore more robust to variations in the individual user - producing more reliable risk assessments than existing methods when applied to interactive visualizations. A full copy of this paper and all study materials are available at https://osf.io/8kzmg/.
Laura South;Michelle Borkin
Laura South;Michelle A. Borkin
Northeastern University, USA;Northeastern University, USA
10.1109/TVCG.2011.185;10.1109/TVCG.2021.3114829;10.1109/TVCG.2007.70539;10.1109/TVCG.2019.2934431;10.1109/TVCG.2021.3114846;10.1109/TVCG.2014.2346452;10.1109/TVCG.2021.3114770;10.1109/TVCG.2009.113;10.1109/TVCG.2016.2599030;10.1109/TVCG.2014.2346352
accessibility,photosensitive epilepsy,photosensitivity,interaction,data visualization
163
Vis2022
HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning
10.1109/TVCG.2022.3209347
http://dx.doi.org/10.1109/TVCG.2022.3209347
310
319
J
Horizontal federated learning (HFL) enables distributed clients to train a shared model while preserving their data privacy. In training high-quality HFL models, the data heterogeneity among clients is one of the major concerns. However, due to security issues and the complexity of deep learning models, it is challenging to investigate data heterogeneity across different clients. To address this issue, based on a requirement analysis, we developed a visual analytics tool, HetVis, for participating clients to explore data heterogeneity. We identify data heterogeneity by comparing the prediction behaviors of the global federated model and the stand-alone model trained with local data. Then, a context-aware clustering of the inconsistent records is performed to provide a summary of data heterogeneity. Combined with the proposed comparison techniques, we develop a novel set of visualizations to identify heterogeneity issues in HFL. We designed three case studies to demonstrate how HetVis can assist client analysts in understanding different types of heterogeneity issues. Expert reviews and a comparative study demonstrate the effectiveness of HetVis.
Xumeng Wang;Wei Chen 0001;Jiazhi Xia;Zhen Wen;Rongchen Zhu;Tobias Schreck
Xumeng Wang;Wei Chen;Jiazhi Xia;Zhen Wen;Rongchen Zhu;Tobias Schreck
TMCC, CS, Nankai University, China;State Key Lab of CAD&CG, Zhejiang University, China;School of Computer Science and Engineering, Central South University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Graz University of Technology, Austria
10.1109/TVCG.2015.2467618;10.1109/TVCG.2019.2934251;10.1109/VAST.2017.8585720;10.1109/TVCG.2018.2865027;10.1109/TVCG.2017.2744938;10.1109/TVCG.2017.2744358;10.1109/TVCG.2016.2598828;10.1109/VAST50239.2020.00006;10.1109/TVCG.2019.2934619;10.1109/TVCG.2018.2864499
Federated learning,data heterogeneity,cluster analysis,visual analysis
343
Vis2022
Rigel: Transforming Tabular Data by Declarative Mapping
10.1109/TVCG.2022.3209385
http://dx.doi.org/10.1109/TVCG.2022.3209385
128
138
J
We present Rigel, an interactive system for rapid transformation of tabular data. Rigel implements a new declarative mapping approach that formulates the data transformation procedure as direct mappings from data to the row, column, and cell channels of the target table. To construct such mappings, Rigel allows users to directly drag data attributes from input data to these three channels and indirectly drag or type data values in a spreadsheet, and possible mappings that do not contradict these interactions are recommended to achieve efficient and straightforward data transformation. The recommended mappings are generated by enumerating and composing data variables based on the row, column, and cell channels, thereby revealing the possibility of alternative tabular forms and facilitating open-ended exploration in many data transformation scenarios, such as designing tables for presentation. In contrast to existing systems that transform data by composing operations (like transposing and pivoting), Rigel requires less prior knowledge on these operations, and constructing tables from the channels is more efficient and results in less ambiguity than generating operation sequences as done by the traditional by-example approaches. User study results demonstrated that Rigel is significantly less demanding in terms of time and interactions and suits more scenarios compared to the state-of-the-art by-example approach. A gallery of diverse transformation cases is also presented to show the potential of Rigel's expressiveness.
Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu
Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Zhejiang University of Technology, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
10.1109/TVCG.2021.3114830;10.1109/VAST47406.2019.8986909;10.1109/TVCG.2011.185;10.1109/VAST.2011.6102441;10.1109/TVCG.2012.219;10.1109/TVCG.2020.3030462;10.1109/TVCG.2022.3209354;10.1109/TVCG.2019.2934593;10.1109/VAST.2011.6102440;10.1109/TVCG.2016.2599030;10.1109/TVCG.2015.2467191;10.1109/TVCG.2022.3209470
Data transformation,self-service data transformation,programming by example,declarative specification
368
Vis2022
BeauVis: A Validated Scale for Measuring the Aesthetic Pleasure of Visual Representations
10.1109/TVCG.2022.3209390
http://dx.doi.org/10.1109/TVCG.2022.3209390
363
373
J
We developed and validated a rating scale to assess the aesthetic pleasure (or beauty) of a visual data representation: the BeauVis scale. With our work we offer researchers and practitioners a simple instrument to compare the visual appearance of different visualizations, unrelated to data or context of use. Our rating scale can, for example, be used to accompany results from controlled experiments or be used as informative data points during in-depth qualitative studies. Given the lack of an aesthetic pleasure scale dedicated to visualizations, researchers have mostly chosen their own terms to study or compare the aesthetic pleasure of visualizations. Yet, many terms are possible and currently no clear guidance on their effectiveness regarding the judgment of aesthetic pleasure exists. To solve this problem, we engaged in a multi-step research process to develop the first validated rating scale specifically for judging the aesthetic pleasure of a visualization (osf.io/fxs76). Our final BeauVis scale consists of five items, “enjoyable,” “likable,” “pleasing,” “nice,” and “appealing.” Beyond this scale itself, we contribute (a) a systematic review of the terms used in past research to capture aesthetics, (b) an investigation with visualization experts who suggested terms to use for judging the aesthetic pleasure of a visualization, and (c) a confirmatory survey in which we used our terms to study the aesthetic pleasure of a set of 3 visualizations.
Tingying He;Petra Isenberg;Raimund Dachselt;Tobias Isenberg 0001
Tingying He;Petra Isenberg;Raimund Dachselt;Tobias Isenberg
Université Paris-Saclay, CNRS, Inria, LISN, France;Université Paris-Saclay, CNRS, Inria, LISN, France;Technische Universität Dresden, Germany;Université Paris-Saclay, CNRS, Inria, LISN, France
10.1109/INFVIS.2005.1532128;10.1109/TVCG.2006.187;10.1109/TVCG.2008.166;10.1109/TVCG.2020.3030411;10.1109/TVCG.2009.122;10.1109/INFVIS.1997.636793;10.1109/TVCG.2020.3030456;10.1109/TVCG.2010.134;10.1109/TVCG.2013.196;10.1109/TVCG.2010.199;10.1109/TVCG.2014.2352953;10.1109/TVCG.2020.3030400;10.1109/TVCG.2015.2467411;10.1109/TVCG.2012.189
Aesthetics,aesthetic pleasure,validated scale,scale development,visual representations
179
Vis2022
NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis
10.1109/TVCG.2022.3209361
http://dx.doi.org/10.1109/TVCG.2022.3209361
299
309
J
The success of deep learning (DL) can be attributed to hours of parameter and architecture tuning by human experts. Neural Architecture Search (NAS) techniques aim to solve this problem by automating the search procedure for deep neural network (DNN) architectures, making it possible for non-experts to work with DNNs. Specifically, One-Shot NAS techniques have recently gained popularity as they are known to reduce the search time for NAS techniques. One-Shot NAS works by training a large template network through parameter sharing, which includes all the candidate NNs. This is followed by a procedure that ranks its components by evaluating randomly chosen candidate architectures. However, as these search models become increasingly powerful and diverse, they become harder to understand. Consequently, even though the search results work well, it is hard to identify search biases and control the search progression, hence the need for explainability and human-in-the-loop (HIL) One-Shot NAS. To alleviate these problems, we present NAS-Navigator, a visual analytics (VA) system aiming to solve three problems with One-Shot NAS: explainability, HIL design, and performance improvements compared to existing state-of-the-art (SOTA) techniques. NAS-Navigator puts full control of NAS back in the hands of users while still keeping the perks of automated search, thus assisting non-expert users. Analysts can use their domain knowledge, aided by cues from the interface, to guide the search. Evaluation results confirm that the performance of our improved One-Shot NAS algorithm is comparable to other SOTA techniques, while adding visual analytics through NAS-Navigator shows further improvements in search time and performance. We designed our interface in collaboration with several deep learning researchers and evaluated NAS-Navigator through a controlled experiment and expert interviews.
Anjul Tyagi;Cong Xie;Klaus Mueller 0001
Anjul Tyagi;Cong Xie;Klaus Mueller
Computer Science Department, Visual Analytics and Imaging Lab, Stony Brook University, New York, USA;Computer Science Department, Visual Analytics and Imaging Lab, Stony Brook University, New York, USA;Computer Science Department, Visual Analytics and Imaging Lab, Stony Brook University, New York, USA
10.1109/VAST.2012.6400490;10.1109/TVCG.2019.2934261;10.1109/TVCG.2018.2864477;10.1109/TVCG.2017.2745085;10.1109/TVCG.2017.2745158;10.1109/TVCG.2013.125;10.1109/VAST.2007.4388999;10.1109/TVCG.2017.2744805;10.1109/VAST47406.2019.8986923;10.1109/TVCG.2018.2864499
Deep Learning,Neural Network Architecture Search,Visual Analytics,Explainability
063
Vis2022
In Defence of Visual Analytics Systems: Replies to Critics
10.1109/TVCG.2022.3209360
http://dx.doi.org/10.1109/TVCG.2022.3209360
1026
1036
J
The last decade has witnessed many visual analytics (VA) systems that have been successfully applied to wide-ranging domains like urban analytics and explainable AI. However, their research rigor and contributions have been extensively challenged within the visualization community. We come in defence of VA systems by contributing two interview studies for gathering criticisms and responses to those criticisms. First, we interview 24 researchers to collect criticisms from the review comments on their VA work. Through an iterative coding and refinement process, the interview feedback is summarized into a list of 36 common criticisms. Second, we interview 17 researchers to validate our list and collect their responses, thereby discussing implications for defending and improving the scientific values and rigor of VA systems. We highlight that the presented knowledge is deep, extensive, but also imperfect, provocative, and controversial, and thus recommend reading with an inclusive and critical eye. We hope our work can provide thoughts and foundations for conducting VA research and spark discussions that move the research field forward more rigorously and vibrantly.
Aoyu Wu;Dazhen Deng;Furui Cheng;Yingcai Wu;Shixia Liu;Huamin Qu
Aoyu Wu;Dazhen Deng;Furui Cheng;Yingcai Wu;Shixia Liu;Huamin Qu
Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;School of Software, Tsinghua University, China;Hong Kong University of Science and Technology, China
10.1109/TVCG.2020.3030338;10.1109/TVCG.2021.3114836;10.1109/TVCG.2021.3114797;10.1109/TVCG.2013.226;10.1109/TVCG.2021.3114810;10.1109/TVCG.2019.2934790;10.1109/TVCG.2021.3114855;10.1109/TVCG.2016.2598827;10.1109/TVCG.2013.126;10.1109/VISUAL.2003.1250401;10.1109/TVCG.2019.2934264;10.1109/TVCG.2021.3114800;10.1109/TVCG.2017.2744319;10.1109/TVCG.2021.3114766;10.1109/TVCG.2018.2865022;10.1109/TVCG.2016.2598432;10.1109/TVCG.2019.2934593;10.1109/TVCG.2016.2598831;10.1109/TVCG.2021.3114789;10.1109/TVCG.2014.2346331;10.1109/TVCG.2019.2934539;10.1109/TVCG.2009.111;10.1109/TVCG.2021.3114827;10.1109/TVCG.2021.3114820;10.1109/TVCG.2021.3114858;10.1109/TVCG.2021.3114959;10.1109/TVCG.2021.3114812;10.1109/TVCG.2016.2598838;10.1109/TVCG.2013.120;10.1109/TVCG.2012.213;10.1109/TVCG.2020.3030396;10.1109/TVCG.2021.3114857;10.1109/TVCG.2021.3114878;10.1109/TVCG.2021.3114781;10.1109/TVCG.2021.3114787;10.1109/TVCG.2021.3114821;10.1109/TVCG.2021.3114840;10.1109/TVCG.2021.3114794;10.1109/TVCG.2021.3114790;10.1109/TVCG.2014.2346920;10.1109/TVCG.2018.2865041;10.1109/TVCG.2019.2934656
Visual Analytics,Theory,Qualitative Study,Design Study,Application,Theoretical and Empirical Research
779
Vis2022
Studying Early Decision Making with Progressive Bar Charts
10.1109/TVCG.2022.3209426
http://dx.doi.org/10.1109/TVCG.2022.3209426
407
417
J
We conduct a user study to quantify and compare user performance for a value comparison task using four bar chart designs, where the bars show the mean values of data loaded progressively and updated every second (progressive bar charts). Progressive visualization divides different stages of the visualization pipeline—data loading, processing, and visualization—into iterative animated steps to limit the latency when loading large amounts of data. An animated visualization appearing quickly, unfolding, and getting more accurate with time, enables users to make early decisions. However, intermediate mean estimates are computed only on partial data and may not have time to converge to the true means, potentially misleading users and resulting in incorrect decisions. To address this issue, we propose two new designs visualizing the history of values in progressive bar charts, in addition to the use of confidence intervals. We comparatively study four progressive bar chart designs: with/without confidence intervals, and using near-history representation with/without confidence intervals, on three realistic data distributions. We evaluate user performance based on the percentage of correct answers (accuracy), response time, and user confidence. Our results show that, overall, users can make early and accurate decisions with 92% accuracy using only 18% of the data, regardless of the design. We find that our proposed bar chart design with only near-history is comparable to bar charts with only confidence intervals in performance, and the qualitative feedback we received indicates a preference for designs with history.
Ameya D. Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete
Ameya Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete
University of Washington, Seattle, USA;Inria & Université Paris-Saclay, France;Rice University, USA;Carnegie Mellon University, USA;Inria & Université Paris-Saclay, France
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2021.3114803;10.1109/TVCG.2014.2346298;10.1109/TVCG.2019.2934287;10.1109/TVCG.2011.175;10.1109/TVCG.2018.2864909;10.1109/TVCG.2014.2346452;10.1109/TVCG.2008.125;10.1109/TVCG.2014.2346320
Progressive visualization,Uncertainty,Bar charts,Confidence intervals
060
Vis2022
HiTailor: Interactive Transformation and Visualization for Hierarchical Tabular Data
10.1109/TVCG.2022.3209354
http://dx.doi.org/10.1109/TVCG.2022.3209354
139
148
J
Tabular visualization techniques integrate visual representations with tabular data to avoid additional cognitive load caused by splitting users' attention. However, most of the existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, Fold/Unfold, for transformations of hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.
Guozheng Li 0002;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu 0002;Guoren Wang
Guozheng Li;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu;Guoren Wang
Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Shenzhen University, China;Beijing Institute of Technology, China
10.1109/TVCG.2022.3209385;10.1109/TVCG.2014.2346260;10.1109/TVCG.2013.173;10.1109/TVCG.2011.250;10.1109/TVCG.2019.2934535;10.1109/TVCG.2017.2745298;10.1109/TVCG.2014.2346279;10.1109/TVCG.2021.3114773;10.1109/TVCG.2017.2745078;10.1109/TVCG.2017.2744458
data transformation,tabular data,hierarchical tabular data,tabular visualization
446
Vis2022
Visualizing Ensemble Predictions of Music Mood
10.1109/TVCG.2022.3209379
http://dx.doi.org/10.1109/TVCG.2022.3209379
864
874
J
Music mood classification has been a challenging problem in comparison with other music classification problems (e.g., genre, composer, or period). One solution for addressing this challenge is to use an ensemble of machine learning models. In this paper, we show that visualization techniques can effectively convey the popular prediction as well as uncertainty at different music sections along the temporal axis while enabling the analysis of individual ML models in conjunction with their application to different musical data. In addition to the traditional visual designs, such as stacked line graph, ThemeRiver, and pixel-based visualization, we introduce a new variant of ThemeRiver, called “dual-flux ThemeRiver”, which allows viewers to observe and measure the most popular prediction more easily than stacked line graph and ThemeRiver. Together with pixel-based visualization, dual-flux ThemeRiver plots can also assist in model-development workflows, in addition to annotating music using ensemble model predictions.
Zelin Ye;Min Chen 0001
Zelin Ye;Min Chen
University of Oxford, United Kingdom;University of Oxford, United Kingdom
10.1109/TVCG.2010.150;10.1109/TVCG.2008.166;10.1109/TVCG.2016.2598868;10.1109/INFVIS.2000.885098;10.1109/TVCG.2013.141;10.1109/TVCG.2010.162;10.1109/TVCG.2017.2745178;10.1109/TVCG.2016.2598838;10.1109/TVCG.2018.2864838;10.1109/TVCG.2010.181;10.1109/TVCG.2012.253;10.1109/TVCG.2016.2598829;10.1109/TVCG.2013.143
time-series visualization,ensemble learning,music mood classification
092
Vis2022
KiriPhys: Exploring New Data Physicalization Opportunities
10.1109/TVCG.2022.3209365
http://dx.doi.org/10.1109/TVCG.2022.3209365
225
235
J
We present KiriPhys, a new type of data physicalization based on kirigami, a traditional Japanese art form that uses paper-cutting. Within the kirigami possibilities, we investigate how different aspects of cutting patterns offer opportunities for mapping data to both independent and dependent physical variables. As a first step towards understanding the data physicalization opportunities in KiriPhys, we conducted a qualitative study in which 12 participants interacted with four KiriPhys examples. Our observations of how people interact with, understand, and respond to KiriPhys suggest that KiriPhys: 1) provides new opportunities for interactive, layered data exploration, 2) introduces elastic expansion as a new sensation that can reveal data, and 3) offers data mapping possibilities while providing a pleasurable experience that stimulates curiosity and engagement.
Foroozan Daneshzand;Charles Perin;Sheelagh Carpendale
Foroozan Daneshzand;Charles Perin;Sheelagh Carpendale
Simon Fraser University, Canada;University of Victoria, Canada;Simon Fraser University, Canada
10.1109/TVCG.2019.2934283;10.1109/TVCG.2014.2346292;10.1109/TVCG.2018.2865159;10.1109/TVCG.2018.2865237;10.1109/TVCG.2014.2352953;10.1109/TVCG.2016.2598498;10.1109/TVCG.2007.70577
data visualization,physicalization,kirigami,interaction,visual representation design,art & graphic design,aesthetics
063
Vis2022
Traveler: Navigating Task Parallel Traces for Performance Analysis
10.1109/TVCG.2022.3209375
http://dx.doi.org/10.1109/TVCG.2022.3209375
788
797
J
Understanding the behavior of software in execution is a key step in identifying and fixing performance issues. This is especially important in high performance computing contexts where even minor performance tweaks can translate into large savings in terms of computational resource use. To aid performance analysis, developers may collect an execution trace—a chronological log of program activity during execution. As traces represent the full history, developers can discover a wide array of possibly previously unknown performance issues, making them an important artifact for exploratory performance analysis. However, interactive trace visualization is difficult due to issues of data size and complexity of meaning. Traces represent nanosecond-level events across many parallel processes, meaning the collected data is often large and difficult to explore. The rise of asynchronous task parallel programming paradigms complicates the relation between events and their probable cause. To address these challenges, we conduct a continuing design study in collaboration with high performance computing researchers. We develop diverse and hierarchical ways to navigate and represent execution trace data in support of their trace analysis tasks. Through an iterative design process, we developed Traveler, an integrated visualization platform for task parallel traces. Traveler provides multiple linked interfaces to help navigate trace data from multiple contexts. We evaluate the utility of Traveler through feedback from users and a case study, finding that integrating multiple modes of navigation in our design supported performance analysis tasks and led to the discovery of previously unknown behavior in a distributed array library.
Sayef Azad Sakin;Alex Bigelow;R. Tohid;Connor Scully-Allison;Carlos Scheidegger;Steven R. Brandt;Christopher Taylor;Kevin A. Huck;Hartmut Kaiser;Katherine E. Isaacs
Sayef Azad Sakin;Alex Bigelow;R. Tohid;Connor Scully-Allison;Carlos Scheidegger;Steven R. Brandt;Christopher Taylor;Kevin A. Huck;Hartmut Kaiser;Katherine E. Isaacs
University of Arizona, USA;Stardog, USA;Louisiana State University, USA;University of Arizona, USA;RStudio, USA;Louisiana State University, USA;Tactical Computing Labs, USA;University of Arizona, USA;Louisiana State University, USA;University of Utah, USA
10.1109/TVCG.2011.185;10.1109/TVCG.2019.2934790;10.1109/TVCG.2014.2346456;10.1109/TVCG.2009.196;10.1109/TVCG.2012.213;10.1109/TVCG.2019.2934285;10.1109/TVCG.2018.2865026;10.1109/TVCG.2007.70515
software visualization,parallel computing,traces,performance analysis,event sequence visualization
142
Vis2022
Dispersion vs Disparity: Hiding Variability Can Encourage Stereotyping When Visualizing Social Outcomes
10.1109/TVCG.2022.3209377
http://dx.doi.org/10.1109/TVCG.2022.3209377
624
634
J
Visualization research often focuses on perceptual accuracy or helping readers interpret key messages. However, we know very little about how chart designs might influence readers' perceptions of the people behind the data. Specifically, could designs interact with readers' social cognitive biases in ways that perpetuate harmful stereotypes? For example, when analyzing social inequality, bar charts are a popular choice to present outcome disparities between race, gender, or other groups. But bar charts may encourage deficit thinking, the perception that outcome disparities are caused by groups' personal strengths or deficiencies, rather than external factors. These faulty personal attributions can then reinforce stereotypes about the groups being visualized. We conducted four experiments examining design choices that influence attribution biases (and therefore deficit thinking). Crowdworkers viewed visualizations depicting social outcomes that either mask variability in data, such as bar charts or dot plots, or emphasize variability in data, such as jitter plots or prediction intervals. They reported their agreement with both personal and external explanations for the visualized disparities. Overall, when participants saw visualizations that hide within-group variability, they agreed more with personal explanations. When they saw visualizations that emphasize within-group variability, they agreed less with personal explanations. These results demonstrate that data visualizations about social inequity can be misinterpreted in harmful ways and lead to stereotyping. Design choices can influence these biases: Hiding variability tends to increase stereotyping while emphasizing variability reduces it.
Eli Holder;Cindy Xiong
Eli Holder;Cindy Xiong
3iap, China;UMass Amherst, United States
10.1109/TVCG.2014.2346298;10.1109/TVCG.2019.2934287;10.1109/TVCG.2011.255;10.1109/TVCG.2019.2934786;10.1109/TVCG.2020.3030335;10.1109/TVCG.2018.2864909;10.1109/TVCG.2021.3114684;10.1109/TVCG.2019.2934400;10.1109/TVCG.2021.3114823;10.1109/TVCG.2019.2934399
Deficit Thinking,Fundamental Attribution Error,Correspondence Bias,Storytelling,Diversity,Equity
082
Vis2022
Temporal Merge Tree Maps: A Topology-Based Static Visualization for Temporal Scalar Data
10.1109/TVCG.2022.3209387
http://dx.doi.org/10.1109/TVCG.2022.3209387
1157
1167
J
Creating a static visualization for a time-dependent scalar field is a non-trivial task, yet very insightful as it shows the dynamics in one picture. Existing approaches are based on a linearization of the domain or on feature tracking. Domain linearizations use space-filling curves to place all sample points into a 1D domain, thereby breaking up individual features. Feature tracking methods explicitly respect feature continuity in space and time, but generally neglect the data context in which those features live. We present a feature-based linearization of the spatial domain that keeps features together and preserves their context by involving all data samples. We use augmented merge trees to linearize the domain and show that our linearized function has the same merge tree as the original data. A greedy optimization scheme aligns the trees over time providing temporal continuity. This leads to a static 2D visualization with one temporal dimension, and all spatial dimensions compressed into one. We compare our method against other domain linearizations as well as feature-tracking approaches, and apply it to several real-world data sets.
Wiebke Köpp;Tino Weinkauf
Wiebke Köpp;Tino Weinkauf
KTH Royal Institute of Technology, Stockholm, Sweden;KTH Royal Institute of Technology, Stockholm, Sweden
10.1109/TVCG.2014.2346448;10.1109/TVCG.2019.2934368;10.1109/VISUAL.1999.809896;10.1109/TVCG.2021.3114839;10.1109/VISUAL.2005.1532851;10.1109/TVCG.2008.163;10.1109/TVCG.2007.70601;10.1109/TVCG.2018.2864510;10.1109/TVCG.2020.3030473
Scalar field visualization,augmented merge tree,pixel-based visualization
146
Vis2022
GRay: Ray Casting for Visualization and Interactive Data Exploration of Gaussian Mixture Models
10.1109/TVCG.2022.3209374
http://dx.doi.org/10.1109/TVCG.2022.3209374
526
536
J
The Gaussian mixture model (GMM) describes the distribution of random variables from several different populations. GMMs have widespread applications in probability theory, statistics, machine learning for unsupervised cluster analysis and topic modeling, as well as in deep learning pipelines. So far, few efforts have been made to explore the underlying point distribution in combination with the GMMs, in particular when the data becomes high-dimensional and when the GMMs are composed of many Gaussians. We present an analysis tool comprising various GPU-based visualization techniques to explore such complex GMMs. To facilitate the exploration of high-dimensional data, we provide a novel navigation system to analyze the underlying data. Instead of projecting the data to 2D, we utilize interactive 3D views to better support users in understanding the spatial arrangements of the Gaussian distributions. The interactive system is composed of two parts: (1) raycasting-based views that visualize cluster memberships, spatial arrangements, and support the discovery of new modes. (2) overview visualizations that enable the comparison of Gaussians with each other, as well as small multiples of different choices of basis vectors. Users are supported in their exploration with customization tools and smooth camera navigations. Our tool was developed and assessed by five domain experts, and its usefulness was evaluated with 23 participants. To demonstrate the effectiveness, we identify interesting features in several data sets.
Kai Lawonn;Monique Meuschke;Pepe Eulzer;Matthias Mitterreiter;Joachim Giesen;Tobias Günther
Kai Lawonn;Monique Meuschke;Pepe Eulzer;Matthias Mitterreiter;Joachim Giesen;Tobias Günther
Friedrich Schiller University of Jena, Germany;Otto von Guericke University of Magdeburg, Germany;Friedrich Schiller University of Jena, Germany;Friedrich Schiller University of Jena, Germany;Friedrich Schiller University of Jena, Germany;Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
10.1109/INFVIS.2004.68;10.1109/TVCG.2011.229;10.1109/TVCG.2011.201;10.1109/TVCG.2008.153;10.1109/INFVIS.2005.1532141;10.1109/VAST.2010.5652484;10.1109/TVCG.2013.160;10.1109/VAST.2010.5652398;10.1109/TVCG.2020.3030379;10.1109/VISUAL.2000.885740;10.1109/INFVIS.2004.3;10.1109/VAST.2009.5332628;10.1109/TVCG.2007.70589;10.1109/TVCG.2009.179
Scientific visualization,Gaussian mixture models,ray casting,volume visualization
056
HM
Vis2022
Seeing What You Believe or Believing What You See? Belief Biases Correlation Estimation
10.1109/TVCG.2022.3209405
http://dx.doi.org/10.1109/TVCG.2022.3209405
493
503
J
When an analyst or scientist has a belief about how the world works, their thinking can be biased in favor of that belief. Therefore, one bedrock principle of science is to minimize that bias by testing the predictions of one's belief against objective data. But interpreting visualized data is a complex perceptual and cognitive process. Through two crowdsourced experiments, we demonstrate that supposedly objective assessments of the strength of a correlational relationship can be influenced by how strongly a viewer believes in the existence of that relationship. Participants viewed scatterplots depicting a relationship between meaningful variable pairs (e.g., number of environmental regulations and air quality) and estimated their correlations. They also estimated the correlation of the same scatterplots labeled instead with generic ‘X’ and ‘Y’ axes. In a separate section, they also reported how strongly they believed there to be a correlation between the meaningful variable pairs. Participants estimated correlations more accurately when they viewed scatterplots labeled with generic axes compared to scatterplots labeled with meaningful variable pairs. Furthermore, when viewers believed that two variables should have a strong relationship, they overestimated correlations between those variables by an r-value of about 0.1. When they believed that the variables should be unrelated, they underestimated the correlations by an r-value of about 0.1. While data visualizations are typically thought to present objective truths to the viewer, these results suggest that existing personal beliefs can bias even objective statistical values people extract from data.
Cindy Xiong;Chase Stokes;Yea-Seul Kim;Steven Franconeri
Cindy Xiong;Chase Stokes;Yea-Seul Kim;Steven Franconeri
UMass Amherst, USA;UC Berkeley, USA;University of Wisconsin Madison, USA;Northwestern University, USA
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2013.124;10.1109/TVCG.2021.3114803;10.1109/TVCG.2014.2346298;10.1109/TVCG.2013.183;10.1109/TVCG.2014.2346979;10.1109/TVCG.2021.3114783;10.1109/TVCG.2019.2934287;10.1109/TVCG.2020.3030335;10.1109/TVCG.2018.2864909;10.1109/TVCG.2020.3029412;10.1109/TVCG.2015.2467671;10.1109/TVCG.2020.3028984;10.1109/TVCG.2022.3209467;10.1109/VAST.2017.8585669;10.1109/TVCG.2021.3114862;10.1109/TVCG.2019.2934399
Data Visualization,Visual Analysis,Data Interpretation,Perception,Cognition,Beliefs,Motivated Perception
688
Vis2022
Visual Analysis and Detection of Contrails in Aircraft Engine Simulations
10.1109/TVCG.2022.3209356
http://dx.doi.org/10.1109/TVCG.2022.3209356
798
808
J
Contrails are condensation trails generated from emitted particles by aircraft engines, which perturb Earth's radiation budget. Simulation modeling is used to interpret the formation and development of contrails. These simulations are computationally intensive and rely on high-performance computing solutions, and the contrail structures are not well defined. We propose a visual computing system to assist in defining contrails and their characteristics, as well as in the analysis of parameters for computer-generated aircraft engine simulations. The back-end of our system leverages a contrail-formation criterion and clustering methods to detect contrails' shape and evolution and identify similar simulation runs. The front-end system helps analyze contrails and their parameters across multiple simulation runs. The evaluation with domain experts shows this approach successfully aids in contrail data investigation.
Nafiul Nipu;Carla Floricel;Negar Naghashzadeh;Roberto Paoli;G. Elisabeta Marai
Nafiul Nipu;Carla Floricel;Negar Naghashzadeh;Roberto Paoli;G. Elisabeta Marai
University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA
10.1109/TVCG.2016.2598869;10.1109/SciVis.2015.7429487;10.1109/TVCG.2011.185;10.1109/TVCG.2010.190;10.1109/TVCG.2014.2346448;10.1109/TVCG.2015.2467204;10.1109/TVCG.2016.2598868;10.1109/TVCG.2021.3114810;10.1109/TVCG.2011.203;10.1109/TVCG.2009.141;10.1109/TVCG.2017.2745178;10.1109/TVCG.2015.2467431;10.1109/TVCG.2018.2864849;10.1109/TVCG.2017.2744459;10.1109/VAST.2006.261451;10.1109/TVCG.2014.2346455;10.1109/TVCG.2014.2346755;10.1109/TVCG.2010.181;10.1109/VAST.2015.7347635;10.1109/TVCG.2016.2598830;10.1109/TVCG.2013.143
Scalar Field Data,Physical & Environmental Sciences,Mathematics,Feature Detection,Tracking & Transformation
096
Vis2022
PMU Tracker: A Visualization Platform for Epicentric Event Propagation Analysis in the Power Grid
10.1109/TVCG.2022.3209380
http://dx.doi.org/10.1109/TVCG.2022.3209380
1081
1090
J
The electrical power grid is a critical infrastructure, with disruptions in transmission having severe repercussions on daily activities, across multiple sectors. To identify, prevent, and mitigate such events, power grids are being refurbished as ‘smart’ systems that include the widespread deployment of GPS-enabled phasor measurement units (PMUs). PMUs provide fast, precise, and time-synchronized measurements of voltage and current, enabling real-time wide-area monitoring and control. However, the potential benefits of PMUs, for analyzing grid events like abnormal power oscillations and load fluctuations, are hindered by the fact that these sensors produce large, concurrent volumes of noisy data. In this paper, we describe working with power grid engineers to investigate how this problem can be addressed from a visual analytics perspective. As a result, we have developed PMU Tracker, an event localization tool that supports power grid operators in visually analyzing and identifying power grid events and tracking their propagation through the power grid's network. As a part of the PMU Tracker interface, we develop a novel visualization technique which we term an epicentric cluster dendrogram, which allows operators to analyze the effects of an event as it propagates outwards from a source location. We robustly validate PMU Tracker with: (1) a usage scenario demonstrating how PMU Tracker can be used to analyze anomalous grid events, and (2) case studies with power grid operators using a real-world interconnection dataset. Our results indicate that PMU Tracker effectively supports the analysis of power grid events; we also demonstrate and discuss how PMU Tracker's visual analytics approach can be generalized to other domains composed of time-varying networks with epicentric event characteristics.
Anjana Arunkumar;Andrea Pinceti;Lalitha Sankar;Chris Bryan
Anjana Arunkumar;Andrea Pinceti;Lalitha Sankar;Chris Bryan
Arizona State University, USA;Arizona State University, USA;Arizona State University, USA;Arizona State University, USA
10.1109/TVCG.2017.2744419;10.1109/INFVIS.2000.885101;10.1109/TVCG.2008.140;10.1109/TVCG.2018.2864825
Human-centered computing,Dendrograms,Visualization design and evaluation methods,Cyber-physical networks
045
Vis2022
sMolBoxes: Dataflow Model for Molecular Dynamics Exploration
10.1109/TVCG.2022.3209411
http://dx.doi.org/10.1109/TVCG.2022.3209411
581
590
J
We present sMolBoxes, a dataflow representation for the exploration and analysis of long molecular dynamics (MD) simulations. When MD simulations reach millions of snapshots, a frame-by-frame observation is not feasible anymore. Thus, biochemists rely to a large extent only on quantitative analysis of geometric and physico-chemical properties. However, the usage of abstract methods to study inherently spatial data hinders the exploration and poses a considerable workload. sMolBoxes link quantitative analysis of a user-defined set of properties with interactive 3D visualizations. They enable visual explanations of molecular behaviors, which lead to an efficient discovery of biochemically significant parts of the MD simulation. sMolBoxes follow a node-based model for flexible definition, combination, and immediate evaluation of properties to be investigated. Progressive analytics enable fluid switching between multiple properties, which facilitates hypothesis generation. Each sMolBox provides quick insight to an observed property or function, available in more detail in the bigBox View. The case studies illustrate that even with relatively few sMolBoxes, it is possible to express complex analytical tasks, and their use in exploratory analysis is perceived as more efficient than traditional scripting-based methods.
Pavol Ulbrich;Manuela Waldner;Katarína Furmanová;Sérgio M. Marques;David Bednar;Barbora Kozlíková;Jan Byska
Pavol Ulbrich;Manuela Waldner;Katarína Furmanová;Sérgio M. Marques;David Bednář;Barbora Kozlíková;Jan Byška
Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;TU Wien, Vienna, Austria;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;Department of Experimental Biology, Loschmidt Laboratories, Faculty of Science, RECETOX, Masaryk University, Brno, Czech Republic;Department of Experimental Biology, Loschmidt Laboratories, Faculty of Science, RECETOX, Masaryk University, Brno, Czech Republic;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic
10.1109/TVCG.2018.2864851;10.1109/VAST.2007.4389013;10.1109/TVCG.2012.213;10.1109/TVCG.2011.225;10.1109/TVCG.2016.2598497;10.1109/TVCG.2019.2934668
Molecular dynamics,structure,node-based visualization,progressive analytics
039
Vis2022
OBTracker: Visual Analytics of Off-ball Movements in Basketball
10.1109/TVCG.2022.3209373
http://dx.doi.org/10.1109/TVCG.2022.3209373
929
939
J
In a basketball play, players who are not in possession of the ball (i.e., off-ball players) can still effectively contribute to the team's offense, such as making a sudden move to create scoring opportunities. Analyzing the movements of off-ball players can thus facilitate the development of effective strategies for coaches. However, common basketball statistics (e.g., points and assists) primarily focus on what happens around the ball and are mostly result-oriented, making it challenging to objectively assess and fully understand the contributions of off-ball movements. To address these challenges, we collaborate closely with domain experts and summarize the multi-level requirements for off-ball movement analysis in basketball. We first establish an assessment model to quantitatively evaluate the offensive contribution of an off-ball movement considering both the position of players and the team cooperation. Based on the model, we design and develop a visual analytics system called OBTracker to support the multifaceted analysis of off-ball movements. OBTracker enables users to identify the frequency and effectiveness of off-ball movement patterns and learn the performance of different off-ball players. A tailored visualization based on the Voronoi diagram is proposed to help users interpret the contribution of off-ball movements from a temporal perspective. We conduct two case studies based on the tracking data from NBA games and demonstrate the effectiveness and usability of OBTracker through expert feedback.
Yihong Wu;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang 0051;Yingcai Wu
Yihong Wu;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China
10.1109/VAST.2014.7042478;10.1109/TVCG.2013.207;10.1109/TVCG.2013.192;10.1109/TVCG.2019.2934243;10.1109/TVCG.2014.2346445;10.1109/VAST.2014.7042477;10.1109/TVCG.2017.2745181;10.1109/TVCG.2015.2468111;10.1109/TVCG.2019.2934630;10.1109/TVCG.2021.3114832;10.1109/TVCG.2017.2744218;10.1109/TVCG.2018.2865041;10.1109/TVCG.2020.3030359;10.1109/TVCG.2020.3030392;10.1109/TVCG.2021.3114877;10.1109/TVCG.2018.2864503
Sports visualization,basketball tracking data,off-ball movement analysis
178
HM
Vis2022
PC-Expo: A Metrics-Based Interactive Axes Reordering Method for Parallel Coordinate Displays
10.1109/TVCG.2022.3209392
http://dx.doi.org/10.1109/TVCG.2022.3209392
712
722
J
Parallel coordinate plots (PCPs) have been widely used for high-dimensional (HD) data storytelling because they allow for presenting a large number of dimensions without distortions. The axes ordering in PCP presents a particular story from the data based on the user perception of PCP polylines. Existing works focus on directly optimizing for PCP axes ordering based on some common analysis tasks like clustering, neighborhood, and correlation. However, direct optimization for PCP axes based on these common properties is restrictive because it does not account for multiple properties occurring between the axes, and for local properties that occur in small regions in the data. Also, many of these techniques do not support the human-in-the-loop (HIL) paradigm, which is crucial (i) for explainability and (ii) in cases where no single reordering scheme fits the users' goals. To alleviate these problems, we present PC-Expo, a real-time visual analytics framework for all-in-one PCP line pattern detection and axes reordering. We studied the connection of line patterns in PCPs with different data analysis tasks and datasets. PC-Expo expands prior work on PCP axes reordering by developing real-time, local detection schemes for the 12 most common analysis tasks (properties). Users can choose the story they want to present with PCPs by optimizing directly over their choice of properties. These properties can be ranked, or combined using individual weights, creating a custom optimization scheme for axes reordering. Users can control the granularity at which they want to work with their detection scheme in the data, allowing exploration of local regions. PC-Expo also supports HIL axes reordering via local-property visualization, which shows the regions of granular activity for every axis pair. Local-property visualization is helpful for PCP axes reordering based on multiple properties, when no single reordering scheme fits the user goals. A comprehensive evaluation with real users and diverse datasets confirms the efficacy of PC-Expo in data storytelling with PCPs.
Anjul Tyagi;Tyler Estro;Geoffrey H. Kuenning;Erez Zadok;Klaus Mueller 0001
Anjul Tyagi;Tyler Estro;Geoff Kuenning;Erez Zadok;Klaus Mueller
Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Harvey Mudd College, Claremont, California, USA;Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Stony Brook University, New York, USA
10.1109/INFVIS.2005.1532136;10.1109/INFVIS.1998.729559;10.1109/TVCG.2010.184;10.1109/TVCG.2006.138;10.1109/TVCG.2007.70535;10.1109/VISUAL.1997.663916;10.1109/VISUAL.1990.146402;10.1109/TVCG.2015.2466992;10.1109/INFVIS.2005.1532138;10.1109/TVCG.2015.2467132;10.1109/TVCG.2009.111;10.1109/INFVIS.2004.15;10.1109/VAST47406.2019.8986923;10.1109/INFVIS.2005.1532142
High dimensional data visualization,Parallel Coordinates Chart,Data Storytelling,Data Analysis
054
Vis2022
Striking a Balance: Reader Takeaways and Preferences when Integrating Text and Charts
10.1109/TVCG.2022.3209383
http://dx.doi.org/10.1109/TVCG.2022.3209383
1233
1243
J
While visualizations are an effective way to represent insights about information, they rarely stand alone. When designing a visualization, text is often added to provide additional context and guidance for the reader. However, there is little experimental evidence to guide designers as to what is the right amount of text to show within a chart, what its qualitative properties should be, and where it should be placed. Prior work also shows variation in personal preferences for charts versus textual representations. In this paper, we explore several research questions about the relative value of textual components of visualizations. 302 participants ranked univariate line charts containing varying amounts of text, ranging from no text (except for the axes) to a written paragraph with no visuals. Participants also described what information they could take away from line charts containing text with varying semantic content. We find that heavily annotated charts were not penalized. In fact, participants preferred the charts with the largest number of textual annotations over charts with fewer annotations or text alone. We also find effects of semantic content. For instance, the text that describes statistical or relational components of a chart leads to more takeaways referring to statistics or relational comparisons than text describing elemental or encoded components. Finally, we find different effects for the semantic levels based on the placement of the text on the chart; some kinds of information are best placed in the title, while others should be placed closer to the data. We compile these results into four chart design guidelines and discuss future implications for the combination of text and charts.
Chase Stokes;Vidya Setlur;Bridget Cogley;Arvind Satyanarayan;Marti A. Hearst
Chase Stokes;Vidya Setlur;Bridget Cogley;Arvind Satyanarayan;Marti A. Hearst
UC Berkeley, USA;Tableau Research, USA;Versalytix, USA;MIT CSAIL, USA;UC Berkeley, USA
10.1109/TVCG.2015.2467732;10.1109/TVCG.2013.234;10.1109/TVCG.2017.2744684;10.1109/TVCG.2011.255;10.1109/TVCG.2013.119;10.1109/TVCG.2021.3114846;10.1109/TVCG.2021.3114802;10.1109/TVCG.2021.3114770;10.1109/TVCG.2010.179;10.1109/TVCG.2018.2865145
Visualization,text,annotation,user preference,takeaways,design,line charts
159
Vis2022
Taurus: Towards a Unified Force Representation and Universal Solver for Graph Layout
10.1109/TVCG.2022.3209371
http://dx.doi.org/10.1109/TVCG.2022.3209371
886
895
J
Over the past few decades, a large number of graph layout techniques have been proposed for visualizing graphs from various domains. In this paper, we present a general framework, Taurus, for unifying popular techniques such as the spring-electrical model, stress model, and maxent-stress model. It is based on a unified force representation, which formulates most existing techniques as a combination of quotient-based forces that combine power functions of graph-theoretical and Euclidean distances. This representation enables us to compare the strengths and weaknesses of existing techniques, while facilitating the development of new methods. Based on this, we propose a new balanced stress model (BSM) that is able to lay out graphs with superior quality. In addition, we introduce a universal augmented stochastic gradient descent (SGD) optimizer that efficiently finds proper solutions for all layout techniques. To demonstrate the power of our framework, we conduct a comprehensive evaluation of existing techniques on a large number of synthetic and real graphs. We release an open-source package, which facilitates easy comparison of different graph layout methods for any graph input as well as effectively creating customized graph layout techniques.
Mingliang Xue;Zhi Wang;Fahai Zhong;Yong Wang 0021;Mingliang Xu;Oliver Deussen;Yunhai Wang
Mingliang Xue;Zhi Wang;Fahai Zhong;Yong Wang;Mingliang Xu;Oliver Deussen;Yunhai Wang
Department of Computer Science, Shandong University, China;Department of Computer Science, Shandong University, China;Department of Computer Science, Shandong University, China;School of Computing and Information Systems, Singapore Management University, Singapore;Zhengzhou University, Zhengzhou, China;Computer and Information Science, University of Konstanz, Konstanz, Germany;Department of Computer Science, Shandong University, China
10.1109/TVCG.2011.185;10.1109/TVCG.2008.155;10.1109/TVCG.2017.2745919;10.1109/TVCG.2020.3030447
Graph Layout,Gradient Descent,Framework
043
Vis2022
Evaluating the Use of Uncertainty Visualisations for Imputations of Data Missing At Random in Scatterplots
10.1109/TVCG.2022.3209348
http://dx.doi.org/10.1109/TVCG.2022.3209348
602
612
J
Most real-world datasets contain missing values, yet most exploratory data analysis (EDA) systems only support visualising data points with complete cases. This omission may potentially lead the user to biased analyses and insights. Imputation techniques can help estimate the value of a missing data point, but introduce additional uncertainty. In this work, we investigate the effects of visualising imputed values in charts using different ways of representing data imputations and imputation uncertainty—no imputation, mean, 95% confidence intervals, probability density plots, gradient intervals, and hypothetical outcome plots. We focus on scatterplots, a commonly used chart type, and conduct a crowdsourced study with 202 participants. We measure users' bias and precision in performing two tasks—estimating average and detecting trend—and their self-reported confidence in performing these tasks. Our results suggest that, when estimating averages, uncertainty representations may reduce bias but at the cost of decreasing precision. When estimating trend, only hypothetical outcome plots may lead to a small probability of reducing bias while increasing precision. Participants in every uncertainty representation were less certain about their response when compared to the baseline. The findings point towards potential trade-offs in using uncertainty encodings for datasets with a large number of missing values. This paper and the associated analysis materials are available at: https://osf.io/q4y5r/
Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan A. Rossi;Fan Du;Eunyee Koh;Matthew Kay 0001
Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan Rossi;Fan Du;Eunyee Koh;Matthew Kay
Northwestern University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Northwestern University, USA
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2013.124;10.1109/TVCG.2021.3114803;10.1109/TVCG.2014.2346298;10.1109/TVCG.2021.3114813;10.1109/TVCG.2020.3029413;10.1109/TVCG.2011.175;10.1109/TVCG.2020.3030335;10.1109/TVCG.2018.2864909;10.1109/TVCG.2012.279;10.1109/TVCG.2021.3114684;10.1109/TVCG.2017.2744184;10.1109/TVCG.2018.2864914
Uncertainty visualisations,missing values,data imputation,multivariate data
051
HM
Vis2022
Probablement, Wahrscheinlich, Likely? A Cross-Language Study of How People Verbalize Probabilities in Icon Array Visualizations
10.1109/TVCG.2022.3209367
http://dx.doi.org/10.1109/TVCG.2022.3209367
1189
1199
J
Visualizations today are used across a wide range of languages and cultures. Yet the extent to which language impacts how we reason about data and visualizations remains unclear. In this paper, we explore the intersection of visualization and language through a cross-language study on estimative probability tasks with icon-array visualizations. Across Arabic, English, French, German, and Mandarin, $n=50$ participants per language both chose probability expressions — e.g. likely, probable — to describe icon-array visualizations (Vis-to-Expression), and drew icon-array visualizations to match a given expression (Expression-to-Vis). Results suggest that there is no clear one-to-one mapping of probability expressions and associated visual ranges between languages. Several translated expressions fell significantly above or below the range of the corresponding English expressions. Compared to other languages, French and German respondents appear to exhibit high levels of consistency between the visualizations they drew and the words they chose. Participants across languages used similar words when describing scenarios above 80% chance, with more variance in expressions targeting mid-range and lower values. We discuss how these results suggest potential differences in the expressiveness of language as it relates to visualization interpretation and design goals, as well as practical implications for translation efforts and future studies at the intersection of languages, culture, and visualization. Experiment data, source code, and analysis scripts are available at the following repository: https://osf.io/g5d4r/.
Noëlle Rakotondravony;Yiren Ding;Lane HarrisonNoëlle Rakotondravony;Yiren Ding;Lane HarrisonWorcester Polytechnic Institute, United States;Worcester Polytechnic Institute, United States;Worcester Polytechnic Institute, United States
10.1109/TVCG.2015.2467758;10.1109/TVCG.2014.2346320
Visualization,Cross-Language Study,Icon-Arrays050
26
Vis2022
Towards Natural Language-Based Visualization Authoring
10.1109/TVCG.2022.3209357
http://dx.doi.org/10.1109/TVCG.2022.3209357
12221232J
A key challenge to visualization authoring is the process of getting familiar with the complex user interfaces of authoring tools. Natural language interfaces (NLIs) present promising benefits due to their learnability and usability. However, supporting NLIs for authoring tools requires expertise in natural language processing, while existing NLIs are mostly designed for visual analytic workflows. In this paper, we propose an authoring-oriented NLI pipeline by introducing a structured representation of users' visualization editing intents, called editing actions, based on a formative study and an extensive survey on visualization construction tools. The editing actions are executable, and thus decouple natural language interpretation and visualization applications as an intermediate layer. We implement a deep learning-based NL interpreter to translate NL utterances into editing actions. The interpreter is reusable and extensible across authoring tools. The authoring tools only need to map the editing actions into tool-specific operations. To illustrate the usage of the NL interpreter, we implement an Excel chart editor and a proof-of-concept authoring tool, VisTalk. We conduct a user study with VisTalk to understand the usage patterns of NL-based authoring systems. Finally, we discuss observations on how users author charts with natural language, as well as implications for future research.
Yun Wang 0012;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang 0001
Yun Wang;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang
Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Tsinghua University, China;Carnegie Mellon University, USA;Oxford University, United Kingdom;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2011.185;10.1109/TVCG.2017.2744684;10.1109/TVCG.2016.2598620;10.1109/TVCG.2021.3114848;10.1109/TVCG.2007.70594;10.1109/TVCG.2018.2865240;10.1109/TVCG.2020.3030378;10.1109/TVCG.2014.2346291;10.1109/TVCG.2018.2865158;10.1109/TVCG.2019.2934281;10.1109/TVCG.2016.2599030;10.1109/TVCG.2017.2745219;10.1109/INFVIS.2000.885086;10.1109/INFVIS.2005.1532146;10.1109/TVCG.2019.2934668
Visualization authoring,Natural language interface,Natural language understanding275
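A purely illustrative sketch of the idea of an executable, tool-agnostic editing action sitting between NL interpretation and an authoring tool (the field names and the toy rule-based interpreter below are assumptions for illustration, not the paper's schema or deep-learning model):

```python
# Hypothetical "editing action" representation that an authoring tool could
# map onto its own tool-specific operations.
from dataclasses import dataclass

@dataclass
class EditingAction:
    verb: str        # e.g. "set", "add", "remove"
    target: str      # chart element, e.g. "mark", "x-axis", "title"
    property: str    # e.g. "color", "label"
    value: object

def interpret(utterance: str) -> EditingAction:
    # Stand-in for an NL interpreter; the paper uses a learned model instead.
    if "red" in utterance and "bar" in utterance:
        return EditingAction("set", "mark", "color", "red")
    raise ValueError("utterance not understood")

print(interpret("make the bars red"))
# EditingAction(verb='set', target='mark', property='color', value='red')
```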
27
Vis2022
Revealing the Semantics of Data Wrangling Scripts With Comantics
10.1109/TVCG.2022.3209470
http://dx.doi.org/10.1109/TVCG.2022.3209470
117127J
Data workers usually seek to understand the semantics of data wrangling scripts in various scenarios, such as debugging, reusing, and maintaining code. However, this understanding is challenging for novice data workers due to the variety of programming languages, functions, and parameters. Based on the observation that differences between input and output tables highly relate to the type of data transformation, we outline a design space including 103 characteristics to describe table differences. Then, we develop Comantics, a three-step pipeline that automatically detects the semantics of data transformation scripts. The first step focuses on the detection of table differences for each line of wrangling code. Second, we incorporate a characteristic-based component and a Siamese convolutional neural network-based component for the detection of transformation types. Third, we derive the parameters of each data transformation by employing a “slot filling” strategy. We design experiments to evaluate the performance of Comantics. Further, we assess its flexibility using three example applications in different domains.
Kai Xiong;Zhongsu Luo;Siwei Fu;Yongheng Wang;Mingliang Xu;Yingcai Wu
Kai Xiong;Zhongsu Luo;Siwei Fu;Yongheng Wang;Mingliang Xu;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Zhejiang University of Technology, Hangzhou, China;Zhejiang Lab, Hangzhou, China;Zhejiang Lab, Hangzhou, China;School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
10.1109/TVCG.2021.3114830;10.1109/VAST47406.2019.8986909;10.1109/TVCG.2022.3209385;10.1109/VAST.2011.6102441;10.1109/TVCG.2020.3030462;10.1109/TVCG.2022.3209354;10.1109/TVCG.2019.2934593;10.1109/VAST.2011.6102440;10.1109/TVCG.2021.3114848;10.1109/TVCG.2017.2745298
Data Transformation,Semantic Inference,Program Understanding,Table Comparison170
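To make the first step of the pipeline concrete, here is a toy table-difference detector in the same spirit (illustrative only; the paper's design space covers 103 characteristics, far beyond the handful checked here, and the example tables are invented):

```python
# Toy table-difference detection between the input and output of one line of
# wrangling code, reporting a few coarse characteristics.
import pandas as pd

def table_diff(before: pd.DataFrame, after: pd.DataFrame) -> dict:
    return {
        "columns_added": sorted(set(after.columns) - set(before.columns)),
        "columns_removed": sorted(set(before.columns) - set(after.columns)),
        "row_delta": len(after) - len(before),
        "schema_changed": list(before.columns) != list(after.columns),
    }

before = pd.DataFrame({"name": ["a", "b", "c"], "price": ["1$", "2$", "3$"]})
after = before.assign(price_num=[1, 2, 3]).drop(columns=["price"]).head(2)
print(table_diff(before, after))
# {'columns_added': ['price_num'], 'columns_removed': ['price'],
#  'row_delta': -1, 'schema_changed': True}
```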
28
Vis2022
PuzzleFixer: A Visual Reassembly System for Immersive Fragments Restoration
10.1109/TVCG.2022.3209388
http://dx.doi.org/10.1109/TVCG.2022.3209388
429439J
We present PuzzleFixer, an immersive interactive system for experts to rectify defective reassembled 3D objects. Reassembling the fragments of a broken object to restore its original state is the prerequisite of many analytical tasks such as cultural relics analysis and forensics reasoning. While existing computer-aided methods can automatically reassemble fragments, they often derive incorrect objects due to the complex and ambiguous fragment shapes. Thus, experts usually need to refine the object manually. Prior advances in immersive technologies provide benefits for realistic perception and direct interactions to visualize and interact with 3D fragments. However, few studies have investigated the reassembled object refinement. The specific challenges include: 1) the fragment combination set is too large to determine the correct matches, and 2) the geometry of the fragments is too complex to align them properly. To tackle the first challenge, PuzzleFixer leverages dimensionality reduction and clustering techniques, allowing users to review possible match categories, select the matches with reasonable shapes, and drill down to shapes to correct the corresponding faces. For the second challenge, PuzzleFixer embeds the object with node-link networks to augment the perception of match relations. Specifically, it instantly visualizes matches with graph edges and provides force feedback to facilitate the efficiency of alignment interactions. To demonstrate the effectiveness of PuzzleFixer, we conducted an expert evaluation based on two cases on real-world artifacts and collected feedback through post-study interviews. The results suggest that our system is suitable and efficient for experts to refine incorrect reassembled objects.
Shuainan Ye;Zhutian Chen;Xiangtong Chu;Kang Li 0005;Juntong Luo;Yi Li;Guohua Geng;Yingcai Wu
Shuainan Ye;Zhutian Chen;Xiangtong Chu;Kang Li;Juntong Luo;Yi Li;Guohua Geng;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, China;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;State Key Lab of CAD&CG, Zhejiang University, China;Northwest University, China;Xi'an Beilin Museum, China;Xi'an Beilin Museum, China;Northwest University, China;State Key Lab of CAD&CG, Zhejiang University, China
10.1109/TVCG.2019.2934332;10.1109/TVCG.2021.3114861;10.1109/TVCG.2019.2934251;10.1109/TVCG.2021.3114812;10.1109/TVCG.2006.189;10.1109/TVCG.2020.3030400;10.1109/TVCG.2020.3030392
Immersive visualization,interactive exploration,fragment reassembly,cultural heritage073
29
Vis2022
Tac-Trainer: A Visual Analytics System for IoT-based Racket Sports Training
10.1109/TVCG.2022.3209352
http://dx.doi.org/10.1109/TVCG.2022.3209352
951961J
Conventional racket sports training relies heavily on coaches' knowledge and experience, leading to biases in the guidance. To solve this problem, smart wearable devices based on Internet of Things technology (IoT) have been extensively investigated to support data-driven training. Numerous studies have introduced methods to extract valuable information from the sensor data collected by IoT devices. However, the information cannot provide actionable insights for coaches due to the large data volume and high data dimensions. We propose an IoT + VA framework, Tac-Trainer, to integrate the sensor data, the information, and coaches' knowledge to facilitate racket sports training. Tac-Trainer consists of four components: device configuration, data interpretation, training optimization, and result visualization. These components collect trainees' kinematic data through IoT devices, transform the data into attributes and indicators, generate training suggestions, and provide an interactive visualization interface for exploration, respectively. We further discuss new research opportunities and challenges inspired by our work from two perspectives, VA for IoT and IoT for VA.
Jiachen Wang;Ji Ma;Kangping Hu;Zheng Zhou;Hui Zhang 0051;Xiao Xie;Yingcai Wu
Jiachen Wang;Ji Ma;Kangping Hu;Zheng Zhou;Hui Zhang;Xiao Xie;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China
10.1109/TVCG.2013.178;10.1109/TVCG.2019.2934280;10.1109/TVCG.2021.3114806;10.1109/TVCG.2020.3030342;10.1109/TVCG.2021.3114861;10.1109/TVCG.2015.2468292;10.1109/TVCG.2011.208;10.1109/TVCG.2013.192;10.1109/TVCG.2019.2934243;10.1109/TVCG.2014.2346445;10.1109/TVCG.2017.2745181;10.1109/TVCG.2019.2934630;10.1109/TVCG.2017.2744218;10.1109/TVCG.2018.2865041;10.1109/TVCG.2020.3030359
IoT,racket sports,training,sensor data,visual analytics091
30
Vis2022
Computing a Stable Distance on Merge Trees
10.1109/TVCG.2022.3209395
http://dx.doi.org/10.1109/TVCG.2022.3209395
11681177J
Distances on merge trees facilitate visual comparison of collections of scalar fields. Two desirable properties for these distances to exhibit are 1) the ability to discern between scalar fields which other, less complex topological summaries cannot and 2) robustness to perturbations in the dataset. The combination of these two properties, known respectively as discriminativity and stability, has led to theoretical distances which are either thought to be or shown to be computationally complex and thus their implementations have been scarce. In order to design similarity measures on merge trees which are computationally feasible for more complex merge trees, many researchers have elected to loosen the restrictions on at least one of these two properties. The question still remains, however, whether there are practical situations where trading these desirable properties is necessary. Here we construct a distance between merge trees which is designed to retain both discriminativity and stability. While our approach can be expensive for large merge trees, we illustrate its use in a setting where the number of nodes is small. This setting can be made more practical since we also provide a proof that persistence simplification increases the outputted distance by at most half of the simplified value. We demonstrate our distance measure on applications in shape comparison and on detection of periodicity in the von Kármán vortex street.
Brian Bollen;Pasindu Tennakoon;Joshua A. LevineBrian Bollen;Pasindu Tennakoon;Joshua A. LevineDepartment of Mathematics, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA
10.1109/TVCG.2012.287;10.1109/TVCG.2014.2346403;10.1109/TVCG.2008.110;10.1109/TVCG.2007.70603;10.1109/TVCG.2006.186;10.1109/TVCG.2011.236;10.1109/TVCG.2017.2743938;10.1109/TVCG.2009.163;10.1109/TVCG.2019.2934242
Merge trees,scalar fields,distance measure,stability,edit distance,persistence045
31
Vis2022
Multi-View Design Patterns and Responsive Visualization for Genomics Data
10.1109/TVCG.2022.3209398
http://dx.doi.org/10.1109/TVCG.2022.3209398
559569J
A series of recent studies has focused on designing cross-resolution and cross-device visualizations, i.e., responsive visualization, a concept adopted from responsive web design. However, these studies mainly focused on visualizations with a single view or a small number of views, and there are still unresolved questions about how to design responsive multi-view visualizations. In this paper, we present a reusable and generalizable framework for designing responsive multi-view visualizations focused on genomics data. To gain a better understanding of existing design challenges, we review web-based genomics visualization tools in the wild. By characterizing tools based on a taxonomy of responsive designs, we find that responsiveness is rarely supported in existing tools. To distill insights from the survey results in a systematic way, we classify typical view composition patterns, such as “vertically long,” “horizontally wide,” “circular,” and “cross-shaped” compositions. We then identify their usability issues in different resolutions that stem from the composition patterns, and discuss approaches to address the issues and to make genomics visualizations responsive. By extending the Gosling visualization grammar to support responsive constructs, we show how these approaches can be supported. A valuable follow-up study would be taking different input modalities into account, such as mouse and touch interactions, which were not considered in our study.
Sehi L'Yi;Nils GehlenborgSehi L'Yi;Nils GehlenborgHarvard Medical School, Boston, MA, USA;Harvard Medical School, Boston, MA, USA
10.1109/TVCG.2013.234;10.1109/TVCG.2013.124;10.1109/TVCG.2020.3030338;10.1109/TVCG.2017.2744199;10.1109/TVCG.2020.3030371;10.1109/TVCG.2019.2934786;10.1109/TVCG.2010.162;10.1109/TVCG.2021.3114782;10.1109/TVCG.2017.2743859;10.1109/TVCG.2011.179;10.1109/TVCG.2020.3030419;10.1109/TVCG.2021.3114876;10.1109/TVCG.2010.163;10.1109/TVCG.2018.2864884;10.1109/TVCG.2022.3209407;10.1109/TVCG.2014.2346445;10.1109/TVCG.2017.2744198;10.1109/TVCG.2016.2599030;10.1109/TVCG.2016.2598796;10.1109/TVCG.2020.3030423
Responsive visualization,multi-view visualization,genomics,visualization grammar00
32
Vis2022
ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective
10.1109/TVCG.2022.3209384
http://dx.doi.org/10.1109/TVCG.2022.3209384
831841J
Traditional deep learning interpretability methods which are suitable for model users cannot explain network behaviors at the global level and are inflexible at providing fine-grained explanations. As a solution, concept-based explanations are gaining attention due to their human intuitiveness and their flexibility to describe both global and local model behaviors. Concepts are groups of similarly meaningful pixels that express a notion, embedded within the network's latent space; they have commonly been hand-generated, but have recently been discovered by automated approaches. Unfortunately, the magnitude and diversity of discovered concepts make it difficult to navigate and make sense of the concept space. Visual analytics can serve a valuable role in bridging these gaps by enabling structured navigation and exploration of the concept space to provide users with concept-based insights into model behavior. To this end, we design, develop, and validate ConceptExplainer, a visual analytics system that enables people to interactively probe and explore the concept space to explain model behavior at the instance/class/global level. The system was developed via iterative prototyping to address a number of design challenges that model users face in interpreting the behavior of deep learning models. Via a rigorous user study, we validate how ConceptExplainer supports these challenges. Likewise, we conduct a series of usage scenarios to demonstrate how the system supports the interactive analysis of model behavior across a variety of tasks and explanation granularities, such as identifying concepts that are important to classification, identifying bias in training data, and understanding how concepts can be shared across diverse and seemingly dissimilar classes.
Jinbin Huang;Aditi Mishra;Bum Chul Kwon;Chris BryanJinbin Huang;Aditi Mishra;Bum Chul Kwon;Chris BryanArizona State University, USA;Arizona State University, USA;IBM research, USA;Arizona State University, USA
10.1109/TVCG.2019.2934659;10.1109/TVCG.2014.2346248;10.1109/TVCG.2021.3114858;10.1109/TVCG.2019.2934629;10.1109/TVCG.2021.3114837
Explainable AI,Concept Activation Vectors,Interactive Visual Analytics155
33
Vis2022
The Quest for Omnioculars: Embedded Visualization for Augmenting Basketball Game Viewing Experiences
10.1109/TVCG.2022.3209353
http://dx.doi.org/10.1109/TVCG.2022.3209353
962971J
Sports game data is becoming increasingly complex, often consisting of multivariate data such as player performance stats, historical team records, and athletes' positional tracking information. While numerous visual analytics systems have been developed for sports analysts to derive insights, few tools target fans to improve their understanding of and engagement with sports data during live games. By presenting extra data in the actual game views, embedded visualization has the potential to enhance fans' game-viewing experience. However, little is known about how to design such visualizations embedded in live games. In this work, we present a user-centered design study of developing interactive embedded visualizations for basketball fans to improve their live game-watching experiences. We first conducted a formative study to characterize basketball fans' in-game analysis behaviors and tasks. Based on our findings, we propose a design framework to inform the design of embedded visualizations based on specific data-seeking contexts. Following the design framework, we present five novel embedded visualization designs targeting five representative contexts identified by the fans, including shooting, offense, defense, player evaluation, and team comparison. We then developed Omnioculars, an interactive basketball game-viewing prototype that features the proposed embedded visualizations for fans' in-game data analysis. We evaluated Omnioculars in a simulated basketball game with basketball fans. The study results suggest that our design supports personalized in-game data analysis and enhances game understanding and engagement.
Tica Lin;Zhutian Chen;Yalong Yang 0001;Daniele Chiappalupi;Johanna Beyer;Hanspeter Pfister
Tica Lin;Zhutian Chen;Yalong Yang;Daniele Chiappalupi;Johanna Beyer;Hanspeter Pfister
John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;Department of Computer Science, Virginia Tech, Blacksburg, VA, USA;Department of Computer Science, ETH Zürich, Switzerland;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
10.1109/TVCG.2013.124;10.1109/TVCG.2021.3114806;10.1109/TVCG.2021.3114861;10.1109/TVCG.2013.192;10.1109/TVCG.2017.2745181;10.1109/TVCG.2016.2598608;10.1109/TVCG.2020.3030392
Sports Analytics,Embedded Visualization,Data Visualization257HM
34
Vis2022
Cultivating Visualization Literacy for Children Through Curiosity and Play
10.1109/TVCG.2022.3209442
http://dx.doi.org/10.1109/TVCG.2022.3209442
257267J
Fostering data visualization literacy (DVL) as part of childhood education could lead to a more data literate society. However, most work in DVL for children relies on a more formal educational context (i.e., a teacher-led approach) that limits children's engagement with data to classroom-based environments and, consequently, children's ability to ask questions about and explore data on topics they find personally meaningful. We explore how a curiosity-driven, child-led approach can provide more agency to children when they are authoring data visualizations. This paper explores how informal learning with crafting physicalizations through play and curiosity may foster increased literacy and engagement with data. Employing a constructionist approach, we designed a do-it-yourself toolkit made out of everyday materials (e.g., paper, cardboard, mirrors) that enables children to create, customize, and personalize three different interactive visualizations (bar, line, pie). We used the toolkit as a design probe in a series of in-person workshops with 5 children (6 to 11-year-olds) and interviews with 5 educators. Our observations reveal that the toolkit helped children creatively engage and interact with visualizations. Children with prior knowledge of data visualization reported the toolkit serving as more of an authoring tool that they envision using in their daily lives, while children with little to no experience found the toolkit as an engaging introduction to data visualization. Our study demonstrates the potential of using the constructionist approach to cultivate children's DVL through curiosity and play.
Sandra Bae;Rishi Vanukuru;Ruhan Yang;Peter Gyory;Ran Zhou 0003;Ellen Yi-Luen Do;Danielle Albers Szafir
S. Sandra Bae;Rishi Vanukuru;Ruhan Yang;Peter Gyory;Ran Zhou;Ellen Yi-Luen Do;Danielle Albers Szafir
CU Boulder, USA;CU Boulder, USA;CU Boulder, USA;CU Boulder, USA;CU Boulder, USA;CU Boulder, USA;UNC-Chapel Hill, USA
10.1109/TVCG.2019.2934804;10.1109/TVCG.2011.185;10.1109/TVCG.2014.2346984;10.1109/TVCG.2019.2934397;10.1109/INFVIS.2004.64;10.1109/TVCG.2014.2346292;10.1109/TVCG.2020.3030464;10.1109/TVCG.2018.2865241;10.1109/TVCG.2016.2598920
Data visualization literacy,children,constructionism,informal learning277
35
Vis2022
Animated Vega-Lite: Unifying Animation with a Grammar of Interactive Graphics
10.1109/TVCG.2022.3209369
http://dx.doi.org/10.1109/TVCG.2022.3209369
149159J
We present Animated Vega-Lite, a set of extensions to Vega-Lite that model animated visualizations as time-varying data queries. In contrast to alternate approaches for specifying animated visualizations, which prize a highly expressive design space, Animated Vega-Lite prioritizes unifying animation with the language's existing abstractions for static and interactive visualizations to enable authors to smoothly move between or combine these modalities. Thus, to compose animation with static visualizations, we represent time as an encoding channel. Time encodings map a data field to animation keyframes, providing a lightweight specification for animations without interaction. To compose animation and interaction, we also represent time as an event stream; Vega-Lite selections, which provide dynamic data queries, are now driven not only by input events but by timer ticks as well. We evaluate the expressiveness of our approach through a gallery of diverse examples that demonstrate coverage over taxonomies of both interaction and animation. We also critically reflect on the conceptual affordances and limitations of our contribution by interviewing five expert developers of existing animation grammars. These reflections highlight the key motivating role of in-the-wild examples, and identify three central tradeoffs: the language design process, the types of animated transitions supported, and how the systems model keyframes.
Jonathan Zong;Josh Pollock;Dylan Wootton;Arvind Satyanarayan
Jonathan Zong;Josh Pollock;Dylan Wootton;Arvind Satyanarayan
MIT CSAIL, USA;MIT CSAIL, USA;MIT CSAIL, USA;MIT CSAIL, USA
10.1109/TVCG.2011.185;10.1109/TVCG.2014.2346424;10.1109/TVCG.2007.70539;10.1109/TVCG.2018.2864909;10.1109/TVCG.2020.3030360;10.1109/TVCG.2014.2346250;10.1109/TVCG.2018.2865240;10.1109/TVCG.2008.125;10.1109/TVCG.2019.2934281;10.1109/TVCG.2022.3209369;10.1109/TVCG.2015.2467091;10.1109/TVCG.2020.3030396;10.1109/TVCG.2015.2467191;10.1109/TVCG.2007.70515;10.1109/TVCG.2020.3030367
Information visualization,Animation,Interaction,Toolkits,Systems,Declarative Specification152
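As a sketch of what "time as an encoding channel" could look like in a specification (the keys and field names below are assumptions made for illustration; consult the paper for the actual Animated Vega-Lite syntax):

```python
# Illustrative spec fragment only: a scatterplot whose animation keyframes are
# driven by a hypothetical "time" encoding channel mapped to a data field.
spec = {
    "mark": "point",
    "encoding": {
        "x": {"field": "gdpPercap", "type": "quantitative"},
        "y": {"field": "lifeExp", "type": "quantitative"},
        "time": {"field": "year", "type": "ordinal"},  # drives the keyframes
    },
}
print(spec["encoding"]["time"])
```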
36
Vis2022
Roboviz: A Game-Centered Project for Information Visualization Education
10.1109/TVCG.2022.3209402
http://dx.doi.org/10.1109/TVCG.2022.3209402
268277J
Due to their pedagogical advantages, large final projects in information visualization courses have become standard practice. Students take on a client (real or simulated), a dataset, and a vague set of goals to create a complete visualization or visual analytics product. Unfortunately, many projects suffer from ambiguous goals, over- or under-constrained client expectations, and data constraints that have students spending their time on non-visualization problems (e.g., data cleaning). These are important skills, but are often secondary course objectives, and unforeseen problems can significantly hinder students. We created an alternative for our information visualization course: Roboviz, a real-time game for students to play by building a visualization-focused interface. By designing the game mechanics around four different data types, the project allows students to create a wide array of interactive visualizations. Student teams play against their classmates with the objective to collect the most (good) robots. The flexibility of the strategies encourages variability, a range of approaches, and solving wicked design constraints. We describe the construction of this game and report on student projects over two years. We further show how the game mechanics can be extended or adapted to other game-based projects.
Eytan Adar;Elsie Lee-RobbinsEytan Adar;Elsie Lee-RobbinsUniversity of Michigan, School of Information, USA;University of Michigan, School of Information, USA
10.1109/TVCG.2020.3030375;10.1109/VISUAL.1998.745348;10.1109/TVCG.2016.2599338;10.1109/TVCG.2020.3030464;10.1109/INFVIS.2004.27;10.1109/VAST.2009.5333245;10.1109/TVCG.2007.70515
pedagogy,final project,game interfaces059
37
Vis2022
RISeer: Inspecting the Status and Dynamics of Regional Industrial Structure via Visual Analytics
10.1109/TVCG.2022.3209351
http://dx.doi.org/10.1109/TVCG.2022.3209351
10701080J
Restructuring the regional industrial structure (RIS) has the potential to halt economic recession and achieve revitalization. Understanding the current status and dynamics of RIS will greatly assist in studying and evaluating the current industrial structure. Previous studies have focused on qualitative and quantitative research to rationalize RIS from a macroscopic perspective. Although recent studies have traced information at the industrial enterprise level to complement existing research from a micro perspective, the ambiguity of the underlying variables contributing to the industrial sector and its composition, the dynamic nature, and the large number of multivariate features of RIS records have obscured a deep and fine-grained understanding of RIS. To this end, we propose an interactive visualization system, RISeer, which is based on interpretable machine learning models and enhanced visualizations designed to identify the evolutionary patterns of the RIS and facilitate inter-regional inspection and comparison. Two case studies confirm the effectiveness of our approach, and feedback from experts indicates that RISeer helps them to gain a fine-grained understanding of the dynamics and evolution of the RIS.
Longfei Chen;Yang Ouyang;Haipeng Zhang;Suting Hong;Quan Li
Longfei Chen;Yang Ouyang;Haipeng Zhang;Suting Hong;Quan Li
School of Information Science and Technology, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China;School of Entrepreneurship and Management, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China
10.1109/TVCG.2009.122;10.1109/TVCG.2013.173;10.1109/VAST.2018.8802454;10.1109/TVCG.2006.179;10.1109/TVCG.2016.2598838;10.1109/INFVIS.2005.1532152;10.1109/TVCG.2015.2468078;10.1109/TVCG.2017.2744738
Spatiotemporal dynamics,multivariate time series,regional industrial structure,visualization070
38
Vis2022
DendroMap: Visual Exploration of Large-Scale Image Datasets for Machine Learning with Treemaps
10.1109/TVCG.2022.3209425
http://dx.doi.org/10.1109/TVCG.2022.3209425
320330J
In this paper, we present DendroMap, a novel approach to interactively exploring large-scale image datasets for machine learning (ML). ML practitioners often explore image datasets by generating a grid of images or projecting high-dimensional representations of images into 2-D using dimensionality reduction techniques (e.g., t-SNE). However, neither approach effectively scales to large datasets because images are ineffectively organized and interactions are insufficiently supported. To address these challenges, we develop DendroMap by adapting Treemaps, a well-known visualization technique. DendroMap effectively organizes images by extracting hierarchical cluster structures from high-dimensional representations of images. It enables users to make sense of the overall distributions of datasets and interactively zoom into specific areas of interest at multiple levels of abstraction. Our case studies with widely-used image datasets for deep learning demonstrate that users can discover insights about datasets and trained models by examining the diversity of images, identifying underperforming subgroups, and analyzing classification errors. We conducted a user study that evaluates the effectiveness of DendroMap in grouping and searching tasks by comparing it with a gridified version of t-SNE and found that participants preferred DendroMap. DendroMap is available at https://div-lab.github.io/dendromap/.
Donald Bertucci;Md Montaser Hamid;Yashwanthi Anand;Anita Ruangrotsakun;Delyar Tabatabai;Melissa Perez;Minsuk Kahng
Donald Bertucci;Md Montaser Hamid;Yashwanthi Anand;Anita Ruangrotsakun;Delyar Tabatabai;Melissa Perez;Minsuk Kahng
Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA
10.1109/INFVIS.2005.1532136;10.1109/VAST47406.2019.8986948;10.1109/TVCG.2020.3030342;10.1109/TVCG.2013.212;10.1109/TVCG.2013.162;10.1109/TVCG.2014.2346276;10.1109/TVCG.2021.3114855;10.1109/TVCG.2019.2934659;10.1109/TVCG.2017.2744718;10.1109/TVCG.2016.2598445;10.1109/TVCG.2016.2598838;10.1109/TVCG.2016.2598828;10.1109/TVCG.2019.2934619;10.1109/VAST47406.2019.8986943;10.1109/TVCG.2007.70515;10.1109/VAST.2014.7042476;10.1109/TVCG.2020.3030383;10.1109/TVCG.2021.3114837
Visualization for machine learning,image data,treemaps,visual analytics,data-centric AI,error analysis171
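A rough sketch of the underlying idea of deriving a hierarchy from image embeddings that a zoomable treemap could then lay out (not DendroMap's implementation; the embeddings and cluster counts below are synthetic stand-ins):

```python
# Build a cluster hierarchy over image embeddings and cut it at several levels
# of abstraction, mimicking coarse-to-fine zooming over the dataset.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(500, 64))      # stand-in for image features
Z = linkage(embeddings, method="ward")       # hierarchical cluster structure

for k in (4, 16, 64):                        # coarse, medium, fine levels
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, "clusters -> sizes of first few:", np.bincount(labels)[1:5])
```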
39
Vis2022
Incorporation of Human Knowledge into Data Embeddings to Improve Pattern Significance and Interpretability
10.1109/TVCG.2022.3209382
http://dx.doi.org/10.1109/TVCG.2022.3209382
723733J
Embedding is a common technique for analyzing multi-dimensional data. However, the embedding projection cannot always form significant and interpretable visual structures that foreshadow underlying data patterns. We propose an approach that incorporates human knowledge into data embeddings to improve pattern significance and interpretability. The core idea is (1) externalizing tacit human knowledge as explicit sample labels and (2) adding a classification loss in the embedding network to encode samples' classes. The approach pulls samples of the same class with similar data features closer in the projection, leading to more compact (significant) and class-consistent (interpretable) visual structures. We give an embedding network with a customized classification loss to implement the idea and integrate the network into a visualization system to form a workflow that supports flexible class creation and pattern exploration. Patterns found on open datasets in case studies, subjects' performance in a user study, and quantitative experiment results illustrate the general usability and effectiveness of the approach.
Jie Li 0006;Chun-qi ZhouJie Li;Chun-qi ZhouCollege of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China
10.1109/TVCG.2011.185;10.1109/TVCG.2020.3030443;10.1109/TVCG.2016.2598468;10.1109/TVCG.2013.212;10.1109/TVCG.2018.2865194;10.1109/VAST.2017.8585498;10.1109/TVCG.2019.2934433;10.1109/TVCG.2019.2934251;10.1109/TVCG.2013.157;10.1109/TVCG.2015.2467615;10.1109/TVCG.2021.3114863;10.1109/TVCG.2021.3114687;10.1109/TVCG.2018.2865240;10.1109/TVCG.2020.3030347;10.1109/TVCG.2015.2467591;10.1109/TVCG.2014.2346481;10.1109/TVCG.2016.2598495;10.1109/TVCG.2021.3114870;10.1109/TVCG.2015.2468078;10.1109/VISUAL.2005.1532781;10.1109/TVCG.2017.2745078;10.1109/TVCG.2017.2745258;10.1109/TVCG.2017.2744098
Tabular Data,Multi-dimensional Exploration,Embedding Projection,Explicit Knowledge Generation,Visual Analytics382
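A hedged sketch of the core idea of adding a classification loss on user-provided class labels to an embedding network, so that same-class samples are pulled together in the projection (the architecture, loss weighting, and training setup below are assumptions, not the paper's network):

```python
# Embedding model trained with reconstruction loss plus a classification loss
# on externalized class labels; the 2-D code is used as the projection.
import torch
import torch.nn as nn

class KnowledgeAwareEmbedder(nn.Module):
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 2))
        self.decoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, n_features))
        self.classifier = nn.Linear(2, n_classes)

    def forward(self, x):
        z = self.encoder(x)                  # 2-D embedding for the projection
        return z, self.decoder(z), self.classifier(z)

x = torch.randn(256, 16)                     # synthetic samples
y = torch.randint(0, 3, (256,))              # externalized class labels
model = KnowledgeAwareEmbedder(16, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    z, x_hat, logits = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + 0.5 * nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
```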
40
Vis2022
Exploring Interactions with Printed Data Visualizations in Augmented Reality
10.1109/TVCG.2022.3209386
http://dx.doi.org/10.1109/TVCG.2022.3209386
418428J
This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops ($\mathrm{N}=20$) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study ($\mathrm{N}=12$, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement “point” for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.
Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu
Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu
Hong Kong University of Science and Technology, Hong Kong, China;Harvard University, USA;Carnegie Mellon University, USA;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;University of Edinburgh, United Kingdom;Hong Kong University of Science and Technology, Hong Kong, China
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2015.2467201;10.1109/TVCG.2013.124;10.1109/TVCG.2021.3114806;10.1109/TVCG.2021.3114861;10.1109/TVCG.2019.2934283;10.1109/TVCG.2020.3030334;10.1109/TVCG.2013.121;10.1109/TVCG.2013.134;10.1109/TVCG.2017.2744319;10.1109/TVCG.2017.2744019;10.1109/TVCG.2012.204;10.1109/TVCG.2020.3028948;10.1109/TVCG.2010.177;10.1109/TVCG.2014.2346249;10.1109/TVCG.2015.2467091;10.1109/TVCG.2018.2865152;10.1109/TVCG.2012.237;10.1109/TVCG.2020.3030392;10.1109/TVCG.2007.70515
Interaction design,augmented reality,paper interaction,tangible user interface,printed data visualization184HM
41
Vis2022
Breaking the Fourth Wall of Data Stories through Interaction
10.1109/TVCG.2022.3209409
http://dx.doi.org/10.1109/TVCG.2022.3209409
972982J
Interaction is increasingly being integrated into data stories to support data exploration and explanation. Interaction can also be combined with the narrative device, breaking the fourth wall (BTFW), to build a deeper connection between readers and data stories. BTFW interaction directly addresses readers by requiring their input. Such user input is then integrated into the narrative or visuals of data stories to encourage readers to inspect the stories more closely. In this work, we explore the design patterns of BTFW interaction commonly used in data stories. Six design patterns were identified through the analysis of 58 high-quality data stories collected from a range of online sources. Specifically, the data stories were categorized using a coding framework, including the input of BTFW interaction provided by readers and the output of BTFW interaction generated by data stories to respond to the input. To explore the benefits as well as concerns of using BTFW interaction, we conducted a three-session user study including the reading, interview, and recall sessions. The results of our user study suggested that BTFW interaction has a positive impact on self-story connection, user engagement, and information recall. We also discuss design implications to address the possible negative effects brought by BTFW interaction on the interactivity-comprehensibility balance, information privacy, and the learning curve of interaction.
Yang Shi 0007;Tian Gao;Xiaohan Jiao;Nan CaoYang Shi;Tian Gao;Xiaohan Jiao;Nan CaoIntelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China
10.1109/TVCG.2015.2467201;10.1109/TVCG.2013.124;10.1109/TVCG.2019.2934283;10.1109/TVCG.2013.130;10.1109/TVCG.2013.120;10.1109/TVCG.2010.179;10.1109/TVCG.2007.70515
Interaction,data-driven storytelling,narrative devices00HM
42
Vis2022
Interactive Visual Analysis of Structure-borne Noise Data
10.1109/TVCG.2022.3209478
http://dx.doi.org/10.1109/TVCG.2022.3209478
778787J
Numerical simulation has become omnipresent in the automotive domain, posing new challenges such as high-dimensional parameter spaces and large as well as incomplete and multi-faceted data. In this design study, we show how interactive visual exploration and analysis of high-dimensional, spectral data from noise simulation can facilitate design improvements in the context of conflicting criteria. Here, we focus on structure-borne noise, i.e., noise from vibrating mechanical parts. Detecting problematic noise sources early in the design and production process is essential for reducing a product's development costs and its time to market. In a close collaboration of visualization and automotive engineering, we designed a new, interactive approach to quickly identify and analyze critical noise sources, also contributing to an improved understanding of the analyzed system. Several carefully designed, interactive linked views enable the exploration of noises, vibrations, and harshness at multiple levels of detail, both in the frequency and spatial domain. This enables swift and smooth changes of perspective; selections in the frequency domain are immediately reflected in the spatial domain, and vice versa. Noise sources are quickly identified and shown in the context of their neighborhood, both in the frequency and spatial domain. We propose a novel drill-down view, especially tailored to noise data analysis. Split boxplots and synchronized 3D geometry views support comparison tasks. With this solution, engineers iterate over design optimizations much faster, while maintaining a good overview at each iteration. We evaluated the new approach in the automotive industry, studying noise simulation data for an internal combustion engine.
Rainer Splechtna;Denis Gracanin;Goran Todorovic;Stanislav Goja;Boris Bedic;Helwig Hauser;Kresimir Matkovic
Rainer Splechtna;Denis Gračanin;Goran Todorović;Stanislav Goja;Boris Bedić;Helwig Hauser;Krešimir Matković
VRVis Research Center, Austria;Virginia Tech, Blacksburg, VA, USA;AVL-AST d.o.o., Zagreb, Croatia;AVL-AST d.o.o., Zagreb, Croatia;AVL-AST d.o.o., Zagreb, Croatia;University of Bergen, Norway;VRVis Research Center, Austria
10.1109/TVCG.2021.3114797;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2005.1532774;10.1109/INFVIS.2005.1532144;10.1109/INFVIS.2005.1532144;10.1109/VISUAL.2002.1183798
structure-borne noise,NVH analysis,interactive visual analysis029
43
Vis2022
SizePairs: Achieving Stable and Balanced Temporal Treemaps using Hierarchical Size-based Pairing
10.1109/TVCG.2022.3209450
http://dx.doi.org/10.1109/TVCG.2022.3209450
193202J
We present SizePairs, a new technique to create stable and balanced treemap layouts that visualize values changing over time in hierarchical data. To achieve an overall high-quality result across all time steps in terms of stability and aspect ratio, SizePairs employs a new hierarchical size-based pairing algorithm that recursively pairs nodes whose size changes over time complement each other and whose sizes are similar. SizePairs maximizes the visual quality and stability by optimizing the splitting orientation of each internal node and flipping leaf nodes, if necessary. We also present a comprehensive comparison of SizePairs against the state-of-the-art treemaps developed for visualizing time-dependent data. SizePairs outperforms existing techniques in both visual quality and stability, while being faster than the local moves technique.
Chang Han;Jaemin Jo;Anyi Li;Bongshin Lee;Oliver Deussen;Yunhai Wang
Chang Han;Jaemin Jo;Anyi Li;Bongshin Lee;Oliver Deussen;Yunhai Wang
Shandong University, China;Sungkyunkwan University, South Korea;Shandong University, China;Microsoft Research, USA;University Konstanz, Germany;Shandong University, China
10.1109/TVCG.2020.3030404;10.1109/VISUAL.1991.175815;10.1109/TVCG.2010.186;10.1109/TVCG.2018.2865265;10.1109/INFVIS.2001.963283;10.1109/TVCG.2017.2745140;10.1109/TVCG.2007.70529
Treemaps,stability,compensation,temporal treemaps030
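An illustrative, non-hierarchical simplification of size-based pairing: greedily pair leaves whose size changes over time roughly cancel and whose sizes are similar (the paper's algorithm is recursive over the hierarchy and also optimizes split orientation and leaf flipping; the scoring below is a simplification):

```python
# Greedy pairing of leaves by complementary size change and similar mean size.
import numpy as np

def pair_leaves(sizes: np.ndarray):
    """sizes: (n_leaves, n_timesteps) array of per-leaf sizes over time."""
    change = sizes[:, -1] - sizes[:, 0]
    mean = sizes.mean(axis=1)
    unpaired, pairs = set(range(len(sizes))), []
    while len(unpaired) > 1:
        i = unpaired.pop()
        # best partner: size changes cancel out and average sizes are close
        j = min(unpaired, key=lambda j: abs(change[i] + change[j]) + abs(mean[i] - mean[j]))
        unpaired.remove(j)
        pairs.append((i, j))
    return pairs

sizes = np.array([[10, 14], [12, 8], [30, 31], [29, 27]], dtype=float)
print(pair_leaves(sizes))   # pairs leaves with offsetting growth/shrinkage
```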
44
Vis2022
ErgoExplorer: Interactive Ergonomic Risk Assessment from Video Collections
10.1109/TVCG.2022.3209432
http://dx.doi.org/10.1109/TVCG.2022.3209432
4352J
Ergonomic risk assessment is now, due to an increased awareness, carried out more often than in the past. The conventional risk assessment evaluation, based on expert-assisted observation of the workplaces and manually filling in score tables, is still predominant. Data analysis is usually done with a focus on critical moments, although without the support of contextual information and changes over time. In this paper we introduce ErgoExplorer, a system for the interactive visual analysis of risk assessment data. In contrast to the current practice, we focus on data that span across multiple actions and multiple workers while keeping all contextual information. Data is automatically extracted from video streams. Based on carefully investigated analysis tasks, we introduce new views and their corresponding interactions. These views also incorporate domain-specific score tables to facilitate easy adoption by domain experts. All views are integrated into ErgoExplorer, which relies on coordinated multiple views to facilitate analysis through interaction. ErgoExplorer makes it possible for the first time to examine complex relationships between risk assessments of individual body parts over long sessions that span multiple operations. The newly introduced approach supports analysis and exploration at several levels of detail, ranging from a general overview, down to inspecting individual frames in the video stream, if necessary. We illustrate the usefulness of the newly proposed approach by applying it to several datasets.
Manlio Massiris Fernández;Sanjin Rados;Kresimir Matkovic;M. Eduard Gröller;Claudio Delrieux
Manlio Massiris Fernández;Sanjin Radoš;Krešimir Matković;M. Eduard Gröller;Claudio Delrieux
Departamento de Ing. Electrica y Computadoras, Universidad Nacional del Sur, Escuela de Ingenierías Industriales, Universidad de Extremadura, CONICET, Argentina;VRVis Research Center in Vienna, Austria;VRVis Research Center in Vienna, Austria;TU Wien, Austria;Departamento de Ing. Electrica y Computadoras, Universidad Nacional del Sur, CONICET, Argentina
10.1109/TVCG.2019.2934280;10.1109/TVCG.2009.167
Ergonomic assessment,workplace safety,visual analysis033
45
Vis2022
Relaxed Dot Plots: Faithful Visualization of Samples and Their Distribution
10.1109/TVCG.2022.3209429
http://dx.doi.org/10.1109/TVCG.2022.3209429
278287J
We introduce relaxed dot plots as an improvement of nonlinear dot plots for unit visualization. Our plots produce more faithful data representations and reduce moiré effects. Their contour is based on a customized kernel frequency estimation to match the shape of the distribution of underlying data values. Previous nonlinear layouts introduce column-centric nonlinear scaling of dot diameters for visualization of high-dynamic-range data with high peaks. We provide a mathematical approach to convert that column-centric scaling to our smooth envelope shape. This formalism allows us to use linear, root, and logarithmic scaling to find ideal dot sizes. Our method iteratively relaxes the dot layout for more correct and aesthetically pleasing results. To achieve this, we modified Lloyd's algorithm with additional constraints and heuristics. We evaluate the layouts of relaxed dot plots against a previously existing nonlinear variant and show that our algorithm produces less error regarding the underlying data while establishing the blue noise property that works against moiré effects. Further, we analyze the readability of our relaxed plots in three crowd-sourced experiments. The results indicate that our proposed technique surpasses traditional dot plots.
Nils Rodrigues;Christoph Schulz 0001;Sören Döring;Daniel Baumgartner;Tim Krake;Daniel Weiskopf
Nils Rodrigues;Christoph Schulz;Sören Döring;Daniel Baumgartner;Tim Krake;Daniel Weiskopf
University of Stuttgart, Visualization Research Center (VISUS), Germany;University of Stuttgart, Visualization Research Center (VISUS), Germany;University of Stuttgart, Germany;University of Stuttgart, Germany;University of Stuttgart, Visualization Research Center (VISUS), Germany;University of Stuttgart, Visualization Research Center (VISUS), Germany
10.1109/VISUAL.2002.1183777;10.1109/TVCG.2017.2744018;10.1109/TVCG.2009.127;10.1109/TVCG.2011.227
Dot plot,statistical graphics,Lloyd relaxation,layout,kernel frequency estimation033
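A toy one-dimensional Lloyd-style relaxation conveying the general mechanism of letting dot columns follow the data distribution (the paper's variant adds constraints and heuristics for dot sizing and blue-noise quality; the data and counts below are invented):

```python
# Simplified Lloyd relaxation in 1-D: each dot-column center moves to the mean
# of the data values in its Voronoi cell, spreading columns with the density.
import numpy as np

rng = np.random.default_rng(2)
values = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 100)])
centers = np.linspace(values.min(), values.max(), 25)   # one center per column

for _ in range(50):
    cell = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
    for k in range(len(centers)):
        if np.any(cell == k):
            centers[k] = values[cell == k].mean()

centers.sort()
print(np.round(centers, 2))
```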
46
Vis2022
Uncertainty-Aware Multidimensional Scaling
10.1109/TVCG.2022.3209420
http://dx.doi.org/10.1109/TVCG.2022.3209420
2332J
We present an extension of multidimensional scaling (MDS) to uncertain data, facilitating uncertainty visualization of multidimensional data. Our approach uses local projection operators that map high-dimensional random vectors to low-dimensional space to formulate a generalized stress. In this way, our generic model supports arbitrary distributions and various stress types. We use our uncertainty-aware multidimensional scaling (UAMDS) concept to derive a formulation for the case of normally distributed random vectors and a squared stress. The resulting minimization problem is numerically solved via gradient descent. We complement UAMDS by additional visualization techniques that address the sensitivity and trustworthiness of dimensionality reduction under uncertainty. With several examples, we demonstrate the usefulness of our approach and the importance of uncertainty-aware techniques.
David Hägele;Tim Krake;Daniel WeiskopfDavid Hägele;Tim Krake;Daniel WeiskopfUniversity of Stuttgart, Germany;University of Stuttgart, Germany;University of Stuttgart, Germany
10.1109/INFVIS.1998.729560;10.1109/VAST.2009.5332611;10.1109/TVCG.2019.2934812;10.1109/TVCG.2018.2864889;10.1109/TVCG.2016.2599106;10.1109/TVCG.2015.2467591;10.1109/TVCG.2016.2598919
Uncertainty visualization,dimensionality reduction,multidimensional scaling,non-linear projection037BP
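For orientation, the deterministic special case that UAMDS generalizes is classical metric MDS with a squared stress; a minimal gradient-descent sketch of that special case (numpy only, synthetic data, not the paper's uncertainty-aware formulation) is:

```python
# Classical metric MDS: minimize squared stress between high-dimensional
# pairwise distances and the distances of a 2-D layout via gradient descent.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 10))                          # high-dimensional points
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # target pairwise distances

Y = X[:, :2].copy()                                    # crude initial 2-D layout
lr, n = 0.05, len(X)
for _ in range(500):
    diff = Y[:, None] - Y[None, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, 1.0)                        # avoid division by zero
    grad = ((dist - D) / dist)[..., None] * diff       # per-pair stress gradient
    Y -= lr * grad.sum(axis=1) / n

stress = 0.5 * np.sum((np.linalg.norm(Y[:, None] - Y[None, :], axis=-1) - D) ** 2)
print(f"squared stress after descent: {stress:.1f}")
```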
47
Vis2022
A Framework for Multiclass Contour Visualization
10.1109/TVCG.2022.3209482
http://dx.doi.org/10.1109/TVCG.2022.3209482
353362J
Multiclass contour visualization is often used to interpret complex data attributes in such fields as weather forecasting, computational fluid dynamics, and artificial intelligence. However, effective and accurate representations of underlying data patterns and correlations can be challenging in multiclass contour visualization, primarily due to the inevitable visual cluttering and occlusions when the number of classes is significant. To address this issue, visualization designers must carefully choose design parameters to make visualizations more comprehensible. With this goal in mind, we proposed a framework for multiclass contour visualization. The framework has two components: a set of four visualization design parameters, which are developed based on an extensive review of literature on contour visualization, and a declarative domain-specific language (DSL) for creating multiclass contour rendering, which enables a fast exploration of those design parameters. A task-oriented user study was conducted to assess how those design parameters affect users' interpretations of real-world data. The study results offered suggestions on choosing values for these design parameters in multiclass contour visualization.
Sihang Li;Jiacheng Yu;Mingxuan Li;Le Liu;Xiaolong Zhang 0001;Xiaoru Yuan
Sihang Li;Jiacheng Yu;Mingxuan Li;Le Liu;Xiaolong Luke Zhang;Xiaoru Yuan
Ministry of Education, Key Laboratory of Machine Perception, School of AI, Peking University, China;Ministry of Education, Key Laboratory of Machine Perception, School of AI, Peking University, China;Ministry of Education, Key Laboratory of Machine Perception, School of AI, Peking University, China;School of Computer Science, Northwestern Polytechnical University, China;College of Information Sciences and Technology, Pennsylvania State University, USA;Ministry of Education, Key Laboratory of Machine Perception, School of AI, Peking University, China
10.1109/TVCG.2010.154;10.1109/TVCG.2018.2865139;10.1109/TVCG.2014.2346322;10.1109/TVCG.2009.122;10.1109/TVCG.2010.144;10.1109/TVCG.2018.2865141;10.1109/TVCG.2019.2934667;10.1109/TVCG.2019.2934811;10.1109/TVCG.2010.210;10.1109/TVCG.2013.130;10.1109/TVCG.2017.2744184;10.1109/TVCG.2016.2599030;10.1109/TVCG.2018.2864841;10.1109/TVCG.2020.3030372;10.1109/TVCG.2009.175;10.1109/TVCG.2013.143;10.1109/TVCG.2020.3030432;10.1109/TVCG.2012.238
Contour,multiclass visualization,visualization framework,domain-specific language,visualization design038
48
Vis2022
Self-Supervised Color-Concept Association via Image Colorization
10.1109/TVCG.2022.3209481
http://dx.doi.org/10.1109/TVCG.2022.3209481
247256J
The interpretation of colors in visualizations is facilitated when the assignments between colors and concepts in the visualizations match human's expectations, implying that the colors can be interpreted in a semantic manner. However, manually creating a dataset of suitable associations between colors and concepts for use in visualizations is costly, as such associations would have to be collected from humans for a large variety of concepts. To address the challenge of collecting this data, we introduce a method to extract color-concept associations automatically from a set of concept images. While the state-of-the-art method extracts associations from data with supervised learning, we developed a self-supervised method based on colorization that does not require the preparation of ground truth color-concept associations. Our key insight is that a set of images of a concept should be sufficient for learning color-concept associations, since humans also learn to associate colors to concepts mainly from past visual input. Thus, we propose to use an automatic colorization method to extract statistical models of the color-concept associations that appear in concept images. Specifically, we take a colorization model pre-trained on ImageNet and fine-tune it on the set of images associated with a given concept, to predict pixel-wise probability distributions in Lab color space for the images. Then, we convert the predicted probability distributions into color ratings for a given color library and aggregate them for all the images of a concept to obtain the final color-concept associations. We evaluate our method using four different evaluation metrics and via a user study. Experiments show that, although the state-of-the-art method based on supervised learning with user-provided ratings is more effective at capturing relative associations, our self-supervised method obtains overall better results according to metrics like Earth Mover's Distance (EMD) and Entropy Difference (ED), which are closer to human perception of color distributions.
Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui Huang 0004
Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui HuangShenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Carleton University, School of Computer Science, Canada;Shenzhen University, Visual Computing Research Center, China
10.1109/TVCG.2016.2598604;10.1109/TVCG.2021.3114780;10.1109/TVCG.2019.2934536;10.1109/TVCG.2018.2865147;10.1109/TVCG.2020.3030434;10.1109/TVCG.2015.2467471
Color-concept association,colorization,EMD139
49
Vis2022
Lotse: A Practical Framework for Guidance in Visual Analytics
10.1109/TVCG.2022.3209393
http://dx.doi.org/10.1109/TVCG.2022.3209393
11241134J
Co-adaptive guidance aims to enable efficient human-machine collaboration in visual analytics, as proposed by multiple theoretical frameworks. This paper bridges the gap between such conceptual frameworks and practical implementation by introducing an accessible model of guidance and an accompanying guidance library, mapping theory into practice. We contribute a model of system-provided guidance based on design templates and derived strategies. We instantiate the model in a library called Lotse that allows specifying guidance strategies in definition files and generates running code from them. Lotse is the first guidance library using such an approach. It supports the creation of reusable guidance strategies to retrofit existing applications with guidance and fosters the creation of general guidance strategy patterns. We demonstrate its effectiveness through first-use case studies with VA researchers of varying guidance design expertise and find that they are able to effectively and quickly implement guidance with Lotse. Further, we analyze our framework's cognitive dimensions to evaluate its expressiveness and outline a summary of open research questions for aligning guidance practice with its intricate theory.
Fabian Sperrle;Davide Ceneda;Mennatallah El-AssadyFabian Sperrle;Davide Ceneda;Mennatallah El-AssadyUniversity of Konstanz, Germany;TU Wien, Austria;ETH AI Center, Switzerland
10.1109/TVCG.2011.185;10.1109/TVCG.2013.124;10.1109/TVCG.2016.2598468;10.1109/TVCG.2016.2598471;10.1109/TVCG.2020.3030360;10.1109/TVCG.2014.2346481;10.1109/TVCG.2016.2599030;10.1109/TVCG.2012.213;10.1109/VAST47406.2019.8986917;10.1109/TVCG.2007.70589
Guidance Theory,Guidance Implementation341
50
Vis2022
Visual Concept Programming: A Visual Analytics Approach to Injecting Human Intelligence at Scale
10.1109/TVCG.2022.3209466
http://dx.doi.org/10.1109/TVCG.2022.3209466
7483J
Data-centric AI has emerged as a new research area to systematically engineer the data to land AI models for real-world applications. As a core method for data-centric AI, data programming helps experts inject domain knowledge into data and label data at scale using carefully designed labeling functions (e.g., heuristic rules, logistics). Though data programming has shown great success in the NLP domain, it is challenging to program image data because of a) the difficulty of describing images using a visual vocabulary without human annotations and b) the lack of efficient tools for data programming of images. We present Visual Concept Programming, a first-of-its-kind visual analytics approach that uses visual concepts to program image data at scale while requiring little human effort. Our approach is built upon three unique components. It first uses a self-supervised learning approach to learn visual representation at the pixel level and extract a dictionary of visual concepts from images without using any human annotations. The visual concepts serve as building blocks of labeling functions for experts to inject their domain knowledge. We then design interactive visualizations to explore and understand visual concepts and compose labeling functions with concepts without writing code. Finally, with the composed labeling functions, users can label the image data at scale and use the labeled data to refine the pixel-wise visual representation and concept quality. We evaluate the learned pixel-wise visual representation for the downstream task of semantic segmentation to show the effectiveness and usefulness of our approach. In addition, we demonstrate how our approach tackles real-world problems of image retrieval for autonomous driving.
Md. Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Ren Liu
Md Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Liu Ren
University of Maryland, USA;Bosch Research North America, USA;Robert Bosch GmbH, Germany;Bosch Research North America, USA;Bosch Research North America, USA
10.1109/TVCG.2017.2744818;10.1109/TVCG.2020.3030350;10.1109/TVCG.2021.3114855;10.1109/TVCG.2019.2934659;10.1109/TVCG.2018.2864843;10.1109/TVCG.2021.3114858;10.1109/TVCG.2017.2744158;10.1109/TVCG.2019.2934619;10.1109/VAST47406.2019.8986943;10.1109/TVCG.2021.3114837
Visual concept programming,data-centric AI,data programming,self-supervised learning,semantic segmentation241
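A minimal sketch of the concept-based labeling-function idea from the Visual Concept Programming abstract above, in the general spirit of data programming: weak labels are derived from the presence of visual concepts in an image patch. The toy concept vocabulary, labels, thresholds, and majority-vote aggregation are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical concept-based labeling functions with majority-vote aggregation.
import numpy as np

ROAD, LANE_MARK, SKY = 0, 1, 2                 # toy visual-concept vocabulary
ABSTAIN, DRIVABLE, NOT_DRIVABLE = -1, 1, 0

def lf_road_dominant(concept_map: np.ndarray) -> int:
    """Label a patch as drivable if road-concept pixels dominate."""
    return DRIVABLE if (concept_map == ROAD).mean() > 0.5 else ABSTAIN

def lf_sky_dominant(concept_map: np.ndarray) -> int:
    """Label a patch as not drivable if it is mostly sky."""
    return NOT_DRIVABLE if (concept_map == SKY).mean() > 0.5 else ABSTAIN

def majority_vote(labels):
    votes = [l for l in labels if l != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN

patch = np.random.choice([ROAD, LANE_MARK, SKY], size=(32, 32), p=[0.6, 0.1, 0.3])
weak_label = majority_vote([lf(patch) for lf in (lf_road_dominant, lf_sky_dominant)])
print("weak label:", weak_label)
```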
51
Vis2022
IDLat: An Importance-Driven Latent Generation Method for Scientific Data
10.1109/TVCG.2022.3209419
http://dx.doi.org/10.1109/TVCG.2022.3209419
679689J
Deep learning based latent representations have been widely used for numerous scientific visualization applications such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, just to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method with data from multiple scientific visualization applications.
Jingyi Shen;Haoyu Li;Jiayi Xu 0001;Ayan Biswas;Han-Wei Shen
Jingyi Shen;Haoyu Li;Jiayi Xu;Ayan Biswas;Han-Wei Shen
Department of Computer Science and Engineering, The Ohio State University, USA;Department of Computer Science and Engineering, The Ohio State University, USA;Department of Computer Science and Engineering, The Ohio State University, USA;Los Alamos National Laboratory, USA;Department of Computer Science and Engineering, The Ohio State University, USA
10.1109/TVCG.2020.3030346;10.1109/VISUAL.2003.1250374;10.1109/TVCG.2006.152;10.1109/VISUAL.2004.48;10.1109/TVCG.2008.140
Latent space,scientific data representation,deep Learning044
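As an illustration of the importance-guided idea in the IDLat abstract above, here is a minimal sketch of an importance-weighted reconstruction loss in PyTorch. The tiny autoencoder, block size, and random importance map are assumptions; this is not the paper's architecture and omits the entropy coder entirely.

```python
# Minimal importance-weighted autoencoder loss sketch (not the IDLat model).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, n_voxels: int, latent: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_voxels, 128), nn.ReLU(), nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, n_voxels))

    def forward(self, x):
        return self.dec(self.enc(x))

def importance_weighted_mse(x, x_hat, importance):
    # Voxels with high importance contribute more to the reconstruction loss.
    return ((x - x_hat) ** 2 * importance).mean()

n = 16 ** 3                              # flattened 16^3 volume block
x = torch.rand(8, n)                     # batch of blocks
importance = torch.rand(8, n)            # e.g., derived from a feature of interest
model = TinyAutoencoder(n)
loss = importance_weighted_mse(x, model(x), importance)
loss.backward()
print(float(loss))
```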
52
Vis2022
Thirty-Two Years of IEEE VIS: Authors, Fields of Study and Citations
10.1109/TVCG.2022.3209422
http://dx.doi.org/10.1109/TVCG.2022.3209422
10161025J
The IEEE VIS Conference (VIS) recently rebranded itself as a unified conference and officially positioned itself within the discipline of Data Science. Driven by this movement, we investigated (1) who contributed to VIS, and (2) where VIS stands in the scientific world. We examined the authors and fields of study of 3,240 VIS publications in the past 32 years based on data collected from OpenAlex and IEEE Xplore, among other sources. We also examined the citation flows from referenced papers (i.e., those referenced in VIS) to VIS, and from VIS to citing papers (i.e., those citing VIS). We found that VIS has been becoming increasingly popular and collaborative. The numbers of publications, unique authors, and participating countries have been steadily growing. Both cross-country collaborations and collaborations between educational and non-educational affiliations, namely “cross-type collaborations”, are increasing. The dominance of the US is decreasing, and authors from China are now an important part of VIS. In terms of author affiliation types, VIS is increasingly dominated by authors from universities. We found that the topics, inspirations, and influences of VIS research are limited such that (1) VIS and its referenced and citing papers largely fall into the Computer Science domain, and (2) citations flow mostly between the same set of subfields within Computer Science. Our citation analyses showed that award-winning VIS papers had higher citations. Interactive visualizations, replication data, source code and supplementary material are available at https://32vis.hongtaoh.com and https://osf.io/zkvjm.
Hongtao Hao 0002;Yumian Cui;Zhengxiang Wang;Yea-Seul Kim
Hongtao Hao;Yumian Cui;Zhengxiang Wang;Yea-Seul Kim
University of Wisconsin-Madison, USA;University of Wisconsin-Madison, USA;Stony Brook University, Stony Brook, NY, USA;University of Wisconsin-Madison, USA
10.1109/TVCG.2011.185;10.1109/INFVIS.2004.23;10.1109/TVCG.2016.2598827;10.1109/INFVIS.2004.45;10.1109/INFVIS.2004.22;10.1109/TVCG.2021.3114787;10.1109/INFVIS.2004.4;10.1109/INFVIS.2004.37
Visualization,scientometric analysis,OpenAlex,author affiliation,scientific collaboration,citation analysis044
53
Vis2022
VDL-Surrogate: A View-Dependent Latent-based Model for Parameter Space Exploration of Ensemble Simulations
10.1109/TVCG.2022.3209413
http://dx.doi.org/10.1109/TVCG.2022.3209413
820830J
We propose VDL-Surrogate, a view-dependent neural-network-latent-based surrogate model for parameter space exploration of ensemble simulations that allows high-resolution visualizations and user-specified visual mappings. Surrogate-enabled parameter space exploration allows domain scientists to preview simulation results without having to run a large number of computationally costly simulations. Limited by computational resources, however, existing surrogate models may not produce previews with sufficient resolution for visualization and analysis. To improve the efficient use of computational resources and support high-resolution exploration, we perform ray casting from different viewpoints to collect samples and produce compact latent representations. This latent encoding process reduces the cost of surrogate model training while maintaining the output quality. In the model training stage, we select viewpoints to cover the whole viewing sphere and train corresponding VDL-Surrogate models for the selected viewpoints. In the model inference stage, we predict the latent representations at previously selected viewpoints and decode the latent representations to data space. For any given viewpoint, we make interpolations over decoded data at selected viewpoints and generate visualizations with user-specified visual mappings. We show the effectiveness and efficiency of VDL-Surrogate in cosmological and ocean simulations with quantitative and qualitative evaluations. Source code is publicly available at https://github.com/trainsn/VDL-Surrogate.
Neng Shi;Jiayi Xu 0001;Haoyu Li;Hanqi Guo 0001;Jonathan Woodring;Han-Wei Shen
Neng Shi;Jiayi Xu;Haoyu Li;Hanqi Guo;Jonathan Woodring;Han-Wei Shen
Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA;Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA;Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA;Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, USA;Los Alamos National Laboratory, Applied Computer Science Group (CCS-7), Los Alamos, NM, USA;Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA
10.1109/TVCG.2016.2598869;10.1109/SciVis.2015.7429487;10.1109/TVCG.2010.190;10.1109/TVCG.2013.147;10.1109/TVCG.2019.2934255;10.1109/TVCG.2020.3030346;10.1109/TVCG.2019.2934591;10.1109/TVCG.2019.2934312;10.1109/TVCG.2009.155;10.1109/TVCG.2018.2865051;10.1109/TVCG.2014.2346755;10.1109/VAST.2015.7347635;10.1109/TVCG.2010.215;10.1109/TVCG.2016.2598830
Parameter space exploration,ensemble visualization,surrogate modeling,view-dependent visualization045HM
54
Vis2022
Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models
10.1109/TVCG.2022.3209479
http://dx.doi.org/10.1109/TVCG.2022.3209479
11461156J
State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation: different prompt templates with different wording choices lead to significant accuracy differences. We present PromptIDE, which allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.
Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush
Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush
IBM Research, China;Brown University, USA;Huggingface, USA;IBM Research, China;Harvard SEAS, USA;Harvard SEAS, USA;Huggingface, USA
10.1109/TVCG.2020.3028976;10.1109/TVCG.2021.3114683;10.1109/TVCG.2018.2865230;10.1109/VAST.2017.8585721;10.1109/TVCG.2017.2744158
Natural language processing,language modeling,zero-shot models646
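A minimal sketch of the small-data prompt-comparison loop described in the PromptIDE abstract above: several prompt templates are filled with labeled examples and scored. The templates, examples, and the `toy_model` keyword heuristic are assumptions standing in for a real zero-shot language model call, which this sketch does not make.

```python
# Hedged sketch: comparing prompt templates on a few labeled examples.
TEMPLATES = [
    "Review: {text}\nIs the sentiment positive or negative?",
    "{text}\nQuestion: does the author like the product? positive/negative:",
]
EXAMPLES = [("I loved it", "positive"), ("Terrible, broke in a day", "negative"),
            ("Works great", "positive"), ("Would not buy again", "negative")]

def toy_model(prompt: str) -> str:
    """Placeholder for a zero-shot LM: a trivial keyword heuristic."""
    return "positive" if any(w in prompt.lower() for w in ("loved", "great")) else "negative"

def accuracy(template: str) -> float:
    hits = sum(toy_model(template.format(text=t)) == y for t, y in EXAMPLES)
    return hits / len(EXAMPLES)

for template in TEMPLATES:
    print(f"{accuracy(template):.2f}  {template.splitlines()[0]!r}")
```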
55
Vis2022
On-Tube Attribute Visualization for Multivariate Trajectory Data
10.1109/TVCG.2022.3209400
http://dx.doi.org/10.1109/TVCG.2022.3209400
12881298J
Stylized tubes are an established visualization primitive for line data as encountered in many scientific fields, ranging from characteristic lines in flow fields and fiber tracks reconstructed from diffusion tensor imaging to trajectories of moving objects as they arise from cyber-physical systems in many engineering disciplines. Typical challenges include large data set sizes demanding efficient rendering techniques as well as a large number of attributes that cannot be mapped simultaneously to the basic visual attributes provided by a tube-based visualization. In this work, we tackle both challenges with a new on-tube visualization approach. We improve recent work on high-quality GPU ray casting of Hermite spline tubes supporting ambient occlusion and extend it by a new layered procedural texturing technique. In the proposed framework, a large number of data set attributes can be mapped simultaneously to a variety of glyphs and plots that are embedded in texture space and organized in layers. Efficient rendering with minimal data transfer is achieved by generating the glyphs procedurally and drawing them in a deferred shading pass. We integrated these techniques in a prototype visualization tool that facilitates flexible mapping of data set attributes to visual tube and glyph attributes. We studied our approach on a variety of example data from different fields and found it to provide a highly adaptable and extensible toolbox to quickly craft tailor-made tube-based trajectory visualizations.
Benjamin Russig;David Groß;Raimund Dachselt;Stefan Gumhold
Benjamin Russig;David Groß;Raimund Dachselt;Stefan Gumhold
Chair of Computer Graphics and Visualization, TU Dresden, Germany;Chair of Computer Graphics and Visualization, TU Dresden, Germany;Interactive Media Lab, TU Dresden, Germany;Chair of Computer Graphics and Visualization, TU Dresden, Germany
10.1109/TVCG.2016.2598416;10.1109/TVCG.2018.2864811;10.1109/TVCG.2009.138;10.1109/TVCG.2020.3028954;10.1109/TVCG.2006.151;10.1109/TVCG.2007.70532;10.1109/VISUAL.2005.1532859;10.1109/TVCG.2012.265
Visualization,Rendering,Line Data,Trajectories,Multivariate Data047
56
Vis2022
Calibrate: Interactive Analysis of Probabilistic Model Output
10.1109/TVCG.2022.3209489
http://dx.doi.org/10.1109/TVCG.2022.3209489
853863J
Analyzing classification model performance is a crucial task for machine learning practitioners. While practitioners often use count-based metrics derived from confusion matrices, like accuracy, many applications, such as weather prediction, sports betting, or patient risk prediction, rely on a classifier's predicted probabilities rather than predicted labels. In these instances, practitioners are concerned with producing a calibrated model, that is, one which outputs probabilities that reflect those of the true distribution. Model calibration is often analyzed visually, through static reliability diagrams; however, the traditional calibration visualization may suffer from a variety of drawbacks due to the strong aggregations it necessitates. Furthermore, count-based approaches are unable to sufficiently analyze model calibration. We present Calibrate, an interactive reliability diagram that addresses the aforementioned issues. Calibrate constructs a reliability diagram that is resistant to drawbacks in traditional approaches, and allows for interactive subgroup analysis and instance-level inspection. We demonstrate the utility of Calibrate through use cases on both real-world and synthetic data. We further validate Calibrate by presenting the results of a think-aloud experiment with data scientists who routinely analyze model calibration.
Peter Xenopoulos;João Rulff;Luis Gustavo Nonato;Brian Barr;Cláudio T. Silva
Peter Xenopoulos;João Rulff;Luis Gustavo Nonato;Brian Barr;Claudio Silva
New York University, USA;New York University, USA;ICMC-USP, São Carlos, Brazil;Capital One, USA;New York University, USA
10.1109/TVCG.2014.2346660;10.1109/TVCG.2011.185;10.1109/VAST47406.2019.8986948;10.1109/TVCG.2018.2865043;10.1109/TVCG.2017.2744718;10.1109/TVCG.2016.2598828
calibration,performance analysis,model understanding,reliability diagram148
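For reference on the Calibrate abstract above, here is a minimal sketch of the aggregation behind a static reliability diagram: predicted probabilities are binned and, per bin, mean confidence is compared with the observed positive rate. The bin count and the deliberately miscalibrated toy data are assumptions; the paper's interactive diagram replaces exactly this kind of strong aggregation.

```python
# Count-based reliability-diagram binning from predicted probabilities.
import numpy as np

def reliability_bins(y_true, y_prob, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(y_prob, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            rows.append((edges[b], edges[b + 1],
                         y_prob[mask].mean(),      # mean predicted confidence
                         y_true[mask].mean(),      # observed frequency of positives
                         int(mask.sum())))
    return rows

rng = np.random.default_rng(0)
probs = rng.uniform(size=1000)
labels = (rng.uniform(size=1000) < probs ** 1.5).astype(int)   # miscalibrated on purpose
for lo, hi, conf, freq, n in reliability_bins(labels, probs):
    print(f"[{lo:.1f},{hi:.1f})  conf={conf:.2f}  freq={freq:.2f}  n={n}")
```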
57
Vis2022
A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias
10.1109/TVCG.2022.3209476
http://dx.doi.org/10.1109/TVCG.2022.3209476
483492J
The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill this gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.
Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley
Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley
Washington University, USA;Washington University, USA;Washington University, USA;Washington University, USA
10.1109/TVCG.2014.2346575;10.1109/TVCG.2016.2598468;10.1109/VAST.2017.8585665;10.1109/TVCG.2018.2865117;10.1109/TVCG.2020.3030430;10.1109/TVCG.2009.111;10.1109/TVCG.2021.3114827;10.1109/TVCG.2015.2467551;10.1109/VAST.2017.8585669;10.1109/TVCG.2021.3114862
Visual Analytics,Analytic Provenance,User Interaction Modeling,Machine Learning,Benchmark Study249
58
Vis2022
D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
10.1109/TVCG.2022.3209484
http://dx.doi.org/10.1109/TVCG.2022.3209484
473482J
With the rise of AI, algorithms have become better at learning underlying patterns from the training data including ingrained social biases based on gender, race, etc. Deployment of such algorithms to domains such as hiring, healthcare, law enforcement, etc. has raised serious concerns about fairness, accountability, trust and interpretability in machine learning algorithms. To alleviate this problem, we propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases from tabular datasets. It uses a graphical causal model to represent causal relationships among different features in the dataset and as a medium to inject domain knowledge. A user can detect the presence of bias against a group, say females, or a subgroup, say black females, by identifying unfair causal relationships in the causal network and using an array of fairness metrics. Thereafter, the user can mitigate bias by refining the causal model and acting on the unfair causal edges. For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset based on the current causal model while ensuring a minimal change from the original dataset. Users can visually assess the impact of their interactions on different fairness metrics, utility metrics, data distortion, and the underlying data distribution. Once satisfied, they can download the debiased dataset and use it for any downstream application for fairer predictions. We evaluate D-BIAS by conducting experiments on three datasets as well as a formal user study. We found that D-BIAS helps reduce bias significantly compared to the baseline debiasing approach across different fairness metrics while incurring little data distortion and a small loss in utility. Moreover, our human-in-the-loop based approach significantly outperforms an automated approach on trust, interpretability and accountability.
Bhavya Ghai;Klaus Mueller 0001
Bhavya Ghai;Klaus Mueller
Computer Science department, Stony Brook University, USA;Computer Science department, Stony Brook University, USA
10.1109/TVCG.2019.2934262;10.1109/VAST.2017.8585647;10.1109/TVCG.2021.3114850;10.1109/TVCG.2020.3028957
Algorithmic Fairness,Causality,Debiasing,Human-in-the-loop,Visual Analytics250
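A minimal sketch of one fairness metric a user might monitor while debiasing, in the spirit of the D-BIAS abstract above: the demographic parity difference between two groups. The column names and toy data are assumptions for illustration; D-BIAS itself offers an array of such metrics on top of its causal model.

```python
# Demographic parity difference between two groups on binary predictions.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["F", "F", "M", "M", "F", "M", "F", "M"],
    "predicted": [0,    1,   1,   1,   0,   1,   1,   0],   # e.g., hire / no-hire
})

def demographic_parity_difference(data, group_col, pred_col, a, b):
    rate_a = data.loc[data[group_col] == a, pred_col].mean()
    rate_b = data.loc[data[group_col] == b, pred_col].mean()
    return rate_a - rate_b

print(demographic_parity_difference(df, "gender", "predicted", "F", "M"))  # -0.25
```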
59
Vis2022
TrafficVis: Visualizing Organized Activity and Spatio-Temporal Patterns for Detecting and Labeling Human Trafficking
10.1109/TVCG.2022.3209403
http://dx.doi.org/10.1109/TVCG.2022.3209403
5362J
Law enforcement and domain experts can detect human trafficking (HT) in online escort websites by analyzing suspicious clusters of connected ads. How can we explain clustering results intuitively and interactively, visualizing potential evidence for experts to analyze? We present TrafficVis, the first interface for cluster-level HT detection and labeling. Developed through months of participatory design with domain experts, TrafficVis provides coordinated views in conjunction with carefully chosen backend algorithms to effectively show spatio-temporal and text patterns to a wide variety of anti-HT stakeholders. We build upon state-of-the-art text clustering algorithms by incorporating shared metadata as a signal of connected and possibly suspicious activity, then visualize the results. Domain experts can use TrafficVis to label clusters as HT or as other suspicious but non-HT activity, such as spam and scams, quickly creating labeled datasets to enable further HT research. Through domain expert feedback and a usage scenario, we demonstrate TrafficVis's efficacy. The feedback was overwhelmingly positive, with repeated high praise for the usability and explainability of our tool, the latter being vital for indicting possible criminals.
Catalina Vajiac;Polo Chau;Andreas M. Olligschlaeger;Rebecca Mackenzie;Pratheeksha Nair;Meng-Chieh Lee;Yifei Li;Namyong Park;Reihaneh Rabbany;Christos Faloutsos
Catalina Vajiac;Duen Horng Chau;Andreas Olligschlaeger;Rebecca Mackenzie;Pratheeksha Nair;Meng-Chieh Lee;Yifei Li;Namyong Park;Reihaneh Rabbany;Christos Faloutsos
Carnegie Mellon University, USA;Georgia Institute of Technology, USA;Marinus Analytics, USA;University of Pittsburgh, USA;McGill MILA, Canada;Carnegie Mellon University, USA;McGill MILA, Canada;Carnegie Mellon University, USA;McGill MILA, Canada;Carnegie Mellon University, USA
10.1109/TVCG.2017.2744818
Human trafficking,Labeling,Visualization,Infoshield051HM
60
Vis2022
Visualizing the Passage of Time with Video Temporal Pyramids
10.1109/TVCG.2022.3209454
http://dx.doi.org/10.1109/TVCG.2022.3209454
171181J
What can we learn about a scene by watching it for months or years? A video recorded over a long timespan will depict interesting phenomena at multiple timescales, but identifying and viewing them presents a challenge. The video is too long to watch in full, and some things are too slow to experience in real-time, such as glacial retreat or the gradual shift from summer to fall. Timelapse videography is a common approach to summarizing long videos and visualizing slow timescales. However, a timelapse is limited to a single chosen temporal frequency, and often appears flickery due to aliasing. Also, the length of the timelapse video is directly tied to its temporal resolution, which necessitates tradeoffs between those two facets. In this paper, we propose Video Temporal Pyramids, a technique that addresses these limitations and expands the possibilities for visualizing the passage of time. Inspired by spatial image pyramids from computer vision, we developed an algorithm that builds video pyramids in the temporal domain. Each level of a Video Temporal Pyramid visualizes a different timescale; for instance, videos from the monthly timescale are usually good for visualizing seasonal changes, while videos from the one-minute timescale are best for visualizing sunrise or the movement of clouds across the sky. To help explore the different pyramid levels, we also propose a Video Spectrogram to visualize the amount of activity across the entire pyramid, providing a holistic overview of the scene dynamics and the ability to explore and discover phenomena across time and timescales. To demonstrate our approach, we have built Video Temporal Pyramids from ten outdoor scenes, each containing months or years of data. We compare Video Temporal Pyramid layers to naive timelapse and find that our pyramids enable alias-free viewing of longer-term changes. We also demonstrate that the Video Spectrogram facilitates exploration and discovery of phenomena across pyramid levels, by enabling both overview and detail-focused perspectives.
Melissa E. Swift;Wyatt Ayers;Sophie Pallanck;Scott Wehrwein
Melissa E. Swift;Wyatt Ayers;Sophie Pallanck;Scott Wehrwein
Western Washington University, USA;Western Washington University, USA;Western Washington University, USA;Western Washington University, USA
10.1109/TVCG.2020.3030398;10.1109/TVCG.2012.222;10.1109/TVCG.2008.185
Time,time-frequency,video visualization,multi-scale,webcam051
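A minimal sketch of the pyramid construction described in the Video Temporal Pyramids abstract above: each level halves the temporal resolution by averaging adjacent frames, which acts as a simple low-pass filter and avoids the aliasing of naive frame skipping. The pairwise-averaging filter and toy frame stack are assumptions; the paper's filtering and Video Spectrogram are more involved.

```python
# Build a temporal pyramid from a (T, H, W) frame stack by pairwise averaging.
import numpy as np

def temporal_pyramid(frames: np.ndarray, levels: int):
    """frames: (T, H, W) array; returns a list of arrays, one per level."""
    pyramid = [frames]
    for _ in range(levels):
        f = pyramid[-1]
        t = f.shape[0] // 2 * 2                  # drop a trailing odd frame
        coarser = f[:t].reshape(t // 2, 2, *f.shape[1:]).mean(axis=1)
        pyramid.append(coarser)
    return pyramid

video = np.random.rand(64, 4, 4)                 # 64 toy frames
for level, frames in enumerate(temporal_pyramid(video, levels=3)):
    print(f"level {level}: {frames.shape[0]} frames")
```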
61
Vis2022
RankAxis: Towards a Systematic Combination of Projection and Ranking in Multi-Attribute Data Exploration
10.1109/TVCG.2022.3209463
http://dx.doi.org/10.1109/TVCG.2022.3209463
701711J
Projection and ranking are frequently used analysis techniques in multi-attribute data exploration. Both families of techniques help analysts with tasks such as identifying similarities between observations and determining ordered subgroups, and have shown good performance in multi-attribute data exploration. However, they often exhibit problems such as distorted projection layouts, obscure semantic interpretations, and non-intuitive effects produced by selecting a subset of (weighted) attributes. Moreover, few studies have attempted to combine projection and ranking into the same exploration space to complement each other's strengths and weaknesses. For this reason, we propose RankAxis, a visual analytics system that systematically combines projection and ranking to facilitate the mutual interpretation of these two techniques and jointly support multi-attribute data exploration. A real-world case study, expert feedback, and a user study demonstrate the efficacy of RankAxis.
Qiangqiang Liu;Yukun Ren;Zhihua Zhu;Dai Li;Xiaojuan Ma;Quan Li
Qiangqiang Liu;Yukun Ren;Zhihua Zhu;Dai Li;Xiaojuan Ma;Quan Li
ShanghaiTech, China;Corporate Development Group, Tencent, USA;Corporate Development Group, Tencent, USA;Corporate Development Group, Tencent, USA;The Hong Kong University of Science and Technology, China;School of Information Science and Technology, ShanghaiTech University, China
10.1109/VAST.2010.5652433;10.1109/VAST.2018.8802486;10.1109/TVCG.2010.184;10.1109/TVCG.2013.157;10.1109/TVCG.2013.173;10.1109/VISUAL.1990.146402;10.1109/TVCG.2015.2467615;10.1109/TVCG.2016.2598446;10.1109/VAST.2018.8802454;10.1109/TVCG.2016.2598589;10.1109/INFVIS.2004.15;10.1109/INFVIS.2004.3;10.1109/VAST.2009.5332628;10.1109/TVCG.2017.2745078;10.1109/TVCG.2018.2865126;10.1109/TVCG.2017.2745258;10.1109/TVCG.2017.2744098;10.1109/TVCG.2013.150;10.1109/TVCG.2017.2744738
Ranking,projection,multi-attribute data exploration052
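As a bare-bones illustration of the two techniques the RankAxis abstract above combines, here is a sketch that derives both an attribute-weighted ranking and a 2D projection from the same records. The data, weights, min-max normalization, and use of PCA are assumptions; RankAxis couples the two views far more systematically.

```python
# Weighted-sum ranking plus a PCA projection of the same multi-attribute data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                    # 100 items, 5 attributes
weights = np.array([0.4, 0.3, 0.1, 0.1, 0.1])    # user-chosen attribute weights

# Ranking: weighted sum over min-max normalized attributes.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
scores = X_norm @ weights
ranking = np.argsort(-scores)                    # best item first

# Projection: PCA to 2D via SVD on centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
coords_2d = Xc @ Vt[:2].T

print("top-3 items:", ranking[:3])
print("their 2D positions:\n", coords_2d[ranking[:3]])
```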
62
Vis2022
RASIPAM: Interactive Pattern Mining of Multivariate Event Sequences in Racket Sports
10.1109/TVCG.2022.3209452
http://dx.doi.org/10.1109/TVCG.2022.3209452
940950J
Experts in racket sports like tennis and badminton use tactical analysis to gain insight into competitors' playing styles. Many data-driven methods apply pattern mining to racket sports data — which is often recorded as multivariate event sequences — to uncover sports tactics. However, tactics obtained in this way are often inconsistent with those deduced by experts through their domain knowledge, which can be confusing to those experts. This work introduces RASIPAM, a RAcket-Sports Interactive PAttern Mining system, which allows experts to incorporate their knowledge into data mining algorithms to discover meaningful tactics interactively. RASIPAM consists of a constraint-based pattern mining algorithm that responds to the analysis demands of experts: Experts provide suggestions for finding tactics in intuitive written language, and these suggestions are translated into constraints to run the algorithm. RASIPAM further introduces a tailored visual interface that allows experts to compare the new tactics with the original ones and decide whether to apply a given adjustment. This interactive workflow iteratively progresses until experts are satisfied with all tactics. We conduct a quantitative experiment to show that our algorithm supports real-time interaction. Two case studies in tennis and in badminton respectively, each involving two domain experts, are conducted to show the effectiveness and usefulness of the system.
Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu
Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, China;MIT, USA;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China
10.1109/TVCG.2017.2745278;10.1109/TVCG.2021.3114861;10.1109/VAST.2006.261421;10.1109/TVCG.2013.173;10.1109/TVCG.2018.2865018;10.1109/TVCG.2015.2467325;10.1109/TVCG.2021.3114848;10.1109/TVCG.2012.271;10.1109/TVCG.2012.213;10.1109/TVCG.2015.2467931;10.1109/VAST.2017.8585647;10.1109/TVCG.2019.2934630;10.1109/VAST50239.2020.00009;10.1109/TVCG.2021.3114832;10.1109/TVCG.2017.2744218;10.1109/TVCG.2018.2865041;10.1109/TVCG.2020.3030359;10.1109/TVCG.2021.3114877;10.1109/TVCG.2022.3209447;10.1109/TVCG.2019.2934668
Sports Analytics,Multivariate Event Sequence,Interactive Pattern Mining,Comparative Visual Design052
63
Vis2022
Quick Clusters: A GPU-Parallel Partitioning for Efficient Path Tracing of Unstructured Volumetric Grids
10.1109/TVCG.2022.3209418
http://dx.doi.org/10.1109/TVCG.2022.3209418
537547J
We propose a simple yet effective method for clustering finite elements to improve preprocessing times and rendering performance of unstructured volumetric grids without requiring auxiliary connectivity data. Rather than building bounding volume hierarchies (BVHs) over individual elements, we sort elements along a Hilbert curve and aggregate neighboring elements together, improving BVH memory consumption by over an order of magnitude. Then, to further reduce memory consumption, we cluster the mesh on the fly into sub-meshes with smaller indices using a series of efficient parallel mesh re-indexing operations. These clusters are then passed to a highly optimized ray tracing API for point containment queries and ray-cluster intersection testing. Each cluster is assigned a maximum extinction value for adaptive sampling, which we rasterize into non-overlapping view-aligned bins allocated along the ray. These maximum extinction bins are then used to guide the placement of samples along the ray during visualization, reducing the number of samples required by multiple orders of magnitude (depending on the dataset), thereby improving overall visualization interactivity. Using our approach, we improve rendering performance over a competitive baseline on the NASA Mars Lander dataset by 6× (from 1 frame per second (fps) and 1.0 M rays per second (rps) up to 6 fps and 12.4 M rps, now including volumetric shadows) while simultaneously reducing memory consumption by 3× (33 GB down to 11 GB) and avoiding any offline preprocessing steps, enabling high-quality interactive visualization on consumer graphics cards. Then, by utilizing the full 48 GB of an RTX 8000, we improve the performance of Lander by 17× (from 1 fps up to 17 fps and from 1.0 M rps up to 35.6 M rps).
Nathan Morrical;Alper Sahistan;Ugur Güdükbay;Ingo Wald;Valerio Pascucci
Nate Morrical;Alper Sahistan;Uğur Güdükbay;Ingo Wald;Valerio Pascucci
SCI Institute, University of Utah, USA;Bilkent University, Turkey;Bilkent University, Turkey;NVIDIA, USA;SCI Institute, University of Utah, USA
10.1109/TVCG.2014.2346333;10.1109/TVCG.2011.252;10.1109/TVCG.2011.216;10.1109/TVCG.2021.3114869
Ray Tracing,Path Tracing,Volume Rendering,Scientific Visualization,Delta Tracking153HM
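A minimal sketch of the sort-and-aggregate idea from the Quick Clusters abstract above: element centroids are ordered along a space-filling curve and consecutive runs of the order become clusters. A Morton (Z-order) code is used here as a simpler stand-in for the Hilbert ordering used in the paper; the quantization resolution and cluster size are illustrative choices.

```python
# Space-filling-curve sort of element centroids, then fixed-size clustering.
import numpy as np

def morton_code(ix, iy, iz, bits=10):
    """Interleave the bits of three integer coordinates into one key."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (3 * b)
        code |= ((iy >> b) & 1) << (3 * b + 1)
        code |= ((iz >> b) & 1) << (3 * b + 2)
    return code

rng = np.random.default_rng(0)
centroids = rng.uniform(size=(10_000, 3))                  # element centroids in [0,1)^3
grid = np.minimum((centroids * 1024).astype(int), 1023)    # quantize to a 1024^3 grid
keys = np.array([morton_code(*xyz) for xyz in grid])
order = np.argsort(keys)                                   # spatially nearby elements end up adjacent

cluster_size = 64
clusters = [order[i:i + cluster_size] for i in range(0, len(order), cluster_size)]
print(f"{len(clusters)} clusters of up to {cluster_size} elements each")
```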
64
Vis2022
Unifying Effects of Direct and Relational Associations for Visual Communication
10.1109/TVCG.2022.3209443
http://dx.doi.org/10.1109/TVCG.2022.3209443
385395J
People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished distinct factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows) whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations, representing fictitious data about environmental concepts (sunshine, shade, wild fire, ocean water, glacial ice). We found both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.
Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss
Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss
Psychology and Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA;Cognitive Science, University of California, Merced, USA;Philosophy, Centre for Vision Research, York University, USA;Mechanical and Industrial Engineering, Northeastern University, USA;Psychology, Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA
10.1109/TVCG.2017.2743978;10.1109/TVCG.2016.2598918;10.1109/VISUAL.2002.1183788;10.1109/TVCG.2021.3114780;10.1109/TVCG.2016.2599106;10.1109/TVCG.2019.2934536;10.1109/TVCG.2018.2865147;10.1109/TVCG.2020.3030434;10.1109/TVCG.2015.2467471;10.1109/TVCG.2019.2934284;10.1109/TVCG.2017.2744359
Visual reasoning,information visualization,colormap data visualizations,visual encoding,color cognition053
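A minimal sketch of assignment inference as a global optimization, as framed in the abstract above: given a merit score for every (concept, color) pair, the inferred mapping is the assignment that maximizes total merit. The concepts, colors, and merit values below are made up; in the paper, merit combines direct and relational association sources.

```python
# Globally optimal concept-to-color assignment via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

concepts = ["sunshine", "shade", "ocean water"]
colors   = ["light yellow", "dark gray", "blue"]

# merit[i, j]: goodness of assigning colors[j] to concepts[i] (toy values)
merit = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.3, 0.3, 0.9],
])

rows, cols = linear_sum_assignment(merit, maximize=True)
for i, j in zip(rows, cols):
    print(f"{concepts[i]:12s} -> {colors[j]}")
```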
65
Vis2022
Dual Space Coupling Model Guided Overlap-Free Scatterplot
10.1109/TVCG.2022.3209459
http://dx.doi.org/10.1109/TVCG.2022.3209459
657667J
The overdraw problem of scatterplots seriously interferes with the visual tasks. Existing methods, such as data sampling, node dispersion, subspace mapping, and visual abstraction, cannot guarantee the correspondence and consistency between the data points that reflect the intrinsic original data distribution and the corresponding visual units that reveal the presented data distribution, thus failing to obtain an overlap-free scatterplot with unbiased and lossless data distribution. A dual space coupling model is proposed in this paper to represent the complex bilateral relationship between data space and visual space theoretically and analytically. Under the guidance of the model, an overlap-free scatterplot method is developed through integration of the following: a geometry-based data transformation algorithm, namely DistributionTranscriptor; an efficient spatial mutual exclusion guided view transformation algorithm, namely PolarPacking; an overlap-free oriented visual encoding configuration model and a radius adjustment tool, namely $f_{r_{draw}}$. Our method can ensure complete and accurate information transfer between the two spaces, maintaining consistency between the newly created scatterplot and the original data distribution on global and local features. Quantitative evaluation proves our remarkable progress on computational efficiency compared with the state-of-the-art methods. Three applications involving pattern enhancement, interaction improvement, and overdraw mitigation of trajectory visualization demonstrate the broad prospects of our method.
Zeyu Li 0003;Ruizhi Shi;Yan Liu;Shizhuo Long;Ziheng Guo;Shichao Jia;Jiawan Zhang
Zeyu Li;Ruizhi Shi;Yan Liu;Shizhuo Long;Ziheng Guo;Shichao Jia;Jiawan Zhang
College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China
10.1109/VAST.2010.5652460;10.1109/TVCG.2014.2346594;10.1109/TVCG.2019.2934541;10.1109/TVCG.2021.3114880;10.1109/TVCG.2009.122;10.1109/TVCG.2014.2346276;10.1109/TVCG.2013.183;10.1109/TVCG.2019.2934667;10.1109/TVCG.2017.2744378;10.1109/TVCG.2020.3030365;10.1109/TVCG.2017.2744184;10.1109/VAST47406.2019.8986943;10.1109/TVCG.2020.3030432
Scatterplot,overdraw,overlap-free,scalability,circle packing054
66
Vis2022
Predicting User Preferences of Dimensionality Reduction Embedding Quality
10.1109/TVCG.2022.3209449
http://dx.doi.org/10.1109/TVCG.2022.3209449
745755J
A plethora of dimensionality reduction techniques have emerged over the past decades, leaving researchers and analysts with a wide variety of choices for reducing their data, all the more so given that some techniques come with additional hyper-parameterization (e.g., t-SNE, UMAP, etc.). Recent studies are showing that people often use dimensionality reduction as a black box, regardless of the specific properties the method itself preserves. Hence, the evaluation and comparison of 2D embeddings is usually decided qualitatively, by placing embeddings side by side and letting human judgment decide which embedding is the best. In this work, we propose a quantitative way of evaluating embeddings that nonetheless places human perception at the center. We run a comparative study, where we ask people to select “good” and “misleading” views between scatterplots of low-dimensional embeddings of image datasets, simulating the way people usually select embeddings. We use the study data as labels for a set of quality metrics for a supervised machine learning model whose purpose is to discover and quantify what exactly people are looking for when deciding between embeddings. With the model as a proxy for human judgments, we use it to rank embeddings on new datasets, explain why they are relevant, and quantify the degree of subjectivity when people select preferred embeddings.
Cristina Morariu;Adrien Bibal;René Cutura;Benoît Frénay;Michael Sedlmair
Cristina Morariu;Adrien Bibal;Rene Cutura;Benoît Frénay;Michael Sedlmair
University of Stuttgart, Germany;Université catholique de Louvain, Belgium;University of Stuttgart, Germany;University of Namur, Belgium;University of Stuttgart, Germany
10.1109/TVCG.2011.229;10.1109/VAST.2010.5652392;10.1109/TVCG.2013.153;10.1109/INFVIS.2005.1532142
Dimensionality reduction,Manifold learning,Human-centered computing154
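A minimal sketch of the modeling step described in the abstract above: a supervised model is trained on embedding quality metrics (features) against good/misleading labels and then used to rank unseen embeddings. The two toy "metrics", the synthetic labels, and the choice of a random forest are placeholders; the paper uses a richer metric set and real study responses.

```python
# Train a classifier on quality metrics, then rank held-out embeddings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_embeddings = 200
metrics = rng.uniform(size=(n_embeddings, 2))          # e.g., trustworthiness, continuity
labels = (0.6 * metrics[:, 0] + 0.4 * metrics[:, 1]
          + 0.1 * rng.normal(size=n_embeddings)) > 0.5  # synthetic "good" labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(metrics[:150], labels[:150])
preference_scores = model.predict_proba(metrics[150:])[:, 1]   # proxy for human preference
ranking = np.argsort(-preference_scores)
print("held-out embeddings ranked by predicted preference:", ranking[:5])
```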
67
Vis2022
Level Set Restricted Voronoi Tessellation for Large scale Spatial Statistical Analysis
10.1109/TVCG.2022.3209473
http://dx.doi.org/10.1109/TVCG.2022.3209473
548558J
Spatial statistical analysis of multivariate volumetric data can be challenging due to scale, complexity, and occlusion. Advances in topological segmentation, feature extraction, and statistical summarization have helped overcome the challenges. This work introduces a new spatial statistical decomposition method based on level sets, connected components, and a novel variation of the restricted centroidal Voronoi tessellation that is better suited for spatial statistical decomposition and parallel efficiency. The resulting data structures organize features into a coherent nested hierarchy to support flexible and efficient out-of-core region-of-interest extraction. Next, we provide an efficient parallel implementation. Finally, an interactive visualization system based on this approach is designed and then applied to turbulent combustion data. The combined approach enables an interactive spatial statistical analysis workflow for large-scale data with a top-down approach through multiple-levels-of-detail that links phase space statistics with spatial features.
Tyson Neuroth;Martin Rieth;Aditya Konduri;Myoungkyu Lee;Jacqueline Chen;Kwan-Liu Ma
Tyson Neuroth;Martin Rieth;Konduri Aditya;Myoungkyu Lee;Jacqueline H Chen;Kwan-Liu Ma
University of California, Davis, USA;Sandia National Laboratories, USA;Indian Institute of Science, India;University of Alabama, USA;Sandia National Laboratories, USA;University of California, Davis, USA
10.1109/TVCG.2011.199;10.1109/VISUAL.2004.13;10.1109/TVCG.2016.2598604;10.1109/TVCG.2017.2744099;10.1109/TVCG.2019.2934368
Level sets,isosurfaces,Voronoi decomposition,scientific visualization,large data,statistical summarization055
68
Vis2022
MEDLEY: Intent-based Recommendations to Support Dashboard Composition
10.1109/TVCG.2022.3209421
http://dx.doi.org/10.1109/TVCG.2022.3209421
11351145J
Despite the ever-growing popularity of dashboards across a wide range of domains, their authoring still remains a tedious and complex process. Current tools offer considerable support for creating individual visualizations but provide limited support for discovering groups of visualizations that can be collectively useful for composing analytic dashboards. To address this problem, we present Medley, a mixed-initiative interface that assists in dashboard composition by recommending dashboard collections (i.e., a logically grouped set of views and filtering widgets) that map to specific analytical intents. Users can specify dashboard intents (namely, measure analysis, change analysis, category analysis, or distribution analysis) explicitly through an input panel in the interface or implicitly by selecting data attributes and views of interest. The system recommends collections based on these analytic intents, and views and widgets can be selected to compose a variety of dashboards. Medley also provides a lightweight direct manipulation interface to configure interactions between views in a dashboard. Based on a study with 13 participants performing both targeted and open-ended tasks, we discuss how Medley's recommendations guide dashboard composition and facilitate different user workflows. Observations from the study identify potential directions for future work, including combining manual view specification with dashboard recommendations and designing natural language interfaces for dashboard authoring.
Aditeya Pandey;Arjun Srinivasan;Vidya Setlur
Aditeya Pandey;Arjun Srinivasan;Vidya Setlur
Northeastern University, USA;Tableau Research, Germany;Tableau Research, Germany
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2013.124;10.1109/TVCG.2020.3030338;10.1109/TVCG.2020.3030424;10.1109/TVCG.2021.3114860;10.1109/TVCG.2021.3114848;10.1109/TVCG.2007.70594;10.1109/TVCG.2020.3030378;10.1109/TVCG.2017.2744198;10.1109/TVCG.2018.2864903;10.1109/TVCG.2017.2744184;10.1109/TVCG.2016.2599030;10.1109/TVCG.2013.120;10.1109/TVCG.2018.2865145;10.1109/TVCG.2019.2934398;10.1109/TVCG.2015.2467191;10.1109/TVCG.2021.3114826
Dashboards,intent,recommendations,direct manipulation,multi-view coordination055HM
69
Vis2022
CohortVA: A Visual Analytic System for Interactive Exploration of Cohorts based on Historical Data
10.1109/TVCG.2022.3209483
http://dx.doi.org/10.1109/TVCG.2022.3209483
756766J
In history research, cohort analysis seeks to identify social structures and figure mobilities by studying the group-based behavior of historical figures. Prior works mainly employ automatic data mining approaches that lack effective visual explanation. In this paper, we present CohortVA, an interactive visual analytic approach that enables historians to incorporate expertise and insight into the iterative exploration process. The kernel of CohortVA is a novel identification model that generates candidate cohorts and constructs cohort features by means of pre-built knowledge graphs constructed from large-scale history databases. We propose a set of coordinated views to illustrate identified cohorts and features coupled with historical events and figure profiles. Two case studies and interviews with historians demonstrate that CohortVA can greatly enhance the capabilities of cohort identification, figure authentication, and hypothesis generation.
Wei Zhang;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen 0001;Wei Chen 0001
Wei Zhang;Wei Chen;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen
State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;TMCC, CS, Nankai University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Fudan University, China;State Key Lab of CAD&CG, Zhejiang University, China
10.1109/TVCG.2010.159;10.1109/TVCG.2018.2865049;10.1109/TVCG.2021.3114836;10.1109/TVCG.2015.2467971;10.1109/TVCG.2016.2598469;10.1109/TVCG.2015.2467620;10.1109/TVCG.2020.3030370;10.1109/TVCG.2020.3030347;10.1109/TVCG.2021.3114773;10.1109/TVCG.2021.3114790
Historical cohort analysis,machine learning,interpretability,visual analytic256
70
Vis2022
Dashboard Design Patterns
10.1109/TVCG.2022.3209448
http://dx.doi.org/10.1109/TVCG.2022.3209448
342352J
This paper introduces design patterns for dashboards to inform dashboard design processes. Despite a growing number of public examples, case studies, and general guidelines, there is surprisingly little design guidance for dashboards. Such guidance is necessary to inspire designs and discuss tradeoffs in, e.g., screen space, interaction, or information shown. Based on a systematic review of 144 dashboards, we report on eight groups of design patterns that provide common solutions in dashboard design. We discuss combinations of these patterns in “dashboard genres” such as narrative, analytical, or embedded dashboards. We ran a 2-week dashboard design workshop with 23 participants of varying expertise working on their own data and dashboards. We discuss the application of patterns for dashboard design processes, as well as general design tradeoffs and common challenges. Our work complements previous surveys and aims to support dashboard designers and researchers in co-creation, structured design decisions, as well as future user evaluations about dashboard design guidelines. Detailed pattern descriptions and workshop material can be found online: https://dashboarddesignpatterns.github.io
Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen 0001
Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen
University of Edinburgh, Scotland;University of Glasgow, Scotland;King's College London, England;University of Warwick, England;University of Oxford, England;University of Oxford, England;University of Oxford, England
10.1109/VISUAL.1991.175794;10.1109/INFVIS.1997.636792;10.1109/TVCG.2020.3030424;10.1109/TVCG.2016.2599338;10.1109/TVCG.2021.3114828;10.1109/TVCG.2018.2864903;10.1109/TVCG.2013.120;10.1109/TVCG.2010.179;10.1109/TVCG.2019.2934398
Dashboards,Design Patterns,Data Visualization,Storytelling,Visual Analytics,Qualitative Evaluation,Education356HM
71
Vis2022
Constrained Dynamic Mode Decomposition
10.1109/TVCG.2022.3209437
http://dx.doi.org/10.1109/TVCG.2022.3209437
182192J
Frequency-based decomposition of time series data is used in many visualization applications. Most of these decomposition methods (such as Fourier transform or singular spectrum analysis) only provide interaction via pre- and post-processing, but no means to influence the core algorithm. A method that also belongs to this class is Dynamic Mode Decomposition (DMD), a spectral decomposition method that extracts spatio-temporal patterns from data. In this paper, we incorporate frequency-based constraints into DMD for an adaptive decomposition that leads to user-controllable visualizations, allowing analysts to include their knowledge into the process. To accomplish this, we derive an equivalent reformulation of DMD that implicitly provides access to the eigenvalues (and therefore to the frequencies) identified by DMD. By utilizing a constrained minimization problem customized to DMD, we can guarantee the existence of desired frequencies by minimal changes to DMD. We complement this core approach by additional techniques for constrained DMD to facilitate explorative visualization and investigation of time series data. With several examples, we demonstrate the usefulness of constrained DMD and compare it to conventional frequency-based decomposition methods.
Tim Krake;Daniel Klötzl;Bernd Eberhardt;Daniel Weiskopf
Tim Krake;Daniel Klötzl;Bernhard Eberhardt;Daniel Weiskopf
University of Stuttgart and Hochschule der Medien, Germany;University of Stuttgart, Germany;Hochschule der Medien, Germany;University of Stuttgart, Germany
10.1109/VAST.2012.6400557;10.1109/INFVIS.1999.801851;10.1109/TVCG.2015.2467751;10.1109/INFVIS.2001.963273;10.1109/TVCG.2011.195
Dynamic Mode Decomposition,time series analysis,spectral decomposition,frequency-based constraints,human-in-the-loop38058
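For reference on the Constrained DMD abstract above, here is a minimal sketch of the unconstrained DMD core: a reduced operator is fit from consecutive snapshot pairs and its eigenvalues give the frequencies and growth rates of the extracted modes. The constrained reformulation in the paper adds frequency constraints on top of this; the toy signal and the truncation rank are assumptions.

```python
# Plain (exact) DMD via SVD; the paper's contribution constrains this further.
import numpy as np

def dmd(snapshots: np.ndarray, rank: int):
    """snapshots: (n_space, n_time); returns (eigenvalues, modes)."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    A_tilde = U.conj().T @ Y @ Vt.conj().T @ np.diag(1.0 / s)   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes
    return eigvals, modes

# Toy data: two oscillating spatial patterns sampled over time.
t = np.linspace(0, 4 * np.pi, 200)
x = np.linspace(0, 1, 64)[:, None]
data = np.sin(2 * np.pi * x) * np.cos(3 * t) + 0.5 * np.cos(4 * np.pi * x) * np.sin(7 * t)
eigvals, _ = dmd(data, rank=4)
print("mode phase angles (rad/step):", np.angle(eigvals))
```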
72
Vis2022
Effects of View Layout on Situated Analytics for Multiple-View Representations in Immersive Visualization
10.1109/TVCG.2022.3209475
http://dx.doi.org/10.1109/TVCG.2022.3209475
440450J
Multiple-view (MV) representations enabling multi-perspective exploration of large and complex data are often employed on 2D displays. The technique also shows great potential in addressing complex analytic tasks in immersive visualization. However, although useful, the design space of MV representations in immersive visualization has not been explored in depth. In this paper, we propose a new perspective to this line of research by examining the effects of view layout for MV representations on situated analytics. Specifically, we disentangle situated analytics into the perspectives of situatedness, concerning the spatial relationship between visual representations and physical referents, and analytics, concerning cross-view data analysis including filtering, refocusing, and connecting tasks. Through an in-depth analysis of existing layout paradigms, we summarize design trade-offs for achieving high situatedness and effective analytics simultaneously. We then distill a list of design requirements for a desired layout that balances situatedness and analytics, and develop a prototype system with an automatic layout adaptation method to fulfill the requirements. The method mainly includes a cylindrical paradigm for an egocentric reference frame, and a force-directed method for proper view-view, view-user, and view-referent proximities and high view visibility. We conducted a formal user study that compares layouts by our method with linked and embedded layouts. Quantitative results show that participants finished filtering- and connecting-centered tasks significantly faster with our layouts, and user feedback confirms the high usability of the prototype system.
Zhen Wen;Wei Zeng 0004;Luoxuan Weng;Yihan Liu;Mingliang Xu;Wei Chen 0001
Zhen Wen;Wei Zeng;Luoxuan Weng;Yihan Liu;Mingliang Xu;Wei Chen
State Key Lab of CAD&CG, Zhejiang University, China;The Hong Kong University of Science and Technology (Guangzhou), China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhengzhou University, China;State Key Lab of CAD&CG, Zhejiang University, China
10.1109/TVCG.2021.3114835;10.1109/TVCG.2020.3030338;10.1109/TVCG.2021.3114806;10.1109/TVCG.2019.2934332;10.1109/TVCG.2021.3114861;10.1109/VAST.2015.7347628;10.1109/TVCG.2007.70521;10.1109/TVCG.2018.2865191;10.1109/TVCG.2020.3030419;10.1109/TVCG.2017.2744198;10.1109/TVCG.2021.3114801;10.1109/TVCG.2019.2934282;10.1109/TVCG.2016.2598608
Situated analytics,multiple-view representations,view layout,immersive visualization158
73
Vis2022
SliceTeller: A Data Slice-Driven Approach for Machine Learning Model Validation
10.1109/TVCG.2022.3209465
http://dx.doi.org/10.1109/TVCG.2022.3209465
842852J
Real-world machine learning applications need to be thoroughly evaluated to meet critical product requirements for model release, to ensure fairness for different groups or individuals, and to achieve a consistent performance in various scenarios. For example, in autonomous driving, an object classification model should achieve high detection rates under different conditions of weather, distance, etc. Similarly, in the financial setting, credit-scoring models must not discriminate against minority groups. These conditions or groups are called “Data Slices”. In product MLOps cycles, product developers must identify such critical data slices and adapt models to mitigate data slice problems. Discovering where models fail, understanding why they fail, and mitigating these problems are therefore essential tasks in the MLOps life-cycle. In this paper, we present SliceTeller, a novel tool that allows users to debug, compare, and improve machine learning models driven by critical data slices. SliceTeller automatically discovers problematic slices in the data and helps the user understand why models fail. More importantly, we present an efficient algorithm, SliceBoosting, to estimate trade-offs when prioritizing the optimization over certain slices. Furthermore, our system empowers model developers to compare and analyze different model versions during model iterations, allowing them to choose the model version best suited to their applications. We evaluate our system with three use cases, including two real-world use cases of product development, to demonstrate the power of SliceTeller in the debugging and improvement of product-quality ML models.
Xiaoyu Zhang;Jorge Henrique Piazentin Ono;Huan Song;Liang Gou;Kwan-Liu Ma;Ren Liu
Xiaoyu Zhang;Jorge Piazentin Ono;Huan Song;Liang Gou;Kwan-Liu Ma;Liu Ren
UC Davis, USA;Robert Bosch Research and Technology Center, USA - Bosch Center for Artificial Intelligence, USA;Robert Bosch Research and Technology Center, USA - Bosch Center for Artificial Intelligence, USA;Robert Bosch Research and Technology Center, USA - Bosch Center for Artificial Intelligence, USA;UC Davis, USA;Robert Bosch Research and Technology Center, USA - Bosch Center for Artificial Intelligence, USA
10.1109/VAST47406.2019.8986948;10.1109/TVCG.2020.3030352;10.1109/TVCG.2021.3114855;10.1109/TVCG.2021.3114779;10.1109/TVCG.2014.2346248;10.1109/TVCG.2020.3030361;10.1109/TVCG.2018.2864825;10.1109/TVCG.2007.70515;10.1109/TVCG.2018.2864499
Model Validation,Data Slicing,Data Validation,Model Evaluation,Data-Centric AI,Human-in-the-loop358HM
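A minimal sketch of the basic data-slice bookkeeping implied by the SliceTeller abstract above: group a validation set by slice-defining conditions and report a metric per slice to surface where the model underperforms. The slicing columns and toy data are assumptions; SliceTeller discovers such slices automatically and adds the SliceBoosting trade-off estimation on top.

```python
# Per-slice accuracy report over a small validation set.
import pandas as pd

val = pd.DataFrame({
    "weather":  ["rain", "rain", "clear", "clear", "fog", "fog", "clear", "rain"],
    "distance": ["far",  "near", "far",   "near",  "far", "near", "far",  "near"],
    "correct":  [0,       1,      1,       1,       0,     1,      1,      0],
})

per_slice = (val.groupby(["weather", "distance"])["correct"]
                .agg(accuracy="mean", support="size")
                .sort_values("accuracy"))
print(per_slice)   # worst-performing slices first
```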
74
Vis2022
FlowNL: Asking the Flow Data in Natural Languages
10.1109/TVCG.2022.3209453
http://dx.doi.org/10.1109/TVCG.2022.3209453
12001210J
Flow visualization is essentially a tool to answer domain experts' questions about flow fields using rendered images. Static flow visualization approaches require domain experts to raise their questions to visualization experts, who develop specific techniques to extract and visualize the flow structures of interest. Interactive visualization approaches allow domain experts to ask the system directly through the visual analytic interface, which provides flexibility to support various tasks. However, in practice, the visual analytic interface may require extra learning effort, which often discourages domain experts and limits its usage in real-world scenarios. In this paper, we propose FlowNL, a novel interactive system with a natural language interface. FlowNL allows users to manipulate the flow visualization system using plain English, which greatly reduces the learning effort. We develop a natural language parser to interpret user intention and translate textual input into a declarative language. We design the declarative language as an intermediate layer between the natural language and the programming language specifically for flow visualization. The declarative language provides selection and composition rules to derive relatively complicated flow structures from primitive objects that encode various kinds of information about scalar fields, flow patterns, regions of interest, connectivities, etc. We demonstrate the effectiveness of FlowNL using multiple usage scenarios and an empirical evaluation.
Jieying Huang;Yang Xi;Junnan Hu;Jun Tao 0002
Jieying Huang;Yang Xi;Junnan Hu;Jun Tao
School of Computer Science and Engineering, Sun Yat-sen University, China;School of Computer Science and Engineering, Sun Yat-sen University, China;School of Computer Science and Engineering, Sun Yat-sen University, China;Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), School of Computer Science and Engineering, Sun Yat-sen University, National Supercomputer Center in Guangzhou, China
10.1109/TVCG.2019.2934310;10.1109/TVCG.2011.185;10.1109/VISUAL.2005.1532856;10.1109/TVCG.2014.2346322;10.1109/TVCG.2019.2934785;10.1109/TVCG.2017.2744684;10.1109/TVCG.2013.121;10.1109/TVCG.2018.2864806;10.1109/TVCG.2013.189;10.1109/TVCG.2019.2934537;10.1109/TVCG.2020.3030453;10.1109/TVCG.2021.3114848;10.1109/TVCG.2020.3030378;10.1109/TVCG.2020.3030378;10.1109/TVCG.2019.2934367;10.1109/TVCG.2014.2346318;10.1109/VISUAL.2004.128;10.1109/TVCG.2016.2599030;10.1109/TVCG.2015.2467091;10.1109/TVCG.2017.2745219;10.1109/VISUAL.2003.1250376;10.1109/VAST47406.2019.8986918;10.1109/TVCG.2018.2864841;10.1109/TVCG.2010.131;10.1109/VISUAL.2005.1532831;10.1109/TVCG.2019.2934668
Flow visualization,natural language interface,interactive exploration,declarative grammar359
75
Vis2022
Geo-Storylines: Integrating Maps into Storyline Visualizations
10.1109/TVCG.2022.3209480
http://dx.doi.org/10.1109/TVCG.2022.3209480
9941004J
Storyline visualizations are a powerful way to compactly visualize how the relationships between people evolve over time. Real-world relationships often also involve space, for example the cities that two political rivals visited together or alone over the years. By default, Storyline visualizations only show geospatial co-occurrence between people (drawn as lines) implicitly, by bringing their lines together. Even the few designs that do explicitly show geographic locations only do so in abstract ways (e.g., annotations) and do not communicate geospatial information, such as the direction or extent of their political campaigns. We introduce Geo-Storylines, a collection of visualization designs that integrate geospatial context into Storyline visualizations, using different strategies for compositing time and space. Our contribution is twofold. First, we present the results of a sketching workshop with 11 participants, that we used to derive a design space for integrating maps into Storylines. Second, by analyzing the strengths and weaknesses of the potential designs of the design space in terms of legibility and ability to scale to multiple relationships, we extract the three most promising: Time Glyphs, Coordinated Views, and Map Glyphs. We compare these three techniques first in a controlled study with 18 participants, under five different geospatial tasks and two maps of different complexity. We additionally collected informal feedback about their usefulness from domain experts in data journalism. Our results indicate that, as expected, detailed performance depends on the task. Nevertheless, Coordinated Views remain a highly effective and preferred technique across the board.
Golina Hulstein;Vanessa Peña Araya;Anastasia Bezerianos
Golina Hulstein;Vanessa Peña-Araya;Anastasia Bezerianos
Université Paris-Saclay, France;Université Paris-Saclay, CNRS, Inria, France;Université Paris-Saclay, CNRS, Inria, France
10.1109/VAST.2017.8585487;10.1109/TVCG.2016.2598862;10.1109/TVCG.2019.2934397;10.1109/TVCG.2020.3028948;10.1109/TVCG.2013.196;10.1109/TVCG.2019.2934807;10.1109/TVCG.2020.3030347;10.1109/TVCG.2012.212;10.1109/TVCG.2020.3030467;10.1109/TVCG.2018.2864899;10.1109/TVCG.2020.3030402;10.1109/TVCG.2014.2346265;10.1109/TVCG.2015.2468111
Storyline visualization,geo-temporal data,maps,hypergraphs059
76
Vis2022
PromotionLens: Inspecting Promotion Strategies of Online E-commerce via Visual Analytics
10.1109/TVCG.2022.3209440
http://dx.doi.org/10.1109/TVCG.2022.3209440
767777J
Promotions are commonly used by e-commerce merchants to boost sales. Understanding the efficacy of different promotion strategies can help sellers adapt their offerings to customer demand in order to survive and thrive. Current approaches to designing promotion strategies are either based on econometrics, which may not scale to large amounts of sales data, or are spontaneous and provide little explanation of sales volume. Moreover, accurately measuring the effects of promotion designs and making bootstrappable adjustments accordingly remains a challenge due to the incompleteness and complexity of the information describing promotion strategies and their market environments. We present PromotionLens, a visual analytics system for exploring, comparing, and modeling the impact of various promotion strategies. Our approach combines representative multivariate time-series forecasting models and well-designed visualizations to demonstrate and explain the impact of sales and promotional factors, and to support “what-if” analysis of promotions. Two case studies, expert feedback, and a qualitative user study demonstrate the efficacy of PromotionLens.
Chenyang Zhang;Xiyuan Wang;Chuyi Zhao;Yijing Ren;Tianyu Zhang;Zhenhui Peng;Xiaomeng Fan;Xiaojuan Ma;Quan Li
Chenyang Zhang;Xiyuan Wang;Chuyi Zhao;Yijing Ren;Tianyu Zhang;Zhenhui Peng;Xiaomeng Fan;Xiaojuan Ma;Quan Li
School of Information Science and Technology, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China;Geek+, USA;School of Artificial Intelligence, Sun Yat-sen University, China;School of Entrepreneurship and Management, ShanghaiTech University, China;Hong Kong University of Science and Technology, China;School of Information Science and Technology, ShanghaiTech University, China
10.1109/INFVIS.2001.963293;10.1109/TVCG.2016.2598838;10.1109/TVCG.2014.2346913
E-commerce,promotion strategy,time-series prediction,“what-if” analysis,visualization059
77
Vis2022
Diverse Interaction Recommendation for Public Users Exploring Multi-view Visualization using Deep Learning
10.1109/TVCG.2022.3209461
http://dx.doi.org/10.1109/TVCG.2022.3209461
95105J
Interaction is an important channel to offer users insights in interactive visualization systems. However, which interaction to perform and which part of the data to explore are hard questions for public users facing a multi-view visualization for the first time. Making these decisions largely relies on professional experience and analytic abilities, which is a huge challenge for non-professionals. To solve this problem, we propose a method that provides diverse, insightful, and real-time interaction recommendations for novice users. Building on the Long Short-Term Memory (LSTM) structure, our model captures users' interactions and visual states, encodes them as numerical vectors, and uses these vectors to make further recommendations. Through an illustrative example of a visualization system about Chinese poets in a museum scenario, the model is shown to work in systems with multiple views and multiple interaction types. A further user study demonstrates the method's capability to help public users conduct more insightful and diverse interactive explorations and gain more accurate data insights.
Yixuan Li;Yusheng Qi;Yang Shi 0007;Qing Chen 0001;Nan Cao;Siming Chen 0001
Yixuan Li;Yusheng Qi;Yang Shi;Qing Chen;Nan Cao;Siming Chen
School of Data Science, Fudan University, China;School of Data Science, Fudan University, China;Tongji University, China;Tongji University, China;Tongji University, China;School of Data Science, Fudan University, China
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2015.2467871;10.1109/TVCG.2015.2467201;10.1109/TVCG.2014.2346575;10.1109/TVCG.2016.2598468;10.1109/INFVIS.1996.559213;10.1109/TVCG.2016.2598471;10.1109/TVCG.2019.2934283;10.1109/VAST.2008.4677365;10.1109/TVCG.2015.2467613;10.1109/TVCG.2008.127;10.1109/TVCG.2012.244;10.1109/TVCG.2016.2599030;10.1109/TVCG.2015.2467091;10.1109/TVCG.2007.70589;10.1109/TVCG.2021.3114826;10.1109/TVCG.2007.70515;10.1109/TVCG.2016.2598543
Interaction Recommendation,Visualization for public education,Mixed-initiative Exploration060
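The abstract in the record above (row 77) builds its recommender on an LSTM over logged interactions and visual states. Below is a minimal, hypothetical PyTorch sketch of that generic idea only, not the authors' model; the class name, feature encodings, and dimensions are illustrative assumptions.

# Hypothetical minimal sketch (not the authors' implementation): an LSTM that
# encodes a user's recent interaction sequence and scores which interaction
# type to recommend next. All names and dimensions are illustrative.
import torch
import torch.nn as nn

class InteractionRecommender(nn.Module):
    def __init__(self, n_interaction_types=8, state_dim=16, hidden_dim=32):
        super().__init__()
        # Each step concatenates a one-hot interaction type with a numeric
        # encoding of the current visual state (e.g., active view, filters).
        self.lstm = nn.LSTM(n_interaction_types + state_dim, hidden_dim,
                            batch_first=True)
        self.scorer = nn.Linear(hidden_dim, n_interaction_types)

    def forward(self, sequence):
        # sequence: (batch, steps, n_interaction_types + state_dim)
        _, (h_n, _) = self.lstm(sequence)
        return self.scorer(h_n[-1])          # (batch, n_interaction_types)

# Toy usage: score the next interaction for one user after 5 logged steps.
model = InteractionRecommender()
history = torch.randn(1, 5, 8 + 16)          # stand-in for encoded logs
scores = model(history)
print(scores.softmax(dim=-1))                # recommendation probabilities

In practice the interaction and state encodings, the training objective, and the diversity criterion described in the abstract would all differ from this toy setup.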
78
Vis2022
VACSEN: A Visualization Approach for Noise Awareness in Quantum Computing
10.1109/TVCG.2022.3209455
http://dx.doi.org/10.1109/TVCG.2022.3209455
462472J
Quantum computing has attracted considerable public attention due to its exponential speedup over classical computing. Despite its advantages, today's quantum computers intrinsically suffer from noise and are error-prone. To guarantee the high fidelity of the execution result of a quantum algorithm, it is crucial to inform users of the noise of the quantum computer being used and of the compiled physical circuits. However, an intuitive and systematic way to make users aware of quantum computing noise is still missing. In this paper, we fill the gap by proposing a novel visualization approach to achieve noise-aware quantum computing. It provides a holistic picture of the noise of quantum computing through multiple interactively coordinated views: a Computer Evolution View with a circuit-like design overviews the temporal evolution of the noise of different quantum computers, a Circuit Filtering View facilitates quick filtering of multiple compiled physical circuits for the same quantum algorithm, and a Circuit Comparison View with a coupled bar chart enables detailed comparison of the filtered compiled circuits. We extensively evaluate the performance of VACSEN through two case studies on quantum algorithms of different scales and in-depth interviews with 12 quantum computing users. The results demonstrate the effectiveness and usability of VACSEN in achieving noise-aware quantum computing.
Shaolun Ruan;Yong Wang 0021;Weiwen Jiang;Ying Mao;Qiang Guan
Shaolun Ruan;Yong Wang;Weiwen Jiang;Ying Mao;Qiang Guan
Singapore Management University, Singapore;Singapore Management University, Singapore;George Mason University, USA;Fordham University, USA;Kent State University, USA
10.1109/TVCG.2009.111;10.1109/TVCG.2012.213
Data visualization,quantum computing,noise awareness160
79
Vis2022
Development and Evaluation of Two Approaches of Visual Sensitivity Analysis to Support Epidemiological Modeling
10.1109/TVCG.2022.3209464
http://dx.doi.org/10.1109/TVCG.2022.3209464
12551265J
Computational modeling is a commonly used technology in many scientific disciplines and has played a notable role in combating the COVID-19 pandemic. Modeling scientists frequently conduct sensitivity analysis to observe and monitor the behavior of a model during its development and deployment. The traditional algorithmic ranking of the sensitivity of different parameters usually does not provide modeling scientists with sufficient information to understand the interactions between different parameters and model outputs, and modeling scientists need to observe a large number of model runs in order to gain actionable information for parameter optimization. To address the above challenge, we developed and compared two visual analytics approaches, namely: algorithm-centric and visualization-assisted, and visualization-centric and algorithm-assisted. We evaluated the two approaches based on a structured analysis of different tasks in visual sensitivity analysis as well as the feedback of domain experts. While the work was carried out in the context of epidemiological modeling, the two approaches developed in this work are directly applicable to a variety of modeling processes featuring time series outputs, and can be extended to work with models with other types of outputs.
Erik Rydow;Rita Borgo;Hui Fang 0003;Thomas Torsney-Weir;Ben Swallow;Thibaud Porphyre;Cagatay Turkay;Min Chen 0001
Erik Rydow;Rita Borgo;Hui Fang;Thomas Torsney-Weir;Ben Swallow;Thibaud Porphyre;Cagatay Turkay;Min Chen
Department of Engineering Science, University of Oxford, UK;Department of Informatics, Kings College, London, UK;Department of Computer Science, Loughborough University, UK;Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH (VRVis), Austria;School of Mathematics and Statistics, University of Glasgow, UK;Laboratoire de Biométrie et Biologie Evolutive (LBBE), VetAgro Sup, France;Centre for Interdisciplinary Methodologies, University of Warwick, UK;Department of Engineering Science, University of Oxford, UK
10.1109/VAST.2011.6102457;10.1109/TVCG.2016.2598869;10.1109/TVCG.2010.150;10.1109/TVCG.2010.190;10.1109/TVCG.2010.132;10.1109/VAST.2011.6102450;10.1109/TVCG.2015.2468093;10.1109/TVCG.2019.2934312;10.1109/TVCG.2013.141;10.1109/TVCG.2017.2745178;10.1109/TVCG.2010.171;10.1109/TVCG.2008.145;10.1109/TVCG.2009.155;10.1109/TVCG.2018.2865051;10.1109/TVCG.2010.181;10.1109/TVCG.2012.213;10.1109/TVCG.2011.248;10.1109/TVCG.2013.143
Sensitivity analysis,Ensemble visualization,COVID-19,Epidemiological Modeling,Epidemiology161
80
Vis2022
Erato: Cooperative Data Story Editing via Fact Interpolation
10.1109/TVCG.2022.3209428
http://dx.doi.org/10.1109/TVCG.2022.3209428
983993J
As an effective form of narrative visualization, visual data stories are widely used in data-driven storytelling to communicate complex insights and support data understanding. Although important, they are difficult to create, as a variety of interdisciplinary skills, such as data analysis and design, are required. In this work, we introduce Erato, a human-machine cooperative data story editing system that allows users to generate insightful and fluent data stories together with the computer. Specifically, Erato requires only a small number of keyframes provided by the user to briefly describe the topic and structure of a data story. Our system then leverages a novel interpolation algorithm to help users insert intermediate frames between the keyframes to smooth the transitions. We evaluated the effectiveness and usefulness of the Erato system via a series of evaluations including a Turing test, a controlled user study, a performance validation, and interviews with three expert users. The evaluation results showed that the proposed interpolation technique was able to generate coherent story content and help users create data stories more efficiently.
Mengdi Sun;Ligan Cai;Weiwei Cui;Yanqiu Wu;Yang Shi 0007;Nan Cao
Mengdi Sun;Ligan Cai;Weiwei Cui;Yanqiu Wu;Yang Shi;Nan Cao
Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Microsoft Research Asia, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China
10.1109/TVCG.2016.2598647;10.1109/TVCG.2015.2467732;10.1109/TVCG.2016.2598876;10.1109/TVCG.2021.3114804;10.1109/TVCG.2019.2934785;10.1109/TVCG.2015.2467531;10.1109/TVCG.2013.119;10.1109/TVCG.2020.3030360;10.1109/TVCG.2021.3114775;10.1109/TVCG.2007.70594;10.1109/TVCG.2018.2865240;10.1109/TVCG.2012.249;10.1109/TVCG.2010.179;10.1109/TVCG.2020.3030403;10.1109/VISUAL.2005.1532849;10.1109/TVCG.2018.2865232;10.1109/TVCG.2019.2934398;10.1109/VISUAL.1995.480798;10.1109/TVCG.2015.2467191;10.1109/TVCG.2021.3114774
Interpolation,visual storytelling,human-machine cooperation061
81
Vis2022
GenoREC: A Recommendation System for Interactive Genomics Data Visualization
10.1109/TVCG.2022.3209407
http://dx.doi.org/10.1109/TVCG.2022.3209407
570580J
Interpretation of genomics data is critically reliant on the application of a wide range of visualization tools. The large number of visualization techniques for genomics data and the diversity of analysis tasks pose a significant challenge for analysts: which visualization technique is most likely to help them generate insights into their data? Since genomics analysts typically have limited training in data visualization, their choices are often based on trial and error or guided by technical details, such as the data formats that a specific tool can load. This approach prevents them from making effective visualization choices for the many combinations of data types and analysis questions they encounter in their work. Visualization recommendation systems assist non-experts in creating data visualizations by recommending appropriate visualizations based on the data and task characteristics. However, existing visualization recommendation systems are not designed to handle domain-specific problems. To address these challenges, we designed GenoREC, a novel visualization recommendation system for genomics. GenoREC enables genomics analysts to select effective visualizations based on a description of their data and analysis tasks. Here, we present the recommendation model, which uses a knowledge-based method for choosing appropriate visualizations, and a web application that enables analysts to input their requirements, explore recommended visualizations, and export them for their own use. Furthermore, we present the results of two user studies demonstrating that GenoREC recommends visualizations that are both accepted by domain experts and suited to address the given genomics analysis problem. All supplemental materials are available at https://osf.io/y73pt/.
Aditeya Pandey;Sehi L'Yi;Qianwen Wang;Michelle Borkin;Nils Gehlenborg
Aditeya Pandey;Sehi L'Yi;Qianwen Wang;Michelle A. Borkin;Nils Gehlenborg
Northeastern University, MA, US;Harvard Medical School, MA, US;Harvard Medical School, MA, US;Northeastern University, MA, US;Harvard Medical School, MA, US
10.1109/TVCG.2013.234;10.1109/TVCG.2013.124;10.1109/TVCG.2021.3114860;10.1109/TVCG.2022.3209398;10.1109/TVCG.2020.3030419;10.1109/TVCG.2021.3114876;10.1109/TVCG.2007.70594;10.1109/TVCG.2009.167;10.1109/TVCG.2018.2865240;10.1109/TVCG.2018.2865240;10.1109/TVCG.2017.2744198;10.1109/TVCG.2019.2934784;10.1109/TVCG.2015.2467191;10.1109/TVCG.2020.3030423;10.1109/TVCG.2021.3114814
genomics,visualization,recommendation systems,data,tasks162
82
Vis2022
Comparative Evaluation of Bipartite, Node-Link, and Matrix-Based Network Representations
10.1109/TVCG.2022.3209427
http://dx.doi.org/10.1109/TVCG.2022.3209427
896906J
This work investigates and compares the performance of node-link diagrams, adjacency matrices, and bipartite layouts for visualizing networks. In a crowd-sourced user study (n = 150), we measure the task accuracy and completion time of the three representations for different network classes and properties. In contrast to the literature, which covers mostly topology-based tasks (e.g., path finding) in small datasets, we mainly focus on overview tasks for large and directed networks. We consider three overview tasks on networks with 500 nodes: (T1) network class identification, (T2) cluster detection, and (T3) network density estimation, and two detailed tasks: (T4) node in-degree vs. out-degree and (T5) representation mapping, on networks with 50 and 20 nodes, respectively. Our results show that bipartite layouts are beneficial for revealing the overall network structure, while adjacency matrices are most reliable across the different tasks.
Moataz Abdelaal;Nathan Daniel Schiele;Katrin Angerbauer;Kuno Kurzhals;Michael Sedlmair;Daniel Weiskopf
Moataz Abdelaal;Nathan D. Schiele;Katrin Angerbauer;Kuno Kurzhals;Michael Sedlmair;Daniel Weiskopf
University of Stuttgart, Germany;Leiden University, Netherlands;University of Stuttgart, Germany;University of Stuttgart, Germany;University of Stuttgart, Germany;University of Stuttgart, Germany
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2011.185;10.1109/TVCG.2011.226;10.1109/INFVIS.2004.1;10.1109/TVCG.2007.70582;10.1109/TVCG.2008.155
Bipartite,network,visualization,evaluation263
83
Vis2022
MedChemLens: An Interactive Visual Tool to Support Direction Selection in Interdisciplinary Experimental Research of Medicinal Chemistry
10.1109/TVCG.2022.3209434
http://dx.doi.org/10.1109/TVCG.2022.3209434
6373J
Interdisciplinary experimental science (e.g., medicinal chemistry) refers to the disciplines that integrate knowledge from different scientific backgrounds and involve experiments in the research process. Deciding “in what direction to proceed” is critical for the success of the research in such disciplines, since the time, money, and resource costs of the subsequent research steps depend largely on this decision. However, such a direction identification task is challenging in that researchers need to integrate information from large-scale, heterogeneous materials from all associated disciplines and summarize the related publications, whose core contributions are often showcased in diverse formats. The task also requires researchers to estimate the feasibility and potential of future experiments in the selected directions. In this work, we selected medicinal chemistry as a case and presented an interactive visual tool, MedChemLens, to assist medicinal chemists in choosing their intended directions of research. This task is also known as drug target (i.e., disease-linked proteins) selection. Given a candidate target name, MedChemLens automatically extracts the molecular features of drug compounds from chemical papers and clinical trial records, organizes them based on the drug structures, and interactively visualizes factors concerning subsequent experiments. We evaluated MedChemLens through a within-subjects study (N=16). Compared with the control condition (i.e., unrestricted online search without using our tool), participants who only used MedChemLens reported faster search, better-informed selections, higher confidence in their selections, and lower cognitive load.
Chuhan Shi;Fei Nie;Yicheng Hu;Yige Xu 0001;Lei Chen 0002;Xiaojuan Ma;Qiong Luo 0001
Chuhan Shi;Fei Nie;Yicheng Hu;Yige Xu;Lei Chen;Xiaojuan Ma;Qiong Luo
Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Nanyang Technological University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China
10.1109/TVCG.2015.2467757;10.1109/TVCG.2013.212;10.1109/TVCG.2012.252;10.1109/TVCG.2015.2467621;10.1109/TVCG.2012.277;10.1109/VAST.2014.7042494;10.1109/TVCG.2009.202;10.1109/TVCG.2013.167
Interdisciplinary experimental science,interactive visual analysis,scientific literature data063
84
Vis2022
Understanding how Designers Find and Use Data Visualization Examples
10.1109/TVCG.2022.3209490
http://dx.doi.org/10.1109/TVCG.2022.3209490
10481058J
Examples are useful for inspiring ideas and facilitating implementation in visualization design. However, there is little understanding of how visualization designers use examples, and how computational tools may support such activities. In this paper, we contribute an exploratory study of current practices in incorporating visualization examples. We conducted semi-structured interviews with 15 university students and 15 professional designers. Our analysis focuses on two core design activities: searching for examples and utilizing examples. We characterize observed strategies and tools for performing these activities, as well as major challenges that hinder designers' current workflows. In addition, we identify themes that cut across these two activities: criteria for determining example usefulness, curation practices, and design fixation. Given our findings, we discuss the implications for visualization design and authoring tools and highlight critical areas for future research.
Hannah K. Bako;Xinyi Liu;Leilani Battle;Zhicheng Liu
Hannah K. Bako;Xinyi Liu;Leilani Battle;Zhicheng Liu
University of Maryland, USA;University of Maryland, USA;University of Washington, USA;University of Maryland, USA
10.1109/TVCG.2018.2865040;10.1109/TVCG.2021.3114760;10.1109/TVCG.2021.3114792;10.1109/TVCG.2021.3114856;10.1109/TVCG.2019.2934431;10.1109/TVCG.2007.70594;10.1109/TVCG.2010.179;10.1109/TVCG.2019.2934538;10.1109/TVCG.2015.2467191
Examples,visualization design,idea generation,interview study,qualitative research265
85
Vis2022
FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks
10.1109/TVCG.2022.3209498
http://dx.doi.org/10.1109/TVCG.2022.3209498
515525J
Volume data is found in many important scientific and engineering applications. Rendering this data for visualization at high quality and interactive rates for demanding applications such as virtual reality is still not easily achievable even using professional-grade hardware. We introduce FoVolNet—a method to significantly increase the performance of volume data visualization. We develop a cost-effective foveated rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full frame using a deep neural network. Foveated rendering is a technique that prioritizes rendering computations around the user's focal point. This approach leverages properties of the human visual system, thereby saving computational resources when rendering data in the periphery of the user's field of vision. Our reconstruction network combines direct and kernel prediction methods to produce fast, stable, and perceptually convincing output. With a slim design and the use of quantization, our method outperforms state-of-the-art neural reconstruction techniques in both end-to-end frame times and visual quality. We conduct extensive evaluations of the system's rendering performance, inference speed, and perceptual properties, and we provide comparisons to competing neural image reconstruction techniques. Our test results show that FoVolNet consistently achieves significant time savings over conventional rendering while preserving perceptual quality.
David Bauer;Qi Wu 0015;Kwan-Liu Ma
David Bauer;Qi Wu;Kwan-Liu Ma
University of California at Davis, USA;University of California at Davis, USA;University of California at Davis, USA
10.1109/TVCG.2020.3030344;10.1109/TVCG.2012.240;10.1109/VISUAL.2002.1183764;10.1109/TVCG.2011.211;10.1109/TVCG.2016.2599041
Volume data,volume visualization,deep learning,foveated rendering,neural reconstruction066HM
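The abstract in the record above (row 85) describes sparsely sampling a volume rendering around a focal point before neural reconstruction. The following is a small, hypothetical NumPy sketch of the general foveated-sampling idea only; the fall-off shape, parameters, and function names are assumptions and not FoVolNet's actual pipeline.

# Hypothetical sketch of generic foveated sampling (not FoVolNet itself):
# keep full sampling density near a focal point and let the probability of
# sampling a pixel fall off with distance, producing a sparse mask that a
# reconstruction network would later in-paint.
import numpy as np

def foveated_mask(height, width, focus_xy, sigma=0.25, floor=0.05, seed=0):
    """Binary mask; 1 = render this pixel, 0 = leave for reconstruction."""
    ys, xs = np.mgrid[0:height, 0:width]
    fx, fy = focus_xy
    # Normalized distance from the focal point.
    d = np.hypot((xs - fx) / width, (ys - fy) / height)
    # Gaussian fall-off with a small uniform floor in the periphery.
    p = np.maximum(np.exp(-(d / sigma) ** 2), floor)
    rng = np.random.default_rng(seed)
    return (rng.random((height, width)) < p).astype(np.uint8)

mask = foveated_mask(256, 256, focus_xy=(180, 100))
print("fraction of pixels actually rendered:", mask.mean())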
86
Vis2022
MosaicSets: Embedding Set Systems into Grid Graphs
10.1109/TVCG.2022.3209485
http://dx.doi.org/10.1109/TVCG.2022.3209485
875885J
Visualizing sets of elements and their relations is an important research area in information visualization. In this paper, we present MosaicSets: a novel approach to create Euler-like diagrams from non-spatial set systems such that each element occupies one cell of a regular hexagonal or square grid. The main challenge is to find an assignment of the elements to the grid cells such that each set constitutes a contiguous region. As a use case, we consider the research groups of a university faculty as elements, and the departments and joint research projects as sets. We aim to find a suitable mapping between the research groups and the grid cells such that the department structure forms a base map layout. Our objectives are to optimize the compactness both of the entire layout and of each individual set. We show that computing the mapping is NP-hard. However, using integer linear programming we can solve real-world instances optimally within a few seconds. Moreover, we propose a relaxation of the contiguity requirement to visualize otherwise non-embeddable set systems. We present and discuss different rendering styles for the set overlays. Based on a case study with real-world data, our evaluation comprises quantitative measures as well as expert interviews.
Peter Rottmann;Markus Wallinger;Annika Bonerath;Sven Gedicke;Martin Nöllenburg;Jan-Henrik Haunert
Peter Rottmann;Markus Wallinger;Annika Bonerath;Sven Gedicke;Martin Nöllenburg;Jan-Henrik Haunert
Geoinformation Group of the University of Bonn, Germany;Algorithms and Complexity Group of the Technical University of Vienna, Austria;Geoinformation Group of the University of Bonn, Germany;Geoinformation Group of the University of Bonn, Germany;Algorithms and Complexity Group of the Technical University of Vienna, Austria;Geoinformation Group of the University of Bonn, Germany
10.1109/TVCG.2011.186;10.1109/TVCG.2009.122;10.1109/TVCG.2020.3030475;10.1109/TVCG.2021.3114834;10.1109/TVCG.2014.2346248;10.1109/TVCG.2016.2598542;10.1109/TVCG.2020.3028953;10.1109/TVCG.2012.199;10.1109/TVCG.2010.210;10.1109/TVCG.2014.2346249;10.1109/TVCG.2008.165
Set Visualization,Euler Diagram,Integer Linear Programming,Hypergraph166
87
Vis2022
Multiple Forecast Visualizations (MFVs): Trade-offs in Trust and Performance in Multiple COVID-19 Forecast Visualizations
10.1109/TVCG.2022.3209457
http://dx.doi.org/10.1109/TVCG.2022.3209457
1222J
The prevalence of inadequate SARS-CoV-2 (COVID-19) responses may indicate a lack of trust in forecasts and risk communication. However, no work has empirically tested how multiple forecast visualization choices impact trust and task-based performance. The three studies presented in this paper (N = 1,299) examine how visualization choices impact trust in COVID-19 mortality forecasts and how they influence performance in a trend prediction task. These studies focus on line charts populated with real-time COVID-19 data that varied the number and color encoding of the forecasts and the presence of best/worst-case forecasts. The studies reveal that trust in COVID-19 forecast visualizations initially increases with the number of forecasts and then plateaus after 6–9 forecasts. However, participants were most trusting of visualizations that showed less visual information, including a 95% confidence interval, single forecast, and grayscale encoded forecasts. Participants maintained high trust in intervals labeled with 50% and 25% and did not proportionally scale their trust to the indicated interval size. Despite the high trust, the 95% CI condition was the most likely to evoke predictions that did not correspond with the actual COVID-19 trend. Qualitative analysis of participants' strategies confirmed that many participants trusted both the simplistic visualizations and those with numerous forecasts. This work provides practical guidance on how COVID-19 forecast visualizations influence trust, including recommendations for identifying the range where forecasts balance trade-offs between trust and task-based performance.
Lace M. K. Padilla;Racquel Fygenson;Spencer C. Castro;Enrico Bertini
Lace Padilla;Racquel Fygenson;Spencer C. Castro;Enrico Bertini
University of California Merced, USA;New York University, USA;University of California Merced, USA;Northeastern University, USA
10.1109/TVCG.2021.3114803;10.1109/TVCG.2014.2346298;10.1109/TVCG.2019.2934287;10.1109/TVCG.2017.2743898;10.1109/TVCG.2020.3030335;10.1109/TVCG.2018.2864909;10.1109/TVCG.2018.2865193;10.1109/INFVIS.2004.15
COVID-19,multiple forecast visualizations,uncertainty visualization,line charts,time-series data066BP
88
Vis2022
Polyphony: an Interactive Transfer Learning Framework for Single-Cell Data Analysis
10.1109/TVCG.2022.3209408
http://dx.doi.org/10.1109/TVCG.2022.3209408
591601J
Reference-based cell-type annotation can significantly reduce time and effort in single-cell analysis by transferring labels from a previously-annotated dataset to a new dataset. However, label transfer by end-to-end computational methods is challenging due to the entanglement of technical (e.g., from different sequencing batches or techniques) and biological (e.g., from different cellular microenvironments) variations, only the first of which must be removed. To address this issue, we propose Polyphony, an interactive transfer learning (ITL) framework, to complement biologists' knowledge with advanced computational methods. Polyphony is motivated and guided by domain experts' needs for a controllable, interactive, and algorithm-assisted annotation process, identified through interviews with seven biologists. We introduce anchors, i.e., analogous cell populations across datasets, as a paradigm to explain the computational process and collect user feedback for model improvement. We further design a set of visualizations and interactions to empower users to add, delete, or modify anchors, resulting in refined cell type annotations. The effectiveness of this approach is demonstrated through quantitative experiments, two hypothetical use cases, and interviews with two biologists. The results show that our anchor-based ITL method takes advantage of both human and machine intelligence in annotating massive single-cell datasets.
Furui Cheng;Mark S. Keller;Huamin Qu;Nils Gehlenborg;Qianwen Wang
Furui Cheng;Mark S Keller;Huamin Qu;Nils Gehlenborg;Qianwen Wang
Hong Kong University of Science and Technology, China;Harvard University, USA;Hong Kong University of Science and Technology, China;Harvard University, USA;Harvard University, USA
10.1109/TVCG.2021.3114797;10.1109/TVCG.2021.3114793;10.1109/TVCG.2019.2934547;10.1109/TVCG.2018.2865027;10.1109/TVCG.2010.137;10.1109/TVCG.2019.2934267;10.1109/TVCG.2009.111;10.1109/TVCG.2012.213;10.1109/TVCG.2020.3030336;10.1109/VAST47406.2019.8986943
Interactive Machine Learning,Transfer Learning,Single-cell Data Analysis,Human-AI Interaction166
89
Vis2022
ASTF: Visual Abstractions of Time-Varying Patterns in Radio Signals
10.1109/TVCG.2022.3209469
http://dx.doi.org/10.1109/TVCG.2022.3209469
214224J
A time-frequency diagram is a commonly used visualization for observing the time-frequency distribution of radio signals and analyzing their time-varying patterns of communication states in radio monitoring and management. While it excels at short-term signal analyses, it is poorly suited to long-term signal analyses because it cannot adequately depict time-varying signal patterns over a large time span on a space-limited screen. This research thus presents an abstract signal time-frequency (ASTF) diagram to address this problem. In the diagram design, a visual abstraction method is proposed to visually encode signal communication state changes in time slices. A time segmentation algorithm is proposed to divide a large time span into time slices. Three new quantified metrics and a loss function are defined to ensure the preservation of important time-varying information in the time segmentation. An algorithm performance experiment and a user study are conducted to evaluate the effectiveness of the diagram for long-term signal analyses.
Ying Zhao 0001;Luhao Ge;Huixuan Xie;Genghuai Bai;Zhao Zhang;Qiang Wei;Yun Lin 0005;Yuchao Liu;Fangfang Zhou
Ying Zhao;Luhao Ge;Huixuan Xie;Genghuai Bai;Zhao Zhang;Qiang Wei;Yun Lin;Yuchao Liu;Fangfang Zhou
School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;National Key Laboratory of Science and Technology on Blind Signal Processing, Chengdu, China;College of Information and Communication Engineering, Harbin Engineering University, Harbin, China;China Research Institute of Radiowave Propagation, Qingdao, China;School of Computer Science and Engineering, Central South University, Changsha, China
10.1109/VAST.2014.7042479;10.1109/TVCG.2019.2934433;10.1109/TVCG.2010.193;10.1109/VAST.2014.7042484;10.1109/TVCG.2008.109;10.1109/INFVIS.2005.1532144;10.1109/TVCG.2015.2467751;10.1109/TVCG.2011.195;10.1109/TVCG.2020.3030428;10.1109/TVCG.2019.2934655
Radio signal,visual abstraction,time-oriented data,binary sequence867HM
90
Vis2022
ChartWalk: Navigating large collections of text notes in electronic health records for clinical chart review
10.1109/TVCG.2022.3209444
http://dx.doi.org/10.1109/TVCG.2022.3209444
12441254J
Before seeing a patient for the first time, healthcare workers will typically conduct a comprehensive clinical chart review of the patient's electronic health record (EHR). Among the diverse pieces of documentation included there, text notes are some of the most important and most thoroughly perused for this task, yet they are among the least supported in terms of content navigation and overview. In this work, we delve deeper into the task of clinical chart review from a data visualization perspective and propose a hybrid graphics+text approach via ChartWalk, an interactive tool to support the review of text notes in EHRs. We report on our iterative design process grounded in input provided by a diverse range of healthcare professionals, with steps including: (a) initial requirements distilled from interviews and the literature, (b) an interim evaluation to validate design decisions, and (c) a task-based qualitative evaluation of our final design. We contribute lessons learned to better support the design of tools not only for clinical chart reviews but also for other healthcare-related tasks around medical text analysis.
Nicole Sultanum;Farooq Naeem;Michael Brudno;Fanny Chevalier
Nicole Sultanum;Farooq Naeem;Michael Brudno;Fanny Chevalier
University of Toronto, Canada;Centre for Addiction and Mental Health (CAMH), Canada;University of Toronto, Canada;University of Toronto, Canada
10.1109/VAST.2014.7042493;10.1109/TVCG.2015.2467757;10.1109/TVCG.2014.2346431;10.1109/VAST.2010.5652922;10.1109/VAST.2012.6400485;10.1109/TVCG.2014.2346743;10.1109/VAST.2007.4389006;10.1109/TVCG.2015.2467759;10.1109/TVCG.2018.2864905;10.1109/VAST.2014.7042496
Electronic Health Record (EHR),Text Visualization,Close+Distant Reading,Clinical Overview,Medicine167HM
91
Vis2022
MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization
10.1109/TVCG.2022.3209447
http://dx.doi.org/10.1109/TVCG.2022.3209447
331341J
Glyph-based visualizations can achieve impressive graphic designs when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information by revealing data semantics. However, creating such a metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs through metaphoric image selection and MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples and one usage scenario, and validate its effectiveness through a series of expert interviews.
Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu
Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
10.1109/TVCG.2012.254;10.1109/TVCG.2021.3114792;10.1109/TVCG.2021.3114875;10.1109/TVCG.2022.3209468;10.1109/TVCG.2018.2864769;10.1109/TVCG.2015.2468292;10.1109/TVCG.2016.2598620;10.1109/TVCG.2016.2598432;10.1109/TVCG.2015.2467554;10.1109/TVCG.2014.2346445;10.1109/TVCG.2018.2865158;10.1109/TVCG.2013.206;10.1109/TVCG.2017.2745258;10.1109/TVCG.2020.3030359;10.1109/TVCG.2021.3114877;10.1109/VAST50239.2020.00014
Glyph-based visualization,metaphor,machine learning,automatic visualization368
92
Vis2022
Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models
10.1109/TVCG.2022.3209424
http://dx.doi.org/10.1109/TVCG.2022.3209424
613623J
Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and extracting level sets based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.
Tushar M. Athawale;Christopher R. Johnson 0001;Sudhanshu Sane;David Pugmire
Tushar M. Athawale;Chris R. Johnson;Sudhanshu Sane;David Pugmire
Oak Ridge National Laboratory, USA;Scientific Computing & Imaging (SCI) Institute, University of Utah, USA;Luminary Cloud, Inc., USA;Oak Ridge National Laboratory, USA
10.1109/TVCG.2020.3030394;10.1109/TVCG.2015.2467958;10.1109/TVCG.2018.2864432;10.1109/TVCG.2015.2467204;10.1109/TVCG.2012.227;10.1109/INFVIS.2002.1173157;10.1109/TVCG.2017.2744099;10.1109/TVCG.2009.131;10.1109/TVCG.2008.116;10.1109/VISUAL.1996.568116;10.1109/TVCG.2007.70518;10.1109/TVCG.2020.3030365;10.1109/TVCG.2018.2864846;10.1109/TVCG.2006.165;10.1109/TVCG.2016.2599017;10.1109/TVCG.2013.143;10.1109/TVCG.2016.2599040;10.1109/VAST.2006.261424;10.1109/TVCG.2019.2934242;10.1109/TVCG.2020.3030466
Uncertainty visualization,fiber surfaces,and probability070
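The abstract in the record above (row 92) derives spatial probabilities of fibers under parametric and nonparametric noise models. As a much-simplified, hypothetical illustration of one special case mentioned there (independent Gaussian noise on a single bivariate sample and an axis-aligned rectangular range), the probability factors into a product of 1D interval probabilities; the sketch below checks that closed form against Monte Carlo sampling. It is not the paper's general framework, and all function names are illustrative.

# Hypothetical, simplified illustration: probability that one bivariate data
# point with independent Gaussian noise falls inside a rectangular range.
# A Monte Carlo estimate is included only as a sanity check on the closed form.
import math
import random

def phi(z):                               # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def interval_prob(mu, sigma, lo, hi):     # P(lo <= X <= hi), X ~ N(mu, sigma^2)
    return phi((hi - mu) / sigma) - phi((lo - mu) / sigma)

def fiber_prob(mu1, s1, mu2, s2, range1, range2):
    # Independence lets the 2D probability factor per attribute.
    return interval_prob(mu1, s1, *range1) * interval_prob(mu2, s2, *range2)

def fiber_prob_mc(mu1, s1, mu2, s2, range1, range2, n=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(mu1, s1)
        y = rng.gauss(mu2, s2)
        if range1[0] <= x <= range1[1] and range2[0] <= y <= range2[1]:
            hits += 1
    return hits / n

print(fiber_prob(0.4, 0.1, 1.2, 0.2, (0.3, 0.6), (1.0, 1.5)))     # closed form
print(fiber_prob_mc(0.4, 0.1, 1.2, 0.2, (0.3, 0.6), (1.0, 1.5)))  # Monte Carlo check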
93
Vis2022
ECoalVis: Visual Analysis of Control Strategies in Coal-fired Power Plants
10.1109/TVCG.2022.3209430
http://dx.doi.org/10.1109/TVCG.2022.3209430
10911101J
Improving the efficiency of coal-fired power plants has numerous benefits. The control strategy is one of the major factors affecting such efficiency. However, due to the complex and dynamic environment inside power plants, it is hard to extract and evaluate control strategies and their cascading impact across massive numbers of sensors. Existing manual and data-driven approaches cannot adequately support the analysis of control strategies because they are time-consuming and do not scale with the complexity of power plant systems. We identified three challenges: a) interactive extraction of control strategies from large-scale dynamic sensor data, b) intuitive visual representation of cascading impact among the sensors in a complex power plant system, and c) time-lag-aware analysis of the impact of control strategies on electricity generation efficiency. By collaborating with energy domain experts, we addressed these challenges with ECoalVis, a novel interactive system for experts to visually analyze the control strategies of coal-fired power plants extracted from historical sensor data. The proposed system was evaluated with two usage scenarios on a real-world historical dataset and received positive feedback from experts.
Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu
Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
10.1109/TVCG.2015.2467851;10.1109/TVCG.2021.3114792;10.1109/TVCG.2017.2745083;10.1109/TVCG.2021.3114875;10.1109/VAST.2006.261421;10.1109/TVCG.2013.173;10.1109/TVCG.2017.2745105;10.1109/TVCG.2014.2346454;10.1109/TVCG.2015.2467622;10.1109/TVCG.2018.2864886;10.1109/TVCG.2009.200;10.1109/TVCG.2012.213;10.1109/TVCG.2019.2934275;10.1109/TVCG.2021.3114878;10.1109/TVCG.2009.117;10.1109/VAST.2009.5332595;10.1109/TVCG.2016.2598664;10.1109/TVCG.2021.3114877
Power plant visual analytics,energy data visualization,spatiotemporal visualization,smart factory171
94
Vis2022
Multivariate Probabilistic Range Queries for Scalable Interactive 3D Visualization
10.1109/TVCG.2022.3209439
http://dx.doi.org/10.1109/TVCG.2022.3209439
646656J
Large-scale scientific data, such as weather and climate simulations, often comprise a large number of attributes for each data sample, like temperature, pressure, humidity, and many more. Interactive visualization and analysis require filtering according to any desired combination of attributes, in particular logical AND operations, which is challenging for large data and many attributes. Many general data structures for this problem are built for and scale with a fixed number of attributes, and scalability of joint queries with arbitrary attribute subsets remains a significant problem. We propose a flexible probabilistic framework for multivariate range queries that decouples all attribute dimensions via projection, allowing any subset of attributes to be queried with full efficiency. Moreover, our approach is output-sensitive, mainly scaling with the cardinality of the query result rather than with the input data size. This is particularly important for joint attribute queries, where the query output is usually much smaller than the whole data set. Additionally, our approach can split query evaluation between user interaction and rendering, achieving much better scalability for interactive visualization than the previous state of the art. Furthermore, even when a multi-resolution strategy is used for visualization, queries are jointly evaluated at the finest data granularity, because our framework does not limit query accuracy to a fixed spatial subdivision.
Amani Ageeli;Alberto Jaspe Villanueva;Ronell Sicat;Florian Mannuß;Peter Rautek;Markus Hadwiger
Amani Ageeli;Alberto Jaspe-Villanueva;Ronell Sicat;Florian Mannuss;Peter Rautek;Markus Hadwiger
King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;Saudi Aramco, Dhahran, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia
10.1109/TVCG.2018.2864847;10.1109/VISUAL.1996.568121;10.1109/TVCG.2006.157;10.1109/TVCG.2014.2346324;10.1109/VISUAL.2005.1532792;10.1109/TVCG.2009.160
High-dimensional filtering,multivariate filtering,output-sensitivity,multivariate attribute queries,progressive culling071
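The abstract in the record above (row 94) decouples attribute dimensions so that any subset can be combined in a joint AND range query. The sketch below shows only the generic per-attribute indexing idea in NumPy (one sorted index per attribute, with candidate ID sets intersected per query); it is a hypothetical illustration with made-up names and omits the probabilistic, output-sensitive machinery of the paper.

# Hypothetical, generic sketch of a joint AND range query over any subset of
# attributes (not the paper's probabilistic framework).
import numpy as np

class PerAttributeIndex:
    def __init__(self, table):
        # table: dict attribute_name -> 1D array of values, one row per sample
        self.order = {a: np.argsort(v) for a, v in table.items()}
        self.sorted_vals = {a: v[self.order[a]] for a, v in table.items()}

    def range_query_1d(self, attr, lo, hi):
        vals, order = self.sorted_vals[attr], self.order[attr]
        left = np.searchsorted(vals, lo, side="left")
        right = np.searchsorted(vals, hi, side="right")
        return order[left:right]                      # candidate sample IDs

    def joint_query(self, ranges):
        # ranges: dict attr -> (lo, hi); any subset of attributes works.
        ids = None
        for attr, (lo, hi) in ranges.items():
            cand = self.range_query_1d(attr, lo, hi)
            ids = cand if ids is None else np.intersect1d(ids, cand)
        return ids

# Toy usage on synthetic weather-like attributes.
rng = np.random.default_rng(0)
data = {"temperature": rng.normal(15, 8, 10_000),
        "pressure":    rng.normal(1013, 20, 10_000),
        "humidity":    rng.uniform(0, 100, 10_000)}
index = PerAttributeIndex(data)
hits = index.joint_query({"temperature": (20, 25), "humidity": (80, 100)})
print(len(hits), "samples satisfy the joint range query")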
95
Vis2022
A Scanner Deeply: Predicting Gaze Heatmaps on Visualizations Using Crowdsourced Eye Movement Data
10.1109/TVCG.2022.3209472
http://dx.doi.org/10.1109/TVCG.2022.3209472
396406J
Visual perception is a key component of data visualization. Much prior empirical work uses eye movement as a proxy to understand human visual perception. Diverse apparatus and techniques have been proposed to collect eye movements, but there is still no optimal approach. In this paper, we review 30 prior works for collecting eye movements based on three axes: (1) the tracker technology used to measure eye movements; (2) the image stimulus shown to participants; and (3) the collection methodology used to gather the data. Based on this taxonomy, we employ a webcam-based eye-tracking approach using task-specific visualizations as the stimulus. The low technology requirement means that virtually anyone can participate, thus enabling us to collect data at large scale using crowdsourcing: approximately 12,000 samples in total. Choosing visualization images as stimulus means that the eye movements will be specific to perceptual tasks associated with visualization. We use these data to propose Scanner Deeply, a virtual eye-tracking model that, given an image of a visualization, generates a gaze heatmap for that image. We employ a computationally efficient, yet powerful convolutional neural network for our model. We compare the results of our work with results from the DVS model and a neural network trained on the Salicon dataset. The analysis of our gaze patterns enables us to understand how users grasp the structure of visualized data. We also make our stimulus dataset of visualization images available as part of this paper's contribution.
Sungbok Shin;Sunghyo Chung;Sanghyun Hong 0001;Niklas Elmqvist
Sungbok Shin;Sunghyo Chung;Sanghyun Hong;Niklas Elmqvist
University of Maryland, College Park, USA;Kakao Corp., South Korea;Oregon State University, USA;University of Maryland, College Park, USA
10.1109/TVCG.2015.2467732;10.1109/TVCG.2011.193;10.1109/TVCG.2018.2865138;10.1109/TVCG.2012.215;10.1109/TVCG.2015.2467195;10.1109/TVCG.2017.2743939
Gaze prediction,visualization,webcam-based eye-tracking,crowdsourcing,deep learning072
96
Vis2022
Comparison Conundrum and the Chamber of Visualizations: An Exploration of How Language Influences Visual Design
10.1109/TVCG.2022.3209456
http://dx.doi.org/10.1109/TVCG.2022.3209456
12111221J
The language for expressing comparisons is often complex and nuanced, making supporting natural language-based visual comparison a non-trivial task. To better understand how people reason about comparisons in natural language, we explore a design space of utterances for comparing data entities. We identified different parameters of comparison utterances that indicate what is being compared (i.e., data variables and attributes) as well as how these parameters are specified (i.e., explicitly or implicitly). We conducted a user study with sixteen data visualization experts and non-experts to investigate how they designed visualizations for comparisons in our design space. Based on the rich set of visualization techniques observed, we extracted key design features from the visualizations and synthesized them into a subset of sixteen representative visualization designs. We then conducted a follow-up study to validate user preferences for the sixteen representative visualizations corresponding to utterances in our design space. Findings from these studies suggest guidelines and future directions for designing natural language interfaces and recommendation tools to better support natural language comparisons in visual analytics.
Aimen Gaba;Vidya Setlur;Arjun Srinivasan;Jane Hoffswell;Cindy Xiong
Aimen Gaba;Vidya Setlur;Arjun Srinivasan;Jane Hoffswell;Cindy Xiong
UMass Amherst, USA;Tableau Research, USA;Tableau Research, USA;Adobe Research, USA;UMass Amherst, USA
10.1109/TVCG.2017.2744199;10.1109/TVCG.2013.183;10.1109/TVCG.2007.70556;10.1109/TVCG.2019.2934786;10.1109/TVCG.2011.194;10.1109/TVCG.2019.2934801;10.1109/TVCG.2016.2599030;10.1109/TVCG.2021.3114823;10.1109/TVCG.2019.2934399;10.1109/TVCG.2021.3114814
Comparative constructions,cardinality,explicit and implicit comparisons,natural language,intent,visual analysis174
97
Vis2022
Data Hunches: Incorporating Personal Knowledge into Visualizations
10.1109/TVCG.2022.3209451
http://dx.doi.org/10.1109/TVCG.2022.3209451
504514J
The trouble with data is that it frequently provides only an imperfect representation of a phenomenon of interest. Experts who are familiar with their datasets will often make implicit, mental corrections when analyzing a dataset, or will be cautious not to be overly confident about their findings if caveats are present. However, personal knowledge about the caveats of a dataset is typically not incorporated in a structured way, which is problematic if others who lack that knowledge interpret the data. In this work, we define such analysts' knowledge about datasets as data hunches. We differentiate data hunches from uncertainty and discuss types of hunches. We then explore ways of recording data hunches, and, based on a prototypical design, develop recommendations for designing visualizations that support data hunches. We conclude by discussing various challenges associated with data hunches, including the potential for harm and challenges for trust and privacy. We envision that data hunches will empower analysts to externalize their knowledge, facilitate collaboration and communication, and support the ability to learn from others' data hunches.
Haihan Lin;Derya Akbaba;Miriah D. Meyer;Alexander Lex
Haihan Lin;Derya Akbaba;Miriah Meyer;Alexander Lex
University of Utah, USA;University of Utah, USA;Linköping University, Sweden;University of Utah, USA
10.1109/TVCG.2012.220;10.1109/TVCG.2014.2346298;10.1109/TVCG.2016.2599058;10.1109/TVCG.2019.2934287;10.1109/TVCG.2017.2745240;10.1109/TVCG.2018.2864913;10.1109/TVCG.2016.2598839;10.1109/TVCG.2017.2745958;10.1109/TVCG.2012.262
Data Visualization,Uncertainty,Data Hunches174
98
Vis2022
Visual Comparison of Language Model Adaptation
10.1109/TVCG.2022.3209458
http://dx.doi.org/10.1109/TVCG.2022.3209458
11781188J
Neural language models are widely used; however, their model parameters often need to be adapted to the specific domains and tasks of an application, which is time- and resource-consuming. Thus, adapters have recently been introduced as a lightweight alternative for model adaptation. They consist of a small set of task-specific parameters with a reduced training time and simple parameter composition. The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing their produced embedding spaces. To help developers overcome these challenges, we provide a twofold contribution. First, in close collaboration with NLP researchers, we conducted a requirement analysis for an approach supporting adapter evaluation and identified, among other requirements, the need for both intrinsic (i.e., embedding similarity-based) and extrinsic (i.e., prediction-based) explanation methods. Second, motivated by the gathered requirements, we designed a flexible visual analytics workspace that enables the comparison of adapter properties. In this paper, we discuss several design iterations and alternatives for interactive, comparative visual explanation methods. Our comparative visualizations show the differences in the adapted embedding vectors and prediction outcomes for diverse human-interpretable concepts (e.g., person names, human qualities). We evaluate our workspace through case studies and show that, for instance, an adapter trained on the language debiasing task according to context-0 (decontextualized) embeddings introduces a new type of bias where words (even gender-independent words such as countries) become more similar to female than to male pronouns. We demonstrate that these are artifacts of context-0 embeddings, and that the adapter effectively eliminates the gender information from the contextualized word representations.
Rita Sevastjanova;Eren Cakmak;Shauli Ravfogel;Ryan Cotterell;Mennatallah El-Assady
Rita Sevastjanova;Eren Cakmak;Shauli Ravfogel;Ryan Cotterell;Mennatallah El-Assady
University of Konstanz, Germany;University of Konstanz, Germany;Bar-Ilan University, Israel;ETH Zürich, Switzerland;ETH Zürich, AI Center, Switzerland
10.1109/TVCG.2020.3028976;10.1109/TVCG.2017.2744199;10.1109/VAST.2018.8802454;10.1109/TVCG.2017.2745141;10.1109/TVCG.2018.2865230;10.1109/TVCG.2012.213;10.1109/TVCG.2018.2865044
Language Model Adaptation,Adapter,Word Embeddings,Sequence Classification,Visual Analytics174
99
Vis2022
The Influence of Visual Provenance Representations on Strategies in a Collaborative Hand-off Data Analysis Scenario
10.1109/TVCG.2022.3209495
http://dx.doi.org/10.1109/TVCG.2022.3209495
11131123J
Conducting data analysis tasks rarely occurs in isolation. Especially in intelligence analysis scenarios where different experts contribute knowledge to a shared understanding, members must communicate how insights develop to establish common ground among collaborators. The use of provenance to communicate analytic sensemaking carries promise by describing the interactions and summarizing the steps taken to reach insights. Yet, no universal guidelines exist for communicating provenance in different settings. Our work focuses on the presentation of provenance information and the resulting conclusions reached and strategies used by new analysts. In an open-ended, 30-minute, textual exploration scenario, we qualitatively compare how adding different types of provenance information (specifically data coverage and interaction history) affects analysts' confidence in conclusions developed, propensity to repeat work, filtering of data, identification of relevant information, and typical investigation strategies. We see that data coverage (i.e., what was interacted with) provides provenance information without limiting individual investigation freedom. On the other hand, while interaction history (i.e., when something was interacted with) does not significantly encourage more mimicry, it does take more time to comfortably understand, as represented by less confident conclusions and less relevant information-gathering behaviors. Our results contribute empirical data towards understanding how provenance summarizations can influence analysis behaviors.
Jeremy E. Block;Shaghayegh Esmaeili;Eric D. Ragan;John R. Goodall;G. David Richardson
Jeremy E. Block;Shaghayegh Esmaeili;Eric D. Ragan;John R. Goodall;G. David Richardson
University of Florida, USA;University of Florida, USA;University of Florida, USA;Oak Ridge National Laboratory, USA;Oak Ridge National Laboratory, USA
10.1109/TVCG.2013.155;10.1109/TVCG.2019.2934209;10.1109/VAST.2014.7042492;10.1109/VAST.2017.8585665;10.1109/VAST.2017.8585658;10.1109/VAST.2010.5652932;10.1109/TVCG.2018.2865233;10.1109/TVCG.2016.2599058;10.1109/TVCG.2015.2467613;10.1109/TVCG.2008.137;10.1109/VAST.2009.5333878;10.1109/VAST.2006.261415;10.1109/TVCG.2021.3114827;10.1109/VAST.2016.7883515;10.1109/TVCG.2015.2467611;10.1109/TVCG.2015.2467551;10.1109/VAST.2008.4677358;10.1109/TVCG.2017.2744805;10.1109/TVCG.2015.2467591;10.1109/TVCG.2016.2598466;10.1109/TVCG.2020.3030403;10.1109/TVCG.2018.2865024;10.1109/TVCG.2017.2744138;10.1109/TVCG.2013.132;10.1109/TVCG.2021.3114862;10.1109/TVCG.2013.164;10.1109/TVCG.2013.167;10.1109/TVCG.2017.2745279
Analytic provenance,sensemaking,information transfer,visualization,workflow summarization,user studies075
100
Vis2022
RoboHapalytics: A Robot Assisted Haptic Controller for Immersive Analytics
10.1109/TVCG.2022.3209433
http://dx.doi.org/10.1109/TVCG.2022.3209433
451461J
Immersive environments offer new possibilities for exploring three-dimensional volumetric or abstract data. However, typical mid-air interaction offers little guidance to the user in interacting with the resulting visuals. Previous work has explored the use of haptic controls to give users tangible affordances for interacting with the data, but these controls have either been limited in their range and resolution, been spatially fixed, or required users to manually align them with the data space. We explore the use of a robot arm with hand tracking to align tangible controls under the user's fingers as they reach out to interact with data affordances. We begin with a study evaluating the effectiveness of a robot-extended slider control compared to a large fixed physical slider and a purely virtual mid-air slider. We find that the robot slider has similar accuracy to the physical slider but is significantly more accurate than mid-air interaction. Further, the robot slider can be arbitrarily reoriented, opening up many new possibilities for tangible haptic interaction with immersive visualisations. We demonstrate these possibilities through three use cases: selection in a time-series chart; interactive slicing of CT scans; and finally exploration of a scatter plot depicting time-varying socio-economic data.
Shaozhang Dai;Jim Smiley;Tim Dwyer;Barrett Ens;Lonni Besançon
Shaozhang Dai;Jim Smiley;Tim Dwyer;Barrett Ens;Lonni Besancon
Monash University, Australia;Monash University, Australia;Monash University, Australia;Monash University, Australia;Linköping University, Sweden
10.1109/TVCG.2016.2599217;10.1109/TVCG.2013.121;10.1109/TVCG.2014.2346250;10.1109/VISUAL.2004.47
Haptic Feedback,Human Centred Interaction,Robotic Arm076