|   | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | Conference | Year | Title | DOI | Link | FirstPage | LastPage | PaperType | Abstract | AuthorNames-Deduped | AuthorNames | AuthorAffiliation | InternalReferences | AuthorKeywords | AminerCitationCount | CitationCount_CrossRef | PubsCited_CrossRef | Downloads_Xplore | Award | GraphicsReplicabilityStamp |
2 | Vis | 2023 | Design Patterns for Situated Visualization in Augmented Reality | 10.1109/tvcg.2023.3327398 | http://dx.doi.org/10.1109/TVCG.2023.3327398 | 1324 | 1335 | J | Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns which summarize common approaches of visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows. | Benjamin Lee;Michael Sedlmair;Dieter Schmalstieg | Benjamin Lee;Michael Sedlmair;Dieter Schmalstieg | University of Stuttgart, Germany;University of Stuttgart, Germany;Graz University of Technology and University of Stuttgart, Austria | 10.1109/tvcg.2021.3114835;10.1109/tvcg.2020.3030334;10.1109/tvcg.2020.3030450;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209386;10.1109/tvcg.2016.2598608;10.1109/tvcg.2007.70515 | Augmented reality,immersive analytics,situated visualization,design patterns,design space | 11 | 124 | 736 | |||
3 | Vis | 2023 | ggdist: Visualizations of Distributions and Uncertainty in the Grammar of Graphics | 10.1109/tvcg.2023.3327195 | http://dx.doi.org/10.1109/TVCG.2023.3327195 | 414 | 424 | J | The grammar of graphics is ubiquitous, providing the foundation for a variety of popular visualization tools and toolkits. Yet support for uncertainty visualization in the grammar of graphics—beyond simple variations of error bars, uncertainty bands, and density plots—remains rudimentary. Research in uncertainty visualization has developed a rich variety of improved uncertainty visualizations, most of which are difficult to create in existing grammar of graphics implementations. ggdist, an extension to the popular ggplot2 grammar of graphics toolkit, is an attempt to rectify this situation. ggdist unifies a variety of uncertainty visualization types through the lens of distributional visualization, allowing functions of distributions to be mapped directly to visual channels (aesthetics), making it straightforward to express a variety of (sometimes weird!) uncertainty visualization types. This distributional lens also offers a way to unify Bayesian and frequentist uncertainty visualization by formalizing the latter with the help of confidence distributions. In this paper, I offer a description of this uncertainty visualization paradigm and lessons learned from its development and adoption: ggdist has existed in some form for about six years (originally as part of the tidybayes R package for post-processing Bayesian models), and it has evolved substantially over that time, with several rewrites and API re-organizations as it changed in response to user feedback and expanded to cover increasing varieties of uncertainty visualization types. Ultimately, given the huge expressive power of the grammar of graphics and the popularity of tools built on it, I hope a catalog of my experience with ggdist will provide a catalyst for further improvements to formalizations and implementations of uncertainty visualization in grammar of graphics ecosystems. A free copy of this paper is available at https://osf.io/2gsz6. All supplemental materials are available at https://github.com/mjskay/ggdist-paper and are archived on Zenodo at doi:10.5281/zenodo.7770984. | Matthew Kay 0001 | Matthew Kay | Northwestern University, USA | 10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346298;10.1109/tvcg.2013.227;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865193;10.1109/tvcg.2014.2346455;10.1109/tvcg.2009.111;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2011.227 | Uncertainty visualization,probability distributions,confidence distributions,grammar of graphics | 7 | 55 | 281 | |||
4 | Vis | 2023 | PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation | 10.1109/tvcg.2023.3327168 | http://dx.doi.org/10.1109/TVCG.2023.3327168 | 295 | 305 | J | Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model. | Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001 | Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen | State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China | 10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391 | Prompt engineering,text-to-image generation,image visualization | 5 | 78 | 1065 | |||
6 | Vis | 2023 | Challenges and Opportunities in Data Visualization Education: A Call to Action | 10.1109/tvcg.2023.3327378 | http://dx.doi.org/10.1109/TVCG.2023.3327378 | 649 | 660 | J | This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes: People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io. | Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale | Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale | University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada | 10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367 | Data Visualization,Education,Challenges | 5 | 138 | 563 | |||
6 | Vis | 2023 | Affective Visualization Design: Leveraging the Emotional Impact of Data | 10.1109/tvcg.2023.3327385 | http://dx.doi.org/10.1109/TVCG.2023.3327385 | 1 | 11 | J | In recent years, more and more researchers have reflected on the undervaluation of emotion in data visualization and highlighted the importance of considering human emotion in visualization design. Meanwhile, an increasing number of studies have been conducted to explore emotion-related factors. However, so far, this research area is still in its early stages and faces a set of challenges, such as the unclear definition of key concepts, the insufficient justification of why emotion is important in visualization design, and the lack of characterization of the design space of affective visualization design. To address these challenges, first, we conducted a literature review and identified three research lines that examined both emotion and data visualization. We clarified the differences between these research lines and kept 109 papers that studied or discussed how data visualization communicates and influences emotion. Then, we coded the 109 papers in terms of how they justified the legitimacy of considering emotion in visualization design (i.e., why emotion is important) and identified five argumentative perspectives. Based on these papers, we also identified 61 projects that practiced affective visualization design. We coded these design projects in three dimensions, including design fields (where), design tasks (what), and design methods (how), to explore the design space of affective visualization design. | Xingyu Lan;Yanqiu Wu 0001;Nan Cao 0001 | Xingyu Lan;Yanqiu Wu;Nan Cao | Fudan University, Research Group of Computational and AI Communication at Institute for Global Communications and Integrated Media, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China | 10.1109/tvcg.2021.3114775;10.1109/tvcg.2020.3030435;10.1109/tvcg.2022.3209500;10.1109/tvcg.2022.3209457;10.1109/tvcg.2020.3030472;10.1109/tvcg.2010.179;10.1109/tvcg.2022.3209409;10.1109/infvis.2004.8;10.1109/tvcg.2009.171;10.1109/tvcg.2021.3114774;10.1109/tvcg.2019.2934656 | Information Visualization,Affective Design,Visual Communication,User Experience,Storytelling | 4 | 95 | 848 | BP | ||
7 | Vis | 2023 | ARGUS: Visualization of AI-Assisted Task Guidance in AR | 10.1109/tvcg.2023.3327396 | http://dx.doi.org/10.1109/TVCG.2023.3327396 | 1313 | 1323 | J | The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing performer sensing and interaction through visual, audio, and haptic feedback. AI assistants not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Therefore, developing such assistants is a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year-long collaboration between visualization researchers and ML and AR experts. This co-design process has led to advances in the visualization of ML in AR. Our system allows for online visualization of object, action, and step detection as well as offline analysis of previously recorded AR sessions. It visualizes not only the multimodal sensor data streams but also the output of the ML models. This allows developers to gain insights into the performer activities as well as the ML models, helping them troubleshoot, improve, and fine-tune the components of the AR assistant. | Sonia Castelo;João Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Irán R. Román;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He 0001;Qi Sun 0003;Huy T. Vo;Juan Pablo Bello;Michael Krone;Cláudio T. Silva | Sonia Castelo;Joao Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Iran Roman;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He;Qi Sun;Huy Vo;Juan Bello;Michael Krone;Claudio Silva | New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York | 10.1109/tvcg.2017.2746018;10.1109/tvcg.2018.2865152;10.1109/tvcg.2018.2864499 | Data Models,Image and Video Data,Temporal Data,Application Motivated Visualization,AR/VR/Immersive | 4 | 58 | 467 | HM | ||
8 | Vis | 2023 | Let the Chart Spark: Embedding Semantic Context into Chart with Text-to-Image Generative Model | 10.1109/tvcg.2023.3326913 | http://dx.doi.org/10.1109/TVCG.2023.3326913 | 284 | 294 | J | Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in an engaging and informative manner. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, which often compromise data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into chart based on text-to-image generative models. ChartSpark generates pictorial visualizations conditioned on both semantic context conveyed in textual inputs and data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with an interactive interface for visualization design. | Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng 0004 | Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng | Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China | 10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934785;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598620;10.1109/tvcg.2012.221;10.1109/tvcg.2020.3030448;10.1109/tvcg.2022.3209486;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2022.3209447 | pictorial visualization,generative model,authoring tool | 4 | 61 | 465 | |||
9 | Vis | 2023 | Unraveling the Design Space of Immersive Analytics: A Systematic Review | 10.1109/tvcg.2023.3327368 | http://dx.doi.org/10.1109/TVCG.2023.3327368 | 495 | 506 | J | Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions: Academic Theory and Contribution, Immersive Technology, Data, Spatial Presentation, and Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj. | David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne | David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne | Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA | 10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192 | Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space | 3 | 94 | 476 | |||
10 | Vis | 2023 | Data Player: Automatic Generation of Data Videos with Narration-Animation Interplay | 10.1109/tvcg.2023.3327197 | http://dx.doi.org/10.1109/TVCG.2023.3327197 | 109 | 119 | J | Data visualizations and narratives are often integrated to convey data stories effectively. Among various data storytelling formats, data videos have been garnering increasing attention. These videos provide an intuitive interpretation of data charts while vividly articulating the underlying data insights. However, the production of data videos demands a diverse set of professional skills and considerable manual labor, including understanding narratives, linking visual elements with narration segments, designing and crafting animations, recording audio narrations, and synchronizing audio with visual animations. To simplify this process, our paper introduces a novel method, referred to as Data Player, capable of automatically generating dynamic data videos with narration-animation interplay. This approach lowers the technical barriers associated with creating data videos rich in narration. To enable narration-animation interplay, Data Player constructs references between visualizations and text input. Specifically, it first extracts data into tables from the visualizations. Subsequently, it utilizes large language models to form semantic connections between text and visuals. Finally, Data Player encodes animation design knowledge as computational low-level constraints, allowing for the recommendation of suitable animation presets that align with the audio narration produced by text-to-speech technologies. We assessed Data Player's efficacy through an example gallery, a user study, and expert interviews. The evaluation results demonstrated that Data Player can generate high-quality data videos that are comparable to human-composed ones. | Leixian Shen;Yizhi Zhang;Haidong Zhang;Yun Wang 0012 | Leixian Shen;Yizhi Zhang;Haidong Zhang;Yun Wang | The Hong Kong University of Science and Technology, China;Cornell University, USA;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China | 10.1109/tvcg.2016.2598647;10.1109/tvcg.2018.2865119;10.1109/tvcg.2007.70539;10.1109/tvcg.2020.3030360;10.1109/tvcg.2021.3114775;10.1109/tvcg.2021.3114802;10.1109/tvcg.2018.2865240;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2022.3209357;10.1109/tvcg.2022.3209447;10.1109/tvcg.2022.3209369 | Visualization,Narration-animation interplay,Data video,Human-AI collaboration | 3 | 68 | 435 | |||
11 | Vis | 2023 | MeTACAST: Target- and Context-Aware Spatial Selection in VR | 10.1109/tvcg.2023.3326517 | http://dx.doi.org/10.1109/TVCG.2023.3326517 | 480 | 494 | J | We propose three novel spatial data selection techniques for particle data in VR visualization environments. They are designed to be target- and context-aware and be suitable for a wide range of data features and complex scenarios. Each technique is designed to be adjusted to particular selection intents: the selection of consecutive dense regions, the selection of filament-like structures, and the selection of clusters—with all of them facilitating post-selection threshold adjustment. These techniques allow users to precisely select those regions of space for further exploration—with simple and approximate 3D pointing, brushing, or drawing input—using flexible point- or path-based input and without being limited by 3D occlusions, non-homogeneous feature density, or complex data shapes. These new techniques are evaluated in a controlled experiment and compared with the Baseline method, a region-based 3D painting selection. Our results indicate that our techniques are effective in handling a wide range of scenarios and allow users to select data based on their comprehension of crucial features. Furthermore, we analyze the attributes, requirements, and strategies of our spatial selection methods and compare them with existing state-of-the-art selection methods to handle diverse data features and situations. Based on this analysis we provide guidelines for choosing the most suitable 3D spatial selection techniques based on the interaction environment, the given data characteristics, or the need for interactive post-selection threshold adjustment. | Lixiang Zhao;Tobias Isenberg 0001;Fuqi Xie;Hai-Ning Liang;Lingyun Yu 0001 | Lixiang Zhao;Tobias Isenberg;Fuqi Xie;Hai-Ning Liang;Lingyun Yu | Xi'an Jiaotong-Liverpool University, China;Université Paris-Saclay, CNRS, Inria, LISN, France;Xi'an Jiaotong-Liverpool University, China;Xi'an Jiaotong-Liverpool University, China;Xi'an Jiaotong-Liverpool University, China | 10.1109/tvcg.2009.112;10.1109/tvcg.2019.2934332;10.1109/tvcg.2018.2865191;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030363;10.1109/tvcg.2012.292;10.1109/infvis.1996.559216;10.1109/tvcg.2012.217;10.1109/tvcg.2015.2467202 | Spatial selection,immersive analytics,virtual reality (VR),target-aware and context-aware interaction for visualization | 3 | 65 | 363 | X | ||
12 | Vis | 2023 | Vistrust: a Multidimensional Framework and Empirical Study of Trust in Data Visualizations | 10.1109/tvcg.2023.3326579 | http://dx.doi.org/10.1109/TVCG.2023.3326579 | 348 | 358 | J | Trust is an essential aspect of data visualization, as it plays a crucial role in the interpretation and decision-making processes of users. While research in social sciences outlines the multi-dimensional factors that can play a role in trust formation, most data visualization trust researchers employ a single-item scale to measure trust. We address this gap by proposing a comprehensive, multidimensional conceptualization and operationalization of trust in visualization. We do this by applying general theories of trust from social sciences, as well as synthesizing and extending earlier work and factors identified by studies in the visualization field. We apply a two-dimensional approach to trust in visualization, to distinguish between cognitive and affective elements, as well as between visualization and data-specific trust antecedents. We use our framework to design and run a large crowd-sourced study to quantify the role of visual complexity in establishing trust in science visualizations. Our study provides empirical evidence for several aspects of our proposed theoretical framework, most notably the impact of cognition, affective responses, and individual differences when establishing trust in visualizations. | Hamza Elhamdadi;Adam Stefkovics;Johanna Beyer;Eric Mörth;Hanspeter Pfister;Cindy Xiong Bearfield;Carolina Nobre | Hamza Elhamdadi;Adam Stefkovics;Johanna Beyer;Eric Moerth;Hanspeter Pfister;Cindy Xiong Bearfield;Carolina Nobre | UMass Amherst, USA;HUN-REN Centre for Social Sciences, USA;Harvard University, USA;Harvard Medical School, USA;Harvard University, USA;UMass Amherst, USA;University of Toronto, Canada | 10.1109/tvcg.2016.2598544;10.1109/tvcg.2020.3028984;10.1109/tvcg.2017.2745240;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591 | Trust,visualization,science,framework | 3 | 62 | 307 | |||
13 | Vis | 2023 | InkSight: Leveraging Sketch Interaction for Documenting Chart Findings in Computational Notebooks | 10.1109/tvcg.2023.3327170 | http://dx.doi.org/10.1109/TVCG.2023.3327170 | 944 | 954 | J | Computational notebooks have become increasingly popular for exploratory data analysis due to their ability to support data exploration and explanation within a single document. Effective documentation for explaining chart findings during the exploration process is essential as it helps recall and share data analysis. However, documenting chart findings remains a challenge due to its time-consuming and tedious nature. While existing automatic methods alleviate some of the burden on users, they often fail to cater to users' specific interests. In response to these limitations, we present InkSight, a mixed-initiative computational notebook plugin that generates finding documentation based on the user's intent. InkSight allows users to express their intent in specific data subsets through sketching atop visualizations intuitively. To facilitate this, we designed two types of sketches, i.e., open-path and closed-path sketch. Upon receiving a user's sketch, InkSight identifies the sketch type and corresponding selected data items. Subsequently, it filters data fact types based on the sketch and selected data items before employing existing automatic data fact recommendation algorithms to infer data facts. Using large language models (GPT-3.5), InkSight converts data facts into effective natural language documentation. Users can conveniently fine-tune the generated documentation within InkSight. A user study with 12 participants demonstrated the usability and effectiveness of InkSight in expressing user intent and facilitating chart finding documentation. | Yanna Lin;Haotian Li 0001;Leni Yang;Aoyu Wu;Huamin Qu | Yanna Lin;Haotian Li;Leni Yang;Aoyu Wu;Huamin Qu | Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Harvard University, USA;Hong Kong University of Science and Technology, China | 10.1109/tvcg.2019.2934785;10.1109/tvcg.2021.3114802;10.1109/tvcg.2013.191;10.1109/tvcg.2020.3030378;10.1109/tvcg.2022.3209421;10.1109/tvcg.2020.3030403;10.1109/tvcg.2018.2865145;10.1109/tvcg.2012.275;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2021.3114826;10.1109/tvcg.2021.3114774;10.1109/tvcg.2019.2934668 | Computational Notebook,Sketch-based Interaction,Documentation,Visualization,Exploratory Data Analysis | 3 | 58 | 249 | |||
14 | Vis | 2023 | InvVis: Large-Scale Data Embedding for Invertible Visualization | 10.1109/tvcg.2023.3326597 | http://dx.doi.org/10.1109/TVCG.2023.3326597 | 1139 | 1149 | J | We present InvVis, a new approach for invertible visualization, which is reconstructing or further modifying a visualization from an image. InvVis allows the embedding of a significant amount of data, such as chart data, chart information, source code, etc., into visualization images. The encoded image is perceptually indistinguishable from the original one. We propose a new method to efficiently express chart data in the form of images, enabling large-capacity data embedding. We also outline a model based on the invertible neural network to achieve high-quality data concealing and revealing. We explore and implement a variety of application scenarios of InvVis. Additionally, we conduct a series of evaluation experiments to assess our method from multiple perspectives, including data embedding quality, data restoration accuracy, data encoding capacity, etc. The result of our experiments demonstrates the great potential of InvVis in invertible visualization. | Huayuan Ye;Chenhui Li;Yang Li;Changbo Wang | Huayuan Ye;Chenhui Li;Yang Li;Changbo Wang | School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China | 10.1109/tvcg.2019.2934810;10.1109/tvcg.2020.3030351;10.1109/tvcg.2017.2744320;10.1109/tvcg.2020.3030343 | Information visualization,information steganography,invertible visualization,invertible neural network | 3 | 57 | 218 | |||
15 | Vis | 2023 | AttentionViz: A Global View of Transformer Attention | 10.1109/tvcg.2023.3327163 | http://dx.doi.org/10.1109/TVCG.2023.3327163 | 262 | 272 | J | Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers that allows these models to learn rich, contextual relationships between elements of a sequence. The main idea behind our method is to visualize a joint embedding of the query and key vectors used by transformer models to compute attention. Unlike previous attention visualization techniques, our approach enables the analysis of global patterns across multiple input sequences. We create an interactive visualization tool, AttentionViz (demo: http://attentionviz.com), based on these joint query-key embeddings, and use it to study attention mechanisms in both language and vision transformers. We demonstrate the utility of our approach in improving model understanding and offering new insights about query-key interactions through several application scenarios and expert feedback. | Catherine Yeh;Yida Chen;Aoyu Wu;Cynthia Chen;Fernanda B. Viégas;Martin Wattenberg | Catherine Yeh;Yida Chen;Aoyu Wu;Cynthia Chen;Fernanda Viégas;Martin Wattenberg | Harvard University, USA;Harvard University, USA;Harvard University, USA;Harvard University, USA;Harvard University, USA;Harvard University, USA | 10.1109/tvcg.2020.3028976;10.1109/tvcg.2019.2934659;10.1109/tvcg.2021.3114683;10.1109/vast.2018.8802454;10.1109/tvcg.2022.3209458;10.1109/tvcg.2018.2865044 | Transformer,Attention,NLP,Computer Vision,Visual Analytics | 2 | 62 | 502 | |||
16 | Vis | 2023 | Swaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms | 10.1109/tvcg.2023.3327356 | http://dx.doi.org/10.1109/TVCG.2023.3327356 | 23 | 33 | J | We conducted a longitudinal study during the 2022 U.S. midterm elections, investigating the real-world impacts of uncertainty visualizations. Using our forecast model of the governor elections in 33 states, we created a website and deployed four uncertainty visualizations for the election forecasts: single quantile dotplot (1-Dotplot), dual quantile dotplots (2-Dotplot), dual histogram intervals (2-Interval), and Plinko quantile dotplot (Plinko), an animated design with a physical and probabilistic analogy. Our online experiment ran from Oct. 18, 2022, to Nov. 23, 2022, involving 1,327 participants from 15 states. We use Bayesian multilevel modeling and post-stratification to produce demographically-representative estimates of people's emotions, trust in forecasts, and political participation intention. We find that election forecast visualizations can heighten emotions, increase trust, and slightly affect people's intentions to participate in elections. 2-Interval shows the strongest effects across all measures; 1-Dotplot increases trust the most after elections. Both visualizations create emotional and trust gaps between different partisan identities, especially when a Republican candidate is predicted to win. Our qualitative analysis uncovers the complex political and social contexts of election forecast visualizations, showcasing that visualizations may provoke polarization. This intriguing interplay between visualization types, partisanship, and trust exemplifies the fundamental challenge of disentangling visualization from its context, underscoring a need for deeper investigation into the real-world impacts of visualizations. Our preprint and supplements are available at https://doi.org/osf.io/ajq8f. | Fumeng Yang;Mandi Cai;Chloe Mortenson;Hoda Fakhari;Ayse D. Lokmanoglu;Jessica Hullman;Steven Franconeri;Nicholas Diakopoulos;Erik C. Nisbet;Matthew Kay 0001 | Fumeng Yang;Mandi Cai;Chloe Mortenson;Hoda Fakhari;Ayse D. Lokmanoglu;Jessica Hullman;Steven Franconeri;Nicholas Diakopoulos;Erik C. Nisbet;Matthew Kay | Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA | 10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2020.3030335;10.1109/tvcg.2022.3209500;10.1109/tvcg.2022.3209457;10.1109/tvcg.2022.3209348;10.1109/tvcg.2022.3209383;10.1109/tvcg.2021.3114679 | Uncertainty visualization,Probabilistic forecasts,Elections,Emotions,Trust,Political participation,Longitudinal study | 2 | 92 | 481 | BP | ||
17 | Vis | 2023 | Socrates: Data Story Generation via Adaptive Machine-Guided Elicitation of User Feedback | 10.1109/tvcg.2023.3327363 | http://dx.doi.org/10.1109/TVCG.2023.3327363 | 131 | 141 | J | Visual data stories can effectively convey insights from data, yet their creation often necessitates intricate data exploration, insight discovery, narrative organization, and customization to meet the communication objectives of the storyteller. Existing automated data storytelling techniques, however, tend to overlook the importance of user customization during the data story authoring process, limiting the system's ability to create tailored narratives that reflect the user's intentions. We present a novel data story generation workflow that leverages adaptive machine-guided elicitation of user feedback to customize the story. Our approach employs an adaptive plug-in module for existing story generation systems, which incorporates user feedback through interactive questioning based on the conversation history and dataset. This adaptability refines the system's understanding of the user's intentions, ensuring the final narrative aligns with their goals. We demonstrate the feasibility of our approach through the implementation of an interactive prototype: Socrates. Through a quantitative user study with 18 participants that compares our method to a state-of-the-art data story generation algorithm, we show that Socrates produces more relevant stories with a larger overlap of insights compared to human-generated stories. We also demonstrate the usability of Socrates via interviews with three data analysts and highlight areas of future work. | Guande Wu;Shunan Guo;Jane Hoffswell;Gromit Yeuk-Yin Chan;Ryan A. Rossi;Eunyee Koh | Guande Wu;Shunan Guo;Jane Hoffswell;Gromit Yeuk-Yin Chan;Ryan A. Rossi;Eunyee Koh | New York University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA | 10.1109/tvcg.2016.2598647;10.1109/tvcg.2015.2467732;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2016.2598468;10.1109/tvcg.2021.3114804;10.1109/tvcg.2021.3114806;10.1109/vast.2015.7347625;10.1109/tvcg.2019.2934785;10.1109/tvcg.2012.260;10.1109/tvcg.2013.119;10.1109/tvcg.2021.3114802;10.1109/tvcg.2022.3209421;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2022.3209428;10.1109/tvcg.2020.3030467;10.1109/tvcg.2017.2745078;10.1109/tvcg.2019.2934398;10.1109/tvcg.2021.3114826;10.1109/tvcg.2021.3114774 | Narrative visualization,visual storytelling,conversational agent | 2 | 79 | 348 | |||
18 | Vis | 2023 | Leveraging Historical Medical Records as a Proxy via Multimodal Modeling and Visualization to Enrich Medical Diagnostic Learning | 10.1109/tvcg.2023.3326929 | http://dx.doi.org/10.1109/TVCG.2023.3326929 | 1238 | 1248 | J | Simulation-based Medical Education (SBME) has been developed as a cost-effective means of enhancing the diagnostic skills of novice physicians and interns, thereby mitigating the need for resource-intensive mentor-apprentice training. However, feedback provided in most SBME is often directed towards improving the operational proficiency of learners, rather than providing summative medical diagnoses that result from experience and time. Additionally, the multimodal nature of medical data during diagnosis poses significant challenges for interns and novice physicians, including the tendency to overlook or over-rely on data from certain modalities, and difficulties in comprehending potential associations between modalities. To address these challenges, we present DiagnosisAssistant, a visual analytics system that leverages historical medical records as a proxy for multimodal modeling and visualization to enhance the learning experience of interns and novice physicians. The system employs elaborately designed visualizations to explore different modality data, offer diagnostic interpretive hints based on the constructed model, and enable comparative analyses of specific patients. Our approach is validated through two case studies and expert interviews, demonstrating its effectiveness in enhancing medical training. | Yang Ouyang;Yuchen Wu;He Wang;Chenyang Zhang;Furui Cheng;Chang Jiang;Lixia Jin;Yuanwu Cao;Quan Li | Yang Ouyang;Yuchen Wu;He Wang;Chenyang Zhang;Furui Cheng;Chang Jiang;Lixia Jin;Yuanwu Cao;Quan Li | School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;Department of Computer Science, University of Illinois at Urbana-Champaign, USA;Department of Computer Science, ETH Zürich, Switzerland;Zhongshan Hospital Fudan University, China;Zhongshan Hospital Fudan University, China;Zhongshan Hospital Fudan University, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China | 10.1109/tvcg.2020.3030437;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802454;10.1109/tvcg.2021.3114840 | Multimodal Medical Dataset,Visual Analytics,Explainable Machine Learning | 2 | 74 | 310 | |||
19 | Vis | 2023 | The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics | 10.1109/tvcg.2023.3326598 | http://dx.doi.org/10.1109/TVCG.2023.3326598 | 1402 | 1412 | J | While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org. | Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda 0001 | Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda | University of Illinois Chicago, USA;Massachusetts Institute of Technology, USA;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil;Universidade Federal de Pernambuco, Brazil;University of Illinois Chicago, USA | 10.1109/vast.2009.5332584;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70574;10.1109/tvcg.2015.2467619;10.1109/tvcg.2006.144;10.1109/tvcg.2019.2934670;10.1109/vast.2015.7347636;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467449;10.1109/tvcg.2021.3114876;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598585;10.1109/tvcg.2022.3209474;10.1109/tvcg.2014.2346318;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864841;10.1109/tvcg.2018.2865152;10.1109/tvcg.2010.180;10.1109/tvcg.2010.177;10.1109/tvcg.2022.3209369 | Urban visual analytics,Urban analytics,Urban data,Visualization toolkit | 2 | 74 | 295 | |||
20 | Vis | 2023 | TactualPlot: Spatializing Data as Sound Using Sensory Substitution for Touchscreen Accessibility | 10.1109/tvcg.2023.3326937 | http://dx.doi.org/10.1109/TVCG.2023.3326937 | 836 | 846 | J | Tactile graphics are one of the best ways for a blind person to perceive a chart using touch, but their fabrication is often costly, time-consuming, and does not lend itself to dynamic exploration. Refreshable haptic displays tend to be expensive and thus unavailable to most blind individuals. We propose TactualPlot, an approach to sensory substitution where touch interaction yields auditory (sonified) feedback. The technique relies on embodied cognition for spatial awareness—i.e., individuals can perceive 2D touch locations of their fingers with reference to other 2D locations such as the relative locations of other fingers or chart characteristics that are visualized on touchscreens. Combining touch and sound in this way yields a scalable data exploration method for scatterplots where the data density under the user's fingertips is sampled. The sample regions can optionally be scaled based on how quickly the user moves their hand. Our development of TactualPlot was informed by formative design sessions with a blind collaborator, whose practice while using tactile scatterplots caused us to expand the technique for multiple fingers. We present results from an evaluation comparing our TactualPlot interaction technique to tactile graphics printed on swell touch paper. | Pramod Chundury;Yasmin Reyazuddin;J. Bern Jordan;Jonathan Lazar;Niklas Elmqvist | Pramod Chundury;Yasmin Reyazuddin;J. Bern Jordan;Jonathan Lazar;Niklas Elmqvist | University of Maryland, College Park, College Park, MD, USA;National Federation of the Blind, Baltimore, MD, USA;University of Maryland, College Park, College Park, MD, USA;University of Maryland, College Park, College Park, MD, USA;Aarhus University, Aarhus, Denmark | 10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114829;10.1109/tvcg.2021.3114846;10.1109/tvcg.2018.2865237;10.1109/tvcg.2017.2744184;10.1109/tvcg.2016.2598498 | Accessibility,sonification,multimodal interaction,crossmodal interaction,visualization | 2 | 61 | 212 | |||
21 | Vis | 2023 | Data Navigator: An Accessibility-Centered Data Navigation Toolkit | 10.1109/tvcg.2023.3327393 | http://dx.doi.org/10.1109/TVCG.2023.3327393 | 803 | 813 | J | Making data visualizations accessible for people with disabilities remains a significant challenge in current practitioner efforts. Existing visualizations often lack an underlying navigable structure, fail to engage necessary input modalities, and rely heavily on visual-only rendering practices. These limitations exclude people with disabilities, especially users of assistive technologies. To address these challenges, we present Data Navigator: a system built on a dynamic graph structure, enabling developers to construct navigable lists, trees, graphs, and flows as well as spatial, diagrammatic, and geographic relations. Data Navigator supports a wide range of input modalities: screen reader, keyboard, speech, gesture detection, and even fabricated assistive devices. We present 3 case examples with Data Navigator, demonstrating we can provide accessible navigation structures on top of raster images, integrate with existing toolkits at scale, and rapidly develop novel prototypes. Data Navigator is a step towards making accessible data visualizations easier to design and implement. | Frank Elavsky;Lucas Nadolskis;Dominik Moritz | Frank Elavsky;Lucas Nadolskis;Dominik Moritz | Carnegie Mellon University, USA;Carnegie Mellon University, USA;Carnegie Mellon University, USA | 10.1109/tvcg.2011.185;10.1109/tvcg.2021.3114829;10.1109/tvcg.2021.3114846;10.1109/tvcg.2021.3114770;10.1109/tvcg.2016.2599030 | accessibility,visualization,tools,technical materials,platforms,data interaction | 2 | 47 | 192 | |||
22 | Vis | 2023 | Adaptively Placed Multi-Grid Scene Representation Networks for Large-Scale Data Visualization | 10.1109/tvcg.2023.3327194 | http://dx.doi.org/10.1109/TVCG.2023.3327194 | 965 | 974 | J | Scene representation networks (SRNs) have been recently proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain to dynamically allocate more neural network resources where error is high in the volume, improving state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring expensive octree refining, pruning, and traversal like previous adaptive models. In our domain decomposition approach for representing large-scale data, we train a set of APMGSRNs in parallel on separate bricks of the volume to reduce training time while avoiding overhead necessary for an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for realtime neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN. | Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo 0001;Tom Peterka | Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo;Tom Peterka | The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA | 10.1109/tvcg.2012.274 | Scene representation network,deep learning,scientific visualization,volume rendering | 2 | 35 | 161 | |||
23 | Vis | 2023 | Dataopsy: Scalable and Fluid Visual Exploration using Aggregate Query Sculpting | 10.1109/tvcg.2023.3326594 | http://dx.doi.org/10.1109/TVCG.2023.3326594 | 186 | 196 | J | We present aggregate query sculpting (AQS), a faceted visual query technique for large-scale multidimensional data. As a “born scalable” query technique, AQS starts visualization with a single visual mark representing an aggregation of the entire dataset. The user can then progressively explore the dataset through a sequence of operations abbreviated as $\mathbb{P}^{6}$: pivot (facet an aggregate based on an attribute), partition (lay out a facet in space), peek (see inside a subset using an aggregate visual representation), pile (merge two or more subsets), project (extracting a subset into a new substrate), and prune (discard an aggregate not currently of interest). We validate AQS with Dataopsy, a prototype implementation of AQS that has been designed for fluid interaction on desktop and touch-based mobile devices. We demonstrate AQS and Dataopsy using two case studies and three application examples. | Md. Naimul Hoque;Niklas Elmqvist | Md Naimul Hoque;Niklas Elmqvist | University of Maryland, College Park, College Park, MD, USA;Aarhus University, Aarhus, Denmark | 10.1109/tvcg.2006.120;10.1109/vast47406.2019.8986948;10.1109/tvcg.2016.2598624;10.1109/tvcg.2012.252;10.1109/tvcg.2008.153;10.1109/tvcg.2022.3209484;10.1109/tvcg.2013.223;10.1109/tvcg.2009.145;10.1109/tvcg.2022.3209421;10.1109/tvcg.2006.166;10.1109/tvcg.2006.142;10.1109/infvis.2000.885086;10.1109/tvcg.2009.108;10.1109/tvcg.2015.2467051 | Multidimensional data visualization,multivariate graphs,visual queries,visual exploration | 2 | 54 | 153 | |||
24 | Vis | 2023 | From Shock to Shift: Data Visualization for Constructive Climate Journalism | 10.1109/tvcg.2023.3327185 | http://dx.doi.org/10.1109/TVCG.2023.3327185 | 1413 | 1423 | J | We present a multi-dimensional, multi-level, and multi-channel approach to data visualization for the purpose of constructive climate journalism. Data visualization has assumed a central role in environmental journalism and is often used in data stories to convey the dramatic consequences of climate change and other ecological crises. However, the emphasis on the catastrophic impacts of climate change tends to induce feelings of fear, anxiety, and apathy in readers. Climate mitigation, adaptation, and protection—all highly urgent in the face of the climate crisis—are at risk of being overlooked. These topics are more difficult to communicate as they are hard to convey on varying levels of locality, involve multiple interconnected sectors, and need to be mediated across various channels from the printed newspaper to social media platforms. So far, there has been little research on data visualization to enhance affective engagement with data about climate protection as part of solution-oriented reporting of climate change. With this research we characterize the unique challenges of constructive climate journalism for data visualization and share findings from a research and design study in collaboration with a national newspaper in Germany. Using the affordances and aesthetics of travel postcards, we present Klimakarten, a data journalism project on the progress of climate protection at multiple spatial scales (from national to local), across five key sectors (agriculture, buildings, energy, mobility, and waste), and for print and online use. The findings from quantitative and qualitative analysis of reader feedback confirm our overall approach and suggest implications for future work. | Francesca Morini;Anna Eschenbacher;Johanna Hartmann;Marian Dörk | Francesca Morini;Anna Eschenbacher;Johanna Hartmann;Marian Dörk | Södertörn University and University of Applied Sciences Potsdam, Germany;Filmuniversität Babelsberg, Germany;Filmuniversität Babelsberg, Germany;University of Applied Sciences Potsdam, Germany | 10.1109/tvcg.2014.2346323;10.1109/tvcg.2012.213;10.1109/tvcg.2016.2598647;10.1109/tvcg.2012.221 | Constructive Climate Journalism,Frameworks,Storytelling,Journalism | 1 | 57 | 784 | |||
25 | Vis | 2023 | Dead or Alive: Continuous Data Profiling for Interactive Data Science | 10.1109/tvcg.2023.3327367 | http://dx.doi.org/10.1109/TVCG.2023.3327367 | 197 | 207 | J | Profiling data by plotting distributions and analyzing summary statistics is a critical step throughout data analysis. Currently, this process is manual and tedious since analysts must write extra code to examine their data after every transformation. This inefficiency may lead to data scientists profiling their data infrequently, rather than after each transformation, making it easy for them to miss important errors or insights. We propose continuous data profiling as a process that allows analysts to immediately see interactive visual summaries of their data throughout their data analysis to facilitate fast and thorough analysis. Our system, AutoProfiler, presents three ways to support continuous data profiling: (1) it automatically displays data distributions and summary statistics to facilitate data comprehension; (2) it is live, so visualizations are always accessible and update automatically as the data updates; (3) it supports follow up analysis and documentation by authoring code for the user in the notebook. In a user study with 16 participants, we evaluate two versions of our system that integrate different levels of automation: both automatically show data profiles and facilitate code authoring, however, one version updates reactively (“live”) and the other updates only on demand (“dead”). We find that both tools, dead or alive, facilitate insight discovery with 91% of user-generated insights originating from the tools rather than manual profiling code written by users. Participants found live updates intuitive and felt it helped them verify their transformations while those with on-demand profiles liked the ability to look at past visualizations. We also present a longitudinal case study on how AutoProfiler helped domain scientists find serendipitous insights about their data through automatic, live data profiles. Our results have implications for the design of future tools that offer automated data analysis support. | Will Epperson;Vaishnavi Gorantla;Dominik Moritz;Adam Perer | Will Epperson;Vaishnavi Gorantla;Dominik Moritz;Adam Perer | Carnegie Mellon University, USA;Carnegie Mellon University, USA;Carnegie Mellon University, USA;Carnegie Mellon University, USA | 10.1109/tvcg.2018.2865040;10.1109/tvcg.2012.219;10.1109/tvcg.2015.2467191 | Data Profiling,Data Quality,Exploratory Data Analysis,Interactive Data Science | 1 | 51 | 607 | HM | ||
26 | Vis | 2023 | Heuristics for Supporting Cooperative Dashboard Design | 10.1109/tvcg.2023.3327158 | http://dx.doi.org/10.1109/TVCG.2023.3327158 | 370 | 380 | J | Dashboards are no longer mere static displays of metrics; through functionality such as interaction and storytelling, they have evolved to support analytic and communicative goals like monitoring and reporting. Existing dashboard design guidelines, however, are often unable to account for this expanded scope as they largely focus on best practices for visual design. In contrast, we frame dashboard design as facilitating an analytical conversation: a cooperative, interactive experience where a user may interact with, reason about, or freely query the underlying data. By drawing on established principles of conversational flow and communication, we define the concept of a cooperative dashboard as one that enables a fruitful and productive analytical conversation, and derive a set of 39 dashboard design heuristics to support effective analytical conversations. To assess the utility of this framing, we asked 52 computer science and engineering graduate students to apply our heuristics to critique and design dashboards as part of an ungraded, opt-in homework assignment. Feedback from participants demonstrates that our heuristics surface new reasons dashboards may fail, and encourage a more fluid, supportive, and responsive style of dashboard design. Our approach suggests several compelling directions for future work, including dashboard authoring tools that better anticipate conversational turn-taking, repair, and refinement and extending cooperative principles to other analytical workflows. | Vidya Setlur;Michael Correll;Arvind Satyanarayan;Melanie Tory | Vidya Setlur;Michael Correll;Arvind Satyanarayan;Melanie Tory | Tableau Research, USA;Tableau Research, USA;MIT CSAIL, USA;Northeastern University, USA | 10.1109/tvcg.2022.3209448;10.1109/tvcg.2021.3114760;10.1109/tvcg.2020.3030338;10.1109/tvcg.2019.2934283;10.1109/tvcg.2016.2599058;10.1109/tvcg.2017.2744684;10.1109/tvcg.2017.2745240;10.1109/tvcg.2021.3114860;10.1109/tvcg.2017.2744319;10.1109/tvcg.2022.3209500;10.1109/tvcg.2022.3209451;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2022.3209409;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2745219;10.1109/vast47406.2019.8986918;10.1109/vast.2017.8585669;10.1109/tvcg.2021.3114862;10.1109/tvcg.2021.3114826;10.1109/tvcg.2022.3209493 | Gricean maxims,interactive visualization,conversation initiation,grounding,turn-taking,repair and refinement | 1 | 99 | 526 | |||
27 | Vis | 2023 | TimeSplines: Sketch-Based Authoring of Flexible and Idiosyncratic Timelines | 10.1109/tvcg.2023.3326520 | http://dx.doi.org/10.1109/TVCG.2023.3326520 | 34 | 44 | J | Timelines are essential for visually communicating chronological narratives and reflecting on the personal and cultural significance of historical events. Existing visualization tools tend to support conventional linear representations, but fail to capture personal idiosyncratic conceptualizations of time. In response, we built TimeSplines, a visualization authoring tool that allows people to sketch multiple free-form temporal axes and populate them with heterogeneous, time-oriented data via incremental and lazy data binding. Authors can bend, compress, and expand temporal axes to emphasize or de-emphasize intervals based on their personal importance; they can also annotate the axes with text and figurative elements to convey contextual information. The results of two user studies show how people appropriate the concepts in TimeSplines to express their own conceptualization of time, while our curated gallery of images demonstrates the expressive potential of our approach. | Anna Offenwanger;Matthew Brehmer;Fanny Chevalier;Theophanis Tsandilas | Anna Offenwanger;Matthew Brehmer;Fanny Chevalier;Theophanis Tsandilas | Université Paris Saclay, CNRS, Inria, LISN, France;Tableau Research, USA;Departments of Computer Science and Statistical Sciences, University of Toronto, Canada;Université Paris Saclay, CNRS, Inria, LISN, France | 10.1109/tvcg.2015.2467851;10.1109/tvcg.2016.2598609;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598876;10.1109/tvcg.2017.2744118;10.1109/tvcg.2013.191;10.1109/tvcg.2022.3209451;10.1109/tvcg.2013.200;10.1109/tvcg.2021.3114959;10.1109/tvcg.2017.2743918;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2015.2467153;10.1109/tvcg.2012.212;10.1109/tvcg.2020.3030476;10.1109/infvis.1999.801851;10.1109/tvcg.2015.2467751;10.1109/tvcg.2018.2865076;10.1109/tvcg.2011.195 | Temporal Data,interaction design,communication / presentation,storytelling,sketch-based interface,lazy data binding | 1 | 79 | 520 | BP | ||
28 | Vis | 2023 | Knowledge Graphs in Practice: Characterizing their Users, Challenges, and Visualization Opportunities | 10.1109/tvcg.2023.3326904 | http://dx.doi.org/10.1109/TVCG.2023.3326904 | 584 | 594 | J | This study presents insights from interviews with nineteen Knowledge Graph (KG) practitioners who work in both enterprise and academic settings on a wide variety of use cases. Through this study, we identify critical challenges experienced by KG practitioners when creating, exploring, and analyzing KGs that could be alleviated through visualization design. Our findings reveal three major personas among KG practitioners – KG Builders, Analysts, and Consumers – each of whom have their own distinct expertise and needs. We discover that KG Builders would benefit from schema enforcers, while KG Analysts need customizable query builders that provide interim query results. For KG Consumers, we identify a lack of efficacy for node-link diagrams, and the need for tailored domain-specific visualizations to promote KG adoption and comprehension. Lastly, we find that implementing KGs effectively in practice requires both technical and social solutions that are not addressed with current tools, technologies, and collaborative workflows. From the analysis of our interviews, we distill several visualization research directions to improve KG usability, including knowledge cards that balance digestibility and discoverability, timeline views to track temporal changes, interfaces that support organic discovery, and semantic explanations for AI and machine learning predictions. | Harry X. Li;Gabriel Appleby;Camelia Daniela Brumar;Remco Chang;Ashley Suh 0001 | Harry Li;Gabriel Appleby;Camelia Daniela Brumar;Remco Chang;Ashley Suh | MIT Lincoln Laboratory, USA;Tufts University, USA;Tufts University, USA;Tufts University, USA;MIT Lincoln Laboratory, USA | 10.1109/tvcg.2018.2865040;10.1109/tvcg.2011.185;10.1109/tvcg.2020.3030443;10.1109/tvcg.2008.178;10.1109/tvcg.2022.3209453;10.1109/tvcg.2012.219;10.1109/tvcg.2021.3114863;10.1109/tvcg.2014.2346452;10.1109/tvcg.2020.3030378;10.1109/tvcg.2018.2865149;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934802 | Knowledge graphs,visualization techniques and methodologies,human factors,visual communication | 1 | 81 | 490 | |||
29 | Vis | 2023 | NL2Color: Refining Color Palettes for Charts with Natural Language | 10.1109/tvcg.2023.3326522 | http://dx.doi.org/10.1109/TVCG.2023.3326522 | 814 | 824 | J | Choice of color is critical to creating effective charts with an engaging, enjoyable, and informative reading experience. However, designing a good color palette for a chart is a challenging task for novice users who lack related design expertise. For example, they often find it difficult to articulate their abstract intentions and translate these intentions into effective editing actions to achieve a desired outcome. In this work, we present NL2Color, a tool that allows novice users to refine chart color palettes using natural language expressions of their desired outcomes. We first collected and categorized a dataset of 131 triplets, each consisting of an original color palette of a chart, an editing intent, and a new color palette designed by human experts according to the intent. Our tool employs a large language model (LLM) to substitute the colors in original palettes and produce new color palettes by selecting some of the triplets as few-shot prompts. To evaluate our tool, we conducted a comprehensive two-stage evaluation, including a crowd-sourcing study ($\mathrm{N}=71$) and a within-subjects user study ($\mathrm{N}=12$). The results indicate that the quality of the color palettes revised by NL2Color has no significantly large difference from those designed by human experts. The participants who used NL2Color obtained revised color palettes to their satisfaction in a shorter period and with less effort. | Chuhan Shi;Weiwei Cui;Chengzhong Liu;Chengbo Zheng;Haidong Zhang;Qiong Luo 0001;Xiaojuan Ma | Chuhan Shi;Weiwei Cui;Chengzhong Liu;Chengbo Zheng;Haidong Zhang;Qiong Luo;Xiaojuan Ma | Southeast University, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China | 10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934785;10.1109/tvcg.2021.3114848;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2865147;10.1109/tvcg.2015.2467471;10.1109/tvcg.2019.2934284;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934668 | chart,color palette,natural language,large language model | 1 | 57 | 486 | |||
30 | Vis | 2023 | Transitioning to a Commercial Dashboarding System: Socio-Technical Observations and Opportunities | 10.1109/tvcg.2023.3326525 | http://dx.doi.org/10.1109/TVCG.2023.3326525 | 381 | 391 | J | Many long-established, traditional manufacturing businesses are becoming more digital and data-driven to improve their production. These companies are embracing visual analytics in these transitions through their adoption of commercial dashboarding systems. Although a number of studies have looked at the technical challenges of adopting these systems, very few have focused on the socio-technical issues that arise. In this paper, we report on the results of an interview study with 17 participants working in a range of roles at a long-established, traditional manufacturing company as they adopted Microsoft Power BI. The results highlight a number of socio-technical challenges the employees faced, including difficulties in training, using and creating dashboards, and transitioning to a modern digital company. Based on these results, we propose a number of opportunities for both companies and visualization researchers to improve these difficult transitions, as well as opportunities for rethinking how we design dashboarding systems for real-world use. | Conny Walchshofer;Vaishali Dhanoa;Marc Streit;Miriah Meyer | Conny Walchshofer;Vaishali Dhanoa;Marc Streit;Miriah Meyer | Johannes Kepler University Linz, Austria;Pro2 Future GmbH, Austria;Johannes Kepler University Linz, Austria;Linköping University, Sweden | 10.1109/tvcg.2018.2865040;10.1109/tvcg.2016.2598647;10.1109/tvcg.2022.3209448;10.1109/tvcg.2022.3209490;10.1109/tvcg.2021.3114830;10.1109/tvcg.2017.2743990;10.1109/tvcg.2010.164;10.1109/tvcg.2012.219;10.1109/tvcg.2022.3209451;10.1109/tvcg.2019.2934593;10.1109/tvcg.2021.3114959;10.1109/tvcg.2018.2864903;10.1109/tvcg.2012.213;10.1109/tvcg.2010.179;10.1109/tvcg.2009.162;10.1109/vast.2012.6400554;10.1109/tvcg.2022.3209493 | Interview study,socio-technical challenges,visual analytics | 1 | 62 | 461 | |||
31 | Vis | 2023 | Supporting Guided Exploratory Visual Analysis on Time Series Data with Reinforcement Learning | 10.1109/tvcg.2023.3327200 | http://dx.doi.org/10.1109/TVCG.2023.3327200 | 1172 | 1182 | J | The exploratory visual analysis (EVA) of time series data uses visualization as the main output medium and input interface for exploring new data. However, for users who lack visual analysis expertise, interpreting and manipulating EVA can be challenging. Thus, providing guidance on EVA is necessary and two relevant questions need to be answered. First, how to recommend interesting insights to provide a first glance at data and help develop an exploration goal. Second, how to provide step-by-step EVA suggestions to help identify which parts of the data to explore. In this work, we present a reinforcement learning (RL)-based system, Visail, which generates EVA sequences to guide the exploration of time series data. As a user uploads a time series dataset, Visail can generate step-by-step EVA suggestions, while each step is visualized as an annotated chart combined with textual descriptions. The RL-based algorithm uses exploratory data analysis knowledge to construct the state and action spaces for the agent to imitate human analysis behaviors in data exploration tasks. In this way, the agent learns the strategy of generating coherent EVA sequences through a well-designed network. To evaluate the effectiveness of our system, we conducted an ablation study, a user study, and two case studies. The results of our evaluation suggested that Visail can provide effective guidance on supporting EVA on time series data. | Yang Shi 0007;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao 0001 | Yang Shi;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao | Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Huawei Cloud Computing Technologies Co., Ltd., China;Huawei Cloud Computing Technologies Co., Ltd., China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China | 10.1109/tvcg.2018.2865040;10.1109/vast.2014.7042480;10.1109/tvcg.2016.2598876;10.1109/tvcg.2016.2598468;10.1109/tvcg.2022.3209468;10.1109/tvcg.2021.3114875;10.1109/tvcg.2020.3028889;10.1109/tvcg.2018.2865077;10.1109/tvcg.2012.229;10.1109/tvcg.2018.2864526;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030403;10.1109/tvcg.2022.3209409;10.1109/tvcg.2022.3209486;10.1109/tvcg.2012.191;10.1109/tvcg.2018.2865145;10.1109/tvcg.2015.2467751;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/vast.2009.5332595;10.1109/tvcg.2021.3114826;10.1109/tvcg.2023.3326913;10.1109/tvcg.2021.3114774;10.1109/tvcg.2011.195;10.1109/tvcg.2021.3114865 | Time Series Data,Exploratory Visual Analysis,Reinforcement Learning | 1 | 77 | 448 | |||
32 | Vis | 2023 | CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering | 10.1109/tvcg.2023.3327201 | http://dx.doi.org/10.1109/TVCG.2023.3327201 | 770 | 780 | J | Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev. | Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen 0001;Danielle Albers Szafir;Jinwook Seo | Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen;Danielle Albers Szafir;Jinwook Seo | Seoul National University, South Korea;University of North Carolina, Chapel Hill, USA;UNIST, South Korea;University of Utah, USA;Seoul National University, South Korea;Seoul National University, South Korea | 10.1109/infvis.2005.1532136;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/tvcg.2014.2346572;10.1109/tvcg.2021.3114833;10.1109/tvcg.2017.2744718;10.1109/tvcg.2019.2934811;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864912;10.1109/tvcg.2021.3114694 | Cluster,scatterplot,perception,cluster analysis,cluster ambiguity,visual quality measure | 1 | 86 | 382 | HM | ||
33 | Vis | 2023 | VideoPro: A Visual Analytics Approach for Interactive Video Programming | 10.1109/tvcg.2023.3326586 | http://dx.doi.org/10.1109/TVCG.2023.3326586 | 87 | 97 | J | Constructing supervised machine learning models for real-world video analysis requires substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews. | Jianben He;Xingbo Wang 0001;Kamkwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu | Jianben He;Xingbo Wang;Kam Kwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu | Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Tsinghua University, Beijing, China;Hong Kong University of Science and Technology, Hong Kong, China;Sichuan University, Chengdu, China;Sichuan University, Chengdu, China;Hong Kong University of Science and Technology, Hong Kong, China | 10.1109/vast.2016.7883520;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114806;10.1109/tvcg.2023.3327168;10.1109/tvcg.2022.3209466;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114793;10.1109/tvcg.2019.2934266;10.1109/tvcg.2016.2598695;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114789;10.1109/tvcg.2011.208;10.1109/tvcg.2021.3114822;10.1109/vast47406.2019.8986917;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209452;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391 | Interactive machine learning,data programming,video exploration and analysis | 1 | 83 | 381 | |||
34 | Vis | 2023 | VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching | 10.1109/tvcg.2023.3327161 | http://dx.doi.org/10.1109/TVCG.2023.3327161 | 458 | 468 | J | Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool, that supports interactive badminton game analysis in an immersive environment based on 3D reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to seamlessly move from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) with flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players in real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence. | Tica Lin;Alexandre Aouididi;Zhutian Chen;Johanna Beyer;Hanspeter Pfister;Jui-Hsien Wang | Tica Lin;Alexandre Aouididi;Chen Zhu-Tian;Johanna Beyer;Hanspeter Pfister;Jui-Hsien Wang | Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Adobe Research, USA | 10.1109/tvcg.2021.3114861;10.1109/vast.2014.7042478;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2022.3209353;10.1109/visual.2001.964496;10.1109/tvcg.2017.2745181;10.1109/tvcg.2009.108;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030427;10.1109/tvcg.2020.3030392 | Sports Analytics,Immersive Analytics,Data Visualization | 1 | 61 | 375 | |||
35 | Vis | 2023 | From Information to Choice: A Critical Inquiry Into Visualization Tools for Decision Making | 10.1109/tvcg.2023.3326593 | http://dx.doi.org/10.1109/TVCG.2023.3326593 | 359 | 369 | J | In the face of complex decisions, people often engage in a three-stage process that spans from (1) exploring and analyzing pertinent information (intelligence); (2) generating and exploring alternative options (design); and ultimately culminating in (3) selecting the optimal decision by evaluating discerning criteria (choice). We can fairly assume that all good visualizations aid in the “intelligence” stage by enabling data exploration and analysis. Yet, to what degree and how do visualization systems currently support the other decision making stages, namely “design” and “choice”? To further explore this question, we conducted a comprehensive review of decision-focused visualization tools by examining publications in major visualization journals and conferences, including VIS, EuroVis, and CHI, spanning all available years. We employed a deductive coding method and in-depth analysis to assess whether and how visualization tools support design and choice. Specifically, we examined each visualization tool by (i) its degree of visibility for displaying decision alternatives, criteria, and preferences, and (ii) its degree of flexibility for offering means to manipulate the decision alternatives, criteria, and preferences with interactions such as adding, modifying, changing mapping, and filtering. Our review highlights the opportunities and challenges that decision-focused visualization tools face in realizing their full potential to support all stages of the decision making process. It reveals a surprising scarcity of tools that support all stages, and while most tools excel in offering visibility for decision criteria and alternatives, the degree of flexibility to manipulate these elements is often limited, and the lack of tools that accommodate decision preferences and their elicitation is notable. Based on our findings, to better support the choice stage, future research could explore enhancing flexibility levels and variety, exploring novel visualization paradigms, increasing algorithmic support, and ensuring that this automation is user-controlled via the enhanced flexibility levels. Our curated list of the 88 surveyed visualization tools is available in the OSF link (https://osf.io/nrasz/?view_only=b92a90a34ae241449b5f2cd33383bfcb). | Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara | Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara | Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands | 10.1109/vast.2011.6102457;10.1109/tvcg.2019.2934262;10.1109/vast.2007.4388995;10.1109/visual.1999.809923;10.1109/tvcg.2021.3114830;10.1109/tvcg.2021.3114760;10.1109/tvcg.2021.3114803;10.1109/tvcg.2018.2865233;10.1109/tvcg.2017.2745138;10.1109/tvcg.2019.2934283;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3030469;10.1109/vast.2015.7347636;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.173;10.1109/tvcg.2013.134;10.1109/tvcg.2020.3030335;10.1109/tvcg.2017.2744299;10.1109/tvcg.2018.2865159;10.1109/tvcg.2022.3209451;10.1109/tvcg.2016.2598432;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2864913;10.1109/tvcg.2016.2598589;10.1109/tvcg.2012.261;10.1109/vast.2009.5333920;10.1109/tvcg.2015.2468011;10.1109/vast.2017.8585669;10.1109/tvcg.2017.2745078;10.1109/tvcg.2010.223;10.1109/tvcg.2018.2865126;10.1109/tvcg.2020.3030458;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2744738;10.1109/tvcg.2018.2865020 | Decision making,visualization,state of the art,review,survey,design,interaction,multi-criteria decision making,MCDM | 1 | 106 | 374 | |||
36 | Vis | 2023 | InnovationInsights: A Visual Analytics Approach for Understanding the Dual Frontiers of Science and Technology | 10.1109/tvcg.2023.3327387 | http://dx.doi.org/10.1109/TVCG.2023.3327387 | 518 | 528 | J | Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation. Through both the case studies and the engagement project, we find that our system not only meets our original goals of design, allowing users to better identify the sources of technical inventions and to understand the broad impact of scientific research; it also goes beyond these purposes to enable an array of new applications for researchers and research institutions, ranging from identifying untapped innovation potential within an institution to forging new collaboration opportunities between science and industry. | Yifang Wang 0001;Yifan Qian;Xiaoyu Qi;Nan Cao 0001;Dashun Wang | Yifang Wang;Yifan Qian;Xiaoyu Qi;Nan Cao;Dashun Wang | The Center for Science of Science and Innovation, Northwestern University, USA;The Center for Science of Science and Innovation, Northwestern University, USA;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;The Center for Science of Science and Innovation, Northwestern University, USA | 10.1109/tvcg.2022.3209427;10.1109/tvcg.2011.202;10.1109/tvcg.2011.226;10.1109/tvcg.2018.2864826;10.1109/tvcg.2012.252;10.1109/tvcg.2013.162;10.1109/visual.2001.964539;10.1109/tvcg.2022.3209422;10.1109/tvcg.2018.2865022;10.1109/tvcg.2019.2934667;10.1109/tvcg.2017.2745158;10.1109/tvcg.2021.3114820;10.1109/tvcg.2018.2865149;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.213;10.1109/tvcg.2021.3114787;10.1109/vast.2011.6102453;10.1109/tvcg.2021.3114794;10.1109/tvcg.2021.3114790;10.1109/tvcg.2022.3209360;10.1109/tvcg.2015.2468151 | Science of Science,Innovation,Academic Profiles,Patent Data,Publication Data,Visual Analytics | 1 | 76 | 369 | HM | ||
37 | Vis | 2023 | Class-Constrained t-SNE: Combining Data Features and Class Probabilities | 10.1109/tvcg.2023.3326600 | http://dx.doi.org/10.1109/TVCG.2023.3326600 | 164 | 174 | J | Data features and class probabilities are two main perspectives when, e.g., evaluating model results and identifying problematic items. Class probabilities represent the likelihood that each instance belongs to a particular class, which can be produced by probabilistic classifiers or even human labeling with uncertainty. Since both perspectives are multi-dimensional data, dimensionality reduction (DR) techniques are commonly used to extract informative characteristics from them. However, existing methods either focus solely on the data feature perspective or rely on class probability estimates to guide the DR process. In contrast to previous work where separate views are linked to conduct the analysis, we propose a novel approach, class-constrained t-SNE, that combines data features and class probabilities in the same DR result. Specifically, we combine them by balancing two corresponding components in a cost function to optimize the positions of data points and iconic representation of classes – class landmarks. Furthermore, an interactive user-adjustable parameter balances these two components so that users can focus on the weighted perspectives of interest and also empowers a smooth visual transition between varying perspectives to preserve the mental map. We illustrate its application potential in model evaluation and visual-interactive labeling. A comparative analysis is performed to evaluate the DR results. | Linhao Meng;Stef van den Elzen;Nicola Pezzotti;Anna Vilanova | Linhao Meng;Stef van den Elzen;Nicola Pezzotti;Anna Vilanova | Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands | 10.1109/tvcg.2014.2346660;10.1109/tvcg.2017.2744818;10.1109/tvcg.2013.212;10.1109/vast.2010.5652443;10.1109/tvcg.2012.277;10.1109/visual.1997.663916;10.1109/vast.2012.6400492;10.1109/tvcg.2016.2598445;10.1109/tvcg.2018.2864843;10.1109/tvcg.2019.2934631;10.1109/tvcg.2011.212;10.1109/tvcg.2019.2934307;10.1109/tvcg.2016.2598828;10.1109/visual.2000.885740;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2864499 | Dimensionality reduction,t-distributed stochastic neighbor embedding,constraint integration | 1 | 60 | 346 | |||
38 | Vis | 2023 | HealthPrism: A Visual Analytics System for Exploring Children's Physical and Mental Health Profiles with Multimodal Data | 10.1109/tvcg.2023.3326943 | http://dx.doi.org/10.1109/TVCG.2023.3326943 | 1205 | 1215 | J | The correlation between children's personal and family characteristics (e.g., demographics and socioeconomic status) and their physical and mental health status has been extensively studied across various research domains, such as public health, medicine, and data science. Such studies can provide insights into the underlying factors affecting children's health and aid in the development of targeted interventions to improve their health outcomes. However, with the availability of multiple data sources, including context data (i.e., the background information of children) and motion data (i.e., sensor data measuring activities of children), new challenges have arisen due to the large-scale, heterogeneous, and multimodal nature of the data. Existing statistical hypothesis-based and learning model-based approaches have been inadequate for comprehensively analyzing the complex correlation between multimodal features and multi-dimensional health outcomes due to the limited information revealed. In this work, we first distill a set of design requirements from multiple levels through conducting a literature review and iteratively interviewing 11 experts from multiple domains (e.g., public health and medicine). Then, we propose HealthPrism, an interactive visual and analytics system for assisting researchers in exploring the importance and influence of various context and motion features on children's health status from multi-level perspectives. Within HealthPrism, a multimodal learning model with a gate mechanism is proposed for health profiling and cross-modality feature importance comparison. A set of visualization components is designed for experts to explore and understand multimodal data freely. We demonstrate the effectiveness and usability of HealthPrism through quantitative evaluation of the model performance, case studies, and expert interviews in associated domains. | Zhihan Jiang;Handi Chen;Rui Zhou;Jing Deng;Xinchen Zhang;Running Zhao;Cong Xie;Yifang Wang 0001;Edith C. H. Ngai | Zhihan Jiang;Handi Chen;Rui Zhou;Jing Deng;Xinchen Zhang;Running Zhao;Cong Xie;Yifang Wang;Edith C.H. Ngai | University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;Tencent, China;Kellogg School of Management, Northwestern University, USA;University of Hong Kong, China | 10.1109/tvcg.2021.3114836;10.1109/tvcg.2020.3030424;10.1109/tvcg.2018.2864885;10.1109/tvcg.2016.2598588;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/tvcg.2015.2467555;10.1109/tvcg.2015.2467325;10.1109/tvcg.2021.3114794 | Visual Analytics,Health Profiling,Multimodal Learning,Context Data,Motion Data | 1 | 68 | 328 | |||
39 | Vis | 2023 | Classes are Not Clusters: Improving Label-Based Evaluation of Dimensionality Reduction | 10.1109/tvcg.2023.3327187 | http://dx.doi.org/10.1109/TVCG.2023.3327187 | 781 | 791 | J | A common way to evaluate the reliability of dimensionality reduction (DR) embeddings is to quantify how well labeled classes form compact, mutually separated clusters in the embeddings. This approach is based on the assumption that the classes stay as clear clusters in the original high-dimensional space. However, in reality, this assumption can be violated; a single class can be fragmented into multiple separated clusters, and multiple classes can be merged into a single cluster. We thus cannot always assure the credibility of the evaluation using class labels. In this paper, we introduce two novel quality measures—Label-Trustworthiness and Label-Continuity (Label-T&C)—advancing the process of DR evaluation based on class labels. Instead of assuming that classes are well-clustered in the original space, Label-T&C work by (1) estimating the extent to which classes form clusters in the original and embedded spaces and (2) evaluating the difference between the two. A quantitative evaluation showed that Label-T&C outperform widely used DR evaluation measures (e.g., Trustworthiness and Continuity, Kullback-Leibler divergence) in terms of the accuracy in assessing how well DR embeddings preserve the cluster structure, and are also scalable. Moreover, we present case studies demonstrating that Label-T&C can be successfully used for revealing the intrinsic characteristics of DR techniques and their hyperparameters. | Hyeon Jeon;Yun-Hsin Kuo;Michaël Aupetit 0001;Kwan-Liu Ma;Jinwook Seo | Hyeon Jeon;Yun-Hsin Kuo;Michaël Aupetit;Kwan-Liu Ma;Jinwook Seo | Seoul National University, South Korea;University of California, Davis, USA;Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar;University of California, Davis, USA;Seoul National University, South Korea | 10.1109/tvcg.2021.3114833;10.1109/tvcg.2011.220;10.1109/tvcg.2017.2745085;10.1109/tvcg.2020.3030365;10.1109/tvcg.2013.153;10.1109/tvcg.2017.2745258;10.1109/tvcg.2022.3209423;10.1109/tvcg.2021.3114694 | Dimensionality Reduction,Reliability,Clustering,Clustering Validation Measures,Dimensionality Reduction Evaluation | 1 | 74 | 317 | |||
40 | Vis | 2023 | Differentiable Design Galleries: A Differentiable Approach to Explore the Design Space of Transfer Functions | 10.1109/tvcg.2023.3327371 | http://dx.doi.org/10.1109/TVCG.2023.3327371 | 1369 | 1379 | J | The transfer function is crucial for direct volume rendering (DVR) to create an informative visual representation of volumetric data. However, manually adjusting the transfer function to achieve the desired DVR result can be time-consuming and unintuitive. In this paper, we propose Differentiable Design Galleries, an image-based transfer function design approach to help users explore the design space of transfer functions by taking advantage of the recent advances in deep learning and differentiable rendering. Specifically, we leverage neural rendering to learn a latent design space, which is a continuous manifold representing various types of implicit transfer functions. We further provide a set of interactive tools to support intuitive query, navigation, and modification to obtain the target design, which is represented as a neural-rendered design exemplar. The explicit transfer function can be reconstructed from the target design with a differentiable direct volume renderer. Experimental results on real volumetric data demonstrate the effectiveness of our method. | Bo Pan;Jiaying Lu 0005;Haoxuan Li;Weifeng Chen 0003;Yiyao Wang;Minfeng Zhu;Chenhao Yu;Wei Chen 0001 | Bo Pan;Jiaying Lu;Haoxuan Li;Weifeng Chen;Yiyao Wang;Minfeng Zhu;Chenhao Yu;Wei Chen | State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University of Finance&Economics, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China | 10.1109/tvcg.2008.162;10.1109/tvcg.2009.189;10.1109/tvcg.2011.261;10.1109/visual.1996.568113;10.1109/tvcg.2019.2934312;10.1109/tvcg.2012.231;10.1109/tvcg.2015.2467294;10.1109/visual.2003.1250414;10.1109/visual.2003.1250412;10.1109/visual.2005.1532807;10.1109/tvcg.2009.185;10.1109/tvcg.2006.148;10.1109/tvcg.2021.3114769 | Transfer function,direct volume rendering,deep learning,generative models,differentiable rendering | 1 | 50 | 305 | |||
41 | Vis | 2023 | Cluster-Aware Grid Layout | 10.1109/tvcg.2023.3326934 | http://dx.doi.org/10.1109/TVCG.2023.3326934 | 240 | 250 | J | Grid visualizations are widely used in many applications to visually explain a set of data and their proximity relationships. However, existing layout methods face difficulties when dealing with the inherent cluster structures within the data. To address this issue, we propose a cluster-aware grid layout method that aims to better preserve cluster structures by simultaneously considering proximity, compactness, and convexity in the optimization process. Our method utilizes a hybrid optimization strategy that consists of two phases. The global phase aims to balance proximity and compactness within each cluster, while the local phase ensures the convexity of cluster shapes. We evaluate the proposed grid layout method through a series of quantitative experiments and two use cases, demonstrating its effectiveness in preserving cluster structures and facilitating analysis tasks. | Yuxing Zhou;Weikai Yang;Jiashu Chen;Changjian Chen;Zhiyang Shen;Xiaonan Luo;Lingyun Yu 0001;Shixia Liu | Yuxing Zhou;Weikai Yang;Jiashu Chen;Changjian Chen;Zhiyang Shen;Xiaonan Luo;Lingyun Yu;Shixia Liu | School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Kuaishou Technology, China;School of Software, BNRist, Tsinghua University, China;Guilin University of Electronic Technology, China;Xi'an Jiaotong-Liverpool University, China;School of Software, BNRist, Tsinghua University, China | 10.1109/tvcg.2022.3209425;10.1109/tvcg.2019.2934280;10.1109/tvcg.2016.2598447;10.1109/tvcg.2022.3209384;10.1109/tvcg.2009.152;10.1109/tvcg.2021.3114834;10.1109/tvcg.2016.2598831;10.1109/tvcg.2019.2934811;10.1109/tvcg.2018.2865151;10.1109/tvcg.2016.2598542;10.1109/tvcg.2008.158;10.1109/tvcg.2021.3114841;10.1109/tvcg.2022.3209485;10.1109/tvcg.2022.3209458;10.1109/tvcg.2016.2598796;10.1109/tvcg.2022.3209423;10.1109/tvcg.2015.2467251;10.1109/tvcg.2022.3209404;10.1109/tvcg.2020.3030410 | Grid layout,similarity,convexity,compactness,optimization | 1 | 61 | 305 | |||
42 | Vis | 2023 | LiveRetro: Visual Analytics for Strategic Retrospect in Livestream E-Commerce | 10.1109/tvcg.2023.3326911 | http://dx.doi.org/10.1109/TVCG.2023.3326911 | 1117 | 1127 | J | Livestream e-commerce integrates live streaming and online shopping, allowing viewers to make purchases while watching. However, effective marketing strategies remain a challenge due to limited empirical research and subjective biases from the absence of quantitative data. Current tools fail to capture the interdependence between live performances and feedback. This study identified computational features, formulated design requirements, and developed LiveRetro, an interactive visual analytics system. It enables comprehensive retrospective analysis of livestream e-commerce for streamers, viewers, and merchandise. LiveRetro employs enhanced visualization and time-series forecasting models to align performance features and feedback, identifying influences at channel, merchandise, feature, and segment levels. Through case studies and expert interviews, the system provides deep insights into the relationship between live performance and streaming statistics, enabling efficient strategic analysis from multiple perspectives. | Yuchen Wu;Yuansong Xu;Shenghan Gao;Xingbo Wang 0001;Wenkai Song;Zhiheng Nie;Xiaomeng Fan;Quan Li | Yuchen Wu;Yuansong Xu;Shenghan Gao;Xingbo Wang;Wenkai Song;Zhiheng Nie;Xiaomeng Fan;Quan Li | School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;Weill Cornell Medical College, Cornell University, USA;School of Entrepreneurship and Management, ShanghaiTech University, China;Be Friends Holding Limited, China;School of Entrepreneurship and Management, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China | 10.1109/tvcg.2015.2467851;10.1109/tvcg.2022.3209351;10.1109/tvcg.2021.3114789;10.1109/tvcg.2021.3114822;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209440 | Livestream E-commerce,Visual Analytics,Multimodal Video Analysis,Marketing Strategy,Time-series Modeling | 1 | 79 | 296 | |||
43 | Vis | 2023 | TimeTuner: Diagnosing Time Representations for Time-Series Forecasting with Counterfactual Explanations | 10.1109/tvcg.2023.3327389 | http://dx.doi.org/10.1109/TVCG.2023.3327389 | 1183 | 1193 | J | Deep learning (DL) approaches are being increasingly used for time-series forecasting, with many efforts devoted to designing complex DL models. Recent studies have shown that the DL success is often attributed to effective data representations, fostering the fields of feature engineering and representation learning. However, automated approaches for feature learning are typically limited with respect to incorporating prior knowledge, identifying interactions among variables, and choosing evaluation metrics to ensure that the models are reliable. To improve on these limitations, this paper contributes a novel visual analytics framework, namely TimeTuner, designed to help analysts understand how model behaviors are associated with localized correlations, stationarity, and granularity of time-series representations. The system mainly consists of the following two-stage technique: We first leverage counterfactual explanations to connect the relationships among time-series representations, multivariate features and model predictions. Next, we design multiple coordinated views including a partition-based correlation matrix and juxtaposed bivariate stripes, and provide a set of interactions that allow users to step into the transformation selection process, navigate through the feature space, and reason the model performance. We instantiate TimeTuner with two transformation methods of smoothing and sampling, and demonstrate its applicability on real-world time-series forecasting of univariate sunspots and multivariate air pollutants. Feedback from domain experts indicates that our system can help characterize time-series representations and guide the feature engineering processes. | Jianing Hao;Qing Shi;Yilin Ye;Wei Zeng 0004 | Jianing Hao;Qing Shi;Yilin Ye;Wei Zeng | Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China | 10.1109/tvcg.2019.2934262;10.1109/tvcg.2008.166;10.1109/tvcg.2020.3030342;10.1109/tvcg.2010.162;10.1109/tvcg.2010.193;10.1109/tvcg.2018.2865027;10.1109/tvcg.2019.2934267;10.1109/tvcg.2013.125;10.1109/infvis.2005.1532144;10.1109/tvcg.2023.3327200;10.1109/tvcg.2019.2934629;10.1109/tvcg.2017.2744158;10.1109/infvis.1999.801851;10.1109/tvcg.2019.2934619;10.1109/tvcg.2020.3030410 | Time-series forecasting,counterfactual explanation,visual analytics | 1 | 67 | 295 | |||
44 | Vis | 2023 | GeoExplainer: A Visual Analytics Framework for Spatial Modeling Contextualization and Report Generation | 10.1109/tvcg.2023.3327359 | http://dx.doi.org/10.1109/TVCG.2023.3327359 | 1391 | 1401 | J | Geographic regression models of various descriptions are often applied to identify patterns and anomalies in the determinants of spatially distributed observations. These types of analyses focus on answering why questions about underlying spatial phenomena, e.g., why is crime higher in this locale, why do children in one school district outperform those in another, etc.? Answers to these questions require explanations of the model structure, the choice of parameters, and contextualization of the findings with respect to their geographic context. This is particularly true for local forms of regression models which are focused on the role of locational context in determining human behavior. In this paper, we present GeoExplainer, a visual analytics framework designed to support analysts in creating explanative documentation that summarizes and contextualizes their spatial analyses. As analysts create their spatial models, our framework flags potential issues with model parameter selections, utilizes template-based text generation to summarize model outputs, and links with external knowledge repositories to provide annotations that help to explain the model results. As analysts explore the model results, all visualizations and annotations can be captured in an interactive report generation widget. We demonstrate our framework using a case study modeling the determinants of voting in the 2016 US Presidential Election. | Fan Lei;Yuxin Ma;A. Stewart Fotheringham;Elizabeth A. Mack;Ziqi Li;Mehak Sachdeva;Sarah Bardin;Ross Maciejewski | Fan Lei;Yuxin Ma;A. Stewart Fotheringham;Elizabeth A. Mack;Ziqi Li;Mehak Sachdeva;Sarah Bardin;Ross Maciejewski | Arizona State University, USA;Southern University of Science and Technology, China;Arizona State University, USA;Michigan State University, USA;Florida State University, USA;Arizona State University, USA;Arizona State University, USA;Arizona State University, USA | 10.1109/tvcg.2011.185;10.1109/vast.2010.5652885;10.1109/tvcg.2020.3030358;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467199;10.1109/tvcg.2011.255;10.1109/vast.2017.8585720;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2864812;10.1109/tvcg.2013.125;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145 | Spatial data analysis,local models,multiscale geographically weighted regression,model explanation,visual analytics | 1 | 71 | 285 | |||
45 | Vis | 2023 | FSLens: A Visual Analytics Approach to Evaluating and Optimizing the Spatial Layout of Fire Stations | 10.1109/tvcg.2023.3327077 | http://dx.doi.org/10.1109/TVCG.2023.3327077 | 847 | 857 | J | The provision of fire services plays a vital role in ensuring the safety of residents' lives and property. The spatial layout of fire stations is closely linked to the efficiency of fire rescue operations. Traditional approaches have primarily relied on mathematical planning models to generate appropriate layouts by summarizing relevant evaluation criteria. However, this optimization process presents significant challenges due to the extensive decision space, inherent conflicts among criteria, and decision-makers' preferences. To address these challenges, we propose FSLens, an interactive visual analytics system that enables in-depth evaluation and rational optimization of fire station layout. Our approach integrates fire records and correlation features to reveal fire occurrence patterns and influencing factors using spatiotemporal sequence forecasting. We design an interactive visualization method to explore areas within the city that are potentially under-resourced for fire service based on the fire distribution and existing fire station layout. Moreover, we develop a collaborative human-computer multi-criteria decision model that generates multiple candidate solutions for optimizing firefighting resources within these areas. We simulate and compare the impact of different solutions on the original layout through well-designed visualizations, providing decision-makers with the most satisfactory solution. We demonstrate the effectiveness of our approach through one case study with real-world datasets. The feedback from domain experts indicates that our system helps them to better identify and improve potential gaps in the current fire station layout. | Longfei Chen;He Wang;Yang Ouyang;Yang Zhou;Naiyu Wang;Quan Li | Longfei Chen;He Wang;Yang Ouyang;Yang Zhou;Naiyu Wang;Quan Li | School of Information Science and Technology, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China;School of Information Science and Technology, ShanghaiTech University, China;College of Civil Engineering and Architecture, Zhejiang University, China;College of Civil Engineering and Architecture, Zhejiang University, China;School of Information Science and Technology, ShanghaiTech University, China | 10.1109/tvcg.2013.173;10.1109/tvcg.2016.2598432;10.1109/tvcg.2006.179;10.1109/tvcg.2016.2598589;10.1109/tvcg.2022.3209440;10.1109/tvcg.2014.2346898 | Spatiotemporal Analysis,Multi-criteria Decision Making,Visualization | 1 | 69 | 280 | |||
46 | Vis | 2023 | Scalable Hypergraph Visualization | 10.1109/tvcg.2023.3326599 | http://dx.doi.org/10.1109/TVCG.2023.3326599 | 595 | 605 | J | Hypergraph visualization has many applications in network data analysis. Recently, a polygon-based representation for hypergraphs has been proposed with demonstrated benefits. However, the polygon-based layout often suffers from excessive self-intersections when the input dataset is relatively large. In this paper, we propose a framework in which the hypergraph is iteratively simplified through a set of atomic operations. Then, the layout of the simplest hypergraph is optimized and used as the foundation for a reverse process that brings the simplest hypergraph back to the original one, but with an improved layout. At the core of our approach is the set of atomic simplification operations and an operation priority measure to guide the simplification process. In addition, we introduce necessary definitions and conditions for hypergraph planarity within the polygon representation. We extend our approach to handle simultaneous simplification and layout optimization for both the hypergraph and its dual. We demonstrate the utility of our approach with datasets from a number of real-world applications. | Peter Oliver;Eugene Zhang;Yue Zhang 0009 | Peter Oliver;Eugene Zhang;Yue Zhang | School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA | 10.1109/tvcg.2013.184;10.1109/tvcg.2012.252;10.1109/tvcg.2020.3030475;10.1109/tvcg.2014.2346248;10.1109/tvcg.2021.3114759;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467992;10.1109/vast.2007.4389006 | Hypergraph visualization,scalable visualization,polygon layout,hypergraph embedding,primal-dual visualization | 1 | 57 | 243 | |||
47 | Vis | 2023 | Polarizing Political Polls: How Visualization Design Choices Can Shape Public Opinion and Increase Political Polarization | 10.1109/tvcg.2023.3326512 | http://dx.doi.org/10.1109/TVCG.2023.3326512 | 1446 | 1456 | J | While we typically focus on data visualization as a tool for facilitating cognitive tasks (e.g. learning facts, making decisions), we know relatively little about their second-order impacts on our opinions, attitudes, and values. For example, could design or framing choices interact with viewers' social cognitive biases in ways that promote political polarization? When reporting on U.S. attitudes toward public policies, it is popular to highlight the gap between Democrats and Republicans (e.g. with blue vs red connected dot plots). But these charts may encourage social-normative conformity, influencing viewers' attitudes to match the divided opinions shown in the visualization. We conducted three experiments examining visualization framing in the context of social conformity and polarization. Crowdworkers viewed charts showing simulated polling results for public policy proposals. We varied framing (aggregating data as non-partisan “All US Adults,” or partisan “Democrat” / “Republican”) and the visualized groups' support levels. Participants then reported their own support for each policy. We found that participants' attitudes biased significantly toward the group attitudes shown in the stimuli and this can increase inter-party attitude divergence. These results demonstrate that data visualizations can induce social conformity and accelerate political polarization. Choosing to visualize partisan divisions can divide us further. | Eli Holder;Cindy Xiong Bearfield | Eli Holder;Cindy Xiong Bearfield | 3iap, USA;Georgia Tech, USA | 10.1109/tvcg.2015.2467732;10.1109/tvcg.2014.2346298;10.1109/tvcg.2022.3209456;10.1109/tvcg.2022.3209377;10.1109/tvcg.2011.255;10.1109/tvcg.2020.3030335;10.1109/tvcg.2020.3029412;10.1109/tvcg.2017.2745240;10.1109/tvcg.2022.3209500;10.1109/tvcg.2010.179;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209405 | Political Polarization,Public Opinion,Social Categorization,Survey Data,Social Influence,Attitude Change | 1 | 88 | 237 | |||
48 | Vis | 2023 | RL-LABEL: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios | 10.1109/tvcg.2023.3326568 | http://dx.doi.org/10.1109/TVCG.2023.3326568 | 1347 | 1357 | J | Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-LABEL, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-LABEL considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-LABEL in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-LABEL excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes. | Zhutian Chen;Daniele Chiappalupi;Tica Lin;Yalong Yang 0001;Johanna Beyer;Hanspeter Pfister | Chen Zhu-Tian;Daniele Chiappalupi;Tica Lin;Yalong Yang;Johanna Beyer;Hanspeter Pfister | Harvard John A. Paulson School of Engineering and Applied Sciences, United States;Harvard John A. Paulson School of Engineering and Applied Sciences, United States;Harvard John A. Paulson School of Engineering and Applied Sciences, United States;Virginia Tech, United States;Harvard John A. Paulson School of Engineering and Applied Sciences, United States;Harvard John A. Paulson School of Engineering and Applied Sciences, United States | 10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114861;10.1109/tvcg.2020.3030467;10.1109/tvcg.2022.3209386;10.1109/tvcg.2020.3030423;10.1109/tvcg.2020.3030392 | Augmented Reality,Reinforcement Learning,Label Placement,Dynamic Scenarios | 1 | 58 | 232 | |||
49 | Vis | 2023 | Are We Closing the Loop Yet? Gaps in the Generalizability of VIS4ML Research | 10.1109/tvcg.2023.3326591 | http://dx.doi.org/10.1109/TVCG.2023.3326591 | 672 | 682 | J | Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models. In conceiving VIS4ML systems, researchers characterize the nature of human knowledge to support human-in-the-loop tasks, design interactive visualizations to make ML components interpretable and elicit knowledge, and evaluate the effectiveness of human-model interchange. We survey recent VIS4ML papers to assess the generalizability of research contributions and claims in enabling human-in-the-loop ML. Our results show potential gaps between the current scope of VIS4ML research and aspirations for its use in practice. We find that while papers motivate that VIS4ML systems are applicable beyond the specific conditions studied, conclusions are often overfitted to non-representative scenarios, are based on interactions with a small set of ML experts and well-understood datasets, fail to acknowledge crucial dependencies, and hinge on decisions that lack justification. We discuss approaches to close the gap between aspirations and research claims and suggest documentation practices to report generality constraints that better acknowledge the exploratory nature of VIS4ML research. | Hariharan Subramonyam;Jessica Hullman | Hariharan Subramonyam;Jessica Hullman | Stanford University, USA;Northwestern University, USA | 10.1109/tvcg.2014.2346660;10.1109/tvcg.2017.2744683;10.1109/tvcg.2019.2934261;10.1109/tvcg.2020.3030342;10.1109/tvcg.2019.2934654;10.1109/tvcg.2018.2864769;10.1109/vast.2017.8585498;10.1109/tvcg.2019.2934659;10.1109/tvcg.2022.3209384;10.1109/tvcg.2013.126;10.1109/tvcg.2021.3114793;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2019.2934267;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114858;10.1109/tvcg.2017.2744358;10.1109/tvcg.2018.2864838;10.1109/tvcg.2014.2346481;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2022.3209361;10.1109/vast.2011.6102453;10.1109/tvcg.2018.2864504;10.1109/tvcg.2022.3209347;10.1109/tvcg.2020.3030418;10.1109/tvcg.2019.2934619;10.1109/tvcg.2017.2744878;10.1109/vast47406.2019.8986943;10.1109/tvcg.2022.3209465;10.1109/tvcg.2018.2864475 | VIS4ML,Visualization,Machine learning,Human-in-the-loop,Human Knowledge,Generalizability,Survey | 1 | 93 | 227 | |||
50 | Vis | 2023 | VISGRADER: Automatic Grading of D3 Visualizations | 10.1109/tvcg.2023.3327181 | http://dx.doi.org/10.1109/TVCG.2023.3327181 | 617 | 627 | J | Manually grading D3 data visualizations is a challenging endeavor, and is especially difficult for large classes with hundreds of students. Grading an interactive visualization requires a combination of interactive, quantitative, and qualitative evaluations that are conventionally done manually and are difficult to scale up as the visualization complexity, data size, and number of students increase. We present VISGRADER, a first-of-its-kind automatic grading method for D3 visualizations that scalably and precisely evaluates the data bindings, visual encodings, interactions, and design specifications used in a visualization. Our method enhances students' learning experience, enabling them to submit their code frequently and receive rapid feedback to better inform iteration and improvement to their code and visualization design. We have successfully deployed our method and auto-graded D3 submissions from more than 4000 students in a visualization course at Georgia Tech, and received positive feedback for expanding its adoption. | Matthew Hull;Vivian Pednekar;Hannah Murray;Nimisha Roy;Emmanuel Tung;Susanta Routray;Connor Guerin;Justin Chen;Zijie J. Wang;Seongmin Lee 0007;M. Mahdi Roozbahani;Duen Horng Chau | Matthew Hull;Vivian Pednekar;Hannah Murray;Nimisha Roy;Emmanuel Tung;Susanta Routray;Connor Guerin;Justin Chen;Zijie J. Wang;Seongmin Lee;Mahdi Roozbahani;Duen Horng Chau | Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA | 10.1109/tvcg.2011.185;10.1109/tvcg.2021.3114804;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934431;10.1109/tvcg.2018.2865240;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864836 | Automatic grading,D3 visualization,large class,Selenium,Gradescope grading platform | 1 | 49 | 217 | |||
51 | Vis | 2023 | Handling Non-Visible Referents in Situated Visualizations | 10.1109/tvcg.2023.3327361 | http://dx.doi.org/10.1109/TVCG.2023.3327361 | 1336 | 1346 | J | Situated visualizations are a type of visualization where data is presented next to its physical referent (i.e., the physical object, space, or person it refers to), often using augmented-reality displays. While situated visualizations can be beneficial in various contexts and have received research attention, they are typically designed with the assumption that the physical referent is visible. However, in practice, a physical referent may be obscured by another object, such as a wall, or may be outside the user's visual field. In this paper, we propose a conceptual framework and a design space to help researchers and user interface designers handle non-visible referents in situated visualizations. We first provide an overview of techniques proposed in the past for dealing with non-visible objects in the areas of 3D user interfaces, 3D visualization, and mixed reality. From this overview, we derive a design space that applies to situated visualizations and employ it to examine various trade-offs, challenges, and opportunities for future research in this area. | Ambre Assor;Arnaud Prouzeau;Martin Hachet;Pierre Dragicevic | Ambre Assor;Arnaud Prouzeau;Martin Hachet;Pierre Dragicevic | Inria, CNRS, Université de Bordeaux, France;Inria, CNRS, Université de Bordeaux, France;Inria, CNRS, Université de Bordeaux, France;Inria, CNRS, Université de Bordeaux, France | 10.1109/tvcg.2021.3114835;10.1109/tvcg.2020.3030334;10.1109/tvcg.2016.2598608 | Taxonomy,Models,Frameworks,Theory,Mobile,AR/VR/Immersive,Specialized Input/Display Hardware | 1 | 98 | 211 | |||
52 | Vis | 2023 | Adaptive Assessment of Visualization Literacy | 10.1109/tvcg.2023.3327165 | http://dx.doi.org/10.1109/TVCG.2023.3327165 | 628 | 637 | J | Visualization literacy is an essential skill for accurately interpreting data to inform critical decisions. Consequently, it is vital to understand the evolution of this ability and devise targeted interventions to enhance it, requiring concise and repeatable assessments of visualization literacy for individuals. However, current assessments, such as the Visualization Literacy Assessment Test (VLAT), are time-consuming due to their fixed, lengthy format. To address this limitation, we develop two streamlined computerized adaptive tests (CATs) for visualization literacy, A-VLAT and A-CALVI, which measure the same set of skills as their original versions in half the number of questions. Specifically, we (1) employ item response theory (IRT) and non-psychometric constraints to construct adaptive versions of the assessments, (2) finalize the configurations of adaptation through simulation, (3) refine the composition of test items of A-CALVI via a qualitative study, and (4) demonstrate the test-retest reliability (ICC: 0.98 and 0.98) and convergent validity (correlation: 0.81 and 0.66) of both CATs via four online studies. We discuss practical recommendations for using our CATs and opportunities for further customization to leverage the full potential of adaptive assessments. All supplemental materials are available at https://osf.io/a6258/. | Yuan Cui;Lily W. Ge;Yiren Ding;Fumeng Yang;Lane Harrison;Matthew Kay 0001 | Yuan Cui;Lily W. Ge;Yiren Ding;Fumeng Yang;Lane Harrison;Matthew Kay | Northwestern University, USA;Northwestern University, USA;Worcester Polytechnic Institute, USA;Northwestern University, USA;Worcester Polytechnic Institute, USA;Northwestern University, USA | 10.1109/tvcg.2014.2346984;10.1109/tvcg.2016.2598920 | Visualization literacy,computerized adaptive testing,item response theory | 1 | 33 | 206 | |||
53 | Vis | 2023 | Mystique: Deconstructing SVG Charts for Layout Reuse | 10.1109/tvcg.2023.3327354 | http://dx.doi.org/10.1109/TVCG.2023.3327354 | 447 | 457 | J | To facilitate the reuse of existing charts, previous research has examined how to obtain a semantic understanding of a chart by deconstructing its visual representation into reusable components, such as encodings. However, existing deconstruction approaches primarily focus on chart styles, handling only basic layouts. In this paper, we investigate how to deconstruct chart layouts, focusing on rectangle-based ones, as they cover not only 17 chart types but also advanced layouts (e.g., small multiples, nested layouts). We develop an interactive tool, called Mystique, adopting a mixed-initiative approach to extract the axes and legend, and deconstruct a chart's layout into four semantic components: mark groups, spatial relationships, data encodings, and graphical constraints. Mystique employs a wizard interface that guides chart authors through a series of steps to specify how the deconstructed components map to their own data. On 150 rectangle-based SVG charts, Mystique achieves above 85% accuracy for axis and legend extraction and 96% accuracy for layout deconstruction. In a chart reproduction study, participants could easily reuse existing charts on new datasets. We discuss the current limitations of Mystique and future research directions. | Chen Chen 0080;Bongshin Lee;Yunhai Wang;Yunjeong Chang;Zhicheng Liu 0001 | Chen Chen;Bongshin Lee;Yunhai Wang;Yunjeong Chang;Zhicheng Liu | University of Maryland, College Park, Maryland, United States;Microsoft Research, Redmond, Washington, United States;Shandong University, Qingdao, China;University of Maryland, College Park, Maryland, United States;University of Maryland, College Park, Maryland, United States | 10.1109/tvcg.2022.3209490;10.1109/tvcg.2011.185;10.1109/tvcg.2019.2934810;10.1109/tvcg.2021.3114856;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/infvis.2001.963283;10.1109/tvcg.2019.2934538;10.1109/tvcg.2008.165;10.1109/tvcg.2021.3114877 | Chart layout,Reuse,Reverse-engineering,Deconstruction | 1 | 47 | 183 | |||
54 | Vis | 2023 | MolSieve: A Progressive Visual Analytics System for Molecular Dynamics Simulations | 10.1109/tvcg.2023.3326584 | http://dx.doi.org/10.1109/TVCG.2023.3326584 | 727 | 737 | J | Molecular Dynamics (MD) simulations are ubiquitous in cutting-edge physio-chemical research. They provide critical insights into how a physical system evolves over time given a model of interatomic interactions. Understanding a system's evolution is key to selecting the best candidates for new drugs, materials for manufacturing, and countless other practical applications. With today's technology, these simulations can encompass millions of unit transitions between discrete molecular structures, spanning up to several milliseconds of real time. Attempting to perform a brute-force analysis with data-sets of this size is not only computationally impractical, but would not shed light on the physically-relevant features of the data. Moreover, there is a need to analyze simulation ensembles in order to compare similar processes in differing environments. These problems call for an approach that is analytically transparent, computationally efficient, and flexible enough to handle the variety found in materials-based research. In order to address these problems, we introduce MolSieve, a progressive visual analytics system that enables the comparison of multiple long-duration simulations. Using MolSieve, analysts are able to quickly identify and compare regions of interest within immense simulations through its combination of control charts, data-reduction techniques, and highly informative visual components. A simple programming interface is provided which allows experts to fit MolSieve to their needs. To demonstrate the efficacy of our approach, we present two case studies of MolSieve and report on findings from domain collaborators. | Rostyslav Hnatyshyn;Jieqiong Zhao;Danny Perez;James P. Ahrens;Ross Maciejewski | Rostyslav Hnatyshyn;Jieqiong Zhao;Danny Perez;James Ahrens;Ross Maciejewski | Arizona State University, USA;Arizona State University, USA;Los Alamos National Laboratory, USA;Los Alamos National Laboratory, USA;Arizona State University, USA | 10.1109/tvcg.2018.2864851;10.1109/tvcg.2010.193;10.1109/tvcg.2012.265;10.1109/tvcg.2022.3209411;10.1109/tvcg.2018.2864504;10.1109/tvcg.2007.70515 | Molecular dynamics,time-series analysis,visual analytics | 1 | 51 | 183 | |||
55 | Vis | 2023 | Guaranteed Visibility in Scatterplots with Tolerance | 10.1109/tvcg.2023.3326596 | http://dx.doi.org/10.1109/TVCG.2023.3326596 | 792 | 802 | J | In 2D visualizations, the visibility of every datum's representation is crucial to ease the completion of visual tasks. Such a guarantee is barely respected in complex visualizations, mainly because of overdraws between datum representations that hide parts of the information (e.g., outliers). The literature proposes various Layout Adjustment algorithms to improve the readability of visualizations that suffer from this issue. Manipulating the data in high-dimensional, geometric, or visual space, they rely on different strategies with their own strengths and weaknesses. Moreover, most of these algorithms are computationally expensive as they search for an exact solution in the geometric space and do not scale well to large datasets. This article proposes GIST, a layout adjustment algorithm that aims at optimizing three criteria: (i) node visibility guarantee (at least 1 pixel), (ii) node size maximization, and (iii) preservation of the original layout. This is achieved by combining a search for the maximum node size that enables drawing all the data points without overlaps, with a limited budget of movements (i.e., limiting the distortions of the original layout). The method relies on the idea that two data representations do not need to be strictly non-overlapping in order to guarantee their visibility in visual space. Our algorithm therefore uses a tolerance in the geometric space to determine the overlaps between pairs of data. The tolerance is optimized such that the approximation computed in the geometric space can lead to a visualization without noticeable overdraw once the data rendering is rasterized. In addition, such an approximation helps to ease the algorithm's convergence as it reduces the number of constraints to resolve, enabling it to handle large datasets. We demonstrate the effectiveness of our approach by comparing its results to those of state-of-the-art methods on several large datasets. | Loann Giovannangeli;Frédéric Lalanne;Romain Giot;Romain Bourqui | Loann Giovannangeli;Frederic Lalanne;Romain Giot;Romain Bourqui | Univ. Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, UMR 5800, Talence, France;Univ. Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, UMR 5800, Talence, France;Univ. Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, UMR 5800, Talence, France;Univ. Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, UMR 5800, Talence, France | 10.1109/tvcg.2019.2934541;10.1109/tvcg.2023.3326596;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2022.3209459;10.1109/vast.2017.8585721 | Guaranteed visibility,Layout adjustment,Overlap removal,Scatterplots | 1 | 40 | 169 | |||
56 | Vis | 2023 | Perception of Line Attributes for Visualization | 10.1109/tvcg.2023.3326523 | http://dx.doi.org/10.1109/TVCG.2023.3326523 | 1041 | 1051 | J | Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty—e.g., in weather forecasts—is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance. | Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn | Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn | University of Jena, Germany;University of Tübingen, Germany;University of Jena, Germany;University of Tübingen, Germany;Brandenburg University of Technology, Germany;University of Jena, Germany | 10.1109/tvcg.2012.220;10.1109/tvcg.2017.2743959;10.1109/tvcg.2015.2467671;10.1109/tvcg.2012.279;10.1109/tvcg.2015.2467591;10.1109/tvcg.2016.2598826;10.1109/tvcg.2023.3326574 | Line Drawings,Line Stylization,Perceptual Evaluation,Uncertainty Visualization | 1 | 48 | 157 | |||
57 | Vis | 2023 | Too Many Cooks: Exploring How Graphical Perception Studies Influence Visualization Recommendations in Draco | 10.1109/tvcg.2023.3326527 | http://dx.doi.org/10.1109/TVCG.2023.3326527 | 1063 | 1073 | J | Findings from graphical perception can guide visualization recommendation algorithms in identifying effective visualization designs. However, existing algorithms use knowledge from, at best, a few studies, limiting our understanding of how complementary (or contradictory) graphical perception results influence generated recommendations. In this paper, we present a pipeline of applying a large body of graphical perception results to develop new visualization recommendation algorithms and conduct an exploratory study to investigate how results from graphical perception can alter the behavior of downstream algorithms. Specifically, we model graphical perception results from 30 papers in Draco—a framework to model visualization knowledge—to develop new recommendation algorithms. By analyzing Draco-generated algorithms, we showcase the feasibility of our method to (1) identify gaps in existing graphical perception literature informing recommendation algorithms, (2) cluster papers by their preferred design rules and constraints, and (3) investigate why certain studies can dominate Draco's recommendations, whereas others may have little influence. Given our findings, we discuss the potential for mutually reinforcing advancements in graphical perception and visualization recommendation research. | Zehua Zeng;Junran Yang;Dominik Moritz;Jeffrey Heer;Leilani Battle | Zehua Zeng;Junran Yang;Dominik Moritz;Jeffrey Heer;Leilani Battle | University of Maryland, College Park, USA;University of Washington, Seattle, USA;Carnegie Mellon University, United States;University of Washington, Seattle, USA;University of Washington, Seattle, USA | 10.1109/tvcg.2017.2745086;10.1109/tvcg.2018.2865077;10.1109/tvcg.2019.2934786;10.1109/tvcg.2021.3114863;10.1109/tvcg.2007.70594;10.1109/tvcg.2021.3114684;10.1109/tvcg.2018.2865240;10.1109/tvcg.2018.2864884;10.1109/tvcg.2019.2934807;10.1109/tvcg.2018.2865264;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2346320;10.1109/tvcg.2019.2934784;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934400;10.1109/tvcg.2021.3114814 | Graphical Perception Studies,Visualization Recommendation Algorithms | 1 | 51 | 153 | |||
58 | Vis | 2023 | A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision | 10.1109/tvcg.2023.3326588 | http://dx.doi.org/10.1109/TVCG.2023.3326588 | 76 | 86 | J | Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements. | Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu 0004;Hang Su;Hanspeter Pfister;Shixia Liu | Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu;Hang Su;Hanspeter Pfister;Shixia Liu | School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Department of Computer Science and Technology, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Cardiff University, United Kingdom;Department of Computer Science and Technology, Tsinghua University, China;Harvard University, USA;School of Software, BNRist, Tsinghua University, China | 10.1109/tvcg.2014.2346660;10.1109/tvcg.2022.3209425;10.1109/tvcg.2017.2744683;10.1109/tvcg.2020.3028976;10.1109/tvcg.2020.3030350;10.1109/tvcg.2013.173;10.1109/tvcg.2021.3114855;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2022.3209485;10.1109/tvcg.2022.3209458;10.1109/tvcg.2007.70589;10.1109/tvcg.2022.3209489;10.1109/vast50239.2020.00007;10.1109/tvcg.2022.3209465 | Model evaluation,computer vision,classification,object detection,instance segmentation | 0 | 69 | 668 | |||
59 | Vis | 2023 | ManiVault: A Flexible and Extensible Visual Analytics Framework for High-Dimensional Data | 10.1109/tvcg.2023.3326582 | http://dx.doi.org/10.1109/TVCG.2023.3326582 | 175 | 185 | J | Exploration and analysis of high-dimensional data are important tasks in many fields that produce large and complex data, like the financial sector, systems biology, or cultural heritage. Tailor-made visual analytics software is developed for each specific application, limiting their applicability in other fields. However, as diverse as these fields are, their characteristics and requirements for data analysis are conceptually similar. Many applications share abstract tasks and data types and are often constructed with similar building blocks. Developing such applications, even when based mostly on existing building blocks, requires significant engineering efforts. We developed ManiVault, a flexible and extensible open-source visual analytics framework for analyzing high-dimensional data. The primary objective of ManiVault is to facilitate rapid prototyping of visual analytics workflows for visualization software developers and practitioners alike. ManiVault is built using a plugin-based architecture that offers easy extensibility. While our architecture deliberately keeps plugins self-contained, to guarantee maximum flexibility and re-usability, we have designed and implemented a messaging API for tight integration and linking of modules to support common visual analytics design patterns. We provide several visualization and analytics plugins, and ManiVault's API makes the integration of new plugins easy for developers. ManiVault facilitates the distribution of visualization and analysis pipelines and results for practitioners through saving and reproducing complete application states. As such, ManiVault can be used as a communication tool among researchers to discuss workflows and results. A copy of this paper and all supplemental material is available at osf.io/9k6jw, and source code at github.com/ManiVaultStudio. | Alexander Vieth;Thomas Kroes;Julian Thijssen;Baldur van Lew;Jeroen Eggermont;Soumyadeep Basu;Elmar Eisemann;Anna Vilanova;Thomas Höllt;Boudewijn P. F. Lelieveldt | Alexander Vieth;Thomas Kroes;Julian Thijssen;Baldur van Lew;Jeroen Eggermont;Soumyadeep Basu;Elmar Eisemann;Anna Vilanova;Thomas Höllt;Boudewijn Lelieveldt | TU Delft, Netherlands;Leiden University Medical Center, Netherlands;Leiden University Medical Center, Netherlands;Leiden University Medical Center, Netherlands;Leiden University Medical Center, Netherlands;Leiden University Medical Center, Netherlands;TU Delft, Netherlands;TU Eindhoven, Netherlands;TU Delft, Netherlands;TU Delft, Netherlands | 10.1109/visual.2005.1532788;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/visual.1991.175794;10.1109/tvcg.2020.3030338;10.1109/tvcg.2006.161;10.1109/infvis.2004.64;10.1109/tvcg.2019.2934547;10.1109/tvcg.2017.2744319;10.1109/tvcg.2019.2934631;10.1109/tvcg.2019.2934307;10.1109/tvcg.2009.110;10.1109/tvcg.2015.2467551;10.1109/tvcg.2014.2346291;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2346574;10.1109/tvcg.2019.2934275;10.1109/visual.1994.346302;10.1109/tvcg.2021.3114832;10.1109/tvcg.2020.3030367 | High-dimensional data,Visual analytics,Visualization framework,Progressive analytics,Prototyping system | 0 | 72 | 619 | HM | ||
60 | Vis | 2023 | Guided Visual Analytics for Image Selection in Time and Space | 10.1109/tvcg.2023.3326572 | http://dx.doi.org/10.1109/TVCG.2023.3326572 | 66 | 75 | J | Unexploded Ordnance (UXO) detection, the identification of remnant active bombs buried underground from archival aerial images, implies a complex workflow involving decision-making at each stage. An essential phase in UXO detection is the task of image selection, where a small subset of images must be chosen from archives to reconstruct an area of interest (AOI) and identify craters. The selected image set must comply with good spatial and temporal coverage over the AOI, particularly in the temporal vicinity of recorded aerial attacks, and do so with minimal images for resource optimization. This paper presents a guidance-enhanced visual analytics prototype to select images for UXO detection. In close collaboration with domain experts, our design process involved analyzing user tasks, eliciting expert knowledge, modeling quality metrics, and choosing appropriate guidance. We report on a user study with two real-world scenarios of image selection performed with and without guidance. Our solution was well-received and deemed highly usable. Through the lens of our task-based design and developed quality measures, we observed guidance-driven changes in user behavior and improved quality of analysis results. An expert evaluation of the study allowed us to improve our guidance-enhanced prototype further and discuss new possibilities for user-adaptive guidance. | Ignacio Pérez-Messina;Davide Ceneda;Silvia Miksch | Ignacio Pérez-Messina;Davide Ceneda;Silvia Miksch | TU Wien, Austria;TU Wien, Austria;TU Wien, Austria | 10.1109/tvcg.2013.124;10.1109/tvcg.2016.2598468;10.1109/tvcg.2021.3114813;10.1109/tvcg.2018.2864769;10.1109/vast.2017.8585498;10.1109/tvcg.2011.231;10.1109/tvcg.2017.2744418;10.1109/tvcg.2020.3030364;10.1109/tvcg.2014.2346481;10.1109/tvcg.2014.2346321;10.1109/tvcg.2022.3209393;10.1109/vast47406.2019.8986917;10.1109/tvcg.2019.2934658;10.1109/tvcg.2018.2865146 | Application Motivated Visualization,Geospatial Data,Mixed Initiative Human-Machine Analysis,Process/Workflow Design,Task Abstractions & Application Domains,Temporal Data | 0 | 37 | 536 | |||
61 | Vis | 2023 | TransforLearn: Interactive Visual Tutorial for the Transformer Model | 10.1109/tvcg.2023.3327353 | http://dx.doi.org/10.1109/TVCG.2023.3327353 | 891 | 901 | J | The widespread adoption of Transformers in deep learning, serving as the core framework for numerous large-scale language models, has sparked significant interest in understanding their underlying mechanisms. However, beginners face difficulties in comprehending and learning Transformers due to their complex structure and abstract data representation. We present TransforLearn, the first interactive visual tutorial designed for deep learning beginners and non-experts to comprehensively learn about Transformers. TransforLearn supports interactions for architecture-driven exploration and task-driven exploration, providing insight into different levels of model details and their working processes. It accommodates interactive views of each layer's operation and mathematical formula, helping users to understand the data flow of long text sequences. By altering the current decoder-based recursive prediction results and combining the downstream task abstractions, users can deeply explore model processes. Our user study revealed that the interactions of TransforLearn are positively received. We observe that TransforLearn effectively facilitates users' accomplishment of study tasks and their grasp of key concepts in Transformers. | Lin Gao;Zekai Shao;Ziqin Luo;Haibo Hu 0002;Cagatay Turkay;Siming Chen 0001 | Lin Gao;Zekai Shao;Ziqin Luo;Haibo Hu;Cagatay Turkay;Siming Chen | School of Data Science, Fudan University, China;School of Data Science, Fudan University, China;School of Data Science, Fudan University, China;Chongqing University, China;University of Warwick, United Kingdom;School of Data Science, Fudan University, China | 10.1109/tvcg.2020.3028976;10.1109/tvcg.2018.2864500;10.1109/tvcg.2022.3209461;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2021.3114794;10.1109/tvcg.2020.3030418;10.1109/tvcg.2017.2744878;10.1109/tvcg.2022.3209423;10.1109/tvcg.2017.2744098;10.1109/tvcg.2022.3209469 | Deep learning,Transformer,Visual tutorial,Explorable explanations | 0 | 62 | 518 | |||
62 | Vis | 2023 | A Comparative Visual Analytics Framework for Evaluating Evolutionary Processes in Multi-Objective Optimization | 10.1109/tvcg.2023.3326921 | http://dx.doi.org/10.1109/TVCG.2023.3326921 | 661 | 671 | J | Evolutionary multi-objective optimization (EMO) algorithms have been demonstrated to be effective in solving multi-criteria decision-making problems. In real-world applications, analysts often employ several algorithms concurrently and compare their solution sets to gain insight into the characteristics of different algorithms and explore a broader range of feasible solutions. However, EMO algorithms are typically treated as black boxes, leading to difficulties in performing detailed analysis and comparisons between the internal evolutionary processes. Inspired by the successful application of visual analytics tools in explainable AI, we argue that interactive visualization can significantly enhance the comparative analysis between multiple EMO algorithms. In this paper, we present a visual analytics framework that enables the exploration and comparison of evolutionary processes in EMO algorithms. Guided by a literature review and expert interviews, the proposed framework addresses various analytical tasks and establishes a multi-faceted visualization design to support the comparative analysis of intermediate generations in the evolution as well as solution sets. We demonstrate the effectiveness of our framework through case studies on benchmarking and real-world multi-objective optimization problems to elucidate how analysts can leverage our framework to inspect and compare diverse algorithms. | Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng | Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng | Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China | 10.1109/tvcg.2015.2467851;10.1109/tvcg.2017.2744199;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/tvcg.2020.3028888;10.1109/tvcg.2014.2346578;10.1109/tvcg.2020.3030361;10.1109/tvcg.2020.3030347;10.1109/visual.2005.1532820;10.1109/vast50239.2020.00006;10.1109/tvcg.2021.3114790;10.1109/tvcg.2020.3030418;10.1109/tvcg.2020.3030458;10.1109/tvcg.2021.3114850;10.1109/vast50239.2020.00007;10.1109/tvcg.2020.3030432;10.1109/tvcg.2018.2864499 | Visual analytics,evolutionary multi-objective optimization | 0 | 79 | 465 | |||
63 | Vis | 2023 | Marjorie: Visualizing Type 1 Diabetes Data to Support Pattern Exploration | 10.1109/tvcg.2023.3326936 | http://dx.doi.org/10.1109/TVCG.2023.3326936 | 1216 | 1226 | J | In this work we propose Marjorie, a visual analytics approach to address the challenge of analyzing patients' diabetes data during brief regular appointments with their diabetologists. Designed in consultation with diabetologists, Marjorie uses a combination of visual and algorithmic methods to support the exploration of patterns in the data. Patterns of interest include seasonal variations of the glucose profiles, and non-periodic patterns such as fluctuations around mealtimes or periods of hypoglycemia (i.e., glucose levels below the normal range). We introduce a unique representation of glucose data based on modified horizon graphs and hierarchical clustering of adjacent carbohydrate or insulin entries. Semantic zooming allows the exploration of patterns on different levels of temporal detail. We evaluated our solution in a case study, which demonstrated Marjorie's potential to provide valuable insights into therapy parameters and unfavorable eating habits, among others. The study results and informal feedback collected from target users suggest that Marjorie effectively supports patients and diabetologists in the joint exploration of patterns in diabetes data, potentially enabling more informed treatment decisions. A free copy of this paper and all supplemental materials are available at https://osf.io/34t8c/. | Anna Scimone;Klaus Eckelt;Marc Streit;Andreas P. Hinterreiter | Anna Scimone;Klaus Eckelt;Marc Streit;Andreas Hinterreiter | Johannes Kepler University Linz, Austria;Johannes Kepler University Linz, Austria;Johannes Kepler University Linz, Austria;Johannes Kepler University Linz, Austria | 10.1109/tvcg.2015.2467851;10.1109/tvcg.2020.3030442;10.1109/tvcg.2020.3028889;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865076 | Design study,task analysis,diabetes,time series data,visual analytics,clustering | 0 | 50 | 421 | |||
64 | Vis | 2023 | OW-Adapter: Human-Assisted Open-World Object Detection with a Few Examples | 10.1109/tvcg.2023.3326577 | http://dx.doi.org/10.1109/TVCG.2023.3326577 | 694 | 704 | J | Open-world object detection (OWOD) is an emerging computer vision problem that involves not only the identification of predefined object classes, like what general object detectors do, but also detects new unknown objects simultaneously. Recently, several end-to-end deep learning models have been proposed to address the OWOD problem. However, these approaches face several challenges: a) significant changes in both network architecture and training procedure are required; b) they are trained from scratch, which can not leverage existing pre-trained general detectors; c) costly annotations for all unknown classes are needed. To overcome these challenges, we present a visual analytic framework called OW-Adapter. It acts as an adaptor to enable pre-trained general object detectors to handle the OWOD problem. Specifically, OW-Adapter is designed to identify, summarize, and annotate unknown examples with minimal human effort. Moreover, we introduce a lightweight classifier to learn newly annotated unknown classes and plug the classifier into pre-trained general detectors to detect unknown objects. We demonstrate the effectiveness of our framework through two case studies of different domains, including common object recognition and autonomous driving. The studies show that a simple yet powerful adaptor can extend the capability of pre-trained general detectors to detect unknown objects and improve the performance on known classes simultaneously. | Suphanut Jamonnak;Jiajing Guo;Wenbin He;Liang Gou;Liu Ren | Suphanut Jamonnak;Jiajing Guo;Wenbin He;Liang Gou;Liu Ren | Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA | 10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744683;10.1109/tvcg.2015.2467196;10.1109/tvcg.2020.3030350;10.1109/tvcg.2021.3114855;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2022.3209466;10.1109/tvcg.2021.3114683;10.1109/tvcg.2021.3114793;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/vast.2017.8585721;10.1109/tvcg.2019.2934267;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2021.3114794;10.1109/tvcg.2017.2744685;10.1109/vast47406.2019.8986943;10.1109/vast50239.2020.00007;10.1109/tvcg.2018.2864499 | Open world learning,object detection,continuous learning,human-assisted AI | 0 | 76 | 415 | |||
65 | Vis | 2023 | Wizualization: A “Hard Magic” Visualization System for Immersive and Ubiquitous Analytics | 10.1109/tvcg.2023.3326580 | http://dx.doi.org/10.1109/TVCG.2023.3326580 | 507 | 517 | J | What if magic could be used as an effective metaphor to perform data visualization and analysis using speech and gestures while mobile and on-the-go? In this paper, we introduce Wizualization, a visual analytics system for eXtended Reality (XR) that enables an analyst to author and interact with visualizations using such a magic system through gestures, speech commands, and touch interaction. Wizualization is a rendering system for current XR headsets that comprises several components: a cross-device (or Arcane Focuses) infrastructure for signalling and view control (Weave), a code notebook (Spellbook), and a grammar of graphics for XR (Optomancy). The system offers users three modes of input: gestures, spoken commands, and materials. We demonstrate Wizualization and its components using a motivating scenario on collaborative data analysis of pandemic data across time and space. | Andrea Batch;Peter W. S. Butcher;Panagiotis D. Ritsos;Niklas Elmqvist | Andrea Batch;Peter W. S. Butcher;Panagiotis D. Ritsos;Niklas Elmqvist | U.S. Bureau of Economic Analysis, Washington, D.C., United States;Bangor University, Bangor, United Kingdom;Bangor University, Bangor, United Kingdom;Aarhus University, Aarhus, Denmark | 10.1109/tvcg.2017.2745941;10.1109/vast.2016.7883506;10.1109/tvcg.2019.2934803;10.1109/tvcg.2019.2934785;10.1109/tvcg.2019.2934415;10.1109/tvcg.2015.2468292;10.1109/tvcg.2012.204;10.1109/tvcg.2013.191;10.1109/tvcg.2013.225;10.1109/tvcg.2020.3030378;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2015.2467153;10.1109/tvcg.2018.2865152;10.1109/tvcg.2021.3114844;10.1109/tvcg.2007.70515;10.1109/tvcg.2019.2934668;10.1109/tvcg.2020.3030367 | Immersive analytics,situated analytics,ubiquitous analytics,gestural interaction,voice interaction | 0 | 82 | 389 | |||
66 | Vis | 2023 | Fast Compressed Segmentation Volumes for Scientific Visualization | 10.1109/tvcg.2023.3326573 | http://dx.doi.org/10.1109/TVCG.2023.3326573 | 12 | 22 | J | Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels, and a sequence of operations to reconstruct the brick which is further compressed using rANS-entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second both for compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method for various data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement. | Max Piochowiak;Carsten Dachsbacher | Max Piochowiak;Carsten Dachsbacher | Karlsruhe Institute of Technology, Germany;Karlsruhe Institute of Technology, Germany | 10.1109/tvcg.2015.2467441;10.1109/tvcg.2020.3030451;10.1109/tvcg.2013.142;10.1109/tvcg.2018.2864847;10.1109/tvcg.2017.2744238;10.1109/tvcg.2012.240;10.1109/tvcg.2006.143 | Segmentation volumes,lossless compression,volume rendering | 0 | 50 | 375 | BP | ||
67 | Vis | 2023 | CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models | 10.1109/tvcg.2023.3327153 | http://dx.doi.org/10.1109/TVCG.2023.3327153 | 273 | 283 | J | Recently, large pretrained language models have achieved compelling performance on commonsense benchmarks. Nevertheless, it is unclear what commonsense knowledge the models learn and whether they solely exploit spurious patterns. Feature attributions are popular explainability techniques that identify important input concepts for model outputs. However, commonsense knowledge tends to be implicit and rarely explicitly presented in inputs. These methods cannot infer models' implicit reasoning over mentioned concepts. We present CommonsenseVIS, a visual explanatory system that utilizes external commonsense knowledge bases to contextualize model behavior for commonsense question-answering. Specifically, we extract relevant commonsense knowledge in inputs as references to align model behavior with human knowledge. Our system features multi-level visualization and interactive model probing and editing for different concepts and their underlying relations. Through a user study, we show that CommonsenseVIS helps NLP experts conduct a systematic and scalable visual analysis of models' relational reasoning over concepts in different situations. | Xingbo Wang 0001;Renfei Huang;Zhihua Jin;Tianqing Fang;Huamin Qu | Xingbo Wang;Renfei Huang;Zhihua Jin;Tianqing Fang;Huamin Qu | Weill Cornell Medical College, Cornell University, USA;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China | 10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744158;10.1109/tvcg.2021.3114794;10.1109/tvcg.2019.2934619 | Commonsense reasoning,visual analytics,XAI,natural language processing | 0 | 77 | 370 | |||
68 | Vis | 2023 | Character-Oriented Design for Visual Data Storytelling | 10.1109/tvcg.2023.3326578 | http://dx.doi.org/10.1109/TVCG.2023.3326578 | 98 | 108 | J | When telling a data story, an author has an intention they seek to convey to an audience. This intention can be of many forms such as to persuade, to educate, to inform, or even to entertain. In addition to expressing their intention, the story plot must balance being consumable and enjoyable while preserving scientific integrity. In data stories, numerous methods have been identified for constructing and presenting a plot. However, there is an opportunity to expand how we think and create the visual elements that present the story. Stories are brought to life by characters; often they are what make a story captivating, enjoyable, memorable, and facilitate following the plot until the end. Through the analysis of 160 existing data stories, we systematically investigate and identify distinguishable features of characters in data stories, and we illustrate how they feed into the broader concept of “character-oriented design”. We identify the roles and visual representations data characters assume as well as the types of relationships these roles have with one another. We identify characteristics of antagonists as well as define conflict in data stories. We find the need for an identifiable central character that the audience latches on to in order to follow the narrative and identify their visual representations. We then illustrate “character-oriented design” by showing how to develop data characters with common data story plots. With this work, we present a framework for data characters derived from our analysis; we then offer our extension to the data storytelling process using character-oriented design. To access our supplemental materials please visit https://chaorientdesignds.github.io/. | Keshav Dasu;Yun-Hsin Kuo;Kwan-Liu Ma | Keshav Dasu;Yun-Hsin Kuo;Kwan-Liu Ma | University of California, Davis, USA;University of California, Davis, USA;University of California, Davis, USA | 10.1109/tvcg.2016.2598647;10.1109/tvcg.2020.3030437;10.1109/tvcg.2016.2598876;10.1109/tvcg.2020.3030412;10.1109/tvcg.2007.70539;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2018.2865145;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2865232;10.1109/tvcg.2019.2934398;10.1109/tvcg.2021.3114774 | Storytelling,Explanatory,Narrative visualization,Visual metaphor | 0 | 71 | 366 | |||
69 | Vis | 2023 | A Parallel Framework for Streaming Dimensionality Reduction | 10.1109/tvcg.2023.3326515 | http://dx.doi.org/10.1109/TVCG.2023.3326515 | 142 | 152 | J | The visualization of streaming high-dimensional data often needs to consider the speed in dimensionality reduction algorithms, the quality of visualized data patterns, and the stability of view graphs that usually change over time with new data. Existing methods of streaming high-dimensional data visualization primarily line up essential modules in a serial manner and often face challenges in satisfying all these design considerations. In this research, we propose a novel parallel framework for streaming high-dimensional data visualization to achieve high data processing speed, high quality in data patterns, and good stability in visual presentations. This framework arranges all essential modules in parallel to mitigate the delays caused by module waiting in serial setups. In addition, to facilitate the parallel pipeline, we redesign these modules with a parametric non-linear embedding method for new data embedding, an incremental learning method for online embedding function updating, and a hybrid strategy for optimized embedding updating. We also improve the coordination mechanism among these modules. Our experiments show that our method has advantages in embedding speed, quality, and stability over other existing methods to visualize streaming high-dimensional data. | Jiazhi Xia;Linquan Huang;Yiping Sun;Zhiwei Deng;Xiaolong Luke Zhang;Minfeng Zhu | Jiazhi Xia;Linquan Huang;Yiping Sun;Zhiwei Deng;Xiaolong Luke Zhang;Minfeng Zhu | School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;Pennsylvania State University, USA;Zhejiang University, China | 10.1109/tvcg.2021.3114880;10.1109/tvcg.2019.2934433;10.1109/tvcg.2015.2467553;10.1109/tvcg.2011.220;10.1109/tvcg.2019.2934396;10.1109/tvcg.2021.3114765;10.1109/tvcg.2022.3209423;10.1109/tvcg.2021.3114694;10.1109/tvcg.2018.2865026;10.1109/tvcg.2016.2598664 | High-dimensional data visualization,dimensionality reduction,streaming data visualization | 0 | 77 | 365 | |||
70 | Vis | 2023 | Visualizing Large-Scale Spatial Time Series with GeoChron | 10.1109/tvcg.2023.3327162 | http://dx.doi.org/10.1109/TVCG.2023.3327162 | 1194 | 1204 | J | In geo-related fields such as urban informatics, atmospheric science, and geography, large-scale spatial time (ST) series (i.e., geo-referred time series) are collected for monitoring and understanding important spatiotemporal phenomena. ST series visualization is an effective means of understanding the data and reviewing spatiotemporal phenomena, which is a prerequisite for in-depth data analysis. However, visualizing these series is challenging due to their large scales, inherent dynamics, and spatiotemporal nature. In this study, we introduce the notion of patterns of evolution in ST series. Each evolution pattern is characterized by 1) a set of ST series that are close in space and 2) a time period when the trends of these ST series are correlated. We then leverage Storyline techniques by considering an analogy between evolution patterns and sessions, and finally design a novel visualization called GeoChron, which is capable of visualizing large-scale ST series in an evolution pattern-aware and narrative-preserving manner. GeoChron includes a mining framework to extract evolution patterns and two-level visualizations to enhance its visual scalability. We evaluate GeoChron with two case studies, an informal user study, an ablation study, parameter analysis, and running time analysis. | Zikun Deng;Shifu Chen;Tobias Schreck;Dazhen Deng;Tan Tang;Mingliang Xu;Di Weng;Yingcai Wu | Zikun Deng;Shifu Chen;Tobias Schreck;Dazhen Deng;Tan Tang;Mingliang Xu;Di Weng;Yingcai Wu | State Key Lab of CAD&CG, Zhejiang University, China;School of Software Technology, Zhejiang University, China;Graz University of Technology, Austria;School of Software Technology, Zhejiang University, China;School of Art and Archaeology, Zhejiang University, China;School of Computer and Artificial Intelligence, Zhengzhou University, China;School of Software Technology, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China | 10.1109/tvcg.2015.2467851;10.1109/tvcg.2019.2934670;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209480;10.1109/tvcg.2019.2934555;10.1109/tvcg.2021.3114762;10.1109/vast.2014.7042489;10.1109/tvcg.2018.2865018;10.1109/tvcg.2022.3209430;10.1109/tvcg.2013.196;10.1109/tvcg.2021.3114868;10.1109/vast.2012.6400491;10.1109/tvcg.2007.70523;10.1109/tvcg.2012.212;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864899;10.1109/tvcg.2021.3114781;10.1109/tvcg.2018.2865146;10.1109/tvcg.2013.228;10.1109/tvcg.2022.3209447;10.1109/tvcg.2021.3114877;10.1109/tvcg.2019.2934660;10.1109/tvcg.2022.3209469;10.1109/tvcg.2021.3114865 | Spatiotemporal visualization,spatial time series,Storyline | 0 | 76 | 365 | |||
71 | Vis | 2023 | A Heuristic Approach for Dual Expert/End-User Evaluation of Guidance in Visual Analytics | 10.1109/tvcg.2023.3327152 | http://dx.doi.org/10.1109/TVCG.2023.3327152 | 997 | 1007 | J | Guidance can support users during the exploration and analysis of complex data. Previous research focused on characterizing the theoretical aspects of guidance in visual analytics and implementing guidance in different scenarios. However, the evaluation of guidance-enhanced visual analytics solutions remains an open research question. We tackle this question by introducing and validating a practical evaluation methodology for guidance in visual analytics. We identify eight quality criteria to be fulfilled and collect expert feedback on their validity. To facilitate actual evaluation studies, we derive two sets of heuristics. The first set targets heuristic evaluations conducted by expert evaluators. The second set facilitates end-user studies where participants actually use a guidance-enhanced system. By following such a dual approach, the different quality criteria of guidance can be examined from two different perspectives, enhancing the overall value of evaluation studies. To test the practical utility of our methodology, we employ it in two studies to gain insight into the quality of two guidance-enhanced visual analytics solutions, one being a work-in-progress research prototype, and the other being a publicly available visualization recommender system. Based on these two evaluations, we derive good practices for conducting evaluations of guidance in visual analytics and identify pitfalls to be avoided during such studies. | Davide Ceneda;Christopher Collins 0001;Mennatallah El-Assady;Silvia Miksch;Christian Tominski;Alessio Arleo | Davide Ceneda;Christopher Collins;Mennatallah El-Assady;Silvia Miksch;Christian Tominski;Alessio Arleo | TU Wien, Austria;Ontario Tech University, Canada;ETH Zurich, AI Center, Switzerland;TU Wien, Austria;VAC Institute, University of Rostock, Germany;TU Wien, Austria | 10.1109/infvis.2004.10;10.1109/tvcg.2016.2598468;10.1109/vast.2010.5652443;10.1109/tvcg.2022.3209390;10.1109/infvis.2005.1532126;10.1109/tvcg.2007.70539;10.1109/vast.2011.6102448;10.1109/tvcg.2022.3209393;10.1109/tvcg.2019.2934629;10.1109/tvcg.2018.2865146;10.1109/tvcg.2015.2467191 | Guidance,heuristics,evaluation,visual analytics | 0 | 55 | 352 | |||
72 | Vis | 2023 | QEVIS: Multi-Grained Visualization of Distributed Query Execution | 10.1109/tvcg.2023.3326930 | http://dx.doi.org/10.1109/TVCG.2023.3326930 | 153 | 163 | J | Distributed query processing systems such as Apache Hive and Spark are widely-used in many organizations for large-scale data analytics. Analyzing and understanding the query execution process of these systems are daily routines for engineers and crucial for identifying performance problems, optimizing system configurations, and rectifying errors. However, existing visualization tools for distributed query execution are insufficient because (i) most of them (if not all) do not provide fine-grained visualization (i.e., the atomic task level), which can be crucial for understanding query performance and reasoning about the underlying execution anomalies, and (ii) they do not support proper linkages between system status and query execution, which makes it difficult to identify the causes of execution problems. To tackle these limitations, we propose QEVIS, which visualizes distributed query execution process with multiple views that focus on different granularities and complement each other. Specifically, we first devise a query logical plan layout algorithm to visualize the overall query execution progress compactly and clearly. We then propose two novel scoring methods to summarize the anomaly degrees of the jobs and machines during query execution, and visualize the anomaly scores intuitively, which allow users to easily identify the components that are worth paying attention to. Moreover, we devise a scatter plot-based task view to show a massive number of atomic tasks, where task distribution patterns are informative for execution problems. We also equip QEVIS with a suite of auxiliary views and interaction methods to support easy and effective cross-view exploration, which makes it convenient to track the causes of execution problems. QEVIS has been used in the production environment of our industry partner, and we present three use cases from real-world applications and user interview to demonstrate its effectiveness. QEVIS is open-source at https://github.com/DBGroup-SUSTech/QEVIS. | Qiaomu Shen;Zhengxin You;Xiao Yan 0002;Chaozu Zhang;Ke Xu;Dan Zeng 0002;Jianbin Qin;Bo Tang 0016 | Qiaomu Shen;Zhengxin You;Xiao Yan;Chaozu Zhang;Ke Xu;Dan Zeng;Jianbin Qin;Bo Tang | Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Huawei Technologies Co., Ltd., China;Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, China;Shenzhen Institute of Computing Sciences, Shenzhen University, China;Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, China | 10.1109/tvcg.2014.2346594;10.1109/tvcg.2021.3114756;10.1109/tvcg.2019.2934661;10.1109/tvcg.2022.3209375;10.1109/tvcg.2012.213;10.1109/vast50239.2020.00009;10.1109/tvcg.2018.2865026;10.1109/tvcg.2017.2744738 | visual analytics system,distributed query execution,performance analysis | 0 | 54 | 345 | |||
73 | Vis | 2023 | HoopInSight: Analyzing and Comparing Basketball Shooting Performance Through Visualization | 10.1109/tvcg.2023.3326910 | http://dx.doi.org/10.1109/TVCG.2023.3326910 | 858 | 868 | J | Data visualization has the power to revolutionize sports. For example, the rise of shot maps has changed basketball strategy by visually illustrating where “good/bad” shots are taken from. As a result, professional basketball teams today take shots from very different positions on the court than they did 20 years ago. Although the shot map has transformed many facets of the game, there is still much room for improvement to support richer and more complex analytical tasks. More specifically, we believe that the lack of sufficient interactivity to support various analytical queries and the inability to visually compare differences across situations are significant limitations of current shot maps. To address these limitations and showcase new possibilities, we designed and developed HoopInSight, an interactive visualization system that centers around a novel spatial comparison visual technique, enhancing the capabilities of shot maps in basketball analytics. This article presents the system, with a focus on our proposed visual technique and its accompanying interactions, all designed to promote comparison of two different scenarios. Furthermore, we provide reflections on and a discussion of relevant issues, including considerations for designing spatial comparison techniques, the scalability and transferability of this approach, and the benefits and pitfalls of designing as domain experts. | Yu Fu;John T. Stasko | Yu Fu;John Stasko | Georgia Institute of Technology, USA;Georgia Institute of Technology, USA | 10.1109/tvcg.2021.3114806;10.1109/tvcg.2017.2744199;10.1109/tvcg.2022.3209353;10.1109/tvcg.2013.192;10.1109/tvcg.2012.213;10.1109/tvcg.2022.3209373;10.1109/tvcg.2018.2865041;10.1109/tvcg.2007.70515 | sports data visualization,sports analytics,visual comparison,basketball | 0 | 52 | 339 | |||
74 | Vis | 2023 | LiberRoad: Probing into the Journey of Chinese Classics Through Visual Analytics | 10.1109/tvcg.2023.3326944 | http://dx.doi.org/10.1109/TVCG.2023.3326944 | 529 | 539 | J | Books act as a crucial carrier of cultural dissemination in ancient times. This work involves joint efforts between visualization and humanities researchers, aiming at building a holistic view of the cultural exchange and integration between China and Japan brought about by the overseas circulation of Chinese classics. Book circulation data consist of uncertain spatiotemporal trajectories, with multiple dimensions, and movement across hierarchical spaces forms a compound network. LiberRoad visualizes the circulation of books collected in the Imperial Household Agency of Japan, and can be generalized to other book movement data. The LiberRoad system enables a smooth transition between three views (Location Graph, map, and timeline) according to the desired perspectives (spatial or temporal), as well as flexible filtering and selection. The Location Graph is a novel uncertainty-aware visualization method that employs improved circle packing to represent spatial hierarchy. The map view intuitively shows the overall circulation by clustering and allows zooming into single book trajectory with lenses magnifying local movements. The timeline view ranks dynamically in response to user interaction to facilitate the discovery of temporal events. The evaluation and feedback from the expert users demonstrate that LiberRoad is helpful in revealing movement patterns and comparing circulation characteristics of different times and spaces. | Yuhan Guo;Yuchu Luo;Keer Lu;Linfang Li;Haizheng Yang;Xiaoru Yuan | Yuhan Guo;Yuchu Luo;Keer Lu;Linfang Li;Haizheng Yang;Xiaoru Yuan | Key Laboratory of Machine Perception (Ministry of Education), School of AI, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), School of AI, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), School of AI, Peking University, China;Department of Chinese Language & Literature, Center for Ancient Chinese Classics & Archives, Peking University, China;Department of Chinese Language & Literature, Center for Ancient Chinese Classics & Archives, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), School of AI, Peking University, China | 10.1109/vast.2009.5332584;10.1109/vast.2011.6102454;10.1109/tvcg.2015.2467619;10.1109/vast.2009.5332593;10.1109/tvcg.2020.3030442;10.1109/tvcg.2017.2743959;10.1109/tvcg.2019.2934661;10.1109/tvcg.2015.2467752;10.1109/tvcg.2014.2346271;10.1109/tvcg.2017.2745320;10.1109/tvcg.2006.147;10.1109/tvcg.2011.179;10.1109/tvcg.2013.196;10.1109/tvcg.2012.279;10.1109/tvcg.2021.3114868;10.1109/tvcg.2022.3209436;10.1109/infvis.2005.1532152;10.1109/tvcg.2012.212;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2012.225 | Visual analytics,digital humanities,spatial uncertainty,trajectory visualization,book movement,historical data | 0 | 68 | 333 | |||
75 | Vis | 2023 | Explore Your Network in Minutes: A Rapid Prototyping Toolkit for Understanding Neural Networks with Visual Analytics | 10.1109/tvcg.2023.3326575 | http://dx.doi.org/10.1109/TVCG.2023.3326575 | 683 | 693 | J | Neural networks attract significant attention in almost every field due to their widespread applications in various tasks. However, developers often struggle with debugging due to the black-box nature of neural networks. Visual analytics provides an intuitive way for developers to understand the hidden states and underlying complex transformations in neural networks. Existing visual analytics tools for neural networks have been demonstrated to be effective in providing useful hints for debugging certain network architectures. However, these approaches are often architecture-specific with strong assumptions of how the network should be understood. This limits their use when the network architecture or the exploration goal changes. In this paper, we present a general model and a programming toolkit, Neural Network Visualization Builder (NNVisBuilder), for prototyping visual analytics systems to understand neural networks. NNVisBuilder covers the common data transformation and interaction model involved in existing tools for exploring neural networks. It enables developers to customize a visual analytics interface for answering their specific questions about networks. NNVisBuilder is compatible with PyTorch so that developers can integrate the visualization code into their learning code seamlessly. We demonstrate the applicability by reproducing several existing visual analytics systems for networks with NNVisBuilder. The source code and some example cases can be found at https://github.com/sysuvis/NVB. | Shaoxuan Lai;Wanna Luan;Jun Tao 0002 | Shaoxuan Lai;Wanna Luan;Jun Tao | School of Computer Science and Engineering, Sun Yat-sen University, China;School of Computer Science and Engineering, Sun Yat-sen University, China;School of Computer Science and Engineering, Sun Yat-sen University, National Supercomputer Center, Guangzhou, China | 10.1109/tvcg.2011.185;10.1109/tvcg.2020.3030342;10.1109/tvcg.2019.2934537;10.1109/tvcg.2020.3030453;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/visual.2005.1532820;10.1109/tvcg.2020.3030418 | Visualization model,toolkit,neural networks,visual diagnosis | 0 | 50 | 323 | |||
76 | Vis | 2023 | Action-Evaluator: A Visualization Approach for Player Action Evaluation in Soccer | 10.1109/tvcg.2023.3326524 | http://dx.doi.org/10.1109/TVCG.2023.3326524 | 880 | 890 | J | In soccer, player action evaluation provides a fine-grained method to analyze player performance and plays an important role in improving winning chances in future matches. However, previous studies on action evaluation only provide a score for each action, and hardly support inspecting and comparing player actions integrated with complex match context information such as team tactics and player locations. In this work, we collaborate with soccer analysts and coaches to characterize the domain problems of evaluating player performance based on action scores. We design a tailored visualization of soccer player actions that places the action choice together with the tactic it belongs to as well as the player locations in the same view. Based on the design, we introduce a visual analytics system, Action-Evaluator, to facilitate a comprehensive player action evaluation through player navigation, action investigation, and action explanation. With the system, analysts can find players to be analyzed efficiently, learn how they performed under various match situations, and obtain valuable insights to improve their action choices. The usefulness and effectiveness of this work are demonstrated by two case studies on a real-world dataset and an expert interview. | Anqi Cao;Xiao Xie;Mingxu Zhou;Hui Zhang 0051;Mingliang Xu;Yingcai Wu | Anqi Cao;Xiao Xie;Mingxu Zhou;Hui Zhang;Mingliang Xu;Yingcai Wu | State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Ministry of Education, School of Computer and Artificial Intelligence, Zhengzhou University, Engineering Research Center of Intelligent Swarm Systems, National Supercomputing Center, Zhengzhou, China;State Key Lab of CAD&CG, Zhejiang University, China | 10.1109/vast.2014.7042478;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192;10.1109/tvcg.2012.263;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2022.3209352;10.1109/tvcg.2022.3209452;10.1109/tvcg.2021.3114832;10.1109/tvcg.2022.3209373;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359 | Soccer Visualization,Player Evaluation,Design Study | 0 | 65 | 321 | |||
77 | Vis | 2023 | Reducing Ambiguities in Line-Based Density Plots by Image-Space Colorization | 10.1109/tvcg.2023.3327149 | http://dx.doi.org/10.1109/TVCG.2023.3327149 | 825 | 835 | J | Line-based density plots are used to reduce visual clutter in line charts with a multitude of individual lines. However, these traditional density plots are often perceived ambiguously, which obstructs the user's identification of underlying trends in complex datasets. Thus, we propose a novel image space coloring method for line-based density plots that enhances their interpretability. Our method employs color not only to visually communicate data density but also to highlight similar regions in the plot, allowing users to identify and distinguish trends easily. We achieve this by performing hierarchical clustering based on the lines passing through each region and mapping the identified clusters to the hue circle using circular MDS. Additionally, we propose a heuristic approach to assign each line to the most probable cluster, enabling users to analyze density and individual lines. We motivate our method by conducting a small-scale user study, demonstrating the effectiveness of our method using synthetic and real-world datasets, and providing an interactive online tool for generating colored line-based density plots. | Yumeng Xue;Patrick Paetzold;Rebecca Kehlbeck;Bin Chen;Kin Chung Kwan;Yunhai Wang;Oliver Deussen | Yumeng Xue;Patrick Paetzold;Rebecca Kehlbeck;Bin Chen;Kin Chung Kwan;Yunhai Wang;Oliver Deussen | University of Konstanz, Germany and Shandong University, China;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;California State University Sacramento, United States;Shandong University, China;University of Konstanz, Germany | 10.1109/infvis.2004.68;10.1109/visual.1995.480803;10.1109/tvcg.2007.70595;10.1109/tvcg.2010.176;10.1109/tvcg.2015.2467204;10.1109/visual.1996.568118;10.1109/tvcg.2006.147;10.1109/tvcg.2021.3114783;10.1109/tvcg.2009.145;10.1109/tvcg.2010.162;10.1109/visual.2002.1183788;10.1109/tvcg.2014.2346325;10.1109/tvcg.2020.3030406;10.1109/tvcg.2014.2346455;10.1109/tvcg.2006.170;10.1109/visual.1990.146383;10.1109/visual.2001.964510;10.1109/tvcg.2011.181;10.1109/tvcg.2014.2346277;10.1109/tvcg.2021.3114795;10.1109/tvcg.2013.143;10.1109/tvcg.2021.3114865;10.1109/tvcg.2012.238 | Trajectory data,times series,density-based visualization,clustering,coloring | 0 | 83 | 320 | |||
78 | Vis | 2023 | Data Formulator: AI-Powered Concept-Driven Visualization Authoring | 10.1109/tvcg.2023.3326585 | http://dx.doi.org/10.1109/TVCG.2023.3326585 | 1128 | 1138 | J | With most modern visualization tools, authors need to transform their data into tidy formats to create visualizations they want. Because this requires experience with programming or separate data processing tools, data transformation remains a barrier in visualization authoring. To address this challenge, we present a new visualization paradigm, concept binding, that separates high-level visualization intents and low-level data transformation steps, leveraging an AI agent. We realize this paradigm in Data Formulator, an interactive visualization authoring tool. With Data Formulator, authors first define data concepts they plan to visualize using natural languages or examples, and then bind them to visual channels. Data Formulator then dispatches its AI-agent to automatically transform the input data to surface these concepts and generate desired visualizations. When presenting the results (transformed table and output visualizations) from the AI agent, Data Formulator provides feedback to help authors inspect and understand them. A user study with 10 participants shows that participants could learn and use Data Formulator to create visualizations that involve challenging data transformations, and presents interesting future research directions. | Chenglong Wang;John Thompson 0002;Bongshin Lee | Chenglong Wang;John Thompson;Bongshin Lee | Microsoft Research, USA;Microsoft Research, USA;Microsoft Research, USA | 10.1109/tvcg.2021.3114830;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2021.3114848;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030378;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2598839;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030476;10.1109/tvcg.2015.2467191;10.1109/tvcg.2022.3209470;10.1109/tvcg.2020.3030367;10.1109/tvcg.2022.3209369 | AI,visualization authoring,data transformation,programming by example,natural language,large language model | 0 | 63 | 319 | HM | ||
79 | Vis | 2023 | Causality-Based Visual Analysis of Questionnaire Responses | 10.1109/tvcg.2023.3327376 | http://dx.doi.org/10.1109/TVCG.2023.3327376 | 638 | 648 | J | As the final stage of questionnaire analysis, causal reasoning is the key to turning responses into valuable insights and actionable items for decision-makers. During the questionnaire analysis, classical statistical methods (e.g., Differences-in-Differences) have been widely exploited to evaluate causality between questions. However, due to the huge search space and complex causal structure in data, causal reasoning is still extremely challenging and time-consuming, and often conducted in a trial-and-error manner. On the other hand, existing visual methods of causal reasoning face the challenge of bringing scalability and expert knowledge together and can hardly be used in the questionnaire scenario. In this work, we present a systematic solution to help analysts effectively and efficiently explore questionnaire data and derive causality. Based on the association mining algorithm, we dig question combinations with potential inner causality and help analysts interactively explore the causal sub-graph of each question combination. Furthermore, leveraging the requirements collected from the experts, we built a visualization tool and conducted a comparative study with the state-of-the-art system to show the usability and efficiency of our system. | Renzhong Li;Weiwei Cui;Tianqi Song;Xiao Xie;Rui Ding 0001;Yun Wang 0012;Haidong Zhang;Hong Zhou 0004;Yingcai Wu | Renzhong Li;Weiwei Cui;Tianqi Song;Xiao Xie;Rui Ding;Yun Wang;Haidong Zhang;Hong Zhou;Yingcai Wu | State Key Lab of CAD&CG, Zhejiang University, China;Microsoft Research Asia, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Microsoft Research Asia, China;Microsoft Research Asia, China;Microsoft Research Asia, China;College of Computer Science and Software Engineering, Shenzhen University, China;State Key Lab of CAD&CG, Zhejiang University, China | 10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209484;10.1109/tvcg.2020.3030465;10.1109/tvcg.2021.3114824;10.1109/tvcg.2014.2346248;10.1109/tvcg.2020.3030347;10.1109/tvcg.2009.108;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2020.3028957;10.1109/tvcg.2019.2934399 | Causal analysis,Questionnaire,Design study | 0 | 44 | 304 | |||
80 | Vis | 2023 | Eleven Years of Gender Data Visualization: A Step Towards More Inclusive Gender Representation | 10.1109/tvcg.2023.3327369 | http://dx.doi.org/10.1109/TVCG.2023.3327369 | 316 | 326 | J | We present an analysis of the representation of gender as a data dimension in data visualizations and propose a set of considerations around visual variables and annotations for gender-related data. Gender is a common demographic dimension of data collected from study or survey participants, passengers, or customers, as well as across academic studies, especially in certain disciplines like sociology. Our work contributes to multiple ongoing discussions on the ethical implications of data visualizations. By choosing specific data, visual variables, and text labels, visualization designers may, inadvertently or not, perpetuate stereotypes and biases. Here, our goal is to start an evolving discussion on how to represent data on gender in data visualizations and raise awareness of the subtleties of choosing visual variables and words in gender visualizations. In order to ground this discussion, we collected and coded gender visualizations and their captions from five different scientific communities (Biology, Politics, Social Studies, Visualisation, and Human-Computer Interaction), in addition to images from Tableau Public and the Information Is Beautiful awards showcase. Overall we found that representation types are community-specific, color hue is the dominant visual channel for gender data, and nonconforming gender is under-represented. We end our paper with a discussion of considerations for gender visualization derived from our coding and the literature and recommendations for large data collection bodies. A free copy of this paper and all supplemental materials are available at https://osf.io/v9ams/. | Florent Cabric;Margrét Vilborg Bjarnadóttir;Meng Ling;Guðbjörg Linda Rafnsdóttir;Petra Isenberg | Florent Cabric;Margrét Vilborg Bjarnadóttir;Meng Ling;Guðbjörg Linda Rafnsdóttir;Petra Isenberg | Université Paris-Saclay, CNRS, Inria, LISN, France;Robert H. Smith School of Business, University of Maryland, College Park, USA;Ohio State University, USA;Faculty of Social and Human Science, University of Iceland, Reykjavik, Iceland;Université Paris-Saclay, CNRS, Inria, LISN, France | 10.1109/tvcg.2013.234;10.1109/tvcg.2013.155;10.1109/tvcg.2021.3114810;10.1109/tvcg.2011.160;10.1109/tvcg.2021.3114787;10.1109/tvcg.2012.288;10.1109/tvcg.2021.3114862;10.1109/tvcg.2012.275;10.1109/tvcg.2012.285 | Visualization,gender,visual gender representation,ethics | 0 | 84 | 294 | |||
81 | Vis | 2023 | Visualization of Discontinuous Vector Field Topology | 10.1109/tvcg.2023.3326519 | http://dx.doi.org/10.1109/TVCG.2023.3326519 | 45 | 54 | J | This paper extends the concept and the visualization of vector field topology to vector fields with discontinuities. We address the non-uniqueness of flow in such fields by introduction of a time-reversible concept of equivalence. This concept generalizes streamlines to streamsets and thus vector field topology to discontinuous vector fields in terms of invariant streamsets. We identify respective novel critical structures as well as their manifolds, investigate their interplay with traditional vector field topology, and detail the application and interpretation of our approach using specifically designed synthetic cases and a simulated case from physics. | Egzon Miftari;Daniel Durstewitz;Filip Sadlo | Egzon Miftari;Daniel Durstewitz;Filip Sadlo | Heidelberg University, Germany;Heidelberg University, Germany;Heidelberg University, Germany | 10.1109/visual.1997.663858;10.1109/visual.2003.1250376 | Discontinuous vector field topology,equivalence in non-unique flow,non-smooth dynamical systems | 0 | 23 | 289 | BP | ||
82 | Vis | 2023 | My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning | 10.1109/tvcg.2023.3327192 | http://dx.doi.org/10.1109/TVCG.2023.3327192 | 327 | 337 | J | Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer “Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?” Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning. | Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield | Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield | University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;Global Compliance, TD Bank, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA | 10.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467732;10.1109/vast47406.2019.8986948;10.1109/tvcg.2022.3209456;10.1109/tvcg.2022.3209484;10.1109/tvcg.2021.3114805;10.1109/tvcg.2022.3209377;10.1109/tvcg.2018.2864884;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591;10.1109/tvcg.2020.3030434;10.1109/tvcg.2022.3209383;10.1109/tvcg.2014.2346320;10.1109/tvcg.2020.3030471;10.1109/tvcg.2019.2934619;10.1109/tvcg.2021.3114850;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209465 | machine learning,fairness,bias,trust,visual design,gender,human-subjects studies | 0 | 93 | 285 | |||
83 | Vis | 2023 | Data Type Agnostic Visual Sensitivity Analysis | 10.1109/tvcg.2023.3327203 | http://dx.doi.org/10.1109/TVCG.2023.3327203 | 1106 | 1116 | J | Modern science and industry rely on computational models for simulation, prediction, and data analysis. Spatial blind source separation (SBSS) is a model used to analyze spatial data. Designed explicitly for spatial data analysis, it is superior to popular non-spatial methods, like PCA. However, a challenge to its practical use is setting two complex tuning parameters, which requires parameter space analysis. In this paper, we focus on sensitivity analysis (SA). SBSS parameters and outputs are spatial data, which makes SA difficult as few SA approaches in the literature assume such complex data on both sides of the model. Based on the requirements in our design study with statistics experts, we developed a visual analytics prototype for data type agnostic visual sensitivity analysis that fits SBSS and other contexts. The main advantage of our approach is that it requires only dissimilarity measures for parameter settings and outputs (Fig. 1). We evaluated the prototype heuristically with visualization experts and through interviews with two SBSS experts. In addition, we show the transferability of our approach by applying it to microclimate simulations. Study participants could confirm suspected and known parameter-output relations, find surprising associations, and identify parameter subspaces to examine in the future. During our design study and evaluation, we identified challenging future research opportunities. | Nikolaus Piccolotto;Markus Bögl;Christoph Muehlmann;Klaus Nordhausen;Peter Filzmoser;Johanna Schmidt;Silvia Miksch | Nikolaus Piccolotto;Markus Bögl;Christoph Muehlmann;Klaus Nordhausen;Peter Filzmoser;Johanna Schmidt;Silvia Miksch | TU Wien, Austria;TU Wien, Austria;TU Wien, Austria;University of Jyväskylä, Finland;TU Wien, Austria;VRVis GmbH, Austria;TU Wien, Austria | 10.1109/tvcg.2014.2346626;10.1109/tvcg.2010.190;10.1109/tvcg.2011.188;10.1109/tvcg.2018.2864477;10.1109/tvcg.2016.2598468;10.1109/vast.2011.6102450;10.1109/tvcg.2019.2934591;10.1109/tvcg.2019.2934312;10.1109/visual.2000.885678;10.1109/tvcg.2021.3114833;10.1109/tvcg.2020.3030420;10.1109/tvcg.2017.2745085;10.1109/tvcg.2018.2865051;10.1109/tvcg.2016.2598589;10.1109/tvcg.2014.2346321;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865146;10.1109/tvcg.2016.2598830;10.1109/vast.2016.7883516;10.1109/tvcg.2007.70589;10.1109/tvcg.2021.3114694 | Visual analytics,parameter space analysis,sensitivity analysis,spatial blind source separation | 0 | 72 | 280 | |||
84 | Vis | 2023 | Calliope-Net: Automatic Generation of Graph Data Facts via Annotated Node-Link Diagrams | 10.1109/tvcg.2023.3326925 | http://dx.doi.org/10.1109/TVCG.2023.3326925 | 562 | 572 | J | Graph or network data are widely studied in both data mining and visualization communities to review the relationship among different entities and groups. The data facts derived from graph visual analysis are important to help understand the social structures of complex data, especially for data journalism. However, it is challenging for data journalists to discover graph data facts and manually organize correlated facts around a meaningful topic due to the complexity of graph data and the difficulty to interpret graph narratives. Therefore, we present an automatic graph facts generation system, Calliope-Net, which consists of a fact discovery module, a fact organization module, and a visualization module. It creates annotated node-link diagrams with facts automatically discovered and organized from network data. A novel layout algorithm is designed to present meaningful and visually appealing annotated graphs. We evaluate the proposed system with two case studies and an in-lab user study. The results show that Calliope-Net can benefit users in discovering and understanding graph data facts with visually pleasing annotated visualizations. | Qing Chen 0001;Nan Chen;Wei Shuai;Guande Wu;Zhe Xu 0007;Hanghang Tong;Nan Cao 0001 | Qing Chen;Nan Chen;Wei Shuai;Guande Wu;Zhe Xu;Hanghang Tong;Nan Cao | Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;New York University, USA;University of Illinois at Urbana-Champaign, USA;University of Illinois at Urbana-Champaign, USA;Intelligent Big Data Visualization Lab, Tongji University, China | 10.1109/tvcg.2016.2598876;10.1109/tvcg.2019.2934810;10.1109/tvcg.2013.119;10.1109/tvcg.2021.3114802;10.1109/tvcg.2017.2743858;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2018.2865145;10.1109/tvcg.2019.2934398;10.1109/tvcg.2017.2745919;10.1109/tvcg.2020.3030428 | Graph Data,Application Motivated Visualization,Automatic Visualization,Narrative Visualization,Authoring Tools | 0 | 78 | 280 | |||
85 | Vis | 2023 | SpeechMirror: A Multimodal Visual Analytics System for Personalized Reflection of Online Public Speaking Effectiveness | 10.1109/tvcg.2023.3326932 | http://dx.doi.org/10.1109/TVCG.2023.3326932 | 606 | 616 | J | As communications are increasingly taking place virtually, the ability to present well online is becoming an indispensable skill. Online speakers are facing unique challenges in engaging with remote audiences. However, there has been a lack of evidence-based analytical systems for people to comprehensively evaluate online speeches and further discover possibilities for improvement. This paper introduces SpeechMirror, a visual analytics system facilitating reflection on a speech based on insights from a collection of online speeches. The system estimates the impact of different speech techniques on effectiveness and applies them to a speech to give users awareness of the performance of speech techniques. A similarity recommendation approach based on speech factors or script content supports guided exploration to expand knowledge of presentation evidence and accelerate the discovery of speech delivery possibilities. SpeechMirror provides intuitive visualizations and interactions for users to understand speech factors. Among them, SpeechTwin, a novel multimodal visual summary of speech, supports rapid understanding of critical speech factors and comparison of different speech samples, and SpeechPlayer augments the speech video by integrating visualization of the speaker's body language with interaction, for focused analysis. The system utilizes visualizations suited to the distinct nature of different speech factors for user comprehension. The proposed system and visualization techniques were evaluated with domain experts and amateurs, demonstrating usability for users with low visualization literacy and its efficacy in assisting users to develop insights for potential improvement. | Ze-Yuan Huang;Qiang He;Kevin T. Maher;Xiaoming Deng 0001;Yu-Kun Lai;Cuixia Ma;Sheng Feng Qin;Yong-Jin Liu;Hongan Wang | Zeyuan Huang;Qiang He;Kevin Maher;Xiaoming Deng;Yu-Kun Lai;Cuixia Ma;Sheng-Feng Qin;Yong-Jin Liu;Hongan Wang | Beijing Key Laboratory of Human-Computer Interaction, Institute of Software, Chinese Academy of Sciences, China;Beijing Key Laboratory of Human-Computer Interaction, Institute of Software, Chinese Academy of Sciences, China;Diatom Design Limited Liability Company, USA;Beijing Key Laboratory of Human-Computer Interaction, Institute of Software, Chinese Academy of Sciences, China;School of Computer Science and Informatics, Cardiff University, United Kingdom;Beijing Key Laboratory of Human-Computer Interaction, Institute of Software, Chinese Academy of Sciences, China;School of Design, Northumbria University, United Kingdom;Department of Computer Science and Technology, MOE-Key Laboratory of Pervasive Computing, Tsinghua University, China;Beijing Key Laboratory of Human-Computer Interaction, Institute of Software, Chinese Academy of Sciences, China | 10.1109/tvcg.2011.185;10.1109/tvcg.2021.3114789;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934656 | Visual Analytics,Multimodal Analysis,Public Speaking,Online Presentation | 0 | 71 | 277 | |||
86 | Vis | 2023 | DIVI: Dynamically Interactive Visualization | 10.1109/tvcg.2023.3327172 | http://dx.doi.org/10.1109/TVCG.2023.3327172 | 403 | 413 | J | Dynamically Interactive Visualization (DIVI) is a novel approach for orchestrating interactions within and across static visualizations. DIVI deconstructs Scalable Vector Graphics charts at runtime to infer content and coordinate user input, decoupling interaction from specification logic. This decoupling allows interactions to extend and compose freely across different tools, chart types, and analysis goals. DIVI exploits positional relations of marks to detect chart components such as axes and legends, reconstruct scales and view encodings, and infer data fields. DIVI then enumerates candidate transformations across inferred data to perform linking between views. To support dynamic interaction without prior specification, we introduce a taxonomy that formalizes the space of standard interactions by chart element, interaction type, and input event. We demonstrate DIVI's usefulness for rapid data exploration and analysis through a usability study with 13 participants and a diverse gallery of dynamically interactive visualizations, including single chart, multi-view, and cross-tool configurations. | Luke S. Snyder;Jeffrey Heer | Luke S. Snyder;Jeffrey Heer | University of Washington, USA;University of Washington, USA | 10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2012.229;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2598839;10.1109/tvcg.2016.2599030;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367 | Interaction,Visualization Tools,Charts,SVG,Exploratory Data Analysis | 0 | 34 | 269 | |||
87 | Vis | 2023 | A Computational Design Pipeline to Fabricate Sensing Network Physicalizations | 10.1109/tvcg.2023.3327198 | http://dx.doi.org/10.1109/TVCG.2023.3327198 | 913 | 923 | J | Interaction is critical for data analysis and sensemaking. However, designing interactive physicalizations is challenging as it requires cross-disciplinary knowledge in visualization, fabrication, and electronics. Interactive physicalizations are typically produced in an unstructured manner, resulting in unique solutions for a specific dataset, problem, or interaction that cannot be easily extended or adapted to new scenarios or future physicalizations. To mitigate these challenges, we introduce a computational design pipeline to 3D print network physicalizations with integrated sensing capabilities. Networks are ubiquitous, yet their complex geometry also requires significant engineering considerations to provide intuitive, effective interactions for exploration. Using our pipeline, designers can readily produce network physicalizations supporting selection—the most critical atomic operation for interaction—by touch through capacitive sensing and computational inference. Our computational design pipeline introduces a new design paradigm by concurrently considering the form and interactivity of a physicalization into one cohesive fabrication workflow. We evaluate our approach using (i) computational evaluations, (ii) three usage scenarios focusing on general visualization tasks, and (iii) expert interviews. The design paradigm introduced by our pipeline can lower barriers to physicalization research, creation, and adoption. | S. Sandra Bae;Takanori Fujiwara;Anders Ynnerman;Ellen Yi-Luen Do;Michael L. Rivera;Danielle Albers Szafir | S. Sandra Bae;Takanori Fujiwara;Anders Ynnerman;Ellen Yi-Luen Do;Michael L. Rivera;Danielle Albers Szafir | University of Colorado, Boulder, United States;Linköping University, Sweden;Linköping University, Sweden;University of Colorado, Boulder, United States;University of Colorado, Boulder, United States;University of North Carolina-Chapel Hill, United States | 10.1109/infvis.2005.1532136;10.1109/tvcg.2022.3209442;10.1109/tvcg.2011.185;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934433;10.1109/tvcg.2010.213;10.1109/tvcg.2010.177;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2352953;10.1109/tvcg.2016.2598498;10.1109/tvcg.2019.2934798;10.1109/tvcg.2007.70515 | Physicalization,tangible interfaces,3D printing,computational fabrication,design automation,network data | 0 | 94 | 261 | HM | ||
88 | Vis | 2023 | 2D, 2.5D, or 3D? An Exploratory Study on Multilayer Network Visualisations in Virtual Reality | 10.1109/tvcg.2023.3327402 | http://dx.doi.org/10.1109/TVCG.2023.3327402 | 469 | 479 | J | Relational information between different types of entities is often modelled by a multilayer network (MLN) – a network with subnetworks represented by layers. The layers of an MLN can be arranged in different ways in a visual representation, however, the impact of the arrangement on the readability of the network is an open question. Therefore, we studied this impact for several commonly occurring tasks related to MLN analysis. Additionally, layer arrangements with a dimensionality beyond 2D, which are common in this scenario, motivate the use of stereoscopic displays. We ran a human subject study utilising a Virtual Reality headset to evaluate 2D, 2.5D, and 3D layer arrangements. The study employs six analysis tasks that cover the spectrum of an MLN task taxonomy, from path finding and pattern identification to comparisons between and across layers. We found no clear overall winner. However, we explore the task-to-arrangement space and derive empirical-based recommendations on the effective use of 2D, 2.5D, and 3D layer arrangements for MLNs. | Stefan P. Feyer;Bruno Pinaud;Stephen G. Kobourov;Nicolas Brich;Michael Krone;Andreas Kerren;Michael Behrisch 0001;Falk Schreiber;Karsten Klein 0001 | Stefan P. Feyer;Bruno Pinaud;Stephen Kobourov;Nicolas Brich;Michael Krone;Andreas Kerren;Michael Behrisch;Falk Schreiber;Karsten Klein | Life Science Informatics, University of Konstanz, Germany;Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, France;University of Arizona, USA;University of Tübingen, Germany;University of Tübingen, Germany;Linköping University, Sweden;Utrecht University, NL;University of Konstanz, Germany;Life Science Informatics, University of Konstanz, Germany | 10.1109/infvis.2005.1532136;10.1109/tvcg.2016.2599107;10.1109/tvcg.2020.3030371;10.1109/tvcg.2021.3114863;10.1109/tvcg.2014.2346441;10.1109/tvcg.2020.3030427;10.1109/tvcg.2018.2865192 | Network,Guidelines,VisDesign,HumanQuant,CompSystems | 0 | 67 | 259 | |||
89 | Vis | 2023 | Average Estimates in Line Graphs Are Biased Toward Areas of Higher Variability | 10.1109/tvcg.2023.3326589 | http://dx.doi.org/10.1109/TVCG.2023.3326589 | 306 | 315 | J | We investigate variability overweighting, a previously undocumented bias in line graphs, where estimates of average value are biased toward areas of higher variability in that line. We found this effect across two preregistered experiments with 140 and 420 participants. These experiments also show that the bias is reduced when using a dot encoding of the same series. We can model the bias with the average of the data series and the average of the points drawn along the line. This bias might arise because higher variability leads to stronger weighting in the average calculation, either due to the longer line segments (even though those segments contain the same number of data values) or line segments with higher variability being otherwise more visually salient. Understanding and predicting this bias is important for visualization design guidelines, recommendation systems, and tool builders, as the bias can adversely affect estimates of averages and trends. | Dominik Moritz;Lace M. K. Padilla;Francis Nguyen;Steven L. Franconeri | Dominik Moritz;Lace M. Padilla;Francis Nguyen;Steven L. Franconeri | Carnegie Mellon University, USA;Northeastern University, USA;Northwestern University, USA;UBC, Canada | 10.1109/infvis.2005.1532136;10.1109/tvcg.2018.2865077;10.1109/tvcg.2009.131;10.1109/tvcg.2021.3114783;10.1109/tvcg.2010.162;10.1109/tvcg.2021.3114684;10.1109/tvcg.2019.2934784;10.1109/tvcg.2019.2934400;10.1109/tvcg.2021.3114865 | bias,lines graph,ensemble perception,average | 0 | 34 | 256 | HM | ||
90 | Vis | 2023 | ViMO - Visual Analysis of Neuronal Connectivity Motifs | 10.1109/tvcg.2023.3327388 | http://dx.doi.org/10.1109/TVCG.2023.3327388 | 748 | 758 | J | Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2–6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly-detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2–4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting. | Jakob Troidl;Simon Warchol;Jinhan Choi;Jordan Matelsky;Nagaraju Dhanyasi;Xueying Wang;Brock A. Wester;Donglai Wei 0001;Jeff W. Lichtman;Hanspeter Pfister;Johanna Beyer | Jakob Troidl;Simon Warchol;Jinhan Choi;Jordan Matelsky;Nagaraju Dhanyasi;Xueying Wang;Brock Wester;Donglai Wei;Jeff W. Lichtman;Hanspeter Pfister;Johanna Beyer | School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Department of Computer Science, Boston College, United States;Applied Physics Laboratory, Johns Hopkins University, USA;Department of Cellular & Molecular Biology, Harvard University, USA;Department of Cellular & Molecular Biology, Harvard University, USA;Applied Physics Laboratory, Johns Hopkins University, USA;Department of Computer Science, Boston College, United States;Department of Cellular & Molecular Biology, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA | 10.1109/tvcg.2014.2346312;10.1109/tvcg.2013.142;10.1109/tvcg.2017.2744278;10.1109/tvcg.2017.2744898;10.1109/tvcg.2012.213;10.1109/tvcg.2011.183 | Visual motif analysis,Focus&Context,Scientific visualization,Neuroscience,Connectomics | 0 | 62 | 254 | |||
91 | Vis | 2023 | OldVisOnline: Curating a Dataset of Historical Visualizations | 10.1109/tvcg.2023.3326908 | http://dx.doi.org/10.1109/TVCG.2023.3326908 | 551 | 561 | J | With the increasing adoption of digitization, more and more historical visualizations created hundreds of years ago are accessible in digital libraries online. It provides a unique opportunity for visualization and history research. Meanwhile, there is no large-scale digital collection dedicated to historical visualizations. The visualizations are scattered in various collections, which hinders retrieval. In this study, we curate the first large-scale dataset dedicated to historical visualizations. Our dataset comprises 13K historical visualization images with corresponding processed metadata from seven digital libraries. In curating the dataset, we propose a workflow to scrape and process heterogeneous metadata. We develop a semi-automatic labeling approach to distinguish visualizations from other artifacts. Our dataset can be accessed with OldVisOnline, a system we have built to browse and label historical visualizations. We discuss our vision of usage scenarios and research opportunities with our dataset, such as textual criticism for historical visualizations. Drawing upon our experience, we summarize recommendations for future efforts to improve our dataset. | Yu Zhang 0043;Ruike Jiang;Liwenhan Xie;Yuheng Zhao;Can Liu 0004;Tianhong Ding;Siming Chen 0001;Xiaoru Yuan | Yu Zhang;Ruike Jiang;Liwenhan Xie;Yuheng Zhao;Can Liu;Tianhong Ding;Siming Chen;Xiaoru Yuan | Department of Computer Science, University of Oxford, United Kingdom;Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, China;School of Data Science, Fudan University, China;Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, China;Fundamental Software Innovation Lab, Huawei Technologies, China;School of Data Science, Fudan University, China;Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, China | 10.1109/tvcg.2015.2467757;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2020.3030338;10.1109/tvcg.2012.277;10.1109/tvcg.2019.2934431;10.1109/tvcg.2016.2598827;10.1109/tvcg.2021.3114841;10.1109/tvcg.2015.2467091;10.1109/tvcg.2020.3030396;10.1109/tvcg.2016.2599211;10.1109/tvcg.2016.2598664 | Historical visualization,dataset,digital humanities,data labeling | 0 | 91 | 253 | X | ||
92 | Vis | 2023 | Enthusiastic and Grounded, Avoidant and Cautious: Understanding Public Receptivity to Data and Visualizations | 10.1109/tvcg.2023.3326917 | http://dx.doi.org/10.1109/TVCG.2023.3326917 | 1435 | 1445 | J | Despite an abundance of open data initiatives aimed to inform and empower “general” audiences, we still know little about the ways people outside of traditional data analysis communities experience and engage with public data and visualizations. To investigate this gap, we present results from an in-depth qualitative interview study with 19 participants from diverse ethnic, occupational, and demographic backgrounds. Our findings characterize a set of lived experiences with open data and visualizations in the domain of energy consumption, production, and transmission. This work exposes information receptivity — an individual's transient state of willingness or openness to receive information —as a blind spot for the data visualization community, complementary to but distinct from previous notions of data visualization literacy and engagement. We observed four clusters of receptivity responses to data- and visualization-based rhetoric: Information-Avoidant, Data-Cautious, Data-Enthusiastic, and Domain-Grounded. Based on our findings, we highlight research opportunities for the visualization community. This exploratory work identifies the existence of diverse receptivity responses, highlighting the need to consider audiences with varying levels of openness to new information. Our findings also suggest new approaches for improving the accessibility and inclusivity of open data and visualization initiatives targeted at broad audiences. A free copy of this paper and all supplemental materials are available at https://OSF.IO/MPQ32. | Helen Ai He;Jagoda Walny;Sonja Thoma;Sheelagh Carpendale;Wesley Willett | Helen Ai He;Jagoda Walny;Sonja Thoma;Sheelagh Carpendale;Wesley Willett | University of Calgary, Canada;University of Calgary, Canada;University of Calgary, Canada;Simon Fraser University, Canada;University of Calgary, Canada | 10.1109/tvcg.2013.234;10.1109/tvcg.2014.2346984;10.1109/tvcg.2010.164;10.1109/tvcg.2011.255;10.1109/tvcg.2014.2346292;10.1109/tvcg.2015.2467195;10.1109/tvcg.2016.2598920;10.1109/tvcg.2019.2934539;10.1109/tvcg.2014.2346419;10.1109/tvcg.2007.70541;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030476;10.1109/tvcg.2007.70577 | Diverse audiences,Information receptivity,Information visualization,Open data | 0 | 87 | 249 | HM | ||
93 | Vis | 2023 | Visualization According to Statisticians: An Interview Study on the Role of Visualization for Inferential Statistics | 10.1109/tvcg.2023.3326521 | http://dx.doi.org/10.1109/TVCG.2023.3326521 | 230 | 239 | J | Statisticians are not only one of the earliest professional adopters of data visualization, but also some of its most prolific users. Understanding how these professionals utilize visual representations in their analytic process may shed light on best practices for visual sensemaking. We present results from an interview study involving 18 professional statisticians (19.7 years average in the profession) on three aspects: (1) their use of visualization in their daily analytic work; (2) their mental models of inferential statistical processes; and (3) their design recommendations for how to best represent statistical inferences. Interview sessions consisted of discussing inferential statistics, eliciting participant sketches of suitable visual designs, and finally, a design intervention with our proposed visual designs. We analyzed interview transcripts using thematic analysis and open coding, deriving thematic codes on statistical mindset, analytic process, and analytic toolkit. The key findings for each aspect are as follows: (1) statisticians make extensive use of visualization during all phases of their work (and not just when reporting results); (2) their mental models of inferential methods tend to be mostly visually based; and (3) many statisticians abhor dichotomous thinking. The latter suggests that a multi-faceted visual display of inferential statistics that includes a visual indicator of analytically important effect sizes may help to balance the attributed epistemic power of traditional statistical testing with an awareness of the uncertainty of sensemaking. | Eric Newburger;Niklas Elmqvist | Eric Newburger;Niklas Elmqvist | U.S. Naval Academy, Annapolis, MD, USA;Aarhus University, Aarhus, Denmark | 10.1109/tvcg.2021.3114830;10.1109/tvcg.2016.2598862;10.1109/tvcg.2014.2346298;10.1109/tvcg.2018.2864907;10.1109/tvcg.2013.183;10.1109/tvcg.2010.164;10.1109/tvcg.2014.2346292;10.1109/tvcg.2007.70541;10.1109/tvcg.2010.161 | Inferential statistics,qualitative interview study,thematic coding,statistical visualization | 0 | 32 | 240 | |||
94 | Vis | 2023 | Mosaic: An Architecture for Scalable & Interoperable Data Views | 10.1109/tvcg.2023.3327189 | http://dx.doi.org/10.1109/TVCG.2023.3327189 | 436 | 446 | J | Mosaic is an architecture for greater scalability, extensibility, and interoperability of interactive data views. Mosaic decouples data processing from specification logic: clients publish their data needs as declarative queries that are then managed and automatically optimized by a coordinator that proxies access to a scalable data store. Mosaic generalizes Vega-Lite's selection abstraction to enable rich integration and linking across visualizations and components such as menus, text search, and tables. We demonstrate Mosaic's expressiveness, extensibility, and interoperability through examples that compose diverse visualization, interaction, and optimization techniques—many constructed using vgplot, a grammar of interactive graphics in which graphical marks act as Mosaic clients. To evaluate scalability, we present benchmark studies with order-of-magnitude performance improvements over existing web-based visualization systems—enabling flexible, real-time visual exploration of billion+ record datasets. We conclude by discussing Mosaic's potential as an open platform that bridges visualization languages, scalable visualization, and interactive data systems more broadly. | Jeffrey Heer;Dominik Moritz | Jeffrey Heer;Dominik Moritz | University of Washington, USA;Carnegie Mellon University, USA | 10.1109/tvcg.2020.3028891;10.1109/tvcg.2011.185;10.1109/tvcg.2013.179;10.1109/tvcg.2018.2865240;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2020.3030372;10.1109/infvis.2004.12;10.1109/tvcg.2015.2467191;10.1109/tvcg.2021.3114796 | Visualization,Interaction,Scalability,Grammar of Graphics,Software Architecture,Databases | 0 | 49 | 239 | |||
95 | Vis | 2023 | PROWIS: A Visual Approach for Building, Managing, and Analyzing Weather Simulation Ensembles at Runtime | 10.1109/tvcg.2023.3326514 | http://dx.doi.org/10.1109/TVCG.2023.3326514 | 738 | 747 | J | Weather forecasting is essential for decision-making and is usually performed using numerical modeling. Numerical weather models, in turn, are complex tools that require specialized training and laborious setup and are challenging even for weather experts. Moreover, weather simulations are data-intensive computations and may take hours to days to complete. When the simulation is finished, the experts face challenges analyzing its outputs, a large mass of spatiotemporal and multivariate data. From the simulation setup to the analysis of results, working with weather simulations involves several manual and error-prone steps. The complexity of the problem increases exponentially when the experts must deal with ensembles of simulations, a frequent task in their daily duties. To tackle these challenges, we propose ProWis: an interactive and provenance-oriented system to help weather experts build, manage, and analyze simulation ensembles at runtime. Our system follows a human-in-the-loop approach to enable the exploration of multiple atmospheric variables and weather scenarios. ProWis was built in close collaboration with weather experts, and we demonstrate its effectiveness by presenting two case studies of rainfall events in Brazil. | Carolina Veiga Ferreira de Souza;Suzanna Maria Bonnet;Daniel de Oliveira 0001;Márcio Cataldi;Fabio Miranda 0001;Marcos Lage | Carolina Veiga Ferreira de Souza;Suzanna Maria Bonnet;Daniel de Oliveira;Marcio Cataldi;Fabio Miranda;Marcos Lage | Universidade Federal Fluminense and the University of Illinois, USA;Universidade Federal do Rio de Janeiro, Brazil;Universidade Federal Fluminense, Brazil;Universidade Federal Fluminense, Brazil;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil | 10.1109/tvcg.2016.2598869;10.1109/tvcg.2010.181;10.1109/tvcg.2018.2865024;10.1109/tvcg.2016.2598830;10.1109/tvcg.2011.225 | Weather visualization,Ensemble visualization,Provenance management,WRF visual setup | 0 | 36 | 229 | HM | ||
96 | Vis | 2023 | Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering | 10.1109/tvcg.2023.3327193 | http://dx.doi.org/10.1109/TVCG.2023.3327193 | 1380 | 1390 | J | We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution. | Lukas Herzberger;Markus Hadwiger;Robert Krüger;Peter K. Sorger;Hanspeter Pfister;Eduard Gröller;Johanna Beyer | Lukas Herzberger;Markus Hadwiger;Robert Krüger;Peter Sorger;Hanspeter Pfister;Eduard Gröller;Johanna Beyer | TU Wien, Austria;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA;Harvard Medical School, USA;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA;TU Wien, Austria;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA | 10.1109/tvcg.2017.2744238;10.1109/tvcg.2012.240;10.1109/tvcg.2021.3114786;10.1109/tvcg.2019.2934547;10.1109/visual.2003.1250384;10.1109/tvcg.2014.2346458;10.1109/visual.2003.1250385 | Volume rendering,ray-guided rendering,large-scale data,out-of-core rendering,multi-resolution,multi-channel,web-based visualization | 0 | 53 | 228 | |||
97 | Vis | 2023 | Image or Information? Examining the Nature and Impact of Visualization Perceptual Classification | 10.1109/tvcg.2023.3326919 | http://dx.doi.org/10.1109/TVCG.2023.3326919 | 1030 | 1040 | J | How do people internalize visualizations: as images or information? In this study, we investigate the nature of internalization for visualizations (i.e., how the mind encodes visualizations in memory) and how memory encoding affects its retrieval. This exploratory work examines the influence of various design elements on a user's perception of a chart. Specifically, which design elements lead to perceptions of visualization as an image (aims to provide visual references, evoke emotions, express creativity, and inspire philosophic thought) or as information (aims to present complex data, information, or ideas concisely and promote analytical thinking)? Understanding how design elements contribute to viewers perceiving a visualization more as an image or information will help designers decide which elements to include to achieve their communication goals. For this study, we annotated 500 visualizations and analyzed the responses of 250 online participants, who rated the visualizations on a bilinear scale as ‘image’ or ‘information.’ We then conducted an in-person study ($n = 101$) using a free recall task to examine how the image/information ratings and design elements impacted memory. The results revealed several interesting findings: Image-rated visualizations were perceived as more aesthetically ‘appealing,’ ‘enjoyable,’ and ‘pleasing.’ Information-rated visualizations were perceived as less ‘difficult to understand’ and more aesthetically ‘likable’ and ‘nice,’ though participants expressed higher ‘positive’ sentiment when viewing image-rated visualizations and felt less ‘guided to a conclusion.’ The presence of axes and text annotations heavily influenced the likelihood of participants rating the visualization as ‘information.’ We also found different patterns among participants that were older. Importantly, we show that visualizations internalized as ‘images’ are less effective in conveying trends and messages, though they elicit a more positive emotional judgment, while ‘informative’ visualizations exhibit annotation focused recall and elicit a more positive design judgment. We discuss the implications of this dissociation between aesthetic pleasure and perceived ease of use in visualization design. | Anjana Arunkumar;Lace M. K. Padilla;Gi-Yeul Bae;Chris Bryan | Anjana Arunkumar;Lace Padilla;Gi-Yeul Bae;Chris Bryan | Arizona State University, USA;Northeastern University, USA;Arizona State University, USA;Arizona State University, USA | 10.1109/tvcg.2020.3030375;10.1109/infvis.2005.1532136;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2013.124;10.1109/tvcg.2022.3209390;10.1109/tvcg.2011.175;10.1109/tvcg.2011.255;10.1109/infvis.1998.729564;10.1109/tvcg.2016.2598620;10.1109/tvcg.2022.3209500;10.1109/tvcg.2012.221;10.1109/tvcg.2022.3209421;10.1109/tvcg.2022.3209383;10.1109/tvcg.2007.70577;10.1109/tvcg.2012.262;10.1109/tvcg.2021.3114823 | Information Visualization,Human-Centered Computing,Perception & Cognition,Takeaways | 0 | 84 | 227 | |||
98 | Vis | 2023 | Reclaiming the Horizon: Novel Visualization Designs for Time-Series Data with Large Value Ranges | 10.1109/tvcg.2023.3326576 | http://dx.doi.org/10.1109/TVCG.2023.3326576 | 1161 | 1171 | J | We introduce two novel visualization designs to support practitioners in performing identification and discrimination tasks on large value ranges (i.e., several orders of magnitude) in time-series data: (1) The order of magnitude horizon graph, which extends the classic horizon graph; and (2) the order of magnitude line chart, which adapts the log-line chart. These new visualization designs visualize large value ranges by explicitly splitting the mantissa $m$ and exponent $e$ of a value $v=m\cdot 10^{e}$. We evaluate our novel designs against the most relevant state-of-the-art visualizations in an empirical user study. It focuses on four main tasks commonly employed in the analysis of time-series and large value ranges visualization: identification, discrimination, estimation, and trend detection. For each task we analyze error, confidence, and response time. The new order of magnitude horizon graph performs better or equal to all other designs in identification, discrimination, and estimation tasks. Only for trend detection tasks, the more traditional horizon graphs reported better performance. Our results are domain-independent, only requiring time-series data with large value ranges. | Daniel Braun 0010;Rita Borgo;Max Sondag;Tatiana von Landesberger | Daniel Braun;Rita Borgo;Max Sondag;Tatiana von Landesberger | University of Cologne, Germany;King's College London, United Kingdom;University of Cologne, Germany;University of Cologne, Germany | 10.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346428;10.1109/tvcg.2013.234;10.1109/tvcg.2013.124;10.1109/tvcg.2018.2865077;10.1109/infvis.2000.885098;10.1109/tvcg.2010.162;10.1109/infvis.2005.1532144;10.1109/tvcg.2012.253 | Visualization techniques,time-series,design study,orders of magnitude,logarithmic scale | 0 | 47 | 225 | |||
99 | Vis | 2023 | Dr. KID: Direct Remeshing and K-Set Isometric Decomposition for Scalable Physicalization of Organic Shapes | 10.1109/tvcg.2023.3326595 | http://dx.doi.org/10.1109/TVCG.2023.3326595 | 705 | 715 | J | Dr. KID is an algorithm that uses isometric decomposition for the physicalization of potato-shaped organic models in a puzzle fashion. The algorithm begins with creating a simple, regular triangular surface mesh of organic shapes, followed by iterative K-means clustering and remeshing. For clustering, we need similarity between triangles (segments) which is defined as a distance function. The distance function maps each triangle's shape to a single point in the virtual 3D space. Thus, the distance between the triangles indicates their degree of dissimilarity. K-means clustering uses this distance and sorts segments into $k$ classes. After this, remeshing is applied to minimize the distance between triangles within the same cluster by making their shapes identical. Clustering and remeshing are repeated until the distance between triangles in the same cluster reaches an acceptable threshold. We adopt a curvature-aware strategy to determine the surface thickness and finalize puzzle pieces for 3D printing. Identical hinges and holes are created for assembling the puzzle components. For smoother outcomes, we use triangle subdivision along with curvature-aware clustering, generating curved triangular patches for 3D printing. Our algorithm was evaluated using various models, and the 3D-printed results were analyzed. Findings indicate that our algorithm performs reliably on target organic shapes with minimal loss of input geometry. | Dawar Khan;Ciril Bohak;Ivan Viola | Dawar Khan;Ciril Bohak;Ivan Viola | Visual Computing Center, King Abdullah University of Science and Technology, Saudi Arabia;Visual Computing Center, King Abdullah University of Science and Technology, Saudi Arabia;Visual Computing Center, King Abdullah University of Science and Technology, Saudi Arabia | 10.1109/tvcg.2020.3030415 | Physicalization,Physical visualization,3D printing,Isometric decomposition,Direct remeshing,Biological structures,Intracellular compartments | 0 | 53 | 225 | HM | ||
100 | Vis | 2023 | A General Framework for Progressive Data Compression and Retrieval | 10.1109/tvcg.2023.3327186 | http://dx.doi.org/10.1109/TVCG.2023.3327186 | 1358 | 1368 | J | In scientific simulations, observations, and experiments, the transfer of data to and from disk and across networks has become a major bottleneck for data analysis and visualization. Compression techniques have been employed to tackle this challenge, but traditional lossy methods often demand conservative error tolerances to meet the numerical accuracy requirements of both anticipated and unknown data analysis tasks. Progressive data compression and retrieval has emerged as a promising solution, where each analysis task dictates its own accuracy needs. However, few analysis algorithms inherently support progressive data processing, and adapting compression techniques, file formats, client/server frameworks, and APIs to support progressivity can be challenging. This paper presents a framework that enables progressive-precision data queries for any data compressor or numerical representation. Our strategy hinges on a multi-component representation that successively reduces the error between the original and compressed field, allowing each field in the progressive sequence to be expressed as a partial sum of components. We have implemented this approach with four established scientific data compressors and assessed its effectiveness using real-world data sets from the SDRBench collection. The results show that our framework competes in accuracy with the standalone compressors it is based upon. Additionally, (de)compression time is proportional to the number of components requested by the user. Finally, our framework allows for fully lossless compression using lossy compressors when a sufficient number of components are employed. | Victor Antonio Paludetto Magri;Peter Lindstrom 0001 | Victor A. P. Magri;Peter Lindstrom | Lawrence Livermore National Laboratory, USA;Lawrence Livermore National Laboratory, USA | 10.1109/tvcg.2007.70516;10.1109/tvcg.2018.2864853;10.1109/tvcg.2020.3030381;10.1109/tvcg.2014.2346458;10.1109/tvcg.2006.143;10.1109/visual.2003.1250385 | Lossy to lossless compression,progressive precision,multi-component expansion,floating-point data | 0 | 53 | 224 |