1
Conference
Year
Title
DOI
Link
FirstPage
LastPage
PaperType
Abstract
AuthorNames-Deduped
AuthorNames
AuthorAffiliation
InternalReferences
AuthorKeywords
AminerCitationCount
CitationCount_CrossRef
PubsCited_CrossRef
Downloads_Xplore
Award
GraphicsReplicabilityStamp
2
InfoVis2011
D³ Data-Driven Documents
10.1109/tvcg.2011.185
http://dx.doi.org/10.1109/TVCG.2011.185
2301 2309 J
Data-Driven Documents (D3) is a novel representation-transparent approach to visualization for the web. Rather than hide the underlying scenegraph within a toolkit-specific abstraction, D3 enables direct inspection and manipulation of a native representation: the standard document object model (DOM). With D3, designers selectively bind input data to arbitrary document elements, applying dynamic transforms to both generate and modify content. We show how representational transparency improves expressiveness and better integrates with developer tools than prior approaches, while offering comparable notational efficiency and retaining powerful declarative components. Immediate evaluation of operators further simplifies debugging and allows iterative development. Additionally, we demonstrate how D3 transforms naturally enable animation and interaction with dramatic performance improvements over intermediate representations.
Michael Bostock;Vadim Ogievetsky;Jeffrey Heer
Michael Bostock;Vadim Ogievetsky;Jeffrey Heer
Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA
10.1109/infvis.2000.885091;10.1109/infvis.2000.885098;10.1109/tvcg.2010.144;10.1109/tvcg.2009.174;10.1109/infvis.2004.12;10.1109/tvcg.2006.178;10.1109/infvis.2005.1532122;10.1109/tvcg.2008.166;10.1109/infvis.2004.64;10.1109/tvcg.2007.70539;10.1109/infvis.2000.885091
Information visualization, user interfaces, toolkits, 2D graphics
379521784111668TT
3
InfoVis2014
UpSet: Visualization of Intersecting Sets
10.1109/tvcg.2014.2346248
http://dx.doi.org/10.1109/TVCG.2014.2346248
1983 1992 J
Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains.
Alexander Lex;Nils Gehlenborg;Hendrik Strobelt;Romain Vuillemot;Hanspeter Pfister
Alexander Lex;Nils Gehlenborg;Hendrik Strobelt;Romain Vuillemot;Hanspeter Pfister
Hendrik Strobelt and Hanspeter Pfister are with Harvard University.;Harvard Medical School;Hendrik Strobelt and Hanspeter Pfister are with Harvard University.;Romain Vuillemot is with Harvard University;Hendrik Strobelt and Hanspeter Pfister are with Harvard University.
10.1109/tvcg.2008.144;10.1109/tvcg.2013.184;10.1109/tvcg.2011.186;10.1109/tvcg.2010.210;10.1109/tvcg.2009.122;10.1109/tvcg.2011.185;10.1109/tvcg.2011.183;10.1109/tvcg.2008.144
Sets, set visualization, sets intersections, set attributes, set relationships, multidimensional data
124315612926170TT
4
InfoVis2010
Narrative Visualization: Telling Stories with Data
10.1109/tvcg.2010.179
http://dx.doi.org/10.1109/TVCG.2010.179
1139 1148 J
Data visualization is regularly promoted for its ability to reveal stories within data, yet these “data stories” differ in important ways from traditional forms of storytelling. Storytellers, especially online journalists, have increasingly been integrating visualizations into their narratives, in some cases allowing the visualization to function in place of a written story. In this paper, we systematically review the design space of this emerging class of visualizations. Drawing on case studies from news media to visualization research, we identify distinct genres of narrative visualization. We characterize these design differences, together with interactivity and messaging, in terms of the balance between the narrative flow intended by the author (imposed by graphical elements and the interface) and story discovery on the part of the reader (often through interactive exploration). Our framework suggests design strategies for narrative visualization, including promising under-explored approaches to journalistic storytelling and educational media.
Edward Segel;Jeffrey Heer
Edward Segel;Jeffrey Heer
Stanford University, Stanford, CA, USA;Stanford University, Stanford, CA, USA
10.1109/tvcg.2007.70577;10.1109/tvcg.2007.70539;10.1109/tvcg.2008.137;10.1109/vast.2007.4388992;10.1109/tvcg.2007.70577
Narrative visualization, storytelling, design methods, case study, journalism, social data analysis
14087262731097TT
5
InfoVis2006
Hierarchical Edge Bundles: Visualization of Adjacency Relations in Hierarchical Data
10.1109/tvcg.2006.147
http://dx.doi.org/10.1109/TVCG.2006.147
741 748 J
A compound graph is a frequently encountered type of data set. Relations are given between items, and a hierarchy is defined on the items as well. We present a new method for visualizing such compound graphs. Our approach is based on visually bundling the adjacency edges, i.e., non-hierarchical edges, together. We realize this as follows. We assume that the hierarchy is shown via a standard tree visualization method. Next, we bend each adjacency edge, modeled as a B-spline curve, toward the polyline defined by the path via the inclusion edges from one node to another. This hierarchical bundling reduces visual clutter and also visualizes implicit adjacency edges between parent nodes that are the result of explicit adjacency edges between their respective child nodes. Furthermore, hierarchical edge bundling is a generic method which can be used in conjunction with existing tree visualization techniques. We illustrate our technique by providing example visualizations and discuss the results based on an informal evaluation provided by potential users of such visualizations
Danny Holten
Danny Holten
Technische Universiteit Eindhoven, Netherlands
10.1109/infvis.2004.1;10.1109/infvis.2003.1249008;10.1109/infvis.2005.1532150;10.1109/infvis.2003.1249030;10.1109/infvis.2005.1532129;10.1109/infvis.1997.636718;10.1109/infvis.2002.1173152;10.1109/infvis.2004.1
Network visualization, edge bundling, edge aggregation, edge concentration, curves, graph visualization, tree visualization, node-link diagrams, hierarchies, treemaps
1395679337861TT;BP
6
InfoVis2007
Toward a Deeper Understanding of the Role of Interaction in Information Visualization
10.1109/tvcg.2007.70515
http://dx.doi.org/10.1109/TVCG.2007.70515
1224 1231 J
Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract/Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction.
Ji Soo Yi;Youn ah Kang;John T. Stasko;Julie A. Jacko
Ji Soo Yi;Youn ah Kang;John Stasko
Health Systems Institute, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA and The Wallace H. Coulter Department of Biomedical Engineering, Emory University
10.1109/visual.1994.346302;10.1109/infvis.2005.1532136;10.1109/infvis.1996.559213;10.1109/visual.1991.175794;10.1109/infvis.2005.1532126;10.1109/infvis.2000.885091;10.1109/infvis.1999.801860;10.1109/infvis.2000.885086;10.1109/visual.1994.346302
Information visualization, interaction, interaction techniques, taxonomy, visual analytics
11496745612777
7
InfoVis2009
A Nested Model for Visualization Design and Validation
10.1109/tvcg.2009.111
http://dx.doi.org/10.1109/TVCG.2009.111
921 928 J
We present a nested model for the visualization design and validation with four layers: characterize the task and data in the vocabulary of the problem domain, abstract into operations and data types, design visual encoding and interaction techniques, and create algorithms to execute techniques efficiently. The output from a level above is input to the level below, bringing attention to the design challenge that an upstream error inevitably cascades to all downstream levels. This model provides prescriptive guidance for determining appropriate evaluation approaches by identifying threats to validity unique to each level. We also provide three recommendations motivated by this model: authors should distinguish between these levels when claiming contributions at more than one of them, authors should explicitly state upstream assumptions at levels above the focus of a paper, and visualization venues should accept more papers on domain characterization.
Tamara Munzner
Tamara Munzner
University of British Columbia, Canada
10.1109/vast.2007.4389008;10.1109/infvis.2005.1532136;10.1109/tvcg.2008.117;10.1109/tvcg.2006.160;10.1109/visual.1998.745289;10.1109/tvcg.2007.70515;10.1109/tvcg.2008.109;10.1109/visual.1992.235203;10.1109/infvis.2004.59;10.1109/infvis.2005.1532124;10.1109/infvis.1998.729560;10.1109/infvis.2004.10;10.1109/tvcg.2008.125;10.1109/infvis.1997.636792;10.1109/infvis.2005.1532150;10.1109/visual.1990.146375;10.1109/vast.2007.4389008
Models, frameworks, design, evaluation
10355925310257TT
8
InfoVis2016
Vega-Lite: A Grammar of Interactive Graphics
10.1109/tvcg.2016.2599030
http://dx.doi.org/10.1109/TVCG.2016.2599030
341 350 J
We present Vega-Lite, a high-level grammar that enables rapid specification of interactive data visualizations. Vega-Lite combines a traditional grammar of graphics, providing visual encoding rules and a composition algebra for layered and multi-view displays, with a novel grammar of interaction. Users specify interactive semantics by composing selections. In Vega-Lite, a selection is an abstraction that defines input event processing, points of interest, and a predicate function for inclusion testing. Selections parameterize visual encodings by serving as input data, defining scale extents, or by driving conditional logic. The Vega-Lite compiler automatically synthesizes requisite data flow and event handling logic, which users can override for further customization. In contrast to existing reactive specifications, Vega-Lite selections decompose an interaction design into concise, enumerable semantic units. We evaluate Vega-Lite through a range of examples, demonstrating succinct specification of both customized interaction methods and common techniques such as panning, zooming, and linked selection.
Arvind Satyanarayan;Dominik Moritz;Kanit Wongsuphasawat;Jeffrey Heer
Arvind Satyanarayan;Dominik Moritz;Kanit Wongsuphasawat;Jeffrey Heer
Stanford University;University of Washington;University of Washington;University of Washington
10.1109/tvcg.2015.2467091;10.1109/tvcg.2009.174;10.1109/tvcg.2015.2467191;10.1109/tvcg.2014.2346260;10.1109/infvis.2000.885086;10.1109/tvcg.2007.70515;10.1109/tvcg.2011.185;10.1109/tvcg.2015.2467091
Information visualization;interaction;systems;toolkits;declarative specification
641519317008BP
9
InfoVis2012
Design Study Methodology: Reflections from the Trenches and the Stacks
10.1109/tvcg.2012.213
http://dx.doi.org/10.1109/TVCG.2012.213
2431 2440 J
Design studies are an increasingly popular form of problem-driven visualization research, yet there is little guidance available about how to do them effectively. In this paper we reflect on our combined experience of conducting twenty-one design studies, as well as reading and reviewing many more, and on an extensive literature review of other field work methods and methodologies. Based on this foundation we provide definitions, propose a methodological framework, and provide practical guidance for conducting design studies. We define a design study as a project in which visualization researchers analyze a specific real-world problem faced by domain experts, design a visualization system that supports solving this problem, validate the design, and reflect about lessons learned in order to refine visualization design guidelines. We characterize two axes - a task clarity axis from fuzzy to crisp and an information location axis from the domain expert's head to the computer - and use these axes to reason about design study contributions, their suitability, and uniqueness from other approaches. The proposed methodological framework consists of 9 stages: learn, winnow, cast, discover, design, implement, deploy, reflect, and write. For each stage we provide practical guidance and outline potential pitfalls. We also conducted an extensive literature survey of related methodological approaches that involve a significant amount of qualitative field work, and compare design study methodology to that of ethnography, grounded theory, and action research.
Michael Sedlmair;Miriah D. Meyer;Tamara Munzner
Michael Sedlmair;Miriah Meyer;Tamara Munzner
University of British Columbia, Canada;University of Utah, USA;University of British Columbia, Canada
10.1109/infvis.1999.801869;10.1109/infvis.1996.559226;10.1109/tvcg.2008.117;10.1109/tvcg.2009.152;10.1109/tvcg.2010.206;10.1109/infvis.2005.1532136;10.1109/tvcg.2010.193;10.1109/vast.2011.6102443;10.1109/tvcg.2011.174;10.1109/vast.2007.4389008;10.1109/tvcg.2009.116;10.1109/tvcg.2011.192;10.1109/tvcg.2009.128;10.1109/infvis.2003.1249023;10.1109/tvcg.2009.167;10.1109/tvcg.2009.111;10.1109/tvcg.2011.209;10.1109/tvcg.2010.137;10.1109/infvis.1999.801869
Design study, methodology, visualization, framework
8285139511892HM
10
InfoVis2013
A Multi-Level Typology of Abstract Visualization Tasks
10.1109/tvcg.2013.124
http://dx.doi.org/10.1109/TVCG.2013.124
2376 2385 J
The considerable previous work characterizing visualization usage has focused on low-level tasks or interactions and high-level tasks, leaving a gap between them that is not addressed. This gap leads to a lack of distinction between the ends and means of a task, limiting the potential for rigorous analysis. We contribute a multi-level typology of visualization tasks to address this gap, distinguishing why and how a visualization task is performed, as well as what the task inputs and outputs are. Our typology allows complex tasks to be expressed as sequences of interdependent simpler tasks, resulting in concise and flexible descriptions for tasks of varying complexity and scope. It provides abstract rather than domain-specific descriptions of tasks, so that useful comparisons can be made between visualization systems targeted at different application domains. This descriptive power supports a level of analysis required for the generation of new designs, by guiding the translation of domain-specific problems into abstract tasks, and for the qualitative evaluation of visualization usage. We demonstrate the benefits of our approach in a detailed case study, comparing task descriptions from our typology to those derived from related work. We also discuss the similarities and differences between our typology and over two dozen extant classification systems and theoretical frameworks from the literatures of visualization, human-computer interaction, information retrieval, communications, and cartography.
Matthew Brehmer;Tamara Munzner
Matthew Brehmer;Tamara Munzner
University of British Columbia, Canada;University of British Columbia, Canada
10.1109/tvcg.2007.70541;10.1109/tvcg.2012.219;10.1109/infvis.1996.559213;10.1109/tvcg.2012.213;10.1109/tvcg.2012.273;10.1109/infvis.2005.1532136;10.1109/tvcg.2010.177;10.1109/tvcg.2007.70539;10.1109/infvis.2002.1173148;10.1109/tvcg.2007.70515;10.1109/tvcg.2012.204;10.1109/tvcg.2009.111;10.1109/tvcg.2008.109;10.1109/visual.1992.235203;10.1109/infvis.2004.59;10.1109/vast.2008.4677365;10.1109/vast.2011.6102438;10.1109/tvcg.2008.121;10.1109/tvcg.2008.137;10.1109/infvis.1998.729560;10.1109/infvis.2004.10;10.1109/tvcg.2012.252;10.1109/visual.1990.146375;10.1109/tvcg.2007.70541
Typology, visualization models, task and requirements analysis, qualitative evaluation
6965098410497TT
11
InfoVis2007
ManyEyes: a Site for Visualization at Internet Scale
10.1109/tvcg.2007.70577
http://dx.doi.org/10.1109/TVCG.2007.70577
1121 1128 J
We describe the design and deployment of Many Eyes, a public Web site where users may upload data, create interactive visualizations, and carry on discussions. The goal of the site is to support collaboration around visualizations at a large scale by fostering a social style of data analysis in which visualizations not only serve as a discovery tool for individuals but also as a medium to spur discussion among users. To support this goal, the site includes novel mechanisms for end-user creation of visualizations and asynchronous collaboration around those visualizations. In addition to describing these technologies, we provide a preliminary report on the activity of our users.
Fernanda B. Viégas;Martin Wattenberg;Frank van Ham;Jesse Kriss;Matthew M. McKeon
Fernanda B. Viegas;Martin Wattenberg;Frank van Ham;Jesse Kriss;Matt McKeon
IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland
10.1109/infvis.2005.1532122;10.1109/visual.1991.175820;10.1109/infvis.2003.1249007;10.1109/infvis.2004.13
Visualization, World Wide Web, Social Software, Social Data Analysis, Communication-Minded Visualization
1011468303676TT
12
SciVis2014
Fixed-Rate Compressed Floating-Point Arrays
10.1109/tvcg.2014.2346458
http://dx.doi.org/10.1109/TVCG.2014.2346458
2674 2683 J
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4<sup>d</sup> values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
Peter Lindstrom 0001
Peter Lindstrom
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
10.1109/tvcg.2006.143;10.1109/visual.2001.964531;10.1109/tvcg.2006.186;10.1109/visual.2001.964520;10.1109/visual.2003.1250385;10.1109/tvcg.2012.209;10.1109/tvcg.2007.70516;10.1109/tvcg.2012.194;10.1109/visual.1996.568138;10.1109/tvcg.2006.143
Data compression, floating-point arrays, orthogonal block transform, embedded coding
481457504152
13
Vis1991
Tree-maps: a space-filling approach to the visualization of hierarchical information structures
10.1109/visual.1991.175815
http://dx.doi.org/10.1109/VISUAL.1991.175815
284 291 C
A method for visualizing hierarchically structured information is described. The tree-map visualization technique makes 100% use of the available display space, mapping the full hierarchy onto a rectangular region in a space-filling manner. This efficient use of space allows very large hierarchies to be displayed in their entirety and facilitates the presentation of semantic information. Tree-maps can depict both the structure and content of the hierarchy. However, the approach is best suited to hierarchies in which the content of the leaf nodes and the structure of the hierarchy are of primary importance, and the content information associated with internal nodes is largely derived from their children.
Brian Johnson;Ben Shneiderman
B. Johnson;B. Shneiderman
Department of Computer Science & Human-Computer Interaction Laboratory, University of Maryland, College Park, MD, USA;Department of Computer Science & Human-Computer Interaction Laboratory, University of Maryland, College Park, MD, USA
2238418232443
14
Vis1990
Parallel coordinates: a tool for visualizing multi-dimensional geometry
10.1109/visual.1990.146402
http://dx.doi.org/10.1109/VISUAL.1990.146402
361 378 C
A methodology for visualizing analytic and synthetic geometry in R^N is presented. It is based on a system of parallel coordinates which induces a nonprojective mapping between N-dimensional and two-dimensional sets. Hypersurfaces are represented by their planar images which have some geometrical properties analogous to the properties of the hypersurface that they represent. A point ↔ line duality when N=2 generalizes to lines and hyperplanes, enabling the representation of polyhedra in R^N. The representation of a class of convex and non-convex hypersurfaces is discussed, together with an algorithm for constructing and displaying any interior point. The display shows some local properties of the hypersurface and provides information on the point's proximity to the boundary. Applications are discussed.
Alfred Inselberg;Bernard Dimsdale
A. Inselberg;B. Dimsdale
IBM Scientific Center, Los Angeles, CA, USA and Department of Computer Sciences, University of Southern California, Los Angeles, CA, USA;IBM Scientific Center, Los Angeles, CA, USA
1746407471296
15
InfoVis2013
What Makes a Visualization Memorable?
10.1109/tvcg.2013.234
http://dx.doi.org/10.1109/TVCG.2013.234
2306 2315 J
An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.
Michelle Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister
Michelle A. Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister
School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA
10.1109/tvcg.2012.221;10.1109/infvis.2004.59;10.1109/tvcg.2012.197;10.1109/tvcg.2012.245;10.1109/tvcg.2011.175;10.1109/tvcg.2012.221
Visualization taxonomy, information visualization, memorability
6353873914261
16
InfoVis2007
NodeTrix: a Hybrid Visualization of Social Networks
10.1109/tvcg.2007.70582
http://dx.doi.org/10.1109/TVCG.2007.70582
1302 1309 J
The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.
Nathalie Henry;Jean-Daniel Fekete;Michael J. McGuffin
Nathalie Henry;Jean-Daniel Fekete;Michael J. McGuffin
University of Sydney, Australia and INRIA Futurs, University of Paris-Sud 11, France;INRIA Futurs and Laboratory RI UMR CNRS 5800, France;Ontario Cancer Institute, University of Toronto, Canada
10.1109/tvcg.2006.160;10.1109/vast.2006.261426;10.1109/infvis.2005.1532126;10.1109/infvis.2004.46;10.1109/tvcg.2006.193;10.1109/infvis.2005.1532129;10.1109/tvcg.2006.166;10.1109/tvcg.2006.147;10.1109/infvis.2004.64;10.1109/infvis.2003.1249011;10.1109/tvcg.2006.160
Network visualization, Matrix visualization, Hybrid visualization, Aggregation, Interaction
675383354341
17
VAST2013
Visual Exploration of Big Spatio-Temporal Urban Data: A Study of New York City Taxi Trips
10.1109/tvcg.2013.226
http://dx.doi.org/10.1109/TVCG.2013.226
2149 2158 J
As increasing volumes of urban data are captured and become available, new opportunities arise for data-driven analysis that can lead to improvements in the lives of citizens through evidence-based decision making and policies. In this paper, we focus on a particularly important urban data set: taxi trips. Taxis are valuable sensors and information associated with taxi trips can provide unprecedented insight into many different aspects of city life, from economic activity and human behavior to mobility patterns. But analyzing these data presents many challenges. The data are complex, containing geographical and temporal components in addition to multiple variables associated with each trip. Consequently, it is hard to specify exploratory queries and to perform comparative analyses (e.g., compare different regions over time). This problem is compounded due to the size of the data-there are on average 500,000 taxi trips each day in NYC. We propose a new model that allows users to visually query taxi trips. Besides standard analytics queries, the model supports origin-destination queries that enable the study of mobility across the city. We show that this model is able to express a wide range of spatio-temporal queries, and it is also flexible in that not only can queries be composed but also different aggregations and visual representations can be applied, allowing users to explore and compare results. We have built a scalable system that implements this model which supports interactive response times; makes use of an adaptive level-of-detail rendering strategy to generate clutter-free visualization for large results; and shows hidden details to the users in a summary through the use of overlay heat maps. We present a series of case studies motivated by traffic engineers and economists that show how our model and system enable domain experts to perform tasks that were previously unattainable for them.
Nivan Ferreira;Jorge Poco;Huy T. Vo;Juliana Freire;Cláudio T. Silva
Nivan Ferreira;Jorge Poco;Huy T. Vo;Juliana Freire;Cláudio T. Silva
Polytechnic Institute of New York University, USA;Polytechnic Institute of New York University, USA;CUSP, New York University, USA;Polytechnic Institute of New York University, USA;Polytechnic Institute of New York University, USA
10.1109/infvis.2004.12;10.1109/vast.2008.4677356;10.1109/vast.2011.6102454;10.1109/tvcg.2007.70535;10.1109/vast.2010.5652467;10.1109/infvis.2005.1532150;10.1109/vast.2008.4677370;10.1109/infvis.2000.885086;10.1109/infvis.2004.12
Spatio-temporal queries, urban data, taxi movement data, visual exploration
600380409925TT
18
VAST2016
Towards Better Analysis of Deep Convolutional Neural Networks
10.1109/tvcg.2016.2598831
http://dx.doi.org/10.1109/TVCG.2016.2598831
91 100 J
Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.
Mengchen Liu;Jiaxin Shi;Zhen Li 0044;Chongxuan Li;Jun Zhu 0001;Shixia Liu
Mengchen Liu;Jiaxin Shi;Zhen Li;Chongxuan Li;Jun Zhu;Shixia Liu
School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;School of Software and TNList, Tsinghua University
10.1109/tvcg.2015.2468151;10.1109/tvcg.2015.2467554;10.1109/tvcg.2015.2467813;10.1109/tvcg.2010.132;10.1109/tvcg.2008.135;10.1109/tvcg.2014.2346919;10.1109/tvcg.2011.239;10.1109/visual.1991.175815;10.1109/visual.2005.1532820;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346433;10.1109/tvcg.2015.2468151
Deep convolutional neural networks;rectangle packing;matrix reordering;edge bundling;biclustering
460363608256
19
Vis2002
Efficient simplification of point-sampled surfaces
10.1109/visual.2002.1183771
http://dx.doi.org/10.1109/VISUAL.2002.1183771
163 170 C
We introduce, analyze and quantitatively compare a number of surface simplification methods for point-sampled geometry. We have implemented incremental and hierarchical clustering, iterative simplification, and particle simulation algorithms to create approximations of point-based models with lower sampling density. All these methods work directly on the point cloud, requiring no intermediate tesselation. We show how local variation estimation and quadric error metrics can be employed to diminish the approximation error and concentrate more samples in regions of high curvature. To compare the quality of the simplified surfaces, we have designed a new method for computing numerical and visual error estimates for point-sampled surfaces. Our algorithms are fast, easy to implement, and create high-quality surface approximations, clearly demonstrating the effectiveness of point-based surface simplification.
Mark Pauly;Markus H. Gross;Leif Kobbelt
M. Pauly;M. Gross;L.P. Kobbelt
ETH Zurich, Switzerland;ETH Zurich, Switzerland;RWTH Aachen, Germany
10.1109/visual.2001.964503;10.1109/visual.1999.809896;10.1109/visual.2001.964502;10.1109/visual.2001.964489;10.1109/visual.2000.885722;10.1109/visual.2001.964503
1370340324117
20
Vis2006
Fast and Efficient Compression of Floating-Point Data
10.1109/tvcg.2006.143
http://dx.doi.org/10.1109/TVCG.2006.143
1245 1250 J
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data needed to be transfered. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data
Peter Lindstrom 0001;Martin Isenburg
Peter Lindstrom;Martin Isenburg
Lawrence Livemore National Laboratory, USA;University of California, Berkeley, USA
10.1109/visual.1999.809868;10.1109/visual.2000.885711;10.1109/visual.2002.1183768;10.1109/visual.1996.568138;10.1109/visual.1999.809868
High throughput, lossless compression, file compaction for I/O efficiency, fast entropy coding, range coder, predictive coding, large scale simulation and visualization
475337304202
21
InfoVis2007
Show Me: Automatic Presentation for Visual Analysis
10.1109/tvcg.2007.70594
http://dx.doi.org/10.1109/TVCG.2007.70594
1137 1144 J
This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays). A key research issue for the commercial application of automatic presentation is the user experience, which must support the flow of visual analysis. User experience has not been the focus of previous research on automatic presentation. The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields. Although the use of these defaults and commands is optional, user interface logs indicate that Show Me is used by commercial users.
Jock D. Mackinlay;Pat Hanrahan;Chris Stolte
Jock Mackinlay;Pat Hanrahan;Chris Stolte
Tableau Software, USA;Stanford University and Tableau Software, USA;Tableau Software, USA
10.1109/infvis.2000.885086
Automatic presentation, visual analysis, graphic design, best practices, data visualization, small multiples
570329124242
22
Vis2006
Ambient Occlusion and Edge Cueing for Enhancing Real Time Molecular Visualization
10.1109/tvcg.2006.115
http://dx.doi.org/10.1109/TVCG.2006.115
1237 1244 J
The paper presents a set of combined techniques to enhance the real-time visualization of simple or complex molecules (up to order of 10<sup>6</sup> atoms) in space-fill mode. The proposed approach includes an innovative technique for efficient computation and storage of ambient occlusion terms, a small set of GPU accelerated procedural impostors for space-fill and ball-and-stick rendering, and novel edge-cueing techniques. As a result, the user's understanding of the three-dimensional structure under inspection is strongly increased (even for still images), while the rendering still occurs in real time.
Marco Tarini;Paolo Cignoni;Claudio Montani
Marco Tarini;Paolo Cignoni;Claudio Montani
Università dell'Insubria, Varese, Italy;I. S. T. I.-C. N. R, Pisa, Italy;I. S. T. I.-C. N. R, Pisa, Italy
10.1109/visual.2000.885694;10.1109/visual.2003.1250394;10.1109/visual.2000.885694
502320321383TT
23
InfoVis2007
Animated Transitions in Statistical Data Graphics
10.1109/tvcg.2007.70539
http://dx.doi.org/10.1109/TVCG.2007.70539
1240 1247 J
In this paper we investigate the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots. We extend theoretical models of data graphics to include such transitions, introducing a taxonomy of transition types. We then propose design principles for creating effective transitions and illustrate the application of these principles in <i>DynaVis</i>, a visualization system featuring animated data graphics. Two controlled experiments were conducted to assess the efficacy of various transition types, finding that animated transitions can significantly improve graphical perception.
Jeffrey Heer;George G. Robertson
Jeffrey Heer;George Robertson
Computer Science Division, University of California, Berkeley, USA;Microsoft Research Limited, USA
10.1109/infvis.1999.801854;10.1109/infvis.2001.963279;10.1109/infvis.2002.1173148;10.1109/infvis.1999.801854
Statistical data graphics, animation, transitions, information visualization, design, experiment
595310273093
24
InfoVis2007
A Taxonomy of Clutter Reduction for Information Visualisation
10.1109/tvcg.2007.70535
http://dx.doi.org/10.1109/TVCG.2007.70535
1216 1223 J
Information visualisation is about gaining insight into data through a visual representation. This data is often multivariate and increasingly, the datasets are very large. To help us explore all this data, numerous visualisation applications, both commercial and research prototypes, have been designed using a variety of techniques and algorithms. Whether they are dedicated to geo-spatial data or skewed hierarchical data, most of the visualisations need to adopt strategies for dealing with overcrowded displays, brought about by too much data to fit in too small a display space. This paper analyses a large number of these clutter reduction methods, classifying them both in terms of how they deal with clutter reduction and more importantly, in terms of the benefits and losses. The aim of the resulting taxonomy is to act as a guide to match techniques to problems where different criteria may have different importance, and more importantly as a means to critique and hence develop existing and new techniques.
Geoffrey P. Ellis;Alan J. Dix
Geoffrey Ellis;Alan Dix
Lancaster University, UK;Lancaster University, UK
10.1109/infvis.2003.1249018;10.1109/infvis.2000.885092;10.1109/tvcg.2006.138;10.1109/visual.2005.1532819;10.1109/infvis.2003.1249008;10.1109/visual.1999.809866;10.1109/infvis.2000.885091;10.1109/visual.1998.745301;10.1109/infvis.1997.636789;10.1109/infvis.2002.1173156;10.1109/infvis.2003.1249019;10.1109/infvis.1997.636792;10.1109/infvis.1995.528685;10.1109/infvis.2004.15;10.1109/tvcg.2006.170;10.1109/infvis.2003.1249018
Clutter reduction, information visualisation, occlusion, large datasets, taxonomy
523295544028
25
InfoVis2011
Visualization Rhetoric: Framing Effects in Narrative Visualization
10.1109/tvcg.2011.255
http://dx.doi.org/10.1109/TVCG.2011.255
2231 2240 J
Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.
Jessica Hullman;Nicholas Diakopoulos
Jessica Hullman;Nick Diakopoulos
University of Michigan, USA;Rutgers University, USA
10.1109/tvcg.2010.179;10.1109/tvcg.2007.70577;10.1109/tvcg.2010.177;10.1109/tvcg.2009.111;10.1109/tvcg.2010.179
Rhetoric, narrative visualization, framing effects, semiotics, denotation, connotation
464294569936
26
InfoVis2015
Voyager: Exploratory Analysis via Faceted Browsing of Visualization Recommendations
10.1109/tvcg.2015.2467191
http://dx.doi.org/10.1109/TVCG.2015.2467191
649 658 J
General visualization tools typically require manual specification of views: analysts must select data variables and then choose which transformations and visual encodings to apply. These decisions often involve both domain and visualization design expertise, and may impose a tedious specification process that impedes exploration. In this paper, we seek to complement manual chart construction with interactive navigation of a gallery of automatically-generated visualizations. We contribute Voyager, a mixed-initiative system that supports faceted browsing of recommended charts chosen according to statistical and perceptual measures. We describe Voyager's architecture, motivating design principles, and methods for generating and interacting with visualization recommendations. In a study comparing Voyager to a manual visualization specification tool, we find that Voyager facilitates exploration of previously unseen data and leads to increased data variable coverage. We then distill design implications for visualization tools, in particular the need to balance rapid exploration and targeted question-answering.
Kanit Wongsuphasawat;Dominik Moritz;Anushka Anand;Jock D. Mackinlay;Bill Howe;Jeffrey Heer
Kanit Wongsuphasawat;Dominik Moritz;Anushka Anand;Jock Mackinlay;Bill Howe;Jeffrey Heer
University of Washington;Tableau Research;Tableau Research;Tableau Research;University of Washington;University of Washington
10.1109/tvcg.2014.2346297;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70594;10.1109/tvcg.2014.2346291;10.1109/infvis.2000.885086;10.1109/tvcg.2014.2346297
User interfaces, information visualization, exploratory analysis, visualization recommendation, mixed-initiative systems
487292484307
27
InfoVis2007
Casual Information Visualization: Depictions of Data in Everyday Life
10.1109/tvcg.2007.70541
http://dx.doi.org/10.1109/TVCG.2007.70541
1145 1152 J
Information visualization has often focused on providing deep insight for expert user populations and on techniques for amplifying cognition through complicated interactive visual models. This paper proposes a new subdomain for infovis research that complements the focus on analytic tasks and expert use. Instead of work-related and analytically driven infovis, we propose casual information visualization (or casual infovis) as a complement to more traditional infovis domains. Traditional infovis systems, techniques, and methods do not easily lend themselves to the broad range of user populations, from expert to novices, or from work tasks to more everyday situations. We propose definitions, perspectives, and research directions for further investigations of this emerging subfield. These perspectives build from ambient information visualization (Skog et al., 2003), social visualization, and also from artistic work that visualizes information (Viegas and Wattenberg, 2007). We seek to provide a perspective on infovis that integrates these research agendas under a coherent vocabulary and framework for design. We enumerate the following contributions. First, we demonstrate how blurry the boundary of infovis is by examining systems that exhibit many of the putative properties of infovis systems, but perhaps would not be considered so. Second, we explore the notion of insight and how, instead of a monolithic definition of insight, there may be multiple types, each with particular characteristics. Third, we discuss design challenges for systems intended for casual audiences. Finally we conclude with challenges for system evaluation in this emerging subfield.
Zachary Pousman;John T. Stasko;Michael Mateas
Zachary Pousman;John Stasko;Michael Mateas
School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;University of California, Santa Cruz, USA
10.1109/infvis.2005.1532126;10.1109/infvis.2004.8;10.1109/infvis.2003.1249031;10.1109/infvis.2004.59;10.1109/visual.1990.146375
Casual information visualization, ambient infovis, social infovis, editorial, design, evaluation
501275425133
28
VAST2012
Enterprise Data Analysis and Visualization: An Interview Study
10.1109/tvcg.2012.219
http://dx.doi.org/10.1109/TVCG.2012.219
2917 2926 J
Organizations rely on data analysts to model customer engagement, streamline operations, improve production, inform business decisions, and combat fraud. Though numerous analysis and visualization tools have been built to improve the scale and efficiency at which analysts can work, there has been little research on how analysis takes place within the social and organizational context of companies. To better understand the enterprise analysts' ecosystem, we conducted semi-structured interviews with 35 data analysts from 25 organizations across a variety of sectors, including healthcare, retail, marketing and finance. Based on our interview data, we characterize the process of industrial data analysis and document how organizational features of an enterprise impact it. We describe recurring pain points, outstanding challenges, and barriers to adoption for visual analytic tools. Finally, we discuss design implications and opportunities for visual analysis research.
Sean Kandel;Andreas Paepcke;Joseph M. Hellerstein;Jeffrey Heer
Sean Kandel;Andreas Paepcke;Joseph M. Hellerstein;Jeffrey Heer
Stanford University, USA;Stanford University, USA;University of California, Berkeley, USA;Stanford University, USA
10.1109/tvcg.2008.137;10.1109/vast.2008.4677365;10.1109/vast.2011.6102438;10.1109/infvis.2005.1532136;10.1109/vast.2010.5652880;10.1109/vast.2009.5333878;10.1109/vast.2007.4389011;10.1109/vast.2011.6102435;10.1109/tvcg.2008.137
Data, analysis, visualization, enterprise
500274377112HM
29
InfoVis2008
Rolling the Dice: Multidimensional Visual Exploration using Scatterplot Matrix Navigation
10.1109/tvcg.2008.153
http://dx.doi.org/10.1109/TVCG.2008.153
1141 1148 J
Scatterplots remain one of the most popular and widely-used visual representations for multidimensional data due to their simplicity, familiarity and visual clarity, even if they lack some of the flexibility and visual expressiveness of newer multidimensional visualization techniques. This paper presents new interactive methods to explore multidimensional data using scatterplots. This exploration is performed using a matrix of scatterplots that gives an overview of the possible configurations, thumbnails of the scatterplots, and support for interactive navigation in the multidimensional space. Transitions between scatterplots are performed as animated rotations in 3D space, somewhat akin to rolling dice. Users can iteratively build queries using bounding volumes in the dataset, sculpting the query from different viewpoints to become more and more refined. Furthermore, the dimensions in the navigation space can be reordered, manually or automatically, to highlight salient correlations and differences among them. An example scenario presents the interaction techniques supporting smooth and effortless visual exploration of multidimensional datasets.
Niklas Elmqvist;Pierre Dragicevic;Jean-Daniel Fekete
Niklas Elmqvist;Pierre Dragicevic;Jean-Daniel Fekete
INRIA, Paris, France;INRIA, Paris, France;INRIA, Paris, France
10.1109/tvcg.2007.70515;10.1109/vast.2007.4389013;10.1109/tvcg.2007.70577;10.1109/visual.1990.146386;10.1109/vast.2006.261452;10.1109/infvis.2005.1532136;10.1109/visual.1994.346302;10.1109/infvis.1998.729559;10.1109/visual.1995.485139;10.1109/infvis.2003.1249016;10.1109/infvis.2000.885086;10.1109/infvis.2004.3;10.1109/infvis.2004.64;10.1109/tvcg.2007.70539;10.1109/infvis.2004.15;10.1109/tvcg.2007.70515
Visual exploration, visual queries, visual analytics, navigation, multivariate data, interaction
555271423992BP
30
InfoVis2011
TextFlow: Towards Better Understanding of Evolving Topics in Text
10.1109/tvcg.2011.239
http://dx.doi.org/10.1109/TVCG.2011.239
2412 2421 J
Understanding how topics evolve in text data is an important and challenging task. Although much work has been devoted to topic analysis, the study of topic evolution has largely been limited to individual topics. In this paper, we introduce TextFlow, a seamless integration of visualization and topic mining techniques, for analyzing various evolution patterns that emerge from multiple topics. We first extend an existing analysis technique to extract three-level features: the topic evolution trend, the critical event, and the keyword correlation. Then a coherent visualization that consists of three new visual components is designed to convey complex relationships between them. Through interaction, the topic mining model and visualization can communicate with each other to help users refine the analysis result and gain insights into the data progressively. Finally, two case studies are conducted to demonstrate the effectiveness and usefulness of TextFlow in helping users understand the major topic evolution patterns in time-varying text data.
Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong 0001
Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong
Hong Kong University of Science and Technology, Hong Kong, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China;Microsoft Research Asia, China;Zhejiang University, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China
10.1109/vast.2010.5652931;10.1109/vast.2009.5333443;10.1109/tvcg.2006.156;10.1109/tvcg.2009.171;10.1109/tvcg.2008.166;10.1109/tvcg.2010.129;10.1109/vast.2008.4677364;10.1109/infvis.2005.1532122;10.1109/vast.2009.5333437;10.1109/infvis.2005.1532152;10.1109/vast.2010.5652931
Text visualization, Topic evolution, Hierarchical Dirichlet process, Critical event
466264354574
31
VAST2013
Visual Traffic Jam Analysis Based on Trajectory Data
10.1109/tvcg.2013.228
http://dx.doi.org/10.1109/TVCG.2013.228
2159 2168 J
In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.
Zuchao Wang;Min Lu 0002;Xiaoru Yuan;Junping Zhang;Huub van de Wetering
Zuchao Wang;Min Lu;Xiaoru Yuan;Junping Zhang;Huub van de Wetering
Key Laboratory of Machine Perception (Ministry of Education), Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), Peking University, China;Shanghai Key Laboratory of Intelligent Information Processing, and School of Computer Science, Fudan University, China and Key Laboratory of Machine Perception (Ministry of Education), Peking University;Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, China;Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, Eindhoven, Noord-Brabant, NL
10.1109/visual.1997.663866;10.1109/vast.2011.6102454;10.1109/tvcg.2009.145;10.1109/vast.2012.6400556;10.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2011.202;10.1109/vast.2012.6400553;10.1109/tvcg.2012.265;10.1109/tvcg.2011.181;10.1109/vast.2009.5332593;10.1109/tvcg.2008.125;10.1109/vast.2011.6102455;10.1109/vast.2010.5653580
Traffic visualization, traffic jam propagation
401263547804
32
InfoVis2009
Participatory Visualization with Wordle
10.1109/tvcg.2009.171
http://dx.doi.org/10.1109/TVCG.2009.171
1137 1144 J
We discuss the design and usage of “Wordle,” a Web-based tool for visualizing text. Wordle creates tag-cloud-like displays that give careful attention to typography, color, and composition. We describe the algorithms used to balance various aesthetic criteria and create the distinctive Wordle layouts. We then present the results of a study of Wordle usage, based both on spontaneous behaviour observed in the wild, and on a large-scale survey of Wordle users. The results suggest that Wordles have become a kind of medium of expression, and that a “participatory culture” has arisen around them.
Fernanda B. Viégas;Martin Wattenberg;Jonathan Feinberg
Fernanda B. Viegas;Martin Wattenberg;Jonathan Feinberg
IBM Research, USA;IBM Research, USA;IBM Research, USA
10.1109/infvis.2005.1532122;10.1109/tvcg.2007.70577
Visualization, text, tag cloud, participatory culture, memory, educational visualization, social data analysis534263153675
33
Vis2001
Point set surfaces
10.1109/visual.2001.964489
http://dx.doi.org/10.1109/VISUAL.2001.964489
2128C
We advocate the use of point sets to represent shapes. We provide a definition of a smooth manifold surface from a set of points close to the original surface. The definition is based on local maps from differential geometry, which are approximated by the method of moving least squares (MLS). We present tools to increase or decrease the density of the points, thus, allowing an adjustment of the spacing among the points to control the fidelity of the representation. To display the point set surface, we introduce a novel point rendering technique. The idea is to evaluate the local maps according to the image resolution. This results in high quality shading effects and smooth silhouettes at interactive frame rates.
Marc Alexa;Johannes Behr;Daniel Cohen-Or;Shachar Fleishman;David Levin;Cláudio T. Silva
M. Alexa;J. Behr;D. Cohen-Or;S. Fleishman;D. Levin;C.T. Silva
Technical University of Darmstadt, Germany;ZGDV Darmstadt, Germany;Tel-Aviv University, Israel;Tel-Aviv University, Israel;Tel-Aviv University, Israel;AT&T Laboratories, USA
10.1109/visual.1997.663930;10.1109/visual.1998.745327;10.1109/visual.1997.663930
surface representation and reconstruction, moving least squares, point sample rendering, 3D acquisition1090259461225
34
InfoVis2008
Effectiveness of Animation in Trend Visualization
10.1109/tvcg.2008.125
http://dx.doi.org/10.1109/TVCG.2008.125
13251332J
Animation has been used to show trends in multi-dimensional data. This technique has recently gained new prominence for presentations, most notably with Gapminder Trendalyzer. In Trendalyzer, animation together with interesting data and an engaging presenter helps the audience understand the results of an analysis of the data. It is less clear whether trend animation is effective for analysis. This paper proposes two alternative trend visualizations that use static depictions of trends: one which shows traces of all trends overlaid simultaneously in one display and a second that uses a small multiples display to show the trend traces side-by-side. The paper evaluates the three visualizations for both analysis and presentation. Results indicate that trend animation can be challenging to use even for presentations; while it is the fastest technique for presentation and participants find it enjoyable and exciting, it does lead to many participant errors. Animation is the least effective form for analysis; both static depictions of trends are significantly faster than animation, and the small multiples display is more accurate.
George G. Robertson;Roland Fernandez;Danyel Fisher;Bongshin Lee;John T. Stasko
George Robertson;Roland Fernandez;Danyel Fisher;Bongshin Lee;John Stasko
Microsoft Research;Microsoft Research;Microsoft Research;Microsoft Research;Georgia Institute of Technology
10.1109/infvis.1999.801854;10.1109/tvcg.2007.70539
Information visualization, animation, trends, design, experiment480259214273TT
35
InfoVis2008
Stacked Graphs - Geometry & Aesthetics
10.1109/tvcg.2008.166
http://dx.doi.org/10.1109/TVCG.2008.166
12451252J
In February 2008, the New York Times published an unusual chart of box office revenues for 7500 movies over 21 years. The chart was based on a similar visualization, developed by the first author, that displayed trends in music listening. This paper describes the design decisions and algorithms behind these graphics, and discusses the reaction on the Web. We suggest that this type of complex layered graph is effective for displaying large data sets to a mass audience. We provide a mathematical analysis of how this layered graph relates to traditional stacked graphs and to techniques such as ThemeRiver, showing how each method is optimizing a different “energy function”. Finally, we discuss techniques for coloring and ordering the layers of such graphs. Throughout the paper, we emphasize the interplay between considerations of aesthetics and legibility.
Lee Byron;Martin Wattenberg
Lee Byron;Martin Wattenberg
The New York Times;Visual Communication Laboratory at IBM
10.1109/tvcg.2006.163;10.1109/infvis.2005.1532122;10.1109/tvcg.2007.70577;10.1109/infvis.2000.885098;10.1109/tvcg.2006.163
Streamgraph, ThemeRiver, listening history, lastfm, aesthetics, communication-minded visualization, time series557257193104HM
36
Vis2003
Acceleration techniques for GPU-based volume rendering
10.1109/visual.2003.1250384
http://dx.doi.org/10.1109/VISUAL.2003.1250384
287292C
Nowadays, direct volume rendering via 3D textures has positioned itself as an efficient tool for the display and visual analysis of volumetric scalar fields. It is commonly accepted that for reasonably sized data sets appropriate quality at interactive rates can be achieved by means of this technique. However, despite these benefits one important issue has received little attention throughout the ongoing discussion of texture based volume rendering: the integration of acceleration techniques to reduce per-fragment operations. In this paper, we address the integration of early ray termination and empty-space skipping into texture based volume rendering on graphical processing units (GPU). Therefore, we describe volume ray-casting on programmable graphics hardware as an alternative to object-order approaches. We exploit the early z-test to terminate fragment processing once sufficient opacity has been accumulated, and to skip empty space along the rays of sight. We demonstrate performance gains up to a factor of 3 for typical renditions of volumetric data sets on the ATI 9700 graphics card.
Jens H. Krüger;Rüdiger Westermann
J. Kruger;R. Westermann
Computer Graphics and Visualization Group, Technical University Munich, Germany;Computer Graphics and Visualization Group, Technical University Munich, Germany
10.1109/visual.1999.809889;10.1109/visual.1997.663880;10.1109/visual.1993.398852;10.1109/visual.2002.1183764
Volume Rendering, Programmable Graphics Hardware, Ray-Casting1310254161900TT
37
VAST2014
Knowledge Generation Model for Visual Analytics
10.1109/tvcg.2014.2346481
http://dx.doi.org/10.1109/TVCG.2014.2346481
16041613J
Visual analytics enables us to analyze huge information spaces in order to support complex decision making and data exploration. Humans play a central role in generating knowledge from the snippets of evidence emerging from visual data analysis. Although prior research provides frameworks that generalize this process, their scope is often narrowly focused so they do not encompass different perspectives at different levels. This paper proposes a knowledge generation model for visual analytics that ties together these diverse frameworks, yet retains previously developed models (e.g., KDD process) to describe individual segments of the overall visual analytic processes. To test its utility, a real world visual analytics system is compared against the model, demonstrating that the knowledge generation process model provides a useful guideline when developing and evaluating such systems. The model is used to effectively compare different data analysis systems. Furthermore, the model provides a common language and description of visual analytic processes, which can be used for communication between researchers. At the end, our model reflects areas of research that future researchers can embark on.
Dominik Sacha;Andreas Stoffel;Florian Stoffel;Bum Chul Kwon;Geoffrey P. Ellis;Daniel A. Keim
Dominik Sacha;Andreas Stoffel;Florian Stoffel;Bum Chul Kwon;Geoffrey Ellis;Daniel A. Keim
Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz
10.1109/visual.2005.1532781;10.1109/tvcg.2013.124;10.1109/vast.2009.5333023;10.1109/tvcg.2011.229;10.1109/tvcg.2008.109;10.1109/vast.2008.4677361;10.1109/vast.2008.4677365;10.1109/vast.2010.5652879;10.1109/tvcg.2012.273;10.1109/vast.2008.4677358;10.1109/tvcg.2008.121;10.1109/vast.2007.4389006;10.1109/vast.2011.6102435;10.1109/tvcg.2013.120;10.1109/visual.2005.1532781
Visual Analytics, Knowledge Generation, Reasoning, Visualization Taxonomies and Models, Interaction381250436020TT
38
VAST2017
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
10.1109/tvcg.2017.2744878
http://dx.doi.org/10.1109/TVCG.2017.2744878
112J
We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
Kanit Wongsuphasawat;Daniel Smilkov;James Wexler;Jimbo Wilson;Dandelion Mané;Doug Fritz;Dilip Krishnan;Fernanda B. Viégas;Martin Wattenberg
Kanit Wongsuphasawat;Daniel Smilkov;James Wexler;Jimbo Wilson;Dandelion Mané;Doug Fritz;Dilip Krishnan;Fernanda B. Viégas;Martin Wattenberg
Paul G. Allen School of Computer Science & Engineering, University of Washington;Google Research;Google Research;Google Research;Google Research;Google Research;Google Research;Google Research;Google Research
10.1109/infvis.2005.1532130;10.1109/tvcg.2006.156;10.1109/infvis.2004.66;10.1109/tvcg.2015.2467451;10.1109/tvcg.2016.2598831;10.1109/visual.2005.1532820;10.1109/infvis.2004.43;10.1109/tvcg.2015.2467251;10.1109/tvcg.2011.185;10.1109/tvcg.2006.120;10.1109/infvis.2005.1532130
Neural Network,Graph Visualization,Dataflow Graph,Clustered Graph3322305710418BP
39
SciVis2013
A Systematic Review on the Practice of Evaluating Visualization
10.1109/tvcg.2013.126
http://dx.doi.org/10.1109/TVCG.2013.126
28182827J
We present an assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference. Our goal is to reflect on a meta-level about evaluation in our community through a systematic understanding of the characteristics and goals of presented evaluations. For this purpose we conducted a systematic review of ten years of evaluations in the published papers using and extending a coding scheme previously established by Lam et al. [2012]. The results of our review include an overview of the most common evaluation goals in the community, how they evolved over time, and how they contrast or align to those of the IEEE Information Visualization conference. In particular, we found that evaluations specific to assessing resulting images and algorithm performance are the most prevalent (with consistently 80-90% of all papers since 1997). However, especially over the last six years there is a steady increase in evaluation methods that include participants, either by evaluating their performances and subjective feedback or by evaluating their work practices and their improved analysis and reasoning capabilities using visual tools. Up to 2010, this trend in the IEEE Visualization conference was much more pronounced than in the IEEE Information Visualization conference which only showed an increasing percentage of evaluation through user performance and experience testing. Since 2011, however, also papers in IEEE Information Visualization show such an increase of evaluations of work practices and analysis as well as reasoning using visual tools. Further, we found that generally the studies reporting requirements analyses and domain-specific work practices are too informally reported which hinders cross-comparison and lowers external validity.
Tobias Isenberg 0001;Petra Isenberg;Jian Chen 0006;Michael Sedlmair;Torsten Möller
Tobias Isenberg;Petra Isenberg;Jian Chen;Michael Sedlmair;Torsten Möller
INRIA, France;INRIA, France;University of Maryland, Baltimore, USA;University of Vienna, Austria;University of Vienna, Austria
10.1109/tvcg.2009.121;10.1109/visual.2005.1532781;10.1109/tvcg.2006.143;10.1109/tvcg.2011.224;10.1109/tvcg.2010.199;10.1109/tvcg.2010.223;10.1109/tvcg.2012.213;10.1109/tvcg.2010.134;10.1109/tvcg.2009.194;10.1109/tvcg.2011.174;10.1109/tvcg.2009.111;10.1109/tvcg.2011.206;10.1109/tvcg.2012.234;10.1109/tvcg.2012.292;10.1109/tvcg.2008.128;10.1109/tvcg.2009.167;10.1109/tvcg.2012.223;10.1109/visual.1994.346285;10.1109/tvcg.2009.121
Evaluation, validation, systematic review, visualization, scientific visualization, information visualization353229747202
40
InfoVis2009
Bubble Sets: Revealing Set Relations with Isocontours over Existing Visualizations
10.1109/tvcg.2009.122
http://dx.doi.org/10.1109/TVCG.2009.122
10091016J
While many data sets contain multiple relationships, depicting more than one data relationship within a single visualization is challenging. We introduce Bubble Sets as a visualization technique for data that has both a primary data relation with a semantically significant spatial organization and a significant set membership relation in which members of the same set are not necessarily adjacent in the primary layout. In order to maintain the spatial rights of the primary data relation, we avoid layout adjustment techniques that improve set cluster continuity and density. Instead, we use a continuous, possibly concave, isocontour to delineate set membership, without disrupting the primary layout. Optimizations minimize cluster overlap and provide for calculation of the isocontours at interactive speeds. Case studies show how this technique can be used to indicate multiple sets on a variety of common visualizations.
Christopher Collins 0001;Gerald Penn;Sheelagh Carpendale
Christopher Collins;Gerald Penn;Sheelagh Carpendale
University of Toronto, Canada;University of Toronto, Canada;University of Calgary, Canada
10.1109/tvcg.2006.122;10.1109/infvis.2005.1532150;10.1109/tvcg.2008.130;10.1109/tvcg.2008.144;10.1109/infvis.2005.1532126;10.1109/tvcg.2007.70521;10.1109/tvcg.2008.153;10.1109/tvcg.2006.122
clustering, spatial layout, graph visualization, tree visualization402228232740
41
VAST2011
SensePlace2: GeoTwitter analytics support for situational awareness
10.1109/vast.2011.6102456
http://dx.doi.org/10.1109/VAST.2011.6102456
181190C
Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media.
Alan M. MacEachren;Anuj R. Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang 0019;Justine I. Blanford
Alan M. MacEachren;Anuj Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang;Justine Blanford
GeoVISTA Center, Department of Geography, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Computer Science & Engineering, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA
10.1109/vast.2010.5652478;10.1109/vast.2007.4388994;10.1109/tvcg.2010.129;10.1109/infvis.2005.1532134;10.1109/vast.2010.5652922
social media analytics, scenario-based design, geovisualization, situational awareness, text analytics, crisis management, spatio-temporal analysis 455227373217TT
42
VAST2017
ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models
10.1109/tvcg.2017.2744718
http://dx.doi.org/10.1109/TVCG.2017.2744718
8897J
While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ActiVis, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance-and subset-level. ActiVis has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ActiVis may work with different models.
Minsuk Kahng;Pierre Y. Andrews;Aditya Kalro;Duen Horng (Polo) Chau
Minsuk Kahng;Pierre Y. Andrews;Aditya Kalro;Duen Horng (Polo) Chau
Georgia Institute of Technology;Facebook;Facebook;Georgia Institute of Technology
10.1109/vast.2015.7347637;10.1109/vast.2010.5652443;10.1109/tvcg.2013.157;10.1109/tvcg.2014.2346482;10.1109/tvcg.2015.2467622;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/visual.2005.1532820;10.1109/vast.2011.6102453;10.1109/vast.2015.7347637
Visual analytics,deep learning,machine learning,information visualization341226383530
43
InfoVis2012
Stacking-Based Visualization of Trajectory Attribute Data
10.1109/tvcg.2012.265
http://dx.doi.org/10.1109/TVCG.2012.265
25652574J
Visualizing trajectory attribute data is challenging because it involves showing the trajectories in their spatio-temporal context as well as the attribute values associated with the individual points of trajectories. Previous work on trajectory visualization addresses selected aspects of this problem, but not all of them. We present a novel approach to visualizing trajectory attribute data. Our solution covers space, time, and attribute values. Based on an analysis of relevant visualization tasks, we designed the visualization solution around the principle of stacking trajectory bands. The core of our approach is a hybrid 2D/3D display. A 2D map serves as a reference for the spatial context, and the trajectories are visualized as stacked 3D trajectory bands along which attribute values are encoded by color. Time is integrated through appropriate ordering of bands and through a dynamic query mechanism that feeds temporally aggregated information to a circular time display. An additional 2D time graph shows temporal information in full detail by stacking 2D trajectory bands. Our solution is equipped with analytical and interactive mechanisms for selecting and ordering of trajectories, and adjusting the color mapping, as well as coordinated highlighting and dedicated 3D navigation. We demonstrate the usefulness of our novel visualization by three examples related to radiation surveillance, traffic analysis, and maritime navigation. User feedback obtained in a small experiment indicates that our hybrid 2D/3D solution can be operated quite well.
Christian Tominski;Heidrun Schumann;Gennady L. Andrienko;Natalia V. Andrienko
Christian Tominski;Heidrun Schumann;Gennady Andrienko;Natalia Andrienko
University of Rostock, Germany;University of Rostock, Germany;Fraunhofer Institute IAIS, Germany;Fraunhofer Institute IAIS, Germany
10.1109/tvcg.2010.197;10.1109/vast.2011.6102455;10.1109/vast.2009.5332593;10.1109/visual.1995.480803;10.1109/infvis.2004.27;10.1109/infvis.2005.1532144;10.1109/vast.2011.6102454;10.1109/vast.2010.5653580
Visualization, interaction, exploratory analysis, trajectory attribute data, spatio-temporal data366226353792
44
InfoVis2009
Protovis: A Graphical Toolkit for Visualization
10.1109/tvcg.2009.174
http://dx.doi.org/10.1109/TVCG.2009.174
11211128J
Despite myriad tools for visualizing data, there remains a gap between the notational efficiency of high-level visualization systems and the expressiveness and accessibility of low-level graphical systems. Powerful visualization systems may be inflexible or impose abstractions foreign to visual thinking, while graphical systems such as rendering APIs and vector-based drawing programs are tedious for complex work. We argue that an easy-to-use graphical system tailored for visualization is needed. In response, we contribute Protovis, an extensible toolkit for constructing visualizations by composing simple graphical primitives. In Protovis, designers specify visualizations as a hierarchy of marks with visual properties defined as functions of data. This representation achieves a level of expressiveness comparable to low-level graphics systems, while improving efficiency - the effort required to specify a visualization - and accessibility - the effort required to learn and modify the representation. We substantiate this claim through a diverse collection of examples and comparative analysis with popular visualization tools.
Michael Bostock;Jeffrey Heer
Michael Bostock;Jeffrey Heer
Computer Science Department, University of Stanford, Stanford, CA, USA;Computer Science Department, University of Stanford, Stanford, CA, USA
10.1109/visual.1999.809864;10.1109/infvis.2004.12;10.1109/tvcg.2006.178;10.1109/tvcg.2007.70577;10.1109/infvis.1998.729560;10.1109/vast.2007.4389011;10.1109/tvcg.2008.166;10.1109/infvis.2004.64;10.1109/infvis.2000.885086;10.1109/vast.2007.4388996;10.1109/visual.1999.809864
Information visualization, user interfaces, toolkits, 2D graphics495222382804
45
InfoVis2017
LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks
10.1109/tvcg.2017.2744158
http://dx.doi.org/10.1109/TVCG.2017.2744158
667676J
Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVis, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows users to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. We show several use cases of the tool for analyzing specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis. We characterize the domain, the different stakeholders, and their goals and tasks. Long-term usage data after putting the tool online revealed great interest in the machine learning community.
Hendrik Strobelt;Sebastian Gehrmann;Hanspeter Pfister;Alexander M. Rush
Hendrik Strobelt;Sebastian Gehrmann;Hanspeter Pfister;Alexander M. Rush
Harvard SEAS;Harvard SEAS;Harvard SEAS;Harvard SEAS
10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598838;10.1109/visual.2005.1532820;10.1109/tvcg.2016.2598831
Visualization,Machine Learning,Recurrent Neural Networks,LSTM416220353330
46
InfoVis2013
LineUp: Visual Analysis of Multi-Attribute Rankings
10.1109/tvcg.2013.173
http://dx.doi.org/10.1109/TVCG.2013.173
22772286J
Rankings are a popular and universal approach to structuring otherwise unorganized collections of items by computing a rank for each item based on the value of one or more of its attributes. This allows us, for example, to prioritize tasks or to evaluate the performance of products relative to each other. While the visualization of a ranking itself is straightforward, its interpretation is not, because the rank of an item represents only a summary of a potentially complicated relationship between its attributes and those of the other items. It is also common that alternative rankings exist which need to be compared and analyzed to gain insight into how multiple heterogeneous attributes affect the rankings. Advanced visual exploration tools are needed to make this process efficient. In this paper we present a comprehensive analysis of requirements for the visualization of multi-attribute rankings. Based on these considerations, we propose LineUp - a novel and scalable visualization technique that uses bar charts. This interactive technique supports the ranking of items based on multiple heterogeneous attributes with different scales and semantics. It enables users to interactively combine attributes and flexibly refine parameters to explore the effect of changes in the attribute combination. This process can be employed to derive actionable insights as to which attributes of an item need to be modified in order for its rank to change. Additionally, through integration of slope graphs, LineUp can also be used to compare multiple alternative rankings on the same set of items, for example, over time or across different attribute combinations. We evaluate the effectiveness of the proposed multi-attribute visualization technique in a qualitative study. The study shows that users are able to successfully solve complex ranking tasks in a short period of time.
Samuel Gratzl;Alexander Lex;Nils Gehlenborg;Hanspeter Pfister;Marc Streit
Samuel Gratzl;Alexander Lex;Nils Gehlenborg;Hanspeter Pfister;Marc Streit
Johannes Kepler University of Linz, Austria;Johannes Kepler University of Linz, Austria;Harvard University, USA;Harvard University, USA;Harvard Medical School, USA
10.1109/tvcg.2012.253;10.1109/tvcg.2008.166;10.1109/visual.1996.568118;10.1109/tvcg.2008.181;10.1109/tvcg.2007.70539;10.1109/tvcg.2009.111
Ranking visualization, ranking, scoring, multi-attribute, multifactorial, multi-faceted, stacked bar charts329219352729BP
47
InfoVis2015
Beyond Memorability: Visualization Recognition and Recall
10.1109/tvcg.2015.2467732
http://dx.doi.org/10.1109/TVCG.2015.2467732
519528J
In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.
Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva
Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva
University of British Columbia, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;University of Michigan;School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT)
10.1109/tvcg.2012.197;10.1109/tvcg.2013.234;10.1109/tvcg.2011.193;10.1109/tvcg.2012.233;10.1109/tvcg.2011.175;10.1109/tvcg.2013.234;10.1109/tvcg.2012.215;10.1109/vast.2010.5653598;10.1109/tvcg.2012.245;10.1109/tvcg.2012.221;10.1109/tvcg.2012.197
Information visualization, memorability, recognition, recall, eye-tracking study295218485663
48
InfoVis1995
Visualizing the non-visual: spatial analysis and interaction with information from text documents
10.1109/infvis.1995.528686
http://dx.doi.org/10.1109/INFVIS.1995.528686
5158C
The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.
James A. Wise;James J. Thomas;Kelly Pennock;David Lantrip;Marc Pottier;Anne Schur;Vern Crow
J.A. Wise;J.J. Thomas;K. Pennock;D. Lantrip;M. Pottier;A. Schur;V. Crow
Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA
10.1109/visual.1993.398863;10.1109/visual.1993.398863
914217112605TT
49
InfoVis2018
What Do We Talk About When We Talk About Dashboards?
10.1109/tvcg.2018.2864903
http://dx.doi.org/10.1109/TVCG.2018.2864903
682692J
Dashboards are one of the most common use cases for data visualization, and their design and contexts of use are considerably different from exploratory visualization tools. In this paper, we look at the broad scope of how dashboards are used in practice through an analysis of dashboard examples and documentation about their use. We systematically review the literature surrounding dashboard use, construct a design space for dashboards, and identify major dashboard types. We characterize dashboards by their design goals, levels of interaction, and the practices around them. Our framework and literature review suggest a number of fruitful research directions to better support dashboard design, implementation and use.
Alper Sarikaya 0001;Michael Correll;Lyn Bartram;Melanie Tory;Danyel Fisher
Alper Sarikaya;Michael Correll;Lyn Bartram;Melanie Tory;Danyel Fisher
Microsoft Corp, Redmond, WA, US;Tableau Research;Simon Fraser University, Burnaby, BC, CA;Tableau Research;Honeycomb.io
10.1109/tvcg.2013.124;10.1109/tvcg.2017.2744198;10.1109/tvcg.2013.120;10.1109/tvcg.2013.124
Dashboards,literature review,survey,design space,open coding2492096612169
50
InfoVis2008
Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation
10.1109/tvcg.2008.137
http://dx.doi.org/10.1109/TVCG.2008.137
11891196J
Interactive history tools, ranging from basic undo and redo to branching timelines of user actions, facilitate iterative forms of interaction. In this paper, we investigate the design of history mechanisms for information visualization. We present a design space analysis of both architectural and interface issues, identifying design decisions and associated trade-offs. Based on this analysis, we contribute a design study of graphical history tools for Tableau, a database visualization system. These tools record and visualize interaction histories, support data analysis and communication of findings, and contribute novel mechanisms for presenting, managing, and exporting histories. Furthermore, we have analyzed aggregated collections of history sessions to evaluate Tableau usage. We describe additional tools for analyzing users' history logs and how they have been applied to study usage patterns in Tableau.
Jeffrey Heer;Jock D. Mackinlay;Chris Stolte;Maneesh Agrawala
Jeffrey Heer;Jock Mackinlay;Chris Stolte;Maneesh Agrawala
University of California Berkeley, USA;Tableau Software, Inc.;Tableau Software, Inc.;University of California Berkeley, USA
10.1109/infvis.2000.885086;10.1109/visual.1993.398857;10.1109/visual.1999.809871;10.1109/infvis.2004.2;10.1109/visual.1995.480801;10.1109/tvcg.2007.70594;10.1109/vast.2007.4388992
Visualization, history, undo, analysis, presentation, evaluation416209313636
51
Vis1997
ROAMing terrain: Real-time Optimally Adapting Meshes
10.1109/visual.1997.663860
http://dx.doi.org/10.1109/VISUAL.1997.663860
8188C
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
Mark A. Duchaineau;Murray Wolinsky;David E. Sigeti;Mark C. Miller;Charles Aldrich;Mark B. Mineev-Weinstein
M. Duchaineau;M. Wolinsky;D.E. Sigeti;M.C. Miller;C. Aldrich;M.B. Mineev-Weinstein
Los Alamos National Laboratory, USA and Lawrence Livermore National Laboratory;Los Alamos National Laboratory, USA;Los Alamos National Laboratory, USA;;Los Alamos National Laboratory, USA;Los Alamos National Laboratory, USA
10.1109/visual.1996.567600;10.1109/visual.1996.568126;10.1109/visual.1996.568125;10.1109/visual.1995.480813;10.1109/visual.1995.480805;10.1109/visual.1996.567600
triangle bintree, view-dependent mesh, frame-to-frame coherence, greedy algorithms142520719594
52
VAST2016
Visualizing the Hidden Activity of Artificial Neural Networks
10.1109/tvcg.2016.2598838
http://dx.doi.org/10.1109/TVCG.2016.2598838
101110J
In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles.
Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea
Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea
University of Groningen, University of Campinas;University of São Paulo;University of Campinas;University of Groningen
10.1109/tvcg.2011.178;10.1109/tvcg.2011.220;10.1109/tvcg.2013.150;10.1109/tvcg.2014.2346578;10.1109/tvcg.2008.125;10.1109/tvcg.2015.2467553;10.1109/tvcg.2011.178
Artificial neural networks;dimensionality reduction;algorithm understanding303207506274
53
InfoVis2008
The Word Tree, an Interactive Visual Concordance
10.1109/tvcg.2008.172
http://dx.doi.org/10.1109/TVCG.2008.172
12211228J
We introduce the Word Tree, a new visualization and information-retrieval technique aimed at text documents. A Word Tree is a graphical version of the traditional "keyword-in-context" method, and enables rapid querying and exploration of bodies of text. In this paper we describe the design of the technique, along with some of the technical issues that arise in its implementation. In addition, we discuss the results of several months of public deployment of word trees on Many Eyes, which provides a window onto the ways in which users obtain value from the visualization.
Martin Wattenberg;Fernanda B. Viégas
Martin Wattenberg;Fernanda B. Viégas
IBM Research;IBM Research
10.1109/infvis.2002.1173155;10.1109/vast.2007.4389006;10.1109/tvcg.2007.70577;10.1109/infvis.2002.1173148
Text visualization, document visualization, Many Eyes, case study, concordance, information retrieval, search449206153020
54
InfoVis2005
Low-level components of analytic activity in information visualization
10.1109/infvis.2005.1532136
http://dx.doi.org/10.1109/INFVIS.2005.1532136
111117C
Existing system level taxonomies of visualization tasks are geared more towards the design of particular representations than the facilitation of user analytic activity. We present a set of ten low level analysis tasks that largely capture people's activities while employing information visualization tools for understanding data. To help develop these tasks, we collected nearly 200 sample questions from students about how they would analyze five particular data sets from different domains. The questions, while not being totally comprehensive, illustrated the sheer variety of analytic questions typically posed by users when employing information visualization systems. We hope that the presented set of tasks is useful for information visualization system designers as a kind of common substrate to discuss the relative analytic capabilities of the systems. Further, the tasks may provide a form of checklist for system designers.
Robert A. Amar;James Eagan;John T. Stasko
R. Amar;J. Eagan;J. Stasko
Georgia Institute of Technology,College of Computing, GVU Center;Georgia Institute of Technology,College of Computing, GVU Center;Georgia Institute of Technology,College of Computing, GVU Center
10.1109/visual.1990.146375;10.1109/infvis.1998.729560;10.1109/infvis.2000.885092;10.1109/infvis.2004.5;10.1109/infvis.2001.963289;10.1109/visual.1990.146375
Analytic activity, taxonomy, knowledge discovery, design, evaluation844205154315
55
InfoVis2014
The Effects of Interactive Latency on Exploratory Visual Analysis
10.1109/tvcg.2014.2346452
http://dx.doi.org/10.1109/TVCG.2014.2346452
21222131J
To support effective exploration, it is often stated that interactive visualizations should provide rapid response times. However, the effects of interactive latency on the process and outcomes of exploratory visual analysis have not been systematically studied. We present an experiment measuring user behavior and knowledge discovery with interactive visualizations under varying latency conditions. We observe that an additional delay of 500ms incurs significant costs, decreasing user activity and data set coverage. Analyzing verbal data from think-aloud protocols, we find that increased latency reduces the rate at which users make observations, draw generalizations and generate hypotheses. Moreover, we note interaction effects in which initial exposure to higher latencies leads to subsequently reduced performance in a low-latency setting. Overall, increased latency causes users to shift exploration strategy, in turn affecting performance. We discuss how these results can inform the design of interactive analysis tools.
Zhicheng Liu 0001;Jeffrey Heer
Zhicheng Liu;Jeffrey Heer
Adobe Research;University of Washington
10.1109/tvcg.2010.177;10.1109/tvcg.2013.179;10.1109/tvcg.2010.177
Interaction, latency, exploratory analysis, interactive visualization, scalability, user performance, verbal analysis354204452130
56
InfoVis2007
Scented Widgets: Improving Navigation Cues with Embedded Visualizations
10.1109/tvcg.2007.70589
http://dx.doi.org/10.1109/TVCG.2007.70589
11291136J
This paper presents scented widgets, graphical user interface controls enhanced with embedded visualizations that facilitate navigation in information spaces. We describe design guidelines for adding visual cues to common user interface widgets such as radio buttons, sliders, and combo boxes and contribute a general software framework for applying scented widgets within applications with minimal modifications to existing source code. We provide a number of example applications and describe a controlled experiment which finds that users exploring unfamiliar data make up to twice as many unique discoveries using widgets imbued with social navigation data. However, these differences equalize as familiarity with the data increases.
Wesley Willett;Jeffrey Heer;Maneesh Agrawala
Wesley Willett;Jeffrey Heer;Maneesh Agrawala
Computer Science Division, University of California Berkeley, USA;Computer Science Division, University of California Berkeley, USA;Computer Science Division, University of California Berkeley, USA
10.1109/infvis.1999.801862;10.1109/infvis.1999.801862
Information visualization, user interface toolkits, information foraging, social navigation, social data analysis330202221846
57
InfoVis2006
Network Visualization by Semantic Substrates
10.1109/tvcg.2006.166
http://dx.doi.org/10.1109/TVCG.2006.166
733740J
Networks have remained a challenge for information visualization designers because of the complex issues of node and link layout coupled with the rich set of tasks that users present. This paper offers a strategy based on two principles: (1) layouts are based on user-defined semantic substrates, which are non-overlapping regions in which node placement is based on node attributes, (2) users interactively adjust sliders to control link visibility to limit clutter and thus ensure comprehensibility of source and destination. Scalability is further facilitated by user control of which nodes are visible. We illustrate our semantic substrates approach as implemented in NVSS 1.0 with legal precedent data for up to 1122 court cases in three regions with 7645 legal citations
Ben Shneiderman;Aleks Aris
Ben Shneiderman;Aleks Aris
Computer Science Department and the Human-Computer Interaction Laboratory, University of Maryland, College Park, USA;Computer Science Department and the Human-Computer Interaction Laboratory, University of Maryland, College Park, USA
10.1109/infvis.2004.1;10.1109/infvis.2005.1532124;10.1109/infvis.2005.1532126;10.1109/vast.2006.261429;10.1109/infvis.2004.1
Network visualization, semantic substrate, information visualization, graphical user interfaces491200382442
58
VAST2013
Temporal Event Sequence Simplification
10.1109/tvcg.2013.200
http://dx.doi.org/10.1109/TVCG.2013.200
22272236J
Electronic Health Records (EHRs) have emerged as a cost-effective data source for conducting medical research. The difficulty in using EHRs for research purposes, however, is that both patient selection and record analysis must be conducted across very large, and typically very noisy datasets. Our previous work introduced EventFlow, a visualization tool that transforms an entire dataset of temporal event records into an aggregated display, allowing researchers to analyze population-level patterns and trends. As datasets become larger and more varied, however, it becomes increasingly difficult to provide a succinct, summarizing display. This paper presents a series of user-driven data simplifications that allow researchers to pare event records down to their core elements. Furthermore, we present a novel metric for measuring visual complexity, and a language for codifying disjoint strategies into an overarching simplification framework. These simplifications were used by real-world researchers to gain new and valuable insights from initially overwhelming datasets.
Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman
Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman
University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA
10.1109/tvcg.2009.117;10.1109/tvcg.2012.213;10.1109/vast.2010.5652890
Event sequences, simplification, electronic health records, temporal query318199332755HM
59
InfoVis2000
ThemeRiver: visualizing theme changes over time
10.1109/infvis.2000.885098
http://dx.doi.org/10.1109/INFVIS.2000.885098
115123C
ThemeRiver™ is a prototype system that visualizes thematic variations over time within a large collection of documents. The "river" flows from left to right through time, changing width to depict changes in thematic strength of temporally associated documents. Colored "currents" flowing within the river narrow or widen to indicate decreases or increases in the strength of an individual topic or a group of topics in the associated documents. The river is shown within the context of a timeline and a corresponding textual presentation of external events.
Susan Havre;Elizabeth G. Hetzler;Lucy T. Nowell
S. Havre;B. Hetzler;L. Nowell
Northwest Division, Battelle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battelle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battelle-Pacific Northwest, Richland, WA, USA
10.1109/infvis.1995.528686;10.1109/infvis.1997.636789;10.1109/infvis.1998.729570
660196181856
60
Vis2010
Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty
10.1109/tvcg.2010.181
http://dx.doi.org/10.1109/TVCG.2010.181
14211430J
Numerical weather prediction ensembles are routinely used for operational weather forecasting. The members of these ensembles are individual simulations with either slightly perturbed initial conditions or different model parameterizations, or occasionally both. Multi-member ensemble output is usually large, multivariate, and challenging to interpret interactively. Forecast meteorologists are interested in understanding the uncertainties associated with numerical weather prediction; specifically variability between the ensemble members. Currently, visualization of ensemble members is mostly accomplished through spaghetti plots of a single midtroposphere pressure surface height contour. In order to explore new uncertainty visualization methods, the Weather Research and Forecasting (WRF) model was used to create a 48-hour, 18 member parameterization ensemble of the 13 March 1993 "Superstorm". A tool was designed to interactively explore the ensemble uncertainty of three important weather variables: water-vapor mixing ratio, perturbation potential temperature, and perturbation pressure. Uncertainty was quantified using individual ensemble member standard deviation, inter-quartile range, and the width of the 95% confidence interval. Bootstrapping was employed to overcome the dependence on normality in the uncertainty metrics. A coordinated view of ribbon and glyph-based uncertainty visualization, spaghetti plots, iso-pressure colormaps, and data transect plots was provided to two meteorologists for expert evaluation. They found it useful in assessing uncertainty in the data, especially in finding outliers in the ensemble run and therefore avoiding the WRF parameterizations that lead to these outliers. Additionally, the meteorologists could identify spatial regions where the uncertainty was significantly high, allowing for identification of poorly simulated storm environments and physical interpretation of these model issues.
Jibonananda Sanyal;Song Zhang 0004;Jamie L. Dyer;Andrew Mercer 0001;Philip Amburn;Robert J. Moorhead
Jibonananda Sanyal;Song Zhang;Jamie Dyer;Andrew Mercer;Philip Amburn;Robert Moorhead
Geosystems Research Institute, Mississippi State University, USA;Department of Computer Science and Engineering, Mississippi State University, USA;Department of Geosciences, Mississippi State University, USA;Department of Geosciences and Northern Gulf Institute, Mississippi State University, USA;Geosystems Research Institute, Mississippi State University, USA;Geosystems Research Institute, Mississippi State University, USA
10.1109/tvcg.2009.114;10.1109/infvis.2002.1173145;10.1109/tvcg.2009.114
Uncertainty visualization, weather ensemble, geographic/geospatial visualization, glyph-based techniques, time-varying data, qualitative evaluation307192583132
61
InfoVis2008
Geometry-Based Edge Clustering for Graph Visualization
10.1109/tvcg.2008.135
http://dx.doi.org/10.1109/TVCG.2008.135
12771284J
Graphs have been widely used to model relationships among data. For large graphs, excessive edge crossings make the display visually cluttered and thus difficult to explore. In this paper, we propose a novel geometry-based edge-clustering framework that can group edges into bundles to reduce the overall edge crossings. Our method uses a control mesh to guide the edge-clustering process; edge bundles can be formed by forcing all edges to pass through some control points on the mesh. The control mesh can be generated at different levels of detail either manually or automatically based on underlying graph patterns. Users can further interact with the edge-clustering results through several advanced visualization techniques such as color and opacity enhancement. Compared with other edge-clustering methods, our approach is intuitive, flexible, and efficient. The experiments on some large graphs demonstrate the effectiveness of our method.
Weiwei Cui;Hong Zhou 0004;Huamin Qu;Pak Chung Wong;Xiaoming Li 0001
Weiwei Cui;Hong Zhou;Huamin Qu;Pak Chung Wong;Xiaoming Li
Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Pacific Northwest National Laboratory;Peking University, China
10.1109/tvcg.2007.70535;10.1109/tvcg.2007.70580;10.1109/infvis.2004.43;10.1109/infvis.2003.1249008;10.1109/infvis.2005.1532150;10.1109/infvis.2004.66;10.1109/infvis.2005.1532138;10.1109/tvcg.2006.147;10.1109/tvcg.2007.70535
Graph visualization, visual clutter, mesh, edge clustering406189232616
62
Vis1998
Smooth view-dependent level-of-detail control and its application to terrain rendering
10.1109/visual.1998.745282
http://dx.doi.org/10.1109/VISUAL.1998.745282
3542C
The key to real-time rendering of large-scale surfaces is to locally adapt surface geometric complexity to changing view parameters. Several schemes have been developed to address this problem of view-dependent level-of-detail control. Among these, the view-dependent progressive mesh (VDPM) framework represents an arbitrary triangle mesh as a hierarchy of geometrically optimized refinement transformations, from which accurate approximating meshes can be efficiently retrieved. In this paper we extend the general VDPM framework to provide temporal coherence through the run-time creation of geomorphs. These geomorphs eliminate "popping" artifacts by smoothly interpolating geometry. Their implementation requires new output-sensitive data structures, which have the added benefit of reducing memory use. We specialize the VDPM framework to the important case of terrain rendering. To handle huge terrain grids, we introduce a block-based simplification scheme that constructs a progressive mesh as a hierarchy of block refinements. We demonstrate the need for an accurate approximation metric during simplification. Our contributions are highlighted in a real-time flyover of a large, rugged terrain. Notably, the use of geomorphs results in visually smooth rendering even at 72 frames/sec on a graphics workstation.
Hugues Hoppe
H. Hoppe
Microsoft Research, USA
10.1109/visual.1997.663865;10.1109/visual.1996.567600;10.1109/visual.1996.568126;10.1109/visual.1997.663860;10.1109/visual.1997.663908;10.1109/visual.1997.663865
104418823773TT
63
VAST2013
UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization
10.1109/tvcg.2013.212
http://dx.doi.org/10.1109/TVCG.2013.212
19922001J
Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis/VAST paper data set and product review data sets.
Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park
Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park
Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA
10.1109/tvcg.2012.258;10.1109/vast.2009.5332629;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485;10.1109/vast.2007.4388999;10.1109/vast.2007.4389006;10.1109/tvcg.2008.138;10.1109/vast.2010.5652443;10.1109/tvcg.2012.258
Latent Dirichlet allocation, nonnegative matrix factorization, topic modeling, visual analytics, interactive clustering, text analytics317188363118
64
InfoVis2011
Quality Metrics in High-Dimensional Data Visualization: An Overview and Systematization
10.1109/tvcg.2011.229
http://dx.doi.org/10.1109/TVCG.2011.229
22032212J
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research.
Enrico Bertini;Andrada Tatu;Daniel A. Keim
Enrico Bertini;Andrada Tatu;Daniel Keim
University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany
10.1109/infvis.2005.1532145;10.1109/vast.2010.5652433;10.1109/vast.2006.261423;10.1109/tvcg.2010.184;10.1109/tvcg.2010.179;10.1109/infvis.2004.15;10.1109/tvcg.2006.161;10.1109/tvcg.2007.70515;10.1109/infvis.2005.1532142;10.1109/visual.1990.146402;10.1109/infvis.2003.1249006;10.1109/visual.1990.146386;10.1109/tvcg.2006.138;10.1109/infvis.2004.59;10.1109/vast.2009.5332628;10.1109/infvis.2003.1249015;10.1109/vast.2010.5652450;10.1109/tvcg.2007.70535;10.1109/infvis.1998.729559;10.1109/infvis.2000.885092;10.1109/infvis.2004.3;10.1109/tvcg.2009.153;10.1109/infvis.1997.636794;10.1109/infvis.2005.1532145
Quality Metrics, High-Dimensional Data Visualization311188605361
65
VAST2018
RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records
10.1109/tvcg.2018.2865027
http://dx.doi.org/10.1109/TVCG.2018.2865027
299309J
We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and various other events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. This black-box nature of RNNs can impede their wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers who aim to design an interpretable and interactive visual analytics tool for RNNs.
Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo
Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo
IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR
10.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591;10.1109/tvcg.2013.212
Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare224187854417
66
InfoVis2010
Graphical Perception of Multiple Time Series
10.1109/tvcg.2010.162
http://dx.doi.org/10.1109/TVCG.2010.162
927934J
Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759-1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series--such as small multiples and horizon graphs--are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques--like standard line graphs--are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced.
Waqas Javed;Bryan McDonnel;Niklas Elmqvist
Waqas Javed;Bryan McDonnel;Niklas Elmqvist
Purdue University, West Lafayette, IN, USA;Purdue University, West Lafayette, IN, USA;Purdue University, West Lafayette, IN, USA
10.1109/tvcg.2008.166;10.1109/tvcg.2007.70583;10.1109/tvcg.2007.70535;10.1109/infvis.1999.801851;10.1109/tvcg.2008.125;10.1109/infvis.2005.1532144;10.1109/tvcg.2008.166
Line graphs, braided graphs, horizon graphs, small multiples, stacked graphs, evaluation, design guidelines321187294256
67
InfoVis2006
ASK-graphView: a large scale graph visualization system
10.1109/tvcg.2006.120
http://dx.doi.org/10.1109/TVCG.2006.120
669676J
We describe ASK-GraphView, a node-link-based graph visualization system that allows clustering and interactive navigation of large graphs, ranging in size up to 16 million edges. The system uses a scalable architecture and a series of increasingly sophisticated clustering algorithms to construct a hierarchy on an arbitrary, weighted undirected input graph. By lowering the interactivity requirements we can scale to substantially bigger graphs. The user is allowed to navigate this hierarchy in a top-down manner by interactively expanding individual clusters. ASK-GraphView also provides facilities for filtering and coloring, annotation, and cluster labeling.
James Abello;Frank van Ham;Neeraj Krishnan
James Abello;Frank Van Ham;Neeraj Krishnan
Ask.com and DIMACS, Rutgers University, USA;IBM;Ask.com
10.1109/infvis.2004.46;10.1109/infvis.2005.1532127;10.1109/infvis.2004.66;10.1109/infvis.1997.636718;10.1109/infvis.2004.43;10.1109/infvis.2004.46
Information visualization, graph visualization, graph clustering399187252991
68
InfoVis2000
A taxonomy of visualization techniques using the data state reference model
10.1109/infvis.2000.885092
http://dx.doi.org/10.1109/INFVIS.2000.885092
6975C
In previous work, researchers have attempted to construct taxonomies of information visualization techniques by examining the data domains that are compatible with these techniques. This is useful because implementers can quickly identify various techniques that can be applied to their domain of interest. However, these taxonomies do not help the implementers understand how to apply and implement these techniques. The author extends and proposes a new way to taxonomize information visualization techniques by using the Data State Model (E.H. Chi and J.T. Riedl, 1998). In fact, as the taxonomic analysis in the paper will show, many of the techniques share similar operating steps that can easily be reused. The paper shows that the Data State Model not only helps researchers understand the space of design, but also helps implementers understand how information visualization techniques can be applied more broadly.
Ed Huai-hsin Chi
E.H. Chi
Xerox Palo Alto Research Center, Palo Alto, CA, USA
10.1109/infvis.1997.636761;10.1109/infvis.1997.636792;10.1109/infvis.1998.729560;10.1109/infvis.1997.636761
Information Visualization, Data State Model,Reference Model, Taxonomy, Techniques, Operators84818783819
69
InfoVis2015
Reactive Vega: A Streaming Dataflow Architecture for Declarative Interactive Visualization
10.1109/tvcg.2015.2467091
http://dx.doi.org/10.1109/TVCG.2015.2467091
659668J
We present Reactive Vega, a system architecture that provides the first robust and comprehensive treatment of declarative visual and interaction design for data visualization. Starting from a single declarative specification, Reactive Vega constructs a dataflow graph in which input data, scene graph elements, and interaction events are all treated as first-class streaming data sources. To support expressive interactive visualizations that may involve time-varying scalar, relational, or hierarchical data, Reactive Vega's dataflow graph can dynamically re-write itself at runtime by extending or pruning branches in a data-driven fashion. We discuss both compile- and run-time optimizations applied within Reactive Vega, and share the results of benchmark studies that indicate superior interactive performance to both D3 and the original, non-reactive Vega system.
Arvind Satyanarayan;Ryan Russell;Jane Hoffswell;Jeffrey Heer
Arvind Satyanarayan;Ryan Russell;Jane Hoffswell;Jeffrey Heer
Stanford University;University of Washington;University of Washington;University of Washington
10.1109/visual.1995.480821;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2010.144;10.1109/tvcg.2014.2346250;10.1109/tvcg.2013.179;10.1109/tvcg.2010.177;10.1109/visual.1996.567752;10.1109/infvis.2000.885086;10.1109/infvis.2004.12;10.1109/tvcg.2015.2467191;10.1109/tvcg.2007.70515;10.1109/visual.1995.480821
Information visualization, systems, toolkits, declarative specification, optimization, interaction, streaming data267186412469
70
Vis1991
The asymptotic decider: resolving the ambiguity in marching cubes
10.1109/visual.1991.175782
http://dx.doi.org/10.1109/VISUAL.1991.175782
83C
A method for computing isovalue or contour surfaces of a trivariate function is discussed. The input data are values of the trivariate function, F_{ijk}, at the cuberille grid points (x_i, y_j, z_k), and the output is a collection of triangles representing the surface consisting of all points where F(x,y,z) is a constant value. The method is a modification that is intended to correct a problem with a previous method.
Gregory M. Nielson;Bernd Hamann
G.M. Nielson;B. Hamann
Computer Science, Arizona State University, Tempe, AZ, USA;Computer Science, Arizona State University, Tempe, AZ, USA
10.1109/visual.1990.146363;10.1109/visual.1990.146363
86818416569
71
InfoVis2006
MatrixExplorer: a Dual-Representation System to Explore Social Networks
10.1109/tvcg.2006.160
http://dx.doi.org/10.1109/TVCG.2006.160
677684J
MatrixExplorer is a network visualization system that uses two representations: node-link diagrams and matrices. Its design comes from a list of requirements formalized after several interviews and a participatory design session conducted with social science researchers. Although matrices are commonly used in social network analysis, very few systems support matrix-based representations for visualizing and analyzing networks. MatrixExplorer provides several novel features to support the exploration of social networks with a matrix-based representation, in addition to the standard interactive filtering and clustering functions. It provides tools to reorder (layout) matrices, to annotate and compare findings across different layouts, and to find consensus among several clusterings. MatrixExplorer also supports node-link diagram views, which are familiar to most users and remain a convenient way to publish or communicate exploration results. Matrix and node-link representations are kept synchronized at all stages of the exploration process.
Nathalie Henry;Jean-Daniel Fekete
Nathalie Henry;Jean-daniel Fekete
INRIA Futurs/LRI and University of Sydney, France;INRIA-Futurs/LRI, France
10.1109/infvis.2004.64;10.1109/infvis.2004.64
social networks visualization, node-link diagrams, matrix-based representations, exploratory process, matrix ordering, interactive clustering, consensus402184351996
72
VAST2015
The Role of Uncertainty, Awareness, and Trust in Visual Analytics
10.1109/tvcg.2015.2467591
http://dx.doi.org/10.1109/TVCG.2015.2467591
240249J
Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of the user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how humans' perceptual and cognitive biases influence the user's awareness of such uncertainties, and how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework to discuss the consequences of these aspects in knowledge construction and, through examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making.
Dominik Sacha;Hansi Senaratne;Bum Chul Kwon;Geoffrey P. Ellis;Daniel A. Keim
Dominik Sacha;Hansi Senaratne;Bum Chul Kwon;Geoffrey Ellis;Daniel A. Keim
Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz
10.1109/tvcg.2014.2346575;10.1109/visual.2000.885679;10.1109/vast.2008.4677385;10.1109/vast.2009.5332611;10.1109/tvcg.2012.260;10.1109/vast.2011.6102473;10.1109/vast.2009.5333020;10.1109/vast.2011.6102435;10.1109/tvcg.2012.279;10.1109/tvcg.2014.2346481;10.1109/vast.2006.261416;10.1109/tvcg.2014.2346575
Visual Analytics, Knowledge Generation, Uncertainty Measures and Propagation, Trust Building, Human Factors261180834217
73
InfoVis2012
Visual Semiotics & Uncertainty Visualization: An Empirical Study
10.1109/tvcg.2012.279
http://dx.doi.org/10.1109/TVCG.2012.279
24962505J
This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.
Alan M. MacEachren;Robert E. Roth;James O'Brien;Bonan Li;Derek Swingley;Mark Gahegan
Alan M. MacEachren;Robert E. Roth;James O'Brien;Bonan Li;Derek Swingley;Mark Gahegan
Pennsylvania State University, USA;University of Wisconsin-Madison, USA;Risk Frontiers, Macquarie University, Australia;ZillionInfo, USA;Pennsylvania State University, USA;University of Auckland, New Zealand
10.1109/visual.1992.235199;10.1109/tvcg.2011.197;10.1109/tvcg.2009.114;10.1109/tvcg.2011.209
Uncertainty visualization, uncertainty categories, visual variables, semiotics295180346047HM
74
InfoVis2009
Flow Mapping and Multivariate Visualization of Large Spatial Interaction Data
10.1109/tvcg.2009.143
http://dx.doi.org/10.1109/TVCG.2009.143
10411048J
Spatial interactions (or flows), such as population migration and disease spread, naturally form a weighted location-to-location network (graph). Such geographically embedded networks (graphs) are usually very large. For example, the county-to-county migration data in the U.S. has thousands of counties and about a million migration paths. Moreover, many variables are associated with each flow, such as the number of migrants for different age groups, income levels, and occupations. It is a challenging task to visualize such data and discover network structures, multivariate relations, and their geographic patterns simultaneously. This paper addresses these challenges by developing an integrated interactive visualization framework that consists of three coupled components: (1) a spatially constrained graph partitioning method that can construct a hierarchy of geographical regions (communities), where there are more flows or connections within regions than across regions; (2) a multivariate clustering and visualization method to detect and present multivariate patterns in the aggregated region-to-region flows; and (3) a highly interactive flow mapping component to map both flow and multivariate patterns in the geographic space, at different hierarchical levels. The proposed approach can process relatively large data sets and effectively discover and visualize major flow structures and multivariate relations at the same time. User interactions are supported to facilitate the understanding of both an overview and detailed patterns.
Diansheng Guo
Diansheng Guo
Department of Geography, University of South Carolina, USA
10.1109/tvcg.2008.135;10.1109/tvcg.2006.147;10.1109/tvcg.2006.138;10.1109/infvis.2005.1532150;10.1109/tvcg.2008.135
hierarchical clustering, graph partitioning, flow mapping, spatial interaction, contiguity constraints, multidimensional visualization, coordinated views, data mining305179433706
75
Vis1998
Simplifying surfaces with color and texture using quadric error metrics
10.1109/visual.1998.745312
http://dx.doi.org/10.1109/VISUAL.1998.745312
263269C
There are a variety of application areas in which there is a need for simplifying complex polygonal surface models. These models often have material properties such as colors, textures, and surface normals. Our surface simplification algorithm, based on iterative edge contraction and quadric error metrics, can rapidly produce high quality approximations of such models. We present a natural extension of our original error metric that can account for a wide range of vertex attributes.
Michael Garland;Paul S. Heckbert
M. Garland;P.S. Heckbert
Computer Science Department, Carnegie Mellon University, Pittsburgh, USA;Carnegie Mellon University, USA
10.1109/visual.1997.663908;10.1109/visual.1996.568126;10.1109/visual.1997.663908
surface simplification, multiresolution modeling, level of detail, quadric error metric, edge contraction, surface properties, discontinuity preservation74417819494
76
InfoVis2018
Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco
10.1109/tvcg.2018.2865240
http://dx.doi.org/10.1109/TVCG.2018.2865240
438448J
There exists a gap between visualization design guidelines and their application in visualization tools. While empirical studies can provide design guidance, we lack a formal framework for representing design knowledge, integrating results across studies, and applying this knowledge in automated design tools that promote effective encodings and facilitate visual exploration. We propose modeling visualization design knowledge as a collection of constraints, in conjunction with a method to learn weights for soft constraints from experimental data. Using constraints, we can take theoretical design knowledge and express it in a concrete, extensible, and testable form: the resulting models can recommend visualization designs and can easily be augmented with additional constraints or updated weights. We implement our approach in Draco, a constraint-based system based on Answer Set Programming (ASP). We demonstrate how to construct increasingly sophisticated automated visualization design systems, including systems based on weights learned directly from the results of graphical perception experiments.
Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith 0001;Bill Howe;Jeffrey Heer
Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith;Bill Howe;Jeffrey Heer
University of Washington;University of Washington;University of Washington;University of Washington;University of California Santa Cruz;University of Washington;University of Washington
10.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346984;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2007.70594;10.1109/tvcg.2017.2744320;10.1109/tvcg.2017.2744198;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2744359;10.1109/tvcg.2015.2467191
Automated Visualization Design,Perceptual Effectiveness,Constraints,Knowledge Bases,Answer Set Programming225177673238BP
77
InfoVis2011
Local Affine Multidimensional Projection
10.1109/tvcg.2011.220
http://dx.doi.org/10.1109/TVCG.2011.220
25632571J
Multidimensional projection techniques have experienced many improvements lately, mainly regarding computational times and accuracy. However, existing methods do not yet provide flexible enough mechanisms for visualization-oriented fully interactive applications. This work presents a new multidimensional projection technique designed to be more flexible and versatile than other methods. This novel approach, called Local Affine Multidimensional Projection (LAMP), relies on orthogonal mapping theory to build accurate local transformations that can be dynamically modified according to user knowledge. The accuracy, flexibility and computational efficiency of LAMP are confirmed by a comprehensive set of comparisons. LAMP's versatility is exploited in an application which seeks to correlate data that, in principle, has no connection, as well as in the visual exploration of textual documents.
Paulo Joia;Danilo Barbosa Coimbra;José Alberto Cuminato;Fernando Vieira Paulovich;Luis Gustavo Nonato
Paulo Joia;Danilo Coimbra;Jose A. Cuminato;Fernando V. Paulovich;Luis G. Nonato
Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil
10.1109/visual.1996.567787;10.1109/tvcg.2009.140;10.1109/tvcg.2007.70580;10.1109/infvis.2002.1173159;10.1109/tvcg.2010.207;10.1109/tvcg.2010.170;10.1109/infvis.2002.1173161
Multidimensional Projection, High Dimensional Data, Visual Data Mining2176361501HM
78
InfoVis2013
Nanocubes for Real-Time Exploration of Spatiotemporal Datasets
10.1109/tvcg.2013.179
http://dx.doi.org/10.1109/TVCG.2013.179
24562465J
Consider real-time exploration of large multidimensional spatiotemporal datasets with billions of entries, each defined by a location, a time, and other attributes. Are certain attributes correlated spatially or temporally? Are there trends or outliers in the data? Answering these questions requires aggregation over arbitrary regions of the domain and attributes of the data. Many relational databases implement the well-known data cube aggregation operation, which in a sense precomputes every possible aggregate query over the database. Data cubes are sometimes assumed to take a prohibitively large amount of space, and to consequently require disk storage. In contrast, we show how to construct a data cube that fits in a modern laptop's main memory, even for billions of entries; we call this data structure a nanocube. We present algorithms to compute and query a nanocube, and show how it can be used to generate well-known visual encodings such as heatmaps, histograms, and parallel coordinate plots. When compared to exact visualizations created by scanning an entire dataset, nanocube plots have bounded screen error across a variety of scales, thanks to a hierarchical structure in space and time. We demonstrate the effectiveness of our technique on a variety of real-world datasets, and present memory, timing, and network bandwidth measurements. We find that the timings for the queries in our examples are dominated by network and user-interaction latencies.
Lauro Didier Lins;James T. Klosowski;Carlos Eduardo Scheidegger
Lauro Lins;James T. Klosowski;Carlos Scheidegger
AT&T Research, USA;AT&T Research, USA;AT&T Research, USA
10.1109/tvcg.2006.161;10.1109/infvis.2002.1173141;10.1109/tvcg.2009.191;10.1109/vast.2008.4677357;10.1109/tvcg.2007.70594;10.1109/infvis.2002.1173156;10.1109/visual.1990.146386;10.1109/tvcg.2011.185;10.1109/tvcg.2006.161
Data cube, Data structures, Interactive exploration319174363076HM
79
InfoVis2010
How Information Visualization Novices Construct Visualizations
10.1109/tvcg.2010.164
http://dx.doi.org/10.1109/TVCG.2010.164
943952J
It remains challenging for information visualization novices to rapidly construct visualizations during exploratory data analysis. We conducted an exploratory laboratory study in which information visualization novices explored fictitious sales data by communicating visualization specifications to a human mediator, who rapidly constructed the visualizations using commercial visualization software. We found that three activities were central to the iterative visualization construction process: data attribute selection, visual template selection, and visual mapping specification. The major barriers faced by the participants were translating questions into data attributes, designing visual mappings, and interpreting the visualizations. Partial specification was common, and the participants used simple heuristics and preferred visualizations they were already familiar with, such as bar, line and pie charts. We derived abstract models from our observations that describe barriers in the data exploration process and uncovered how information visualization novices think about visualization specifications. Our findings support the need for tools that suggest potential visualizations and support iterative refinement, that provide explanations and help with learning, and that are tightly integrated into tool support for the overall visual analytics process.
Lars Grammel;Melanie Tory;Margaret-Anne D. Storey
Lars Grammel;Melanie Tory;Margaret-Anne Storey
University of Victoria, Canada;University of Victoria, Canada;University of Victoria, Canada
10.1109/tvcg.2007.70515;10.1109/tvcg.2006.163;10.1109/tvcg.2007.70541;10.1109/vast.2009.5333878;10.1109/tvcg.2008.109;10.1109/vast.2006.261428;10.1109/tvcg.2007.70577;10.1109/vast.2008.4677358;10.1109/vast.2008.4677365;10.1109/tvcg.2007.70535;10.1109/infvis.2005.1532136;10.1109/infvis.1998.729560;10.1109/tvcg.2007.70594;10.1109/infvis.2000.885086;10.1109/infvis.2001.963289;10.1109/infvis.2000.885092;10.1109/tvcg.2008.137;10.1109/tvcg.2007.70515
Empirical study, visualization, visualization construction, visual analytics, visual mapping, novices283174404311
80
InfoVis2004
A Comparison of the Readability of Graphs Using Node-Link and Matrix-Based Representations
10.1109/infvis.2004.1
http://dx.doi.org/10.1109/INFVIS.2004.1
1724C
In this paper, we describe a taxonomy of generic graph-related tasks and an evaluation aiming at assessing the readability of two representations of graphs: matrix-based representations and node-link diagrams. This evaluation bears on seven generic tasks and leads to important recommendations with regard to the representation of graphs according to their size and density. For instance, we show that when graphs are larger than twenty vertices, the matrix-based visualization performs better than node-link diagrams on most tasks. Only path finding is consistently in favor of node-link diagrams throughout the evaluation.
Mohammad Ghoniem;Jean-Daniel Fekete;Philippe Castagliola
M. Ghoniem;J.-D. Fekete;P. Castagliola
Ecole des Mines de Nantes, Nantes, France;INRIA Futurs/LRI, Université Paris Sud, Orsay, France; IRCCyN, Ecole des Mines de Nantes, Nantes, France
10.1109/infvis.2003.1249030
Visualization of graphs, adjacency matrices, node-link representation, readability, evaluation532173142394TT
81
Vis1994
XmdvTool: integrating multiple methods for visualizing multivariate data
10.1109/visual.1994.346302
http://dx.doi.org/10.1109/VISUAL.1994.346302
326333C
Much of the attention in visualization research has focussed on data rooted in physical phenomena, which is generally limited to three or four dimensions. However, many sources of data do not share this dimensional restriction. A critical problem in the analysis of such data is providing researchers with tools to gain insights into characteristics of the data, such as anomalies and patterns. Several visualization methods have been developed to address this problem, and each has its strengths and weaknesses. This paper describes a system named XmdvTool which integrates several of the most common methods for projecting multivariate data onto a two-dimensional screen. This integration allows users to explore their data in a variety of formats with ease. A view enhancement mechanism called an N-dimensional brush is also described. The brush allows users to gain insights into spatial relationships over N dimensions by highlighting data which falls within a user-specified subspace.
Matthew O. Ward
M.O. Ward
Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA
10.1109/visual.1990.146386;10.1109/visual.1990.146387;10.1109/visual.1990.146402;10.1109/visual.1990.146386
57217116453
82
VAST2012
Spatiotemporal social media analytics for abnormal event detection and examination using seasonal-trend decomposition
10.1109/vast.2012.6400557
http://dx.doi.org/10.1109/VAST.2012.6400557
143152C
Recent advances in technology have enabled social media services to support space-time indexed data, and internet users from all over the world have created a large volume of time-stamped, geo-located data. Such spatiotemporal data has immense value for increasing situational awareness of local events, providing insights for investigations and understanding the extent of incidents, their severity, and consequences, as well as their time-evolving nature. In analyzing social media data, researchers have mainly focused on finding temporal trends according to volume-based importance. Hence, a relatively small volume of relevant messages may easily be obscured by a huge data set indicating normal situations. In this paper, we present a visual analytics approach that provides users with scalable and interactive social media data analysis and visualization including the exploration and examination of abnormal topics and events within various social media data sources, such as Twitter, Flickr and YouTube. In order to find and understand abnormal events, the analyst can first extract major topics from a set of selected messages and rank them probabilistically using Latent Dirichlet Allocation. He can then apply seasonal trend decomposition together with traditional control chart methods to find unusual peaks and outliers within topic time series. Our case studies show that situational awareness can be improved by incorporating the anomaly and trend examination techniques into a highly interactive visual analysis process.
Junghoon Chae;Dennis Thom;Harald Bosch;Yun Jang;Ross Maciejewski;David S. Ebert;Thomas Ertl
Junghoon Chae;Dennis Thom;Harald Bosch;Yun Jang;Ross Maciejewski;David S. Ebert;Thomas Ertl
Purdue University;University of Stuttgart;University of Stuttgart;Sejong University;Arizona State University;Purdue University;University of Stuttgart
10.1109/vast.2011.6102456;10.1109/vast.2011.6102461;10.1109/tvcg.2008.175;10.1109/vast.2011.6102488;10.1109/vast.2011.6102456
354171393235
83
InfoVis2016
Embedded Data Representations
10.1109/tvcg.2016.2598608
http://dx.doi.org/10.1109/TVCG.2016.2598608
461470J
We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles are making it increasingly easier to display data in-context. While researchers and artists have already begun to create embedded data representations, the benefits, trade-offs, and even the language necessary to describe and compare these approaches remain unexplored. In this paper, we formalize the notion of physical data referents - the real-world entities and spaces to which data corresponds - and examine the relationship between referents and the visual and physical representations of their data. We differentiate situated representations, which display data in proximity to data referents, and embedded representations, which display data so that it spatially coincides with data referents. Drawing on examples from visualization, ubiquitous computing, and art, we explore the role of spatial indirection, scale, and interaction for embedded representations. We also examine the tradeoffs between non-situated, situated, and embedded data displays, including both visualizations and physicalizations. Based on our observations, we identify a variety of design challenges for embedded data representation, and suggest opportunities for future research and applications.
Wesley Willett;Yvonne Jansen;Pierre Dragicevic
Wesley Willett;Yvonne Jansen;Pierre Dragicevic
University of Calgary;University of Copenhagen;Inria
10.1109/tvcg.2013.134;10.1109/infvis.1998.729560;10.1109/tvcg.2013.134
augmented reality;Information visualization;data physicalization;ambient displays;ubiquitous computing192169544325
84
InfoVis2012
Exploring Flow, Factors, and Outcomes of Temporal Event Sequences with the Outflow Visualization
10.1109/tvcg.2012.225
http://dx.doi.org/10.1109/TVCG.2012.225
26592668J
Event sequence data is common in many domains, ranging from electronic medical records (EMRs) to sports events. Moreover, such sequences often result in measurable outcomes (e.g., life or death, win or loss). Collections of event sequences can be aggregated together to form event progression pathways. These pathways can then be connected with outcomes to model how alternative chains of events may lead to different results. This paper describes the Outflow visualization technique, designed to (1) aggregate multiple event sequences, (2) display the aggregate pathways through different event states with timing and cardinality, (3) summarize the pathways' corresponding outcomes, and (4) allow users to explore external factors that correlate with specific pathway state transitions. Results from a user study with twelve participants show that users were able to learn how to use Outflow easily with limited training and perform a range of tasks both accurately and rapidly.
Krist Wongsuphasawat;David Gotz
Krist Wongsuphasawat;David Gotz
University of Maryland, USA;IBM Thomas J. Watson Research Center, USA
10.1109/tvcg.2009.181;10.1109/vast.2011.6102453;10.1109/tvcg.2006.192;10.1109/infvis.2005.1532150;10.1109/vast.2009.5332595;10.1109/tvcg.2009.117;10.1109/infvis.2005.1532152;10.1109/vast.2006.261421;10.1109/tvcg.2009.181
Outflow, information visualization, temporal event sequences, state diagram, state transition254167352099
85
InfoVis2000
Focus+context display and navigation techniques for enhancing radial, space-filling hierarchy visualizations
10.1109/infvis.2000.885091
http://dx.doi.org/10.1109/INFVIS.2000.885091
5765C
Radial, space-filling visualizations can be useful for depicting information hierarchies, but they suffer from one major problem. As the hierarchy grows in size, many items become small, peripheral slices that are difficult to distinguish. We have developed three visualization/interaction techniques that provide flexible browsing of the display. The techniques allow viewers to examine the small items in detail while providing context within the entire information hierarchy. Additionally, smooth transitions between views help users maintain orientation within the complete information space.
John T. Stasko;Eugene Zhang
J. Stasko;E. Zhang
GVU Center and College of Computing, Georgia Institute of Technology, Atlanta, GA, USA;GVU Center and College of Computing, Georgia Institute of Technology, Atlanta, GA, USA
10.1109/infvis.1999.801860;10.1109/visual.1992.235217;10.1109/infvis.1998.729557;10.1109/visual.1991.175815;10.1109/infvis.1999.801860
629167151292
86
Vis2003
Curvature-based transfer functions for direct volume rendering: methods and applications
10.1109/visual.2003.1250414
http://dx.doi.org/10.1109/VISUAL.2003.1250414
513520C
Direct volume rendering of scalar fields uses a transfer function to map locally measured data properties to opacities and colors. The domain of the transfer function is typically the one-dimensional space of scalar data values. This paper advances the use of curvature information in multi-dimensional transfer functions, with a methodology for computing high-quality curvature measurements. The proposed methodology combines an implicit formulation of curvature with convolution-based reconstruction of the field. We give concrete guidelines for implementing the methodology, and illustrate the importance of choosing accurate filters for computing derivatives with convolution. Curvature-based transfer functions are shown to extend the expressivity and utility of volume rendering through contributions in three different application areas: nonphotorealistic volume rendering, surface smoothing via anisotropic diffusion, and visualization of isosurface uncertainty.
Gordon L. Kindlmann;Ross T. Whitaker;Tolga Tasdizen;Torsten Möller
G. Kindlmann;R. Whitaker;T. Tasdizen;T. Moller
Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Graphics, Usability, and Visualization (GrUVi) Laboratory, Simon Fraser University, Canada
10.1109/visual.2000.885696;10.1109/visual.2002.1183766;10.1109/visual.1995.480795;10.1109/visual.2000.885694;10.1109/visual.1994.346331;10.1109/visual.2002.1183777;10.1109/visual.2000.885696
volume rendering, implicit surface curvature, convolution-based differentiation, non-photorealistic rendering, surface processing, uncertainty visualization, flowline curvature583166361354
87
InfoVis2014
Error Bars Considered Harmful: Exploring Alternate Encodings for Mean and Error
10.1109/tvcg.2014.2346298
http://dx.doi.org/10.1109/TVCG.2014.2346298
21422151J
When making an inference or comparison with uncertain, noisy, or incomplete data, measurement error and confidence intervals can be as important for judgment as the actual mean values of different groups. These often misunderstood statistical quantities are frequently represented by bar charts with error bars. This paper investigates drawbacks with this standard encoding, and considers a set of alternatives designed to more effectively communicate the implications of mean and error data to a general audience, drawing from lessons learned from the use of visual statistics in the information visualization community. We present a series of crowd-sourced experiments that confirm that the encoding of mean and error significantly changes how viewers make decisions about uncertain data. Careful consideration of design tradeoffs in the visual presentation of data results in human reasoning that is more consistently aligned with statistical inferences. We suggest the use of gradient plots (which use transparency to encode uncertainty) and violin plots (which use width) as better alternatives for inferential tasks than bar charts with error bars.
Michael Correll;Michael Gleicher
Michael Correll;Michael Gleicher
Department of Computer Sciences, University of Wisconsin-Madison;Department of Computer Sciences, University of Wisconsin-Madison
10.1109/tvcg.2012.220;10.1109/tvcg.2012.199;10.1109/tvcg.2012.262;10.1109/tvcg.2011.175;10.1109/tvcg.2012.279;10.1109/tvcg.2012.220
Visual statistics, information visualization, crowd-sourcing, empirical evaluation237165353081
88
VAST2018
RuleMatrix: Visualizing and Understanding Classifiers with Rules
10.1109/tvcg.2018.2864812
http://dx.doi.org/10.1109/TVCG.2018.2864812
342352J
With the growing adoption of machine learning techniques, there is a surge of research interest in making machine learning systems more transparent and interpretable. Various visualizations have been developed to help model developers understand, diagnose, and refine machine learning models. However, a large group of potential but often neglected users are domain experts who have little knowledge of machine learning yet are expected to work with machine learning systems. In this paper, we present an interactive visualization technique to help users with little expertise in machine learning to understand, explore and validate predictive models. By viewing the model as a black box, we extract a standardized rule-based knowledge representation from its input-output behavior. Then, we design RuleMatrix, a matrix-based visualization of rules to help users navigate and verify the rules and the black-box model. We evaluate the effectiveness of RuleMatrix via two use cases and a usability study.
Yao Ming;Huamin Qu;Enrico Bertini
Yao Ming;Huamin Qu;Enrico Bertini
Hong Kong University of Science and Technology, Kowloon, HK;Hong Kong University of Science and Technology, Kowloon, HK;New York University
10.1109/tvcg.2017.2744683;10.1109/tvcg.2017.2744718;10.1109/vast.2017.8585720;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2017.2744158;10.1109/visual.2005.1532820;10.1109/vast.2011.6102453;10.1109/tvcg.2017.2744878;10.1109/tvcg.2017.2744683
explainable machine learning,rule visualization,visual analytics171164483162
89
Vis1999
New quadric metric for simplifying meshes with appearance attributes
10.1109/visual.1999.809869
http://dx.doi.org/10.1109/VISUAL.1999.809869
59510C
Complex triangle meshes arise naturally in many areas of computer graphics and visualization. Previous work has shown that a quadric error metric allows fast and accurate geometric simplification of meshes. This quadric approach was recently generalized to handle meshes with appearance attributes. In this paper we present an improved quadric error metric for simplifying meshes with attributes. The new metric, based on geometric correspondence in 3D, requires less storage, evaluates more quickly, and results in more accurate simplified meshes. Meshes often have attribute discontinuities, such as surface creases and material boundaries, which require multiple attribute vectors per vertex. We show that a wedge-based mesh data structure captures such discontinuities efficiently and permits simultaneous optimization of these multiple attribute vectors. In addition to the new quadric metric, we experiment with two techniques proposed in geometric simplification, memoryless simplification and volume preservation, and show that both of these are beneficial within the quadric framework. The new scheme is demonstrated on a variety of meshes with colors and normals.
Hugues Hoppe
H. Hoppe
Microsoft Research Limited, USA
10.1109/visual.1998.745312;10.1109/visual.1998.745285;10.1109/visual.1998.745314;10.1109/visual.1997.663908;10.1109/visual.1998.745312
level of detail, mesh decimation, multiresolution59016318278
90
InfoVis2017
The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?
10.1109/tvcg.2017.2745941
http://dx.doi.org/10.1109/TVCG.2017.2745941
457467J
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.
Benjamin Bach;Ronell Sicat;Johanna Beyer;Maxime Cordeil;Hanspeter Pfister
Benjamin Bach;Ronell Sicat;Johanna Beyer;Maxime Cordeil;Hanspeter Pfister
Harvard University;Harvard University;Harvard University;Monash University;Harvard University
10.1109/tvcg.2011.234;10.1109/tvcg.2012.216;10.1109/tvcg.2016.2599107;10.1109/tvcg.2008.153;10.1109/tvcg.2013.121;10.1109/tvcg.2013.134;10.1109/tvcg.2015.2467202;10.1109/tvcg.2011.234
Augmented Reality,3D Interaction,User Study,Immersive Displays212163676281
91
VAST2016
Characterizing Guidance in Visual Analytics
10.1109/tvcg.2016.2598468
http://dx.doi.org/10.1109/TVCG.2016.2598468
111120J
Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.
Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski
Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski
Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany
10.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482;10.1109/visual.2000.885678
Visual analytics;guidance model;assistance;user support183163553549
92
Vis1999
Multi-projector displays using camera-based registration
10.1109/visual.1999.809883
http://dx.doi.org/10.1109/VISUAL.1999.809883
161522C
Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year-old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.
Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs
R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs
Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA
display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration519162211123
93
VAST2020
CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization
10.1109/tvcg.2020.3030418
http://dx.doi.org/10.1109/TVCG.2020.3030418
13961406J
Deep learning's great success motivates many practitioners and students to learn about this exciting technology. However, it is often challenging for beginners to take their first step due to the complexity of understanding and applying deep learning. We present CNN Explainer, an interactive visualization tool designed for non-experts to learn and examine convolutional neural networks (CNNs), a foundational deep learning model architecture. Our tool addresses key challenges that novices face while learning about CNNs, which we identify from interviews with instructors and a survey with past students. CNN Explainer tightly integrates a model overview that summarizes a CNN's structure, and on-demand, dynamic visual explanation views that help users understand the underlying components of CNNs. Through smooth transitions across levels of abstraction, our tool enables users to inspect the interplay between low-level mathematical operations and high-level model structures. A qualitative user study shows that CNN Explainer helps users more easily understand the inner workings of CNNs, and is engaging and enjoyable to use. We also derive design lessons from our study. Developed using modern web technologies, CNN Explainer runs locally in users' web browsers without the need for installation or specialized hardware, broadening the public's education access to modern deep learning techniques.
Zijie J. Wang;Robert Turko;Omar Shaikh;Haekyu Park;Nilaksh Das;Fred Hohman;Minsuk Kahng;Duen Horng (Polo) Chau
Zijie J. Wang;Robert Turko;Omar Shaikh;Haekyu Park;Nilaksh Das;Fred Hohman;Minsuk Kahng;Duen Horng Polo Chau
Georgia Tech.;Georgia Tech.;Georgia Tech.;Georgia Tech.;Georgia Tech.;Georgia Tech.;Oregon State University;Georgia Tech.
10.1109/tvcg.2011.185;10.1109/tvcg.2019.2934659;10.1109/tvcg.2017.2744718;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744358;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744683;10.1109/tvcg.2011.185
Deep learning,machine learning,convolutional neural networks,visual analytics103161596898
94
VAST2016
Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis
10.1109/tvcg.2016.2598495
http://dx.doi.org/10.1109/TVCG.2016.2598495
241250J
Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.
Dominik Sacha;Leishi Zhang;Michael Sedlmair;John Aldo Lee;Jaakko Peltonen;Daniel Weiskopf;Stephen C. North;Daniel A. Keim
Dominik Sacha;Leishi Zhang;Michael Sedlmair;John A. Lee;Jaakko Peltonen;Daniel Weiskopf;Stephen C. North;Daniel A. Keim
University of Konstanz, Germany;Middlesex University, UK;University of Vienna, Austria;SSS, Belgian F.R.S.-FNRS.;Helsinki Institute for Information Technology HIIT, University of Tampere, Finland;University of Konstanz, Germany;Infovisible LLC, Oldwick, U.S.A.;VISUS, University of Stuttgart, Germany
10.1109/tvcg.2012.195;10.1109/tvcg.2009.153;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346481;10.1109/vast.2011.6102449;10.1109/tvcg.2007.70515;10.1109/vast.2008.4677350;10.1109/vast.2009.5332629;10.1109/vast.2010.5652443;10.1109/vast.2014.7042492;10.1109/tvcg.2015.2467132;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346321;10.1109/tvcg.2013.153;10.1109/vast.2010.5652484;10.1109/tvcg.2006.156;10.1109/tvcg.2015.2467717;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/vast.2010.5652392;10.1109/tvcg.2013.126;10.1109/tvcg.2012.195
Interactive visualization;machine learning;visual analytics;dimensionality reduction248160594482
95
VAST2015
MobilityGraphs: Visual Analysis of Mass Mobility Dynamics via Spatio-Temporal Graphs and Clustering
10.1109/tvcg.2015.2468111
http://dx.doi.org/10.1109/TVCG.2015.2468111
1120J
Learning more about people mobility is an important task for official decision makers and urban planners. Mobility data sets characterize the variation of the presence of people in different places over time as well as movements (or flows) of people between the places. The analysis of mobility data is challenging due to the need to analyze and compare spatial situations (i.e., presence and flows of people at certain time moments) and to gain an understanding of the spatio-temporal changes (variations of situations over time). Traditional flow visualizations usually fail due to massive clutter. Modern approaches offer limited support for investigating the complex variation of the movements over longer time periods. We propose a visual analytics methodology that solves these issues by combined spatial and temporal simplifications. We have developed a graph-based method, called MobilityGraphs, which reveals movement patterns that were occluded in flow maps. Our method enables the visual representation of the spatio-temporal variation of movements for long time series of spatial situations originally containing a large number of intersecting flows. The interactive system supports data exploration from various perspectives and at various levels of detail by interactive setting of clustering parameters. The feasibility of our approach was tested on aggregated mobility data derived from a set of geolocated Twitter posts within the Greater London city area and mobile phone call data records in Abidjan, Ivory Coast. We could show that MobilityGraphs support the identification of regular daily and weekly movement patterns of the resident population.
Tatiana von Landesberger;Felix Brodkorb;Philipp Roskosch;Natalia V. Andrienko;Gennady L. Andrienko;Andreas Kerren
Tatiana von Landesberger;Felix Brodkorb;Philipp Roskosch;Natalia Andrienko;Gennady Andrienko;Andreas Kerren
Technical University of Darmstadt, Germany;Technical University of Darmstadt, Germany;Technical University of Darmstadt, Germany;Fraunhofer IAIS, City University, London, UK;Fraunhofer IAIS, City University, London, UK;Fraunhofer IAIS, City University, London, UK
10.1109/tvcg.2011.202;10.1109/tvcg.2011.226;10.1109/tvcg.2011.233;10.1109/infvis.2004.18;10.1109/tvcg.2009.143;10.1109/tvcg.2014.2346271;10.1109/tvcg.2008.125;10.1109/tvcg.2014.2346441;10.1109/infvis.1999.801851;10.1109/vast.2012.6400553;10.1109/vast.2009.5333893;10.1109/infvis.2005.1532150;10.1109/tvcg.2011.202
Visual analytics, movement data, networks, graphs, temporal aggregation, spatial aggregation, flows, clustering211159564670
96
VAST2009
Parallel Tag Clouds to explore and analyze faceted text corpora
10.1109/vast.2009.5333443
http://dx.doi.org/10.1109/VAST.2009.5333443
9198C
Do court cases differ from place to place? What kind of picture do we get by looking at a country's collection of law cases? We introduce parallel tag clouds: a new way to visualize differences amongst facets of very large metadata-rich text corpora. We have pointed parallel tag clouds at a collection of over 600,000 US Circuit Court decisions spanning a period of 50 years and have discovered regional as well as linguistic differences between courts. The visualization technique combines graphical elements from parallel coordinates and traditional tag clouds to provide rich overviews of a document collection while acting as an entry point for exploration of individual texts. We augment basic parallel tag clouds with a details-in-context display and an option to visualize changes over a second facet of the data, such as time. We also address text mining challenges such as selecting the best words to visualize, and how to do so in reasonable time periods to maintain interactivity.
Christopher Collins 0001;Fernanda B. Viégas;Martin Wattenberg
Christopher Collins;Fernanda B. Viegas;Martin Wattenberg
University of Toronto, Canada;IBM Research, Cambridge, MA, USA;IBM Research, Cambridge, MA, USA
10.1109/infvis.1995.528686;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.175;10.1109/tvcg.2008.172;10.1109/vast.2007.4389006;10.1109/tvcg.2006.166;10.1109/infvis.1995.528686
Text visualization, corpus visualization, information retrieval, text mining, tag clouds333158352438TT
97
InfoVis2009
"Search, Show Context, Expand on Demand": Supporting Large Graph Exploration with Degree-of-Interest
10.1109/tvcg.2009.108
http://dx.doi.org/10.1109/TVCG.2009.108
953960J
A common goal in graph visualization research is the design of novel techniques for displaying an overview of an entire graph. However, there are many situations where such an overview is not relevant or practical for users, as analyzing the global structure may not be related to the main task of the users that have semi-specific information needs. Furthermore, users accessing large graph databases through an online connection or users running on less powerful (mobile) hardware simply do not have the resources needed to compute these overviews. In this paper, we advocate an interaction model that allows users to remotely browse the immediate context graph around a specific node of interest. We show how Furnas' original degree of interest function can be adapted from trees to graphs and how we can use this metric to extract useful contextual subgraphs, control the complexity of the generated visualization and direct users to interesting datapoints in the context. We demonstrate the effectiveness of our approach with an exploration of a dense online database containing over 3 million legal citations.
Frank van Ham;Adam Perer
Frank van Ham;Adam Perer
IBM ILOG Research, Gentilly, France;IBM Research, Haifa, Israel
10.1109/tvcg.2006.122;10.1109/infvis.2004.66;10.1109/infvis.2004.43;10.1109/tvcg.2006.166;10.1109/tvcg.2006.147;10.1109/tvcg.2006.122
Graph visualization, network visualization, degree of interest, legal citation networks, focus+context282155242506
98
VAST2008
Spatio-temporal aggregation for visual analysis of movements
10.1109/vast.2008.4677356
http://dx.doi.org/10.1109/VAST.2008.4677356
5158C
Data about movements of various objects are collected in growing amounts by means of current tracking technologies. Traditional approaches to visualization and interactive exploration of movement data cannot cope with data of such sizes. In this research paper we investigate the ways of using aggregation for visual analysis of movement data. We define aggregation methods suitable for movement data and find visualization and interaction techniques to represent results of aggregations and enable comprehensive exploration of the data. We consider two possible views of movement, traffic-oriented and trajectory-oriented. Each view requires different methods of analysis and of data aggregation. We illustrate our argument with example data resulting from tracking multiple cars in Milan and example analysis tasks from the domain of city traffic management.
Gennady L. Andrienko;Natalia V. Andrienko
Gennady Andrienko;Natalia Andrienko
Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt-Augustin, Germany;Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt-Augustin, Germany
Movement data, spatio-temporal data, aggregation, scalable visualization, geovisualization327154172149
99
InfoVis2010
Mental Models, Visual Reasoning and Interaction in Information Visualization: A Top-down Perspective
10.1109/tvcg.2010.177
http://dx.doi.org/10.1109/TVCG.2010.177
9991008J
Although previous research has suggested that examining the interplay between internal and external representations can benefit our understanding of the role of information visualization (InfoVis) in human cognitive activities, there has been little work detailing the nature of internal representations, the relationship between internal and external representations and how interaction is related to these representations. In this paper, we identify and illustrate a specific kind of internal representation, mental models, and outline the high-level relationships between mental models and external visualizations. We present a top-down perspective of reasoning as model construction and simulation, and discuss the role of visualization in model based reasoning. From this perspective, interaction can be understood as active modeling for three primary purposes: external anchoring, information foraging, and cognitive offloading. Finally we discuss the implications of our approach for design, evaluation and theory development.
Zhicheng Liu 0001;John T. Stasko
Zhicheng Liu;John Stasko
School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA
10.1109/tvcg.2009.187;10.1109/tvcg.2008.155;10.1109/infvis.2001.963289;10.1109/tvcg.2009.109;10.1109/tvcg.2007.70515;10.1109/tvcg.2009.180;10.1109/tvcg.2008.109;10.1109/tvcg.2008.171;10.1109/tvcg.2008.121;10.1109/vast.2008.4677365;10.1109/tvcg.2009.187
Mental model, model-based reasoning, distributed cognition, interaction, theory, information visualization340154675630
100
InfoVis2010
A Visual Backchannel for Large-Scale Events
10.1109/tvcg.2010.129
http://dx.doi.org/10.1109/TVCG.2010.129
11291138J
We introduce the concept of a Visual Backchannel as a novel way of following and exploring online conversations about large-scale events. Microblogging communities, such as Twitter, are increasingly used as digital backchannels for timely exchange of brief comments and impressions during political speeches, sport competitions, natural disasters, and other large events. Currently, shared updates are typically displayed in the form of a simple list, making it difficult to get an overview of the fast-paced discussions as they happen in the moment and how they evolve over time. In contrast, our Visual Backchannel design provides an evolving, interactive, and multi-faceted visual overview of large-scale ongoing conversations on Twitter. To visualize a continuously updating information stream, we include visual saliency for what is happening now and what has just happened, set in the context of the evolving conversation. As part of a fully web-based coordinated-view system we introduce Topic Streams, a temporally adjustable stacked graph visualizing topics over time, a People Spiral representing participants and their activity, and an Image Cloud encoding the popularity of event photos by size. Together with a post listing, these mutually linked views support cross-filtering along topics, participants, and time ranges. We discuss our design considerations, in particular with respect to evolving visualizations of dynamically changing data. Initial feedback indicates significant interest and suggests several unanticipated uses.
Marian Dörk;Daniel M. Gruen;Carey Williamson;Sheelagh Carpendale
Marian Dörk;Daniel Gruen;Carey Williamson;Sheelagh Carpendale
University of Calgary, Canada;IBM Research Division, IBM Thomas J. Watson Research Center, USA;University of Calgary, Canada;University of Calgary, Canada
10.1109/vast.2009.5333443;10.1109/tvcg.2007.70541;10.1109/tvcg.2008.166;10.1109/tvcg.2008.175;10.1109/infvis.2005.1532133;10.1109/infvis.2003.1249028;10.1109/vast.2008.4677364;10.1109/vast.2009.5333437;10.1109/vast.2009.5333443
Backchannel, information visualization, events, multiple views, microblogging, information retrieval, World Wide Web266153431980