IEEE VIS papers 1990-2015
1
Conference | Year | Paper Title | Paper DOI | Link | First page | Last page | Panel, Keynote, Capstone, Demo, Poster, ... | Paper type: C = conference paper, J = journal paper, M = miscellaneous (capstone, keynote, VAST challenge, panel, poster, ...) | Abstract | Author Names | First Author Affiliation | Author IDs | Deduped author names | References | Author Keywords | IEEE Xplore Article Number (deprecated) | IEEE Xplore Number Guessed (deprecated) | References (deprecated)
2
InfoVis2015
A comparative study between RadViz and Star Coordinates
10.1109/TVCG.2015.2467324
http://dx.doi.org/10.1109/TVCG.2015.2467324
619-628 | J
RadViz and star coordinates are two of the most popular projection-based multivariate visualization techniques that arrange variables in radial layouts. Formally, the main difference between them consists of a nonlinear normalization step inherent in RadViz. In this paper we show that, although RadViz can be useful when analyzing sparse data, in general this design choice limits its applicability and introduces several drawbacks for exploratory data analysis. In particular, we observe that the normalization step introduces nonlinear distortions, can encumber outlier detection, prevents associating the plots with useful linear mappings, and impedes estimating original data attributes accurately. In addition, users have greater flexibility when choosing different layouts and views of the data in star coordinates. Therefore, we suggest that analysts and researchers should carefully consider whether RadViz's normalization step is beneficial regarding the data sets' characteristics and analysis tasks.
Rubio-Sanchez, M.;Raya, L.;Diaz, F.;Sanchez, A.
;;;;;;Rubio-Sanchez, M.;Raya, L.;Diaz, F.;Sanchez, A.
10.1109/VAST.2010.5652433;10.1109/INFVIS.1998.729559;10.1109/VISUAL.1997.663916;10.1109/TVCG.2013.182;10.1109/TVCG.2014.2346258;10.1109/TVCG.2008.173
RadViz, Star coordinates, Exploratory data analysis, Cluster analysis, Classification, Outlier detection
7192699
5652433;729568;663916;6634131;6875998;4658161
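The abstract above turns on one formal difference between the two layouts: star coordinates is a plain linear mapping of each record onto radial axis vectors, while RadViz first normalizes each record's attribute values to sum to one. A minimal NumPy sketch of that distinction (an editorial illustration, not code from the paper; real RadViz implementations typically also rescale each attribute to [0, 1] first):

```python
import numpy as np

def axis_vectors(d):
    """Evenly spaced dimension anchors on the unit circle, one per variable."""
    angles = 2 * np.pi * np.arange(d) / d
    return np.column_stack([np.cos(angles), np.sin(angles)])  # shape (d, 2)

def star_coordinates(X, V=None):
    """Star coordinates: a linear mapping of each record onto the axis vectors."""
    V = axis_vectors(X.shape[1]) if V is None else V
    return X @ V

def radviz(X, V=None, eps=1e-12):
    """RadViz: the same layout after normalizing each record to sum to one
    (the nonlinear step whose consequences the paper analyzes)."""
    V = axis_vectors(X.shape[1]) if V is None else V
    W = X / (X.sum(axis=1, keepdims=True) + eps)
    return W @ V
```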
3
InfoVis2015
A Linguistic Approach to Categorical Color Assignment for Data Visualization
10.1109/TVCG.2015.2467471
http://dx.doi.org/10.1109/TVCG.2015.2467471
698-707 | J
When data categories have strong color associations, it is useful to use these semantically meaningful concept-color associations in data visualizations. In this paper, we explore how linguistic information about the terms defining the data can be used to generate semantically meaningful colors. To do this effectively, we need first to establish that a term has a strong semantic color association, then discover which color or colors express it. Using co-occurrence measures of color name frequencies from Google n-grams, we define a measure for colorability that describes how strongly associated a given term is to any of a set of basic color terms. We then show how this colorability score can be used with additional semantic analysis to rank and retrieve a representative color from Google Images. Alternatively, we use symbolic relationships defined by WordNet to select identity colors for categories such as countries or brands. To create visually distinct color palettes, we use k-means clustering to create visually distinct sets, iteratively reassigning terms with multiple basic color associations as needed. This can be additionally constrained to use colors only in a predefined palette.
Setlur, V.;Stone, M.C.
;;Setlur, V.;Stone, M.C.
linguistics, natural language processing, semantics, color names, categorical color, Google n-grams, WordNet, XKCD
7192709
4
InfoVis2015
A Psychophysical Investigation of Size as a Physical Variable
10.1109/TVCG.2015.2467951
http://dx.doi.org/10.1109/TVCG.2015.2467951
479-488 | J
Physical visualizations, or data physicalizations, encode data in attributes of physical shapes. Despite a considerable body of work on visual variables, "physical variables" remain poorly understood. One of them is physical size. A difficulty for solid elements is that "size" is ambiguous - it can refer to either length/diameter, surface, or volume. Thus, it is unclear for designers of physicalizations how to effectively encode quantities in physical size. To investigate, we ran an experiment where participants estimated ratios between quantities represented by solid bars and spheres. Our results suggest that solid bars are compared based on their length, consistent with previous findings for 2D and 3D bars on flat media. But for spheres, participants' estimates are rather proportional to their surface. Depending on the estimation method used, judgments are rather consistent across participants, thus the use of perceptually-optimized size scales seems possible. We conclude by discussing implications for the design of data physicalizations and the need for more empirical studies on physical variables.
Jansen, Y.;Hornbaek, K.
Univ. of Copenhagen, Copenhagen, Denmark|c|;
;Jansen, Y.;Hornbaek, K.
10.1109/TVCG.2012.251;10.1109/TVCG.2013.234;10.1109/TVCG.2012.220;10.1109/TVCG.2013.134;10.1109/TVCG.2007.70541;10.1109/TVCG.2014.2352953;10.1109/TVCG.2014.2346320
Data physicalization, physical visualization, psychophysics, experiment, physical variable
7194845
6327257;6634103;6327283;6634126;4376134;6888482;6876021
5
InfoVis2015
A Simple Approach for Boundary Improvement of Euler Diagrams
10.1109/TVCG.2015.2467992
http://dx.doi.org/10.1109/TVCG.2015.2467992
678-687 | J
General methods for drawing Euler diagrams tend to generate irregular polygons. Yet, empirical evidence indicates that smoother contours make these diagrams easier to read. In this paper, we present a simple method to smooth the boundaries of any Euler diagram drawing. When refining the diagram, the method must ensure that set elements remain inside their appropriate boundaries and that no region is removed or created in the diagram. Our approach uses a force system that improves the diagram while at the same time ensuring its topological structure does not change. We demonstrate the effectiveness of the approach through case studies and quantitative evaluations.
Simonetto, P.;Archambault, D.;Scheidegger, C.
;;;;Simonetto, P.;Archambault, D.;Scheidegger, C.E.
10.1109/TVCG.2011.186;10.1109/TVCG.2013.184;10.1109/TVCG.2009.122;10.1109/TVCG.2014.2346248;10.1109/TVCG.2010.210
Euler diagrams, Boundary Improvement, Force-Directed Approaches
7192693
6064991;6634104;5290706;6876017;5613447
6
InfoVis2015
Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives
10.1109/TVCG.2015.2467321
http://dx.doi.org/10.1109/TVCG.2015.2467321
509-518 | J
While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.
Byrne, L.;Angus, D.;Wiles, J.;;;;Byrne, L.;Angus, D.;Wiles, J.
10.1109/TVCG.2013.234;10.1109/TVCG.2010.126;10.1109/INFVIS.2005.1532122;10.1109/TVCG.2011.255;10.1109/TVCG.2007.70594;10.1109/TVCG.2010.179;10.1109/INFVIS.2004.59;10.1109/TVCG.2012.221;10.1109/TVCG.2008.171
Visual Design, Taxonomies, Illustrative Visualization, Design Methodologies
7192636
6634103;5613455;1532122;6064988;4376133;5613452;1382903;6327280;4658138
7
InfoVis2015
AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations
10.1109/TVCG.2015.2467051
http://dx.doi.org/10.1109/TVCG.2015.2467051
688-697 | J
Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g. selection, comparison) have commonly been designed in separation for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations including subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university.
Yalcin, M.A.;Elmqvist, N.;Bederson, B.B.
Univ. of Maryland, College Park, MD, USA|c|;;
;;Yalcin, M.A.;Elmqvist, N.;Bederson, B.B.
10.1109/TVCG.2011.186;10.1109/TVCG.2013.184;10.1109/TVCG.2011.185;10.1109/TVCG.2009.122;10.1109/TVCG.2007.70535;10.1109/TVCG.2008.144;10.1109/INFVIS.2004.1;10.1109/TVCG.2007.70539;10.1109/TVCG.2008.141;10.1109/TVCG.2014.2346248;10.1109/TVCG.2010.210;10.1109/TVCG.2014.2346249
Multi-valued attributes, sets, visualization, set visualization, data exploration, interaction, design, scalability
7194854
6064991;6634104;6064996;5290706;4376143;4658148;1382886;4376146;4658145;6876017;5613447;6876026
8
InfoVis2015
AmbiguityVis: Visualization of Ambiguity in Graph Layouts
10.1109/TVCG.2015.2467691
http://dx.doi.org/10.1109/TVCG.2015.2467691
359-368 | J
Node-link diagrams provide an intuitive way to explore networks and have inspired a large number of automated graph layout strategies that optimize aesthetic criteria. However, any particular drawing approach cannot fully satisfy all these criteria simultaneously, producing drawings with visual ambiguities that can impede the understanding of network structure. To bring attention to these potentially problematic areas present in the drawing, this paper presents a technique that highlights common types of visual ambiguities: ambiguous spatial relationships between nodes and edges, visual overlap between community structures, and ambiguity in edge bundling and metanodes. Metrics, including newly proposed metrics for abnormal edge lengths, visual overlap in community structures and node/edge aggregation, are proposed to quantify areas of ambiguity in the drawing. These metrics and others are then displayed using a heatmap-based visualization that provides visual feedback to developers of graph drawing and visualization approaches, allowing them to quickly identify misleading areas. The novel metrics and the heatmap-based visualization allow a user to explore ambiguities in graph layouts from multiple perspectives in order to make reasonable graph layout choices. The effectiveness of the technique is demonstrated through case studies and expert reviews.
Yong Wang;Qiaomu Shen;Archambault, D.;Zhiguang Zhou;Min Zhu;Sixiao Yang;Huamin Qu
;;;;;;;;;;;;Yong Wang;Qiaomu Shen;Archambault, D.;Zhiguang Zhou;Min Zhu;Sixiao Yang;Huamin Qu
10.1109/TVCG.2006.120;10.1109/TVCG.2006.147;10.1109/TVCG.2012.245;10.1109/VAST.2009.5332628;10.1109/TVCG.2008.155;10.1109/TVCG.2012.189
Visual Ambiguity, Visualization, Node-link diagram, Graph layout, Graph visualization
7192724
4015416;4015425;6327253;5332628;4658147;6327250
9
InfoVis2015
Automatic Selection of Partitioning Variables for Small Multiple Displays
10.1109/TVCG.2015.2467323
http://dx.doi.org/10.1109/TVCG.2015.2467323
669-677 | J
Effective small multiple displays are created by partitioning a visualization on variables that reveal interesting conditional structure in the data. We propose a method that automatically ranks partitioning variables, allowing analysts to focus on the most promising small multiple displays. Our approach is based on a randomized, non-parametric permutation test, which allows us to handle a wide range of quality measures for visual patterns defined on many different visualization types, while discounting spurious patterns. We demonstrate the effectiveness of our approach on scatterplots of real-world, multidimensional datasets.
Anand, A.;Talbot, J.;;Anand, A.;Talbot, J.
10.1109/VAST.2010.5652433;10.1109/INFVIS.1998.729559;10.1109/TVCG.2011.229;10.1109/TVCG.2006.161;10.1109/TVCG.2010.184;10.1109/TVCG.2009.153;10.1109/INFVIS.2003.1249006;10.1109/TVCG.2007.70594;10.1109/VAST.2006.261423;10.1109/INFVIS.2000.885086;10.1109/VAST.2009.5332628;10.1109/TVCG.2010.161;10.1109/INFVIS.2005.1532142
Small multiple displays, Visualization selection, Multidimensional data
7192658
5652433;729568;6064985;4015421;5613439;5290704;1249006;4376133;4035766;885086;5332628;5613434;1532142
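The ranking idea in the abstract above is a generic permutation test: a partitioning variable is promising if the chosen quality measure looks much better under the real partition than under random relabelings. A hedged sketch of that scheme (the quality measure shown, absolute correlation per panel, and the max-aggregation are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

def abs_corr(rows):
    """Example quality measure: absolute Pearson correlation within one panel."""
    return 0.0 if len(rows) < 3 else abs(np.corrcoef(rows[:, 0], rows[:, 1])[0, 1])

def permutation_score(xy, partition, quality=abs_corr, n_perm=500, seed=0):
    """Empirical p-value for one candidate partitioning variable: how often a
    random relabeling yields a small-multiple display that looks at least as
    good as the real one. Lower scores rank the variable higher."""
    rng = np.random.default_rng(seed)
    partition = np.asarray(partition)

    def best_panel(labels):
        return max(quality(xy[labels == g]) for g in np.unique(labels))

    observed = best_panel(partition)
    perms = [best_panel(rng.permutation(partition)) for _ in range(n_perm)]
    return float(np.mean([p >= observed for p in perms]))
```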
10
InfoVis2015
Beyond Memorability: Visualization Recognition and Recall
10.1109/TVCG.2015.2467732
http://dx.doi.org/10.1109/TVCG.2015.2467732
519-528 | J
In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable "at-a-glance" are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.
Borkin, M.A.;Bylinskii, Z.;Nam Wook Kim;Bainbridge, C.M.;Yeh, C.S.;Borkin, D.;Pfister, H.;Oliva, A.
;;;;;;;;;;;;;;Borkin, M.;Bylinskii, Z.;Nam Wook Kim;Bainbridge, C.M.;Yeh, C.S.;Borkin, D.;Pfister, H.;Oliva, A.
10.1109/TVCG.2012.197;10.1109/TVCG.2013.234;10.1109/TVCG.2011.193;10.1109/TVCG.2012.233;10.1109/TVCG.2011.175;10.1109/TVCG.2013.234;10.1109/TVCG.2012.215;10.1109/VAST.2010.5653598;10.1109/TVCG.2012.245;10.1109/TVCG.2012.221
Information visualization, memorability, recognition, recall, eye-tracking study
7192646
6327282;6634103;6065011;6327245;6064986;6634103;6327247;5653598;6327253;6327280
11
InfoVis2015
Beyond Weber's Law: A Second Look at Ranking Visualizations of Correlation
10.1109/TVCG.2015.2467671
http://dx.doi.org/10.1109/TVCG.2015.2467671
469-478 | J
Models of human perception - including perceptual "laws" - can be valuable tools for deriving visualization design recommendations. However, it is important to assess the explanatory power of such models when using them to inform design. We present a secondary analysis of data previously used to rank the effectiveness of bivariate visualizations for assessing correlation (measured with Pearson's r) according to the well-known Weber-Fechner Law. Beginning with the model of Harrison et al. [1], we present a sequence of refinements including incorporation of individual differences, log transformation, censored regression, and adoption of Bayesian statistics. Our model incorporates all observations dropped from the original analysis, including data near ceilings caused by the data collection process and entire visualizations dropped due to large numbers of observations worse than chance. This model deviates from Weber's Law, but provides improved predictive accuracy and generalization. Using Bayesian credibility intervals, we derive a partial ranking that groups visualizations with similar performance, and we give precise estimates of the difference in performance between these groups. We find that compared to other visualizations, scatterplots are unique in combining low variance between individuals and high precision on both positively- and negatively correlated data. We conclude with a discussion of the value of data sharing and replication, and share implications for modeling similar experimental data.
Kay, M.;Heer, J.
;;Kay, M.;Heer, J.
10.1109/TVCG.2014.2346979
Weber's law, perception of correlation, log transformation, censored regression, Bayesian methods
7192661
6875978
12
InfoVis2015
Evaluation of Parallel Coordinates: Overview, Categorization and Guidelines for Future Research
10.1109/TVCG.2015.2466992
http://dx.doi.org/10.1109/TVCG.2015.2466992
579-588 | J
The parallel coordinates technique is widely used for the analysis of multivariate data. During recent decades significant research efforts have been devoted to exploring the applicability of the technique and to expand upon it, resulting in a variety of extensions. Of these many research activities, a surprisingly small number concerns user-centred evaluations investigating actual use and usability issues for different tasks, data and domains. The result is a clear lack of convincing evidence to support and guide uptake by users as well as future research directions. To address these issues this paper contributes a thorough literature survey of what has been done in the area of user-centred evaluation of parallel coordinates. These evaluations are divided into four categories based on characterization of use, derived from the survey. Based on the data from the survey and the categorization combined with the authors' experience of working with parallel coordinates, a set of guidelines for future research directions is proposed.
Johansson, J.;Forsell, C.
Norrkoping Visualization Center C, Linkoping Univ., Linkoping, Sweden|c|;
;Johansson, J.;Forsell, C.
10.1109/TVCG.2014.2346626;10.1109/TVCG.2011.201;10.1109/VISUAL.1999.809866;10.1109/TVCG.2014.2346979;10.1109/INFVIS.2002.1173157;10.1109/TVCG.2013.126;10.1109/INFVIS.2005.1532138;10.1109/TVCG.2009.153;10.1109/INFVIS.2004.15;10.1109/INFVIS.2004.5;10.1109/TVCG.2011.197;10.1109/VISUAL.1997.663867
Survey, evaluation, guidelines, parallel coordinates
7192677
6875958;6064997;809866;6875978;1173157;6634108;1532138;5290704;1382895;1382884;6065022;663867
13
InfoVis2015
Guidelines for Effective Usage of Text Highlighting Techniques
10.1109/TVCG.2015.2467759
http://dx.doi.org/10.1109/TVCG.2015.2467759
489-498 | J
Semi-automatic text analysis involves manual inspection of text. Often, different text annotations (like part-of-speech or named entities) are indicated by using distinctive text highlighting techniques. In typesetting there exist well-known formatting conventions, such as bold typeface, italics, or background coloring, that are useful for highlighting certain parts of a given text. Also, many advanced techniques for visualization and highlighting of text exist; yet, standard typesetting is common, and the effects of standard typesetting on the perception of text are not fully understood. As such, we surveyed and tested the effectiveness of common text highlighting techniques, both individually and in combination, to discover how to maximize pop-out effects while minimizing visual interference between techniques. To validate our findings, we conducted a series of crowd-sourced experiments to determine: i) a ranking of nine commonly-used text highlighting techniques; ii) the degree of visual interference between pairs of text highlighting techniques; iii) the effectiveness of techniques for visual conjunctive search. Our results show that increasing font size works best as a single highlighting technique, and that there are significant visual interferences between some pairs of highlighting techniques. We discuss the pros and cons of different combinations as a design guideline to choose text highlighting techniques for text viewers.
Strobelt, H.;Oelke, D.;Bum Chul Kwon;Schreck, T.;Pfister, H.
;;;;;;;;Strobelt, H.;Oelke, D.;Bum Chul Kwon;Schreck, T.;Pfister, H.
10.1109/TVCG.2012.277;10.1109/VAST.2007.4389004;10.1109/TVCG.2014.2346677;10.1109/TVCG.2007.70594;10.1109/TVCG.2011.183;10.1109/TVCG.2009.139;10.1109/VAST.2011.6102453;10.1109/INFVIS.1995.528686
Text highlighting techniques, visual document analytics, text annotation, crowdsourced study
7192718
6327290;4389004;6875959;4376133;6064990;5290723;6102453;528686
14
InfoVis2015
High-Quality Ultra-Compact Grid Layout of Grouped Networks
10.1109/TVCG.2015.2467251
http://dx.doi.org/10.1109/TVCG.2015.2467251
339-348 | J
Prior research into network layout has focused on fast heuristic techniques for layout of large networks, or complex multi-stage pipelines for higher quality layout of small graphs. Improvements to these pipeline techniques, especially for orthogonal-style layout, are difficult and practical results have been slight in recent years. Yet, as discussed in this paper, there remain significant issues in the quality of the layouts produced by these techniques, even for quite small networks. This is especially true when layout with additional grouping constraints is required. The first contribution of this paper is to investigate an ultra-compact, grid-like network layout aesthetic that is motivated by the grid arrangements that are used almost universally by designers in typographical layout. Since the time when these heuristic and pipeline-based graph-layout methods were conceived, generic technologies (MIP, CP and SAT) for solving combinatorial and mixed-integer optimization problems have improved massively. The second contribution of this paper is to reassess whether these techniques can be used for high-quality layout of small graphs. While they are fast enough for graphs of up to 50 nodes we found these methods do not scale up. Our third contribution is a large-neighborhood search meta-heuristic approach that is scalable to larger networks.
Yoghourdjian, V.;Dwyer, T.;Gange, G.;Kieffer, S.;Klein, K.;Marriott, K.
;;;;;;;;;;Yoghourdjian, V.;Dwyer, T.;Gange, G.;Kieffer, S.;Klein, K.;Marriott, K.
10.1109/TVCG.2008.117;10.1109/TVCG.2013.151;10.1109/TVCG.2006.156;10.1109/TVCG.2009.109;10.1109/INFVIS.2003.1249009;10.1109/TVCG.2015.2467451;10.1109/TVCG.2012.245
Network visualization, graph drawing, power graph, optimization, large-neighborhood search
7192733
4658137;6634098;4015435;5290700;1249009;7192690;6327253
15
InfoVis2015
HOLA: Human-like Orthogonal Network Layout
10.1109/TVCG.2015.2467451
http://dx.doi.org/10.1109/TVCG.2015.2467451
349-358 | J
Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.
Kieffer, S.;Dwyer, T.;Marriott, K.;Wybrow, M.
;;;;;;Kieffer, S.;Dwyer, T.;Marriott, K.;Wybrow, M.
10.1109/TVCG.2006.120;10.1109/TVCG.2012.208;10.1109/TVCG.2013.151;10.1109/TVCG.2006.156;10.1109/TVCG.2009.109;10.1109/TVCG.2008.141;10.1109/TVCG.2006.147;10.1109/TVCG.2012.245;10.1109/TVCG.2008.155
Graph layout, orthogonal layout, automatic layout algorithms, user-generated layout, graph-drawing aesthetics
7192690
4015416;6327251;6634098;4015435;5290700;4658145;4015425;6327253;4658147
16
InfoVis2015
How do People Make Sense of Unfamiliar Visualizations?: A Grounded Model of Novice's Information Visualization Sensemaking
10.1109/TVCG.2015.2467195
http://dx.doi.org/10.1109/TVCG.2015.2467195
499-508 | J
In this paper, we would like to investigate how people make sense of unfamiliar information visualizations. In order to achieve the research goal, we conducted a qualitative study by observing 13 participants when they endeavored to make sense of three unfamiliar visualizations (i.e., a parallel-coordinates plot, a chord diagram, and a treemap) that they encountered for the first time. We collected data including audio/video record of think-aloud sessions and semi-structured interview; and analyzed the data using the grounded theory method. The primary result of this study is a grounded model of NOvice's information VIsualization Sensemaking (NOVIS model), which consists of the five major cognitive activities: 1 encountering visualization, 2 constructing a frame, 3 exploring visualization, 4 questioning the frame, and 5 floundering on visualization. We introduce the NOVIS model by explaining the five activities with representative quotes from our participants. We also explore the dynamics in the model. Lastly, we compare with other existing models and share further research directions that arose from our observations.
Sukwon Lee;Sung-Hee Kim;Ya-Hsin Hung;Lam, H.;Youn-ah Kang;Ji Soo Yi
Sch. of Ind. Eng., Purdue Univ., West Lafayette, IN, USA|c|;;;;;
;;;;;Sukwon Lee;Sung-Hee Kim;Ya-Hsin Hung;Lam, H.;Youn-ah Kang;Ji Soo Yi
10.1109/TVCG.2013.234;10.1109/TVCG.2014.2346984;10.1109/TVCG.2010.164;10.1109/VAST.2011.6102435;10.1109/TVCG.2014.2346452;10.1109/TVCG.2010.177;10.1109/TVCG.2014.2346481;10.1109/TVCG.2010.179;10.1109/TVCG.2007.70515
Sensemaking model, information visualization, novice users, grounded theory, qualitative study
7192668
6634103;6875906;5613431;6102435;6876022;5613437;6875967;5613452;4376144
17
InfoVis2015
Improving Bayesian Reasoning: The Effects of Phrasing, Visualization, and Spatial Ability
10.1109/TVCG.2015.2467758
http://dx.doi.org/10.1109/TVCG.2015.2467758
529-538 | J
Decades of research have repeatedly shown that people perform poorly at estimating and understanding conditional probabilities that are inherent in Bayesian reasoning problems. Yet in the medical domain, both physicians and patients make daily, life-critical judgments based on conditional probability. Although there have been a number of attempts to develop more effective ways to facilitate Bayesian reasoning, reports of these findings tend to be inconsistent and sometimes even contradictory. For instance, the reported accuracies for individuals being able to correctly estimate conditional probability range from 6% to 62%. In this work, we show that problem representation can significantly affect accuracies. By controlling the amount of information presented to the user, we demonstrate how text and visualization designs can increase overall accuracies to as high as 77%. Additionally, we found that for users with high spatial ability, our designs can further improve their accuracies to as high as 100%. By and large, our findings provide explanations for the inconsistent reports on accuracy in Bayesian reasoning tasks and show a significant improvement over existing methods. We believe that these findings can have immediate impact on risk communication in health-related fields.
Ottley, A.;Peck, E.M.;Harrison, L.T.;Afergan, D.;Ziemkiewicz, C.;Taylor, H.A.;Han, P.K.J.;Chang, R.
;;;;;;;;;;;;;;Ottley, A.;Peck, E.M.;Harrison, L.;Afergan, D.;Ziemkiewicz, C.;Taylor, H.A.;Han, P.K.J.;Chang, R.
10.1109/TVCG.2014.2346575;10.1109/VAST.2010.5653587;10.1109/TVCG.2011.255;10.1109/TVCG.2013.119;10.1109/TVCG.2012.199;10.1109/TVCG.2010.179;10.1109/VISUAL.2005.1532836
Bayesian Reasoning, Visualization, Spatial Ability, Individual Differences
7192720
6875913;5653587;6064988;6634182;6327259;5613452;1532836
18
InfoVis2015
Matches, Mismatches, and Methods: Multiple-View Workflows for Energy Portfolio Analysis
10.1109/TVCG.2015.2466971
http://dx.doi.org/10.1109/TVCG.2015.2466971
449-458 | J
The energy performance of large building portfolios is challenging to analyze and monitor, as current analysis tools are not scalable or they present derived and aggregated data at too coarse of a level. We conducted a visualization design study, beginning with a thorough work domain analysis and a characterization of data and task abstractions. We describe generalizable visual encoding design choices for time-oriented data framed in terms of matches and mismatches, as well as considerations for workflow design. Our designs address several research questions pertaining to scalability, view coordination, and the inappropriateness of line charts for derived and aggregated data due to a combination of data semantics and domain convention. We also present guidelines relating to familiarity and trust, as well as methodological considerations for visualization design studies. Our designs were adopted by our collaborators and incorporated into the design of an energy analysis software application that will be deployed to tens of thousands of energy workers in their client base.
Brehmer, M.;Ng, J.;Tate, K.;Munzner, T.
;;;;;;Brehmer, M.;Ng, J.;Tate, K.;Munzner, T.
10.1109/TVCG.2011.185;10.1109/TVCG.2013.124;10.1109/TVCG.2008.166;10.1109/TVCG.2013.145;10.1109/TVCG.2013.173;10.1109/TVCG.2010.162;10.1109/TVCG.2007.70583;10.1109/TVCG.2011.209;10.1109/TVCG.2014.2346331;10.1109/TVCG.2014.2346578;10.1109/TVCG.2009.111;10.1109/TVCG.2011.196;10.1109/TVCG.2012.213;10.1109/INFVIS.1999.801851;10.1109/INFVIS.2005.1532122
Design study, design methodologies, time series data, task and requirements analysis, coordinated and multiple views
7225156
6064996;6634168;4658136;6634166;6634146;5613429;4376151;6065017;6876000;6875995;5290695;6065016;6327248;801851;1532122
19
InfoVis2015
Off the Radar: Comparative Evaluation of Radial Visualization Solutions for Composite Indicators
10.1109/TVCG.2015.2467322
http://dx.doi.org/10.1109/TVCG.2015.2467322
569-578 | J
A composite indicator (CI) is a measuring and benchmark tool used to capture multi-dimensional concepts, such as Information and Communication Technology (ICT) usage. Individual indicators are selected and combined to reflect the phenomenon being measured. Visualization of a composite indicator is recommended as a tool to enable interested stakeholders, as well as the public audience, to better understand the indicator components and evolution over time. However, existing CI visualizations introduce a variety of solutions and there is a lack of guidelines for CI visualization. Radial visualizations are popular among these solutions because of CI's inherent multi-dimensionality. Although in dispute, Radar charts are often used for CI presentation. However, no empirical evidence on Radar's effectiveness and efficiency for common CI tasks is available. In this paper, we aim to fill this gap by reporting on a controlled experiment that compares the Radar chart technique with two other radial visualization methods: Flower charts as used in the well-known OECD Better Life Index, and Circle charts which could be adopted for this purpose. Examples of these charts in the current context are shown in Figure 1. We evaluated these charts, showing the same data with each of the mentioned techniques and applying small multiple views for different dimensions of the data. We compared users' performance and preference empirically under a formal task-taxonomy. Results indicate that the Radar chart was the least effective and least liked, while performance of the two other options was mixed and dependent on the task. Results also showed strong preference of participants toward the Flower chart. Summarizing our results, we provide specific design guidelines for composite indicator visualization.
Albo, Y.;Lanir, J.;Bak, P.;Rafaeli, S.
Univ. of Haifa, Haifa, Israel|c|;;;
;;;Albo, Y.;Lanir, J.;Bak, P.;Rafaeli, S.
10.1109/TVCG.2010.209;10.1109/TVCG.2008.125
Visualization evaluation, radial layout design, composite indicator visualization, experiment
7192648
5613430;4658146
20
InfoVis2015
Optimal Sets of Projections of High-Dimensional Data
10.1109/TVCG.2015.2467132
http://dx.doi.org/10.1109/TVCG.2015.2467132
609-618 | J
Finding good projections of n-dimensional datasets into a 2D visualization domain is one of the most important problems in Information Visualization. Users are interested in getting maximal insight into the data by exploring a minimal number of projections. However, if the number is too small or improper projections are used, then important data patterns might be overlooked. We propose a data-driven approach to find minimal sets of projections that uniquely show certain data patterns. For this we introduce a dissimilarity measure of data projections that discards affine transformations of projections and prevents repetitions of the same data patterns. Based on this, we provide complete data tours of at most n/2 projections. Furthermore, we propose optimal paths of projection matrices for an interactive data exploration. We illustrate our technique with a set of state-of-the-art real high-dimensional benchmark datasets.
Lehmann, D.J.;Theisel, H.
Univ. of Magdeburg, Magdeburg, Germany|c|;
;Lehmann, D.J.;Theisel, H.
10.1109/VAST.2010.5652433;10.1109/VAST.2011.6102437;10.1109/TVCG.2011.229;10.1109/VISUAL.1997.663916;10.1109/TVCG.2011.220;10.1109/TVCG.2013.182;10.1109/TVCG.2010.207;10.1109/VAST.2006.261423;10.1109/INFVIS.2005.1532142
Multivariate Projections, Star Coordinates, Radial Visualization, High-dimensional Data
7192684
5652433;6102437;6064985;663916;6065024;6634131;5613468;4035766;1532142
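The abstract above hinges on a dissimilarity between projections that ignores affine transformations of the resulting 2D view, so that two projections showing the same pattern count as one. One plausible way to realize such an invariant (an editorial sketch based on principal angles between projection planes, not necessarily the measure defined in the paper):

```python
import numpy as np

def projection_dissimilarity(A, B):
    """Compare two d x 2 linear projection matrices while ignoring invertible
    linear transforms of the 2D image: measure the principal angles between
    their column spaces. 0 means both matrices show the same view of the data."""
    Qa, _ = np.linalg.qr(np.asarray(A, dtype=float))
    Qb, _ = np.linalg.qr(np.asarray(B, dtype=float))
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return float(np.linalg.norm(np.arccos(cosines)))
```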
21
InfoVis2015
Orientation-Enhanced Parallel Coordinate Plots
10.1109/TVCG.2015.2467872
http://dx.doi.org/10.1109/TVCG.2015.2467872
589-598 | J
Parallel Coordinate Plots (PCPs) is one of the most powerful techniques for the visualization of multivariate data. However, for large datasets, the representation suffers from clutter due to overplotting. In this case, discerning the underlying data information and selecting specific interesting patterns can become difficult. We propose a new and simple technique to improve the display of PCPs by emphasizing the underlying data structure. Our Orientation-enhanced Parallel Coordinate Plots (OPCPs) improve pattern and outlier discernibility by visually enhancing parts of each PCP polyline with respect to its slope. This enhancement also allows us to introduce a novel and efficient selection method, the Orientation-enhanced Brushing (O-Brushing). Our solution is particularly useful when multiple patterns are present or when the view on certain patterns is obstructed by noise. We present the results of our approach with several synthetic and real-world datasets. Finally, we conducted a user evaluation, which verifies the advantages of the OPCPs in terms of discernibility of information in complex data. It also confirms that O-Brushing eases the selection of data patterns in PCPs and reduces the amount of necessary user interactions compared to state-of-the-art brushing techniques.
Raidou, R.G.;Eisemann, M.;Breeuwer, M.;Eisemann, E.;Vilanova, A.
;;;;;;;;Raidou, R.G.;Eisemann, M.;Breeuwer, M.;Eisemann, E.;Vilanova, A.
10.1109/INFVIS.1998.729559;10.1109/INFVIS.2004.68;10.1109/TVCG.2006.138;10.1109/TVCG.2007.70535;10.1109/INFVIS.2005.1532141;10.1109/VISUAL.1999.809866;10.1109/TVCG.2011.166;10.1109/TVCG.2014.2346979;10.1109/INFVIS.2002.1173157;10.1109/INFVIS.2005.1532138;10.1109/TVCG.2009.153;10.1109/VISUAL.1995.485139;10.1109/TVCG.2006.170;10.1109/INFVIS.2004.15;10.1109/VISUAL.1994.346302;10.1109/INFVIS.2003.1249008;10.1109/VISUAL.1996.567800;10.1109/INFVIS.2003.1249015;10.1109/TVCG.2009.179
Parallel Coordinates, Orientation-enhanced Parallel Coordinates, Brushing, Orientation-enhanced Brushing, Data Readability, Data Selection
7192696
729568;1382894;4015422;4376143;1532141;809866;6065025;6875978;1173157;1532138;5290704;485139;4015444;1382895;346302;1249008;567800;1249015;5290705
22
InfoVis2015
Poemage: Visualizing the Sonic Topology of a Poem
10.1109/TVCG.2015.2467811
http://dx.doi.org/10.1109/TVCG.2015.2467811
439-448 | J
The digital humanities have experienced tremendous growth within the last decade, mostly in the context of developing computational tools that support what is called distant reading - collecting and analyzing huge amounts of textual data for synoptic evaluation. On the other end of the spectrum is a practice at the heart of the traditional humanities, close reading - the careful, in-depth analysis of a single text in order to extract, engage, and even generate as much productive meaning as possible. The true value of computation to close reading is still very much an open question. During a two-year design study, we explored this question with several poetry scholars, focusing on an investigation of sound and linguistic devices in poetry. The contributions of our design study include a problem characterization and data abstraction of the use of sound in poetry as well as Poemage, a visualization tool for interactively exploring the sonic topology of a poem. The design of Poemage is grounded in the evaluation of a series of technology probes we deployed to our poetry collaborators, and we validate the final design with several case studies that illustrate the disruptive impact technology can have on poetry scholarship. Finally, we also contribute a reflection on the challenges we faced conducting visualization research in literary studies.
McCurdy, N.;Lein, J.;Coles, K.;Meyer, M.
;;;;;;McCurdy, N.;Lein, J.;Coles, K.;Meyer, M.
10.1109/TVCG.2011.186;10.1109/TVCG.2009.122;10.1109/VAST.2009.5333443;10.1109/TVCG.2008.135;10.1109/TVCG.2011.233;10.1109/INFVIS.2005.1532126;10.1109/TVCG.2012.213;10.1109/VAST.2007.4389006;10.1109/TVCG.2009.165;10.1109/TVCG.2009.171;10.1109/INFVIS.2002.1173155;10.1109/TVCG.2008.172;10.1109/INFVIS.1995.528686
Visualization in the humanities, design studies, text and document data, graph/network data
7192712
6064991;5290706;5333443;4658140;6065003;1532126;6327248;4389006;5290726;5290722;1173155;4658133;528686
23
InfoVis2015
Probing Projections: Interaction Techniques for Interpreting Arrangements and Errors of Dimensionality Reductions
10.1109/TVCG.2015.2467717
http://dx.doi.org/10.1109/TVCG.2015.2467717
629-638 | J
We introduce a set of integrated interaction techniques to interpret and interrogate dimensionality-reduced data. Projection techniques generally aim to make a high-dimensional information space visible in form of a planar layout. However, the meaning of the resulting data projections can be hard to grasp. It is seldom clear why elements are placed far apart or close together and the inevitable approximation errors of any projection technique are not exposed to the viewer. Previous research on dimensionality reduction focuses on the efficient generation of data projections, interactive customisation of the model, and comparison of different projection techniques. There has been only little research on how the visualization resulting from data projection is interacted with. We contribute the concept of probing as an integrated approach to interpreting the meaning and quality of visualizations and propose a set of interactive methods to examine dimensionality-reduced data as well as the projection itself. The methods let viewers see approximation errors, question the positioning of elements, compare them to each other, and visualize the influence of data dimensions on the projection space. We created a web-based system implementing these methods, and report on findings from an evaluation with data analysts using the prototype to examine multidimensional datasets.
Stahnke, J.;Dörk, M.;Müller, B.;Thom, A.
;;;;;;Stahnke, J.;Dörk, M.;Müller, B.;Thom, A.
10.1109/TVCG.2013.157;10.1109/TVCG.2011.255;10.1109/VAST.2010.5652392;10.1109/VISUAL.1990.146402;10.1109/TVCG.2009.153;10.1109/TVCG.2012.279;10.1109/TVCG.2014.2346419;10.1109/TVCG.2013.153;10.1109/TVCG.2009.127;10.1109/VISUAL.1994.346302;10.1109/TVCG.2007.70589;10.1109/INFVIS.2004.60;10.1109/INFVIS.1995.528686
Information visualization, interactivity, dimensionality reduction, multidimensional scaling
7192695
6634124;6064988;5652392;146402;5290704;6327255;6876023;6634128;5290709;346302;4376132;1382891;528686
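One of the "probing" quantities described in the abstract above is the projection's approximation error per element. A small sketch of one such per-point error, comparing normalized high-dimensional and 2D distances (an illustrative stress-like measure, not necessarily the definition used in the paper):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pointwise_projection_error(X_high, X_2d):
    """One error value per element: how much its (normalized) 2D distances to
    all other elements deviate from the original high-dimensional distances."""
    D_high = squareform(pdist(X_high))
    D_low = squareform(pdist(X_2d))
    D_high /= D_high.max()   # put both spaces on a comparable scale
    D_low /= D_low.max()
    return np.abs(D_high - D_low).mean(axis=1)
```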
24
InfoVis2015
Reactive Vega: A Streaming Dataflow Architecture for Declarative Interactive Visualization
10.1109/TVCG.2015.2467091
http://dx.doi.org/10.1109/TVCG.2015.2467091
659-668 | J
We present Reactive Vega, a system architecture that provides the first robust and comprehensive treatment of declarative visual and interaction design for data visualization. Starting from a single declarative specification, Reactive Vega constructs a dataflow graph in which input data, scene graph elements, and interaction events are all treated as first-class streaming data sources. To support expressive interactive visualizations that may involve time-varying scalar, relational, or hierarchical data, Reactive Vega's dataflow graph can dynamically re-write itself at runtime by extending or pruning branches in a data-driven fashion. We discuss both compile- and run-time optimizations applied within Reactive Vega, and share the results of benchmark studies that indicate superior interactive performance to both D3 and the original, non-reactive Vega system.
Satyanarayan, A.;Russell, R.;Hoffswell, J.;Heer, J.
;;;;;;Satyanarayan, A.;Russell, R.;Hoffswell, J.;Heer, J.
10.1109/VISUAL.1995.480821;10.1109/TVCG.2009.174;10.1109/TVCG.2011.185;10.1109/TVCG.2010.144;10.1109/TVCG.2014.2346250;10.1109/TVCG.2013.179;10.1109/TVCG.2010.177;10.1109/VISUAL.1996.567752;10.1109/INFVIS.2000.885086;10.1109/INFVIS.2004.12;10.1109/TVCG.2015.2467191;10.1109/TVCG.2007.70515
Information visualization, systems, toolkits, declarative specification, optimization, interaction, streaming data
7192704
480821;5290720;6064996;5613453;6875985;6634137;5613437;567746_1;885086;1382904;7192728;4376144
25
InfoVis2015
SchemeLens: A Content-Aware Vector-Based Fisheye Technique for Navigating Large Systems Diagrams
10.1109/TVCG.2015.2467035
http://dx.doi.org/10.1109/TVCG.2015.2467035
330-338 | J
System schematics, such as those used for electrical or hydraulic systems, can be large and complex. Fisheye techniques can help navigate such large documents by maintaining the context around a focus region, but the distortion introduced by traditional fisheye techniques can impair the readability of the diagram. We present SchemeLens, a vector-based, topology-aware fisheye technique which aims to maintain the readability of the diagram. Vector-based scaling reduces distortion to components, but distorts layout. We present several strategies to reduce this distortion by using the structure of the topology, including orthogonality and alignment, and a model of user intention to foster smooth and predictable navigation. We evaluate this approach through two user studies: Results show that (1) SchemeLens is 16-27% faster than both round and rectangular flat-top fisheye lenses at finding and identifying a target along one or several paths in a network diagram; (2) augmenting SchemeLens with a model of user intentions aids in learning the network topology.
Cohé, A.;Liutkus, B.;Bailly, G.;Eagan, J.;Lecolinet, E.
;;;;;;;;Cohé, A.;Liutkus, B.;Bailly, G.;Eagan, J.;Lecolinet, E.
10.1109/INFVIS.2004.66;10.1109/TVCG.2012.245;10.1109/INFVIS.2003.1249008
Fisheye, vector-scaling, content-aware, network schematics, interactive zoom, navigation, information visualization
7192681
1382906;6327253;1249008
26
InfoVis2015
Sketching Designs Using the Five Design-Sheet Methodology
10.1109/TVCG.2015.2467271
http://dx.doi.org/10.1109/TVCG.2015.2467271
419-428 | J
Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time, money and converge to better solutions more quickly. However, this design process is often viewed to be too informal. Consequently users do not know how to manage their thoughts and ideas (to first think divergently, to then finally converge on a suitable solution). We present the Five Design Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities, think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principle designs (sheets 2,3 and 4); before converging on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case-study of its use in industry and experience of its use in teaching.
Roberts, J.C.;Headleand, C.;Ritsos, P.D.
;;;;Roberts, J.C.;Headleand, C.;Ritsos, P.D.
10.1109/TVCG.2010.132;10.1109/INFVIS.2000.885092;10.1109/TVCG.2006.178;10.1109/VISUAL.1994.346304;10.1109/TVCG.2014.2346331;10.1109/TVCG.2009.111;10.1109/TVCG.2012.213;10.1109/INFVIS.2004.59;10.1109/TVCG.2012.262;10.1109/TVCG.2007.70515;10.1109/TVCG.2008.171
Lo-fidelity prototyping, User-centred design, Sketching for visualization, Ideation
7192707
5613460;885092;4015439;346304;6876000;5290695;6327248;1382903;6327281;4376144;4658138
27
InfoVis2015
Spatial Reasoning and Data Displays
10.1109/TVCG.2015.2469125
http://dx.doi.org/10.1109/TVCG.2015.2469125
459-468 | J
Graphics convey numerical information very efficiently, but rely on a different set of mental processes than tabular displays. Here, we present a study relating demographic characteristics and visual skills to perception of graphical lineups. We conclude that lineups are essentially a classification test in a visual domain, and that performance on the lineup protocol is associated with general aptitude, rather than specific tasks such as card rotation and spatial manipulation. We also examine the possibility that specific graphical tasks may be associated with certain visual skills and conclude that more research is necessary to understand which visual skills are required in order to understand certain plot types.
VanderPlas, S.;Hofmann, H.;;VanderPlas, S.;Hofmann, H.
10.1109/TVCG.2012.230;10.1109/TVCG.2014.2346320;10.1109/TVCG.2010.161
Data visualization, Perception, Statistical graphics, Statistical computing
7217849
6327249;6876021;5613434
28
InfoVis2015
Speculative Practices: Utilizing InfoVis to Explore Untapped Literary Collections
10.1109/TVCG.2015.2467452
http://dx.doi.org/10.1109/TVCG.2015.2467452
429-438 | J
In this paper we exemplify how information visualization supports speculative thinking, hypotheses testing, and preliminary interpretation processes as part of literary research. While InfoVis has become a buzz topic in the digital humanities, skepticism remains about how effectively it integrates into and expands on traditional humanities research approaches. From an InfoVis perspective, we lack case studies that show the specific design challenges that make literary studies and humanities research at large a unique application area for information visualization. We examine these questions through our case study of the Speculative W@nderverse, a visualization tool that was designed to enable the analysis and exploration of an untapped literary collection consisting of thousands of science fiction short stories. We present the results of two empirical studies that involved general-interest readers and literary scholars who used the evolving visualization prototype as part of their research for over a year. Our findings suggest a design space for visualizing literary collections that is defined by (1) their academic and public relevance, (2) the tension between qualitative vs. quantitative methods of interpretation, (3) result-vs. process-driven approaches to InfoVis, and (4) the unique material and visual qualities of cultural collections. Through the Speculative W@nderverse we demonstrate how visualization can bridge these sometimes contradictory perspectives by cultivating curiosity and providing entry points into literary collections while, at the same time, supporting multiple aspects of humanities research processes.
Hinrichs, U.;Forlini, S.;Moynihan, B.
SACHI Group, Univ. of St. Andrews, St. Andrews, UK|c|;;
;;Hinrichs, U.;Forlini, S.;Moynihan, B.
10.1109/TVCG.2012.272;10.1109/TVCG.2014.2346431;10.1109/TVCG.2008.175;10.1109/TVCG.2008.127;10.1109/TVCG.2007.70541;10.1109/TVCG.2012.213;10.1109/VAST.2007.4389006;10.1109/TVCG.2009.165;10.1109/TVCG.2007.70577;10.1109/TVCG.2009.171;10.1109/TVCG.2008.172;10.1109/VAST.2008.4677370
Digital Humanities, Interlinked Visualization, Literary Studies, Cultural Collections, Science Fiction
7192666
6327285;6875900;4658131;4658128;4376134;6327248;4389006;5290726;4376131;5290722;4658133;4677370
29
InfoVis2015
Suggested Interactivity: Seeking Perceived Affordances for Information Visualization
10.1109/TVCG.2015.2467201
http://dx.doi.org/10.1109/TVCG.2015.2467201
639-648 | J
In this article, we investigate methods for suggesting the interactivity of online visualizations embedded with text. We first assess the need for such methods by conducting three initial experiments on Amazon's Mechanical Turk. We then present a design space for Suggested Interactivity (i. e., visual cues used as perceived affordances-SI), based on a survey of 382 HTML5 and visualization websites. Finally, we assess the effectiveness of three SI cues we designed for suggesting the interactivity of bar charts embedded with text. Our results show that only one cue (SI3) was successful in inciting participants to interact with the visualizations, and we hypothesize this is because this particular cue provided feedforward.
Boy, J.;Eveillard, L.;Detienne, F.;Fekete, J.-D.
;;;;;;Boy, J.;Eveillard, L.;Detienne, F.;Fekete, J.
10.1109/TVCG.2014.2346984;10.1109/TVCG.2013.134;10.1109/TVCG.2010.179;10.1109/INFVIS.2005.1532122
Suggested interactivity, perceived affordances, information visualization for the people, online visualization
7192637
6875906;6634126;5613452;1532122
30
InfoVis2015
Time Curves: Folding Time to Visualize Patterns of Temporal Evolution in Data
10.1109/TVCG.2015.2467851
http://dx.doi.org/10.1109/TVCG.2015.2467851
559-568 | J
We introduce time curves as a general approach for visualizing patterns of evolution in temporal data. Examples of such patterns include slow and regular progressions, large sudden changes, and reversals to previous states. These patterns can be of interest in a range of domains, such as collaborative document editing, dynamic network analysis, and video analysis. Time curves employ the metaphor of folding a timeline visualization into itself so as to bring similar time points close to each other. This metaphor can be applied to any dataset where a similarity metric between temporal snapshots can be defined, thus it is largely datatype-agnostic. We illustrate how time curves can visually reveal informative patterns in a range of different datasets.
Bach, B.;Conglei Shi;Heulot, N.;Madhyastha, T.;Grabowski, T.;Dragicevic, P.
Microsoft Res.-Inria Joint Centre, USA|c|;;;;;
;;;;;Bach, B.;Conglei Shi;Heulot, N.;Madhyastha, T.;Grabowski, T.;Dragicevic, P.
10.1109/TVCG.2011.186;10.1109/TVCG.2007.70535;10.1109/INFVIS.2004.1;10.1109/TVCG.2014.2346325;10.1109/TVCG.2013.192;10.1109/INFVIS.2002.1173155
Temporal data visualization, information visualization, multidimensional scaling
7192639
6064991;4376143;1382886;6875930;6634087;1173155
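As the abstract notes, time curves only require a dissimilarity between temporal snapshots; folding the timeline then amounts to projecting those dissimilarities to 2D and connecting the points in time order. A minimal sketch assuming scikit-learn's MDS as the projection (one possible choice; the paper's pipeline may differ):

```python
import numpy as np
from sklearn.manifold import MDS

def time_curve(snapshot_distances, random_state=0):
    """2D embedding of temporal snapshots from their pairwise dissimilarities;
    connecting the returned points in time order draws the time curve."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=random_state)
    return mds.fit_transform(np.asarray(snapshot_distances, dtype=float))
```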
31
InfoVis2015
TimeNotes: A Study on Effective Chart Visualization and Interaction Techniques for Time-Series Data
10.1109/TVCG.2015.2467751
http://dx.doi.org/10.1109/TVCG.2015.2467751
549-558 | J
Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.
Walker, J.;Borgo, R.;Jones, M.W.;;;;Walker, J.;Borgo, R.;Jones, M.W.
10.1109/TVCG.2009.181;10.1109/TVCG.2014.2346428;10.1109/INFVIS.2005.1532148;10.1109/TVCG.2011.160;10.1109/TVCG.2010.162;10.1109/TVCG.2010.193;10.1109/INFVIS.1999.801860;10.1109/TVCG.2011.195
Time-series Exploration, Focus+Context, Lens, Interaction Techniques
7192735
5290701;6875940;1532148;6065014;5613429;5613426;801860;6065009
32
InfoVis2015
TimeSpan: Using Visualization to Explore Temporal Multi-dimensional Data of Stroke Patients
10.1109/TVCG.2015.2467325
http://dx.doi.org/10.1109/TVCG.2015.2467325
409-418 | J
We present TimeSpan, an exploratory visualization tool designed to gain a better understanding of the temporal aspects of the stroke treatment process. Working with stroke experts, we seek to provide a tool to help improve outcomes for stroke victims. Time is of critical importance in the treatment of acute ischemic stroke patients. Every minute that the artery stays blocked, an estimated 1.9 million neurons and 12 km of myelinated axons are destroyed. Consequently, there is a critical need for efficiency of stroke treatment processes. Optimizing time to treatment requires a deep understanding of interval times. Stroke health care professionals must analyze the impact of procedures, events, and patient attributes on time-ultimately, to save lives and improve quality of life after stroke. First, we interviewed eight domain experts, and closely collaborated with two of them to inform the design of TimeSpan. We classify the analytical tasks which a visualization tool should support and extract design goals from the interviews and field observations. Based on these tasks and the understanding gained from the collaboration, we designed TimeSpan, a web-based tool for exploring multi-dimensional and temporal stroke data. We describe how TimeSpan incorporates factors from stacked bar graphs, line charts, histograms, and a matrix visualization to create an interactive hybrid view of temporal data. From feedback collected from domain experts in a focus group session, we reflect on the lessons we learned from abstracting the tasks and iteratively designing TimeSpan.
Loorak, M.H.;Perin, C.;Kamal, N.;Hill, M.;Carpendale, S.
Dept. of Comput. Sci., Univ. of Calgary, Calgary, AB, Canada|c|;;;;
;;;;Loorak, M.H.;Perin, C.;Kamal, N.;Hill, M.;Carpendale, S.
10.1109/INFVIS.2005.1532136;10.1109/VAST.2006.261421;10.1109/TVCG.2014.2346682;10.1109/TVCG.2013.200;10.1109/TVCG.2014.2346279;10.1109/INFVIS.2005.1532152;10.1109/TVCG.2009.187;10.1109/TVCG.2012.225;10.1109/TVCG.2007.70515
Multi-dimensional data, Temporal event sequences, Electronic health records7192713
1532136;4035762;6875996;6634100;6875988;1532152;5290711;6327272;4376144
33
InfoVis2015
Vials: Visualizing Alternative Splicing of Genes
10.1109/TVCG.2015.2467911
http://dx.doi.org/10.1109/TVCG.2015.2467911
399408J
Alternative splicing is a process by which the same DNA sequence is used to assemble different proteins, called protein isoforms. Alternative splicing works by selectively omitting some of the coding regions (exons) typically associated with a gene. Detection of alternative splicing is difficult and uses a combination of advanced data acquisition methods and statistical inference. Knowledge about the abundance of isoforms is important for understanding both normal processes and diseases and to eventually improve treatment through targeted therapies. The data, however, is complex and current visualizations for isoforms are neither perceptually efficient nor scalable. To remedy this, we developed Vials, a novel visual analysis tool that enables analysts to explore the various datasets that scientists use to make judgments about isoforms: the abundance of reads associated with the coding regions of the gene, evidence for junctions, i.e., edges connecting the coding regions, and predictions of isoform frequencies. Vials is scalable as it allows for the simultaneous analysis of many samples in multiple groups. Our tool thus enables experts to (a) identify patterns of isoform abundance in groups of samples and (b) evaluate the quality of the data. We demonstrate the value of our tool in case studies using publicly available datasets.
Strobelt, H.;Alsallakh, B.;Botros, J.;Peterson, B.;Borowsky, M.;Pfister, H.;Lex, A.
;;;;;;;;;;;;Strobelt, H.;Alsallakh, B.;Botros, J.;Peterson, B.;Borowsky, M.;Pfister, H.;Lex, A.
10.1109/TVCG.2013.214;10.1109/TVCG.2013.223;10.1109/TVCG.2014.2346248
Biology visualization, protein isoforms, mRNA-seq, directed acyclic graphs, multivariate networks71926916634170;6634091;6876017
34
InfoVis2015
Visual Encodings of Temporal Uncertainty: A Comparative User Study
10.1109/TVCG.2015.2467752
http://dx.doi.org/10.1109/TVCG.2015.2467752
539548J
A number of studies have investigated different ways of visualizing uncertainty. However, in the temporal dimension, it is still an open question how to best represent uncertainty, since the special characteristics of time require special visual encodings and may provoke different interpretations. Thus, we have conducted a comprehensive study comparing alternative visual encodings of intervals with uncertain start and end times: gradient plots, violin plots, accumulated probability plots, error bars, centered error bars, and ambiguation. Our results reveal significant differences in error rates and completion time for these different visualization types and different tasks. We recommend using ambiguation - using a lighter color value to represent uncertain regions - or error bars for judging durations and temporal bounds, and gradient plots - using fading color or transparency - for judging probability values.
Gschwandtner, T.;Bögl, M.;Federico, P.;Miksch, S.
;;;;;;Gschwandtner, T.;Bögl, M.;Federico, P.;Miksch, S.
10.1109/TVCG.2014.2346298;10.1109/TVCG.2012.279;10.1109/INFVIS.2002.1173145;10.1109/TVCG.2009.114
Uncertainty, temporal intervals, visualization71926676875915;6327255;1173145;5290731
35
InfoVis2015
Visual Mementos: Reflecting Memories with Personal Data
10.1109/TVCG.2015.2467831
http://dx.doi.org/10.1109/TVCG.2015.2467831
369378J
In this paper we discuss the creation of visual mementos as a new application area for visualization. We define visual mementos as visualizations of personally relevant data for the purpose of reminiscing, and sharing of life experiences. Today more people collect digital information about their life than ever before. The shift from physical to digital archives poses new challenges and opportunities for self-reflection and self-representation. Drawing on research on autobiographical memory and on the role of artifacts in reminiscing, we identified design challenges for visual mementos: mapping data to evoke familiarity, expressing subjectivity, and obscuring sensitive details for sharing. Visual mementos can make use of the known strengths of visualization in revealing patterns to show the familiar instead of the unexpected, and extend representational mappings beyond the objective to include the more subjective. To understand whether people's subjective views on their past can be reflected in a visual representation, we developed, deployed and studied a technology probe that exemplifies our concept of visual mementos. Our results show how reminiscing has been supported and reveal promising new directions for self-reflection and sharing through visual mementos of personal experiences.
Thudt, A.;Baur, D.;Huron, S.;Carpendale, S.
;;;;;;Thudt, A.;Baur, D.;Huron, S.;Carpendale, S.
10.1109/TVCG.2010.206;10.1109/TVCG.2007.70541;10.1109/TVCG.2014.2352953;10.1109/INFVIS.2004.8
Visual Memento, Memories, Personal Visualization, Movement Data, World Wide Web71927085613450;4376134;6888482;1382897
36
InfoVis2015
Visualization, Selection, and Analysis of Traffic Flows
10.1109/TVCG.2015.2467112
http://dx.doi.org/10.1109/TVCG.2015.2467112
379388J
Visualization of the trajectories of moving objects leads to dense and cluttered images, which hinders exploration and understanding. It also hinders adding additional visual information, such as direction, and makes it difficult to interactively extract traffic flows, i.e., subsets of trajectories. In this paper we present our approach to visualize traffic flows and provide interaction tools to support their exploration. We show an overview of the traffic using a density map. The directions of traffic flows are visualized using a particle system on top of the density map. The user can extract traffic flows using a novel selection widget that allows for the intuitive selection of an area, and filtering on a range of directions and any additional attributes. Using simple, visual set expressions, the user can construct more complicated selections. The dynamic behaviors of selected flows may then be shown in annotation windows in which they can be interactively explored and compared. We validate our approach through use cases where we explore and analyze the temporal behavior of aircraft and vessel trajectories, e.g., landing and takeoff sequences, or the evolution of flight route density. The aircraft use cases have been developed and validated in collaboration with domain experts.
Scheepens, R.;Hurter, C.;van de Wetering, H.;van Wijk, J.J.
Dept. of Math. & Comput. Sci., Eindhoven Univ. of Technol., Eindhoven, Netherlands|c|;;;
;;;Scheepens, R.;Hurter, C.;van de Wetering, H.;van Wijk, J.J.
10.1109/TVCG.2011.185;10.1109/TVCG.2011.261;10.1109/VISUAL.1999.809905;10.1109/VISUAL.1998.745294
Moving Object Visualization, traffic flows, interaction71927016064996;6064975;809905;745294
37
InfoVis2015
Visualizing Multiple Variables Across Scale and Geography
10.1109/TVCG.2015.2467199
http://dx.doi.org/10.1109/TVCG.2015.2467199
599608J
Comparing multiple variables to select those that effectively characterize complex entities is important in a wide variety of domains - geodemographics for example. Identifying variables that correlate is a common practice to remove redundancy, but correlation varies across space, with scale and over time, and the frequently used global statistics hide potentially important differentiating local variation. For more comprehensive and robust insights into multivariate relations, these local correlations need to be assessed through various means of defining locality. We explore the geography of this issue, and use novel interactive visualization to identify interdependencies in multivariate data sets to support geographically informed multivariate analysis. We offer terminology for considering scale and locality, visual techniques for establishing the effects of scale on correlation and a theoretical framework through which variation in geographic correlation with scale and locality are addressed explicitly. Prototype software demonstrates how these contributions act together. These techniques enable multiple variables and their geographic characteristics to be considered concurrently as we extend visual parameter space analysis (vPSA) to the spatial domain. We find variable correlations to be sensitive to scale and geography to varying degrees in the context of energy-based geodemographics. This sensitivity depends upon the calculation of locality as well as the geographical and statistical structure of the variable.
Goodwin, S.;Dykes, J.;Slingsby, A.;Turkay, C.
;;;;;;Goodwin, S.;Dykes, J.;Slingsby, A.;Turkay, C.
10.1109/TVCG.2007.70558;10.1109/TVCG.2013.145;10.1109/TVCG.2007.70539;10.1109/TVCG.2014.2346482;10.1109/VAST.2011.6102448;10.1109/TVCG.2013.125;10.1109/TVCG.2014.2346321;10.1109/TVCG.2009.128;10.1109/TVCG.2011.197;10.1109/TVCG.2012.256;10.1109/TVCG.2014.2346265
Scale, Geography, Multivariate, Sensitivity Analysis, Variable Selection, Local Statistics, Geodemographics, Energy7192660
4376135;6634166;4376146;6876047;6102448;6634169;6876043;5290702;6065022;6327268;6875987
38
InfoVis2015
Visually Comparing Weather Features in Forecasts
10.1109/TVCG.2015.2467754
http://dx.doi.org/10.1109/TVCG.2015.2467754
389398J
Meteorologists process and analyze weather forecasts using visualization in order to examine the behaviors of and relationships among weather features. In this design study conducted with meteorologists in decision support roles, we identified and attempted to address two significant common challenges in weather visualization: the employment of inconsistent and often ineffective visual encoding practices across a wide range of visualizations, and a lack of support for directly visualizing how different weather features relate across an ensemble of possible forecast outcomes. In this work, we present a characterization of the problems and data associated with meteorological forecasting, we propose a set of informed default encoding choices that integrate existing meteorological conventions with effective visualization practice, and we extend a set of techniques as an initial step toward directly visualizing the interactions of multiple features over an ensemble forecast. We discuss the integration of these contributions into a functional prototype tool, and also reflect on the many practical challenges that arise when working with weather data.
Quinan, P.S.;Meyer, M.;;Quinan, P.S.;Meyer, M.
10.1109/VISUAL.1990.146361;10.1109/VISUAL.2002.1183788;10.1109/TVCG.2011.209;10.1109/TVCG.2010.181;10.1109/TVCG.2012.213;10.1109/TVCG.2013.143
Design study, weather, geographic/geospatial visualization, ensemble data7192710146361;1183788;6065017;5613483;6327248;6634129
39
InfoVis2015
Voyager: Exploratory Analysis via Faceted Browsing of Visualization Recommendations
10.1109/TVCG.2015.2467191
http://dx.doi.org/10.1109/TVCG.2015.2467191
649658J
General visualization tools typically require manual specification of views: analysts must select data variables and then choose which transformations and visual encodings to apply. These decisions often involve both domain and visualization design expertise, and may impose a tedious specification process that impedes exploration. In this paper, we seek to complement manual chart construction with interactive navigation of a gallery of automatically-generated visualizations. We contribute Voyager, a mixed-initiative system that supports faceted browsing of recommended charts chosen according to statistical and perceptual measures. We describe Voyager's architecture, motivating design principles, and methods for generating and interacting with visualization recommendations. In a study comparing Voyager to a manual visualization specification tool, we find that Voyager facilitates exploration of previously unseen data and leads to increased data variable coverage. We then distill design implications for visualization tools, in particular the need to balance rapid exploration and targeted question-answering.
Wongsuphasawat, K.;Moritz, D.;Anand, A.;Mackinlay, J.;Howe, B.;Heer, J.
;;;;;;;;;;Wongsuphasawat, K.;Moritz, D.;Anand, A.;Mackinlay, J.;Howe, B.;Heer, J.
10.1109/TVCG.2014.2346297;10.1109/TVCG.2009.174;10.1109/TVCG.2011.185;10.1109/TVCG.2007.70594;10.1109/TVCG.2014.2346291;10.1109/INFVIS.2000.885086
User interfaces, information visualization, exploratory analysis, visualization recommendation, mixed-initiative systems71927286875927;5290720;6064996;4376133;6876042;885086
40
SciVis2015
3D superquadric glyphs for visualizing myocardial motion
10.1109/SciVis.2015.7429504
http://dx.doi.org/10.1109/SciVis.2015.7429504
143144M
Various cardiac diseases can be diagnosed by the analysis of myocardial motion. Relevant biomarkers are radial, longitudinal, and rotational velocities of the cardiac muscle computed locally from MR images. We designed a visual encoding that maps these three attributes to glyph shapes according to a barycentric space formed by 3D superquadric glyphs. The glyphs show aggregated myocardial motion information following the AHA model and are displayed in a respective 3D layout.
T. Chitiboi;M. Neugebauer;S. Schnell;M. Markl;L. Linsen
FraunhoferMEVIS, Jacobs University Bremen|c|;;;;
Chitiboi, T.;Neugebauer, M.;Schnell, S.;Markl, M.;Linsen, L.7429504
41
SciVis2015
A bottom-up scheme for user-defined feature exploration in vector field ensembles
10.1109/SciVis.2015.7429510
http://dx.doi.org/10.1109/SciVis.2015.7429510
155156M
Most existing approaches to visualizing vector field ensembles do so by visualizing the uncertainty of individual variables across different simulation runs. However, comparing derived or user-defined features, such as vortices in ensemble flows, is also of vital significance, since such features often make more sense with respect to domain knowledge. In this work, we present a framework to extract user-defined features from different simulation runs. Specifically, we use a bottom-up searching scheme to extract vortices with a user-defined shape, and further compute geometric information, including the size and geo-spatial location, of the extracted vortices. Finally, we design linked views to compare the features between different runs.
R. Liu;H. Guo;X. Yuan
Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University|c|;;
Liu, R.;Guo, H.;Xiaoru Yuan7429510
42
SciVis2015
A Classification of User Tasks in Visual Analysis of Volume Data
10.1109/SciVis.2015.7429485
http://dx.doi.org/10.1109/SciVis.2015.7429485
18C
Empirical findings from studies in one scientific domain have very limited applicability to other domains, unless we formally establish deeper insights on the generalizability of task types. We present a domain-independent classification of visual analysis tasks with volume visualizations. This taxonomy will help researchers design experiments, ensure coverage, and generate hypotheses in empirical studies with volume datasets. To develop our taxonomy, we first interviewed scientists working with spatial data in disparate domains. We then ran a survey to evaluate the design; its participants were scientists and professionals from around the world working with volume data in various scientific domains. Respondents agreed substantially with our taxonomy design, but also suggested important refinements. We report the results in the form of a goal-based generic categorization of visual analysis tasks with volume visualizations. Our taxonomy covers tasks performed with a wide variety of volume datasets.
B. Laha;D. A. Bowman;D. H. Laidlaw;J. J. Socha
Stanford University|c|;;;Laha, B.;Bowman, D.A.;Laidlaw, D.H.;Socha, J.J.
10.1109/INFVIS.2004.10;10.1109/TVCG.2013.124;10.1109/TVCG.2012.216;10.1109/TVCG.2009.126;10.1109/TVCG.2013.130;10.1109/TVCG.2013.120;10.1109/TVCG.2014.2346321;10.1109/INFVIS.2004.59
Task Taxonomy, Empirical Evaluation, Volume Visualization, Scientific Visualization, Virtual Reality, 3D Interaction74294851382902;6634168;6327218;5290732;6634181;6634156;6876043;1382903
43
SciVis2015
A proposed multivariate visualization taxonomy from user data
10.1109/SciVis.2015.7429511
http://dx.doi.org/10.1109/SciVis.2015.7429511
157158M
We revisited past user study data on multivariate visualizations, looking at whether image processing measures offer any insight into user performance. While we find statistically significant correlations, some of the greatest insights into user performance came from variables that have strong ties to two key properties of multivariate representations. We discuss our analysis and propose a taxonomy of multivariate visualizations that arises from it.
M. A. Livingston;J. W. Decker;Z. Ai
;;Livingston, M.A.;Decker, J.W.;Ai, Z.7429511
44
SciVis2015
A Visual Voting Framework for Weather Forecast Calibration
10.1109/SciVis.2015.7429488
http://dx.doi.org/10.1109/SciVis.2015.7429488
2532C
Numerical weather predictions have been widely used for weather forecasting. Many large meteorological centers are producing highly accurate ensemble forecasts routinely to provide effective weather forecast services. However, biases frequently exist in forecast products because of various reasons, such as the imperfection of the weather forecast models. Failure to identify and neutralize the biases would result in unreliable forecast products that might mislead analysts; consequently, unreliable weather predictions are produced. The analog method has been commonly used to overcome the biases. Nevertheless, this method has some serious limitations including the difficulties in finding effective similar past forecasts, the large search space for proper parameters and the lack of support for interactive, real-time analysis. In this study, we develop a visual analytics system based on a novel voting framework to circumvent the problems. The framework adopts the idea of majority voting to combine judiciously the different variants of analog methods towards effective retrieval of the proper analogs for calibration. The system seamlessly integrates the analog methods into an interactive visualization pipeline with a set of coordinated views that characterizes the different methods. Instant visual hints are provided in the views to guide users in finding and refining analogs. We have worked closely with the domain experts in the meteorological research to develop the system. The effectiveness of the system is demonstrated using two case studies. An informal evaluation with the experts proves the usability and usefulness of the system.
H. Liao;Y. Wu;L. Chen;T. M. Hamill;Y. Wang;K. Dai;H. Zhang;W. Chen
School of Software, Tsinghua National Laboratory for Information Science and Technology, Tsinghua University|c|;;;;;;;
Liao, H.;Wu, Y.;Chen, L.;Hamill, T.M.;Wang, Y.;Dai, K.;Zhang, H.;Chen, W.
10.1109/TVCG.2013.131;10.1109/TVCG.2013.138;10.1109/TVCG.2013.144;10.1109/TVCG.2009.197;10.1109/TVCG.2008.139;10.1109/TVCG.2014.2346755;10.1109/TVCG.2010.181;10.1109/VISUAL.1994.346298;10.1109/TVCG.2013.143
Weather forecast, analog method, calibration, majority voting, visual analytics7429488
6634109;6634123;6634188;5290751;4658178;6876041;5613483;346298;6634129
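Editor's note: the abstract above describes combining several analog-method variants by majority voting to retrieve calibration analogs, but gives no mechanics. The following is a minimal illustrative sketch of one plausible voting rule, not the authors' implementation; the function name, the top-k cutoff, and the string candidate IDs are assumptions added for illustration.

```python
# Illustrative sketch (editor-added): a simple majority-voting rule over the
# candidate past forecasts nominated by several analog-method variants.
# The vote-counting scheme and parameter values are assumptions.
from collections import Counter

def vote_on_analogs(variant_rankings: list[list[str]], top_k: int = 10, keep: int = 5) -> list[str]:
    """variant_rankings: for each analog-method variant, its candidate past
    forecasts ordered from most to least similar. Returns the `keep`
    candidates nominated by the most variants (ties broken arbitrarily)."""
    votes = Counter()
    for ranking in variant_rankings:
        votes.update(ranking[:top_k])  # each variant votes for its top-k analogs
    return [analog for analog, _ in votes.most_common(keep)]
```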
45
SciVis2015
Accurate Interactive Visualization of Large Deformations and Variability in Biomedical Image Ensembles
10.1109/TVCG.2015.2467198
http://dx.doi.org/10.1109/TVCG.2015.2467198
708717J
Large image deformations pose a challenging problem for the visualization and statistical analysis of 3D image ensembles which have a multitude of applications in biology and medicine. Simple linear interpolation in the tangent space of the ensemble introduces artifactual anatomical structures that hamper the application of targeted visual shape analysis techniques. In this work we make use of the theory of stationary velocity fields to facilitate interactive non-linear image interpolation and plausible extrapolation for high quality rendering of large deformations and devise an efficient image warping method on the GPU. This not only improves the quality of existing visualization techniques, but also opens up a field of novel interactive methods for shape ensemble analysis. Taking advantage of the efficient non-linear 3D image warping, we showcase four visualizations: 1) browsing on-the-fly computed group mean shapes to learn about shape differences between specific classes, 2) interactive reformation to investigate complex morphologies in a single view, 3) likelihood volumes to gain a concise overview of variability and 4) streamline visualization to show variation in detail, specifically uncovering its component tangential to a reference surface. Evaluation on a real world dataset shows that the presented method outperforms the state-of-the-art in terms of visual quality while retaining interactive frame rates. A case study with a domain expert was performed in which the novel analysis and visualization methods are applied on standard model structures, namely skull and mandible of different rodents, to investigate and compare influence of phylogeny, diet and geography on shape. The visualizations enable, for instance, distinguishing (population-)normal and pathological morphology, assist in uncovering correlation to extrinsic factors, and potentially support assessment of model quality.
Hermann, M.;Schunke, A.C.;Schultz, T.;Klein, R.
Inst. fur Inf. II, Univ. Bonn, Bonn, Germany|c|;;;;;;Hermann, M.;Schunke, A.C.;Schultz, T.;Klein, R.
10.1109/TVCG.2006.140;10.1109/VISUAL.2002.1183754;10.1109/TVCG.2014.2346591;10.1109/TVCG.2014.2346405;10.1109/TVCG.2006.123
Statistical deformation model, stationary velocity fields, image warping, interactive visual analysis71926784015467;1183754;6876009;6876018;4015468
46
SciVis2015
Adaptive Multilinear Tensor Product Wavelets
10.1109/TVCG.2015.2467412
http://dx.doi.org/10.1109/TVCG.2015.2467412
985994J
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
Weiss, K.;Lindstrom, P.Lawrence Livermore Nat. Lab., Livermore, CA, USA|c|;;Weiss, K.;Lindstrom, P.
10.1109/TVCG.2010.145;10.1109/VISUAL.1997.663860;10.1109/VISUAL.2002.1183810;10.1109/TVCG.2011.252;10.1109/VISUAL.1996.568127;10.1109/TVCG.2009.186
Multilinear interpolation, adaptive wavelets, multiresolution models, octrees, continuous reconstruction71927345613492;663860;1183810;6064949;568127;5290779
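Editor's note: the abstract above builds on piecewise multilinear interpolation over mesh cells. As a point of reference only, here is a minimal sketch of the 2D (bilinear) case over one cell; the 3D trilinear case follows the same pattern. The function name and corner-array layout are assumptions, not taken from the paper.

```python
# Illustrative sketch (editor-added): bilinear interpolation of nodal values
# over a single quadtree cell, the basic operation the adaptive wavelet
# representation reproduces.
import numpy as np

def bilinear(corners: np.ndarray, u: float, v: float) -> float:
    """corners: 2x2 array [[f00, f01], [f10, f11]] of nodal values, where
    fij is the value at local coordinates (u=i, v=j); (u, v) lie in [0, 1]^2."""
    f00, f01 = corners[0]
    f10, f11 = corners[1]
    return (f00 * (1 - u) * (1 - v) + f10 * u * (1 - v)
            + f01 * (1 - u) * v + f11 * u * v)
```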
47
SciVis2015
An evaluation of three methods for visualizing uncertainty in architecture and archaeology
10.1109/SciVis.2015.7429507
http://dx.doi.org/10.1109/SciVis.2015.7429507
149150M
This project explores the representation of uncertainty in visualizations for archaeological research and provides insights obtained from user feedback. Our 3D models brought together information from standing architecture and excavated remains, surveyed plans, and ground-penetrating radar (GPR) data from the Carthusian monastery of Bourgfontaine in northern France. We also included information from comparative Carthusian sites and a bird's eye representation of the site in an early modern painting. Each source was assigned a certainty value which was then mapped to a color or texture for the model. Certainty values between one and zero were assigned by one subject matter expert and should be considered qualitative. Students and faculty from the fields of architectural history and archaeology at two institutions interacted with the models and answered a short survey with four questions about each. We discovered equal preference for color and transparency and a strong dislike for the texture model. Discoveries during model building also led to changes of the excavation plans for summer 2015.
S. Houde;S. Bonde;D. H. Laidlaw
Brown University|c|;;Houde, S.;Bonde, S.;Laidlaw, D.H.7429507
48
SciVis2015
AnimoAminoMiner: Exploration of Protein Tunnels and their Properties in Molecular Dynamics
10.1109/TVCG.2015.2467434
http://dx.doi.org/10.1109/TVCG.2015.2467434
747756J
In this paper we propose a novel method for the interactive exploration of protein tunnels. The basic principle of our approach is that we entirely abstract from the 3D/4D space the simulated phenomenon is embedded in. A complex 3D structure and its curvature information is represented only by a straightened tunnel centerline and its width profile. This representation focuses on a key aspect of the studied geometry and frees up graphical estate to key chemical and physical properties represented by surrounding amino acids. The method shows the detailed tunnel profile and its temporal aggregation. The profile is interactively linked with a visual overview of all amino acids which are lining the tunnel over time. In this overview, each amino acid is represented by a set of colored lines depicting the spatial and temporal impact of the amino acid on the corresponding tunnel. This representation clearly shows the importance of amino acids with respect to selected criteria. It helps the biochemists to select the candidate amino acids for mutation which changes the protein function in a desired way. The AnimoAminoMiner was designed in close cooperation with domain experts. Its usefulness is documented by their feedback and a case study, which are included.
Byska, J.;Le Muzic, M.;Gröller, M.E.;Viola, I.;Kozlikova, B.
Masaryk Univ., Brno, Czech Republic|c|;;;;;;;;Byska, J.;Le Muzic, M.;Groller, E.;Viola, I.;Kozlikova, B.
10.1109/VISUAL.2002.1183754;10.1109/TVCG.2009.136;10.1109/TVCG.2011.259;10.1109/VISUAL.2001.964540
Protein, tunnel, molecular dynamics, aggregation, interaction71948351183754;5290734;6064966;964540
49
SciVis2015
Anisotropic Ambient Volume Shading
10.1109/TVCG.2015.2467963
http://dx.doi.org/10.1109/TVCG.2015.2467963
10151024J
We present a novel method to compute anisotropic shading for direct volume rendering to improve the perception of the orientation and shape of surface-like structures. We determine the scale-aware anisotropy of a shading point by analyzing its ambient region. We sample adjacent points with similar scalar values to perform a principal component analysis by computing the eigenvectors and eigenvalues of the covariance matrix. In particular, we estimate the tangent directions, which serve as the tangent frame for anisotropic bidirectional reflectance distribution functions. Moreover, we exploit the ratio of the eigenvalues to measure the magnitude of the anisotropy at each shading point. Altogether, this allows us to model a data-driven, smooth transition from isotropic to strongly anisotropic volume shading. In this way, the shape of volumetric features can be enhanced significantly by aligning specular highlights along the principal direction of anisotropy. Our algorithm is independent of the transfer function, which allows us to compute all shading parameters once and store them with the data set. We integrated our method in a GPU-based volume renderer, which offers interactive control of the transfer function, light source positions, and viewpoint. Our results demonstrate the benefit of anisotropic shading for visualization to achieve data-driven local illumination for improved perception compared to isotropic shading.
Ament, M.;Dachsbacher, C.Karlsruhe Inst. of Technol., Karlsruhe, Germany|c|;;Ament, M.;Dachsbacher, C.
10.1109/TVCG.2014.2346333;10.1109/TVCG.2013.129;10.1109/TVCG.2014.2346411;10.1109/TVCG.2012.232;10.1109/VISUAL.1999.809886;10.1109/VISUAL.2003.1250414;10.1109/TVCG.2011.161;10.1109/VISUAL.2005.1532772;10.1109/VISUAL.1994.346331;10.1109/VISUAL.2002.1183771;10.1109/TVCG.2011.198;10.1109/VISUAL.2004.5;10.1109/TVCG.2012.267;10.1109/VISUAL.1996.567777
Direct volume rendering, volume illumination, anisotropic shading7194844
6875905;6634150;6875910;6327241;809886;1250414;6064955;1532772;346331;1183771;6064942;1372186;6327242;567777
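Editor's note: the abstract above describes estimating scale-aware anisotropy at a shading point by a principal component analysis of nearby samples with similar scalar values. The sketch below illustrates that eigen-analysis step only; it is not the authors' code, and the neighbor-selection, function name, and anisotropy mapping are assumptions added for illustration.

```python
# Illustrative sketch (editor-added): PCA of the covariance matrix of ambient
# sample positions to obtain tangent directions and an anisotropy magnitude.
import numpy as np

def local_anisotropy(neighbors: np.ndarray):
    """neighbors: (N, 3) positions of ambient samples, assumed pre-filtered
    to similar scalar values. Returns two principal tangent directions and an
    anisotropy magnitude in [0, 1]."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / max(len(neighbors) - 1, 1)
    evals, evecs = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(evals)[::-1]         # reorder: largest first
    evals, evecs = evals[order], evecs[:, order]
    tangent_u, tangent_v = evecs[:, 0], evecs[:, 1]
    # Ratio of the two largest eigenvalues: 1 means isotropic, values toward 0
    # mean a strongly anisotropic ambient region.
    ratio = evals[1] / evals[0] if evals[0] > 0 else 1.0
    return tangent_u, tangent_v, 1.0 - ratio
```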
50
SciVis2015
Association Analysis for Visual Exploration of Multivariate Scientific Data Sets
10.1109/TVCG.2015.2467431
http://dx.doi.org/10.1109/TVCG.2015.2467431
955964J
The heterogeneity and complexity of multivariate characteristics poses a unique challenge to visual exploration of multivariate scientific data sets, as it requires investigating the usually hidden associations between different variables and specific scalar values to understand the data's multi-faceted properties. In this paper, we present a novel association analysis method that guides visual exploration of scalar-level associations in the multivariate context. We model the directional interactions between scalars of different variables as information flows based on association rules. We introduce the concepts of informativeness and uniqueness to describe how information flows between scalars of different variables and how they are associated with each other in the multivariate domain. Based on scalar-level associations represented by a probabilistic association graph, we propose the Multi-Scalar Informativeness-Uniqueness (MSIU) algorithm to evaluate the informativeness and uniqueness of scalars. We present an exploration framework with multiple interactive views to explore the scalars of interest with confident associations in the multivariate spatial domain, and provide guidelines for visual exploration using our framework. We demonstrate the effectiveness and usefulness of our approach through case studies using three representative multivariate scientific data sets.
Xiaotong Liu;Han-Wei Shen;;Xiaotong Liu;Han-Wei Shen
10.1109/TVCG.2013.133;10.1109/TVCG.2007.70519;10.1109/TVCG.2008.116;10.1109/TVCG.2007.70615;10.1109/VISUAL.1995.485139;10.1109/TVCG.2006.165;10.1109/VAST.2012.6400488;10.1109/TVCG.2011.178;10.1109/VAST.2007.4389000
Multivariate data, association analysis, visual exploration, multiple views7192697
6634187;4376167;4658163;4376165;485139;4015447;6400488;6065027;4389000
51
SciVis2015
Auto-Calibration of Multi-Projector Displays with a Single Handheld Camera
10.1109/SciVis.2015.7429493
http://dx.doi.org/10.1109/SciVis.2015.7429493
6572C
We present a novel approach that utilizes a simple handheld camera to automatically calibrate multi-projector displays. Most existing studies adopt active structured light patterns to verify the relationship between the camera and the projectors. The utilized camera is typically expensive and requires an elaborate installation process depending on the scalability of its applications. Moreover, the observation of the entire area by the camera is almost impossible for a small space surrounded by walls as there is not enough distance for the camera to capture the entire scene. We tackle these issues by requiring only a portion of the walls to be visible to a handheld camera that is widely used these days. This becomes possible by the introduction of our new structured light pattern scheme based on a perfect submap and a geometric calibration that successfully utilizes the geometric information of multi-planar environments. We demonstrate that immersive display in a small space such as an ordinary room can be effectively created using images captured by a handheld camera.
S. Park;H. Seo;S. Cha;J. NohKAIST|c|;;;Park, S.;Seo, H.;Cha, S.;Noh, J.
10.1109/VISUAL.2002.1183793;10.1109/VISUAL.2000.885685;10.1109/VISUAL.1999.809883
74294931183793;885685;809883
52
SciVis2015
Automated visualization workflow for simulation experiments
10.1109/SciVis.2015.7429509
http://dx.doi.org/10.1109/SciVis.2015.7429509
153154M
Modeling and simulation is often used to predict future events and plan accordingly. Experiments in this domain often produce thousands of results from individual simulations, based on slightly varying input parameters. Geo-spatial visualizations can be a powerful tool to help health researchers and decision-makers to take measures during catastrophic and epidemic events such as Ebola outbreaks. The work produced a web-based geo-visualization tool to visualize and compare the spread of Ebola in the West African countries Ivory Coast and Senegal based on multiple simulation results. The visualization is not Ebola specific and may visualize any time-varying frequencies for given geo-locations.
J. P. Leidig;S. Dharmapuri
School of Computing and Information Systems, Grand Valley State University|c|;
Leidig, J.P.;Dharmapuri, S.7429509
53
SciVis2015
CAST: Effective and Efficient User Interaction for Context-Aware Selection in 3D Particle Clouds
10.1109/TVCG.2015.2467202
http://dx.doi.org/10.1109/TVCG.2015.2467202
886895J
We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting Cast, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that Cast family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.
Lingyun Yu;Efstathiou, K.;Isenberg, P.;Isenberg, T.
Hangzhou Dianzi Univ., Hangzhou, China|c|;;;;;;Lingyun Yu;Efstathiou, K.;Isenberg, P.;Isenberg, T.
10.1109/TVCG.2008.153;10.1109/VISUAL.1999.809932;10.1109/TVCG.2013.126;10.1109/TVCG.2012.292;10.1109/INFVIS.1996.559216;10.1109/TVCG.2012.217;10.1109/TVCG.2010.157
Selection, spatial selection, structure-aware selection, context-aware selection, exploratory data visualization and analysis, 3D interaction, user interaction71927264658123;809932;6634108;6327228;559216;6327229;5613504
54
SciVis2015
Cluster Analysis of Vortical Flow in Simulations of Cerebral Aneurysm Hemodynamics
10.1109/TVCG.2015.2467203
http://dx.doi.org/10.1109/TVCG.2015.2467203
757766J
Computational fluid dynamic (CFD) simulations of blood flow provide new insights into the hemodynamics of vascular pathologies such as cerebral aneurysms. Understanding the relations between hemodynamics and aneurysm initiation, progression, and risk of rupture is crucial in diagnosis and treatment. Recent studies link the existence of vortices in the blood flow pattern to aneurysm rupture and report observations of embedded vortices - a larger vortex encloses a smaller one flowing in the opposite direction - whose implications are unclear. We present a clustering-based approach for the visual analysis of vortical flow in simulated cerebral aneurysm hemodynamics. We show how embedded vortices develop at saddle-node bifurcations on vortex core lines and convey the participating flow at full manifestation of the vortex by a fast and smart grouping of streamlines and the visualization of group representatives. The grouping result may be refined based on spectral clustering generating a more detailed visualization of the flow pattern, especially further off the core lines. We aim at supporting CFD engineers researching the biological implications of embedded vortices.
Oeltze-Jafra, S.;Cebral, J.R.;Janiga, G.;Preim, B.
Dept. of Simulation & Graphics, Univ. of Magdeburg, Magdeburg, Germany|c|;;;
;;;Oeltze-Jafra, S.;Cebral, J.R.;Janiga, G.;Preim, B.
10.1109/TVCG.2009.138;10.1109/TVCG.2012.202;10.1109/TVCG.2014.2346406;10.1109/TVCG.2006.201;10.1109/VISUAL.2002.1183789;10.1109/TVCG.2013.189;10.1109/VISUAL.2004.59;10.1109/TVCG.2006.199;10.1109/VISUAL.2005.1532830;10.1109/VISUAL.2005.1532859
Blood Flow, Aneurysm, Clustering, Vortex Dynamics, Embedded Vortices7192711
5290742;6327222;6877722;4015452;1183789;6634153;1372179;4015451;1532830;1532859
55
SciVis2015
Correlation analysis in multidimensional multivariate time-varying datasets
10.1109/SciVis.2015.7429502
http://dx.doi.org/10.1109/SciVis.2015.7429502
139140M
One of the most vital challenges for weather forecasters is the correlation between two geographical phenomena that are distributed continuously in multidimensional multivariate time-varying datasets. In this research, we have visualized the correlation between pressure and temperature in climate datasets. Pearson correlation is used in this study to measure the linear relationship between two variables in the dataset. Using glyphs at each spatial location, we highlighted the significant associations between variables. Based on the positive or negative slope of the correlation lines, we can conclude how strongly they are correlated. The principal aim of this research is visualizing the local trend of variables versus each other in multidimensional multivariate time-varying datasets, which needs to be visualized with their spatial locations in meteorological datasets. Using glyphs, not only can we visualize the correlation between two variables in the coordinate system, but we can also discern whether any of these variables is separately increasing or decreasing. Moreover, we can visualize the background color as another variable and see the correlation lines around a particular zone, such as a storm area.
N. AbedzadehMississippi State University|c|Abedzadeh, N.7429502
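Editor's note: the quantity this poster's glyphs encode, a per-location Pearson correlation between two time-varying fields, can be computed as in the short sketch below. This is an editor-added illustration under the assumption of regularly gridded (time, y, x) arrays; the function and array names are hypothetical, not from the poster.

```python
# Illustrative sketch (editor-added): per-grid-cell Pearson correlation between
# two time-varying scalar fields, e.g. pressure and temperature.
import numpy as np

def local_pearson(pressure: np.ndarray, temperature: np.ndarray) -> np.ndarray:
    """pressure, temperature: arrays of shape (T, H, W) sampled over time.
    Returns an (H, W) map of Pearson correlation coefficients."""
    p = pressure - pressure.mean(axis=0)
    t = temperature - temperature.mean(axis=0)
    cov = (p * t).mean(axis=0)
    denom = pressure.std(axis=0) * temperature.std(axis=0)
    # Cells with zero variance get correlation 0 instead of a division error.
    return np.divide(cov, denom, out=np.zeros_like(cov), where=denom > 0)
```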
56
SciVis2015
CPU Ray Tracing Large Particle Data with Balanced P-k-d Trees
10.1109/SciVis.2015.7429492
http://dx.doi.org/10.1109/SciVis.2015.7429492
5764C
We present a novel approach to rendering large particle data sets from molecular dynamics, astrophysics and other sources. We employ a new data structure adapted from the original balanced k-d tree, which allows for representation of data with trivial or no overhead. In the OSPRay visualization framework, we have developed an efficient CPU algorithm for traversing, classifying and ray tracing these data. Our approach is able to render up to billions of particles on a typical workstation, purely on the CPU, without any approximations or level-of-detail techniques, and optionally with attribute-based color mapping, dynamic range query, and advanced lighting models such as ambient occlusion and path tracing.
I. Wald;A. Knoll;G. P. Johnson;W. Usher;V. Pascucci;M. E. Papka
Intel Corporation|c|;;;;;Wald, I.;Knoll, A.;Johnson, G.P.;Usher, W.;Pascucci, V.;Papka, M.E.
10.1109/TVCG.2010.148;10.1109/TVCG.2009.142;10.1109/TVCG.2012.282
Ray tracing, Visualization, Particle Data, k-d Trees74294925613495;5290736;6327210
57
SciVis2015
Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis
10.1109/TVCG.2015.2467449
http://dx.doi.org/10.1109/TVCG.2015.2467449
867876J
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
Kindlmann, G.;Chiw, C.;Seltzer, N.;Samuels, L.;Reppy, J.
;;;;;;;;Kindlmann, G.;Chiw, C.;Seltzer, N.;Samuels, L.;Reppy, J.
10.1109/TVCG.2009.174;10.1109/TVCG.2011.185;10.1109/VISUAL.2005.1532856;10.1109/TVCG.2014.2346322;10.1109/TVCG.2012.240;10.1109/VISUAL.2003.1250414;10.1109/VISUAL.1999.809896;10.1109/TVCG.2007.70534;10.1109/TVCG.2014.2346318;10.1109/VISUAL.1998.745290;10.1109/TVCG.2008.148;10.1109/TVCG.2008.163
Domain specific language, portable parallel programming, scientific visualization, tensor fields7192663
5290720;6064996;1532856;6875916;6327233;1250414;809896;4376190;6876040;745290;4658184;4658155
58
SciVis2015
Distribution Driven Extraction and Tracking of Features for Time-varying Data Analysis
10.1109/TVCG.2015.2467436
http://dx.doi.org/10.1109/TVCG.2015.2467436
837846J
Effective analysis of features in time-varying data is essential in numerous scientific applications. Feature extraction and tracking are two important tasks scientists rely upon to get insights about the dynamic nature of the large scale time-varying data. However, often the complexity of the scientific phenomena only allows scientists to vaguely define their feature of interest. Furthermore, such features can have varying motion patterns and dynamic evolution over time. As a result, automatic extraction and tracking of features becomes a non-trivial task. In this work, we investigate these issues and propose a distribution driven approach which allows us to construct novel algorithms for reliable feature extraction and tracking with high confidence in the absence of accurate feature definition. We exploit two key properties of an object, motion and similarity to the target feature, and fuse the information gained from them to generate a robust feature-aware classification field at every time step. Tracking of features is done using such classified fields which enhances the accuracy and robustness of the proposed algorithm. The efficacy of our method is demonstrated by successfully applying it on several scientific data sets containing a wide range of dynamic time-varying features.
Dutta, S.;Han-Wei Shen;;Dutta, S.;Han-Wei Shen
10.1109/TVCG.2007.70599;10.1109/VISUAL.1993.398877;10.1109/VISUAL.2004.107;10.1109/TVCG.2011.246;10.1109/TVCG.2007.70615;10.1109/VISUAL.2003.1250374;10.1109/TVCG.2013.152;10.1109/TVCG.2014.2346423;10.1109/TVCG.2007.70579;10.1109/VISUAL.1996.567807;10.1109/VISUAL.1998.745288;10.1109/TVCG.2008.163;10.1109/TVCG.2008.140
Gaussian mixture model (GMM), Incremental learning, Feature extraction and tracking, Time-varying data analysis7192664
4376176;398877;1372214;6064965;4376165;1250374;6634159;6875975;4376210;567807;745288;4658155;4658174
59
SciVis2015
Effective Visualization of Temporal Ensembles
10.1109/TVCG.2015.2468093
http://dx.doi.org/10.1109/TVCG.2015.2468093
787796J
An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble, but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment based ensemble analysis, which captures important shape transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step based ensemble analysis, which assumes ensemble members are aligned in time and combines similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying matter transition from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions.
Lihua Hao;Healey, C.G.;Bass, S.A.;;;;Lihua Hao;Healey, C.;Bass, S.A.
10.1109/TVCG.2014.2346448;10.1109/VISUAL.2005.1532839;10.1109/VISUAL.2005.1532838;10.1109/TVCG.2014.2346751;10.1109/TVCG.2009.155;10.1109/TVCG.2014.2346455;10.1109/TVCG.2010.181;10.1109/TVCG.2013.143
Ensemble visualization71948526875990;1532839;1532838;6876007;5290748;6875964;5613483;6634129
60
SciVis2015
Effectiveness of Structured Textures on Dynamically Changing Terrain-like Surfaces
10.1109/TVCG.2015.2467962
http://dx.doi.org/10.1109/TVCG.2015.2467962
926934J
Previous perceptual research and human factors studies have identified several effective methods for texturing 3D surfaces to ensure that their curvature is accurately perceived by viewers. However, most of these studies examined the application of these techniques to static surfaces. This paper explores the effectiveness of applying these techniques to dynamically changing surfaces. When these surfaces change shape, common texturing methods, such as grids and contours, induce a range of different motion cues, which can draw attention and provide information about the size, shape, and rate of change. A human factors study was conducted to evaluate the relative effectiveness of these methods when applied to dynamically changing pseudo-terrain surfaces. The results indicate that, while no technique is most effective for all cases, contour lines generally perform best, and that the pseudo-contour lines induced by banded color scales convey the same benefits.
Butkiewicz, T.;Stevens, A.H.
Center for Coastal & Ocean Mapping, Univ. of New Hampshire, Durham, NH, USA|c|;
;Butkiewicz, T.;Stevens, A.H.Structured textures, terrain, deformation, dynamic surfaces7194846
61
SciVis2015
Explicit Frequency Control for High-Quality Texture-Based Flow Visualization
10.1109/SciVis.2015.7429490
http://dx.doi.org/10.1109/SciVis.2015.7429490
4148C
In this work we propose an effective method for frequency-controlled dense flow visualization derived from a generalization of the Line Integral Convolution (LIC) technique. Our approach consists in considering the spectral properties of the dense flow visualization process as an integral operator defined in a local curvilinear coordinate system aligned with the flow. Exploring LIC from this point of view, we suggest a systematic way to design a flow visualization process with particular local spatial frequency properties of the resulting image. Our method is efficient, intuitive, and based on a long-standing model developed as a result of numerous perception studies. The method can be described as an iterative application of line integral convolution, followed by a one-dimensional Gabor filtering orthogonal to the flow. To demonstrate the utility of the technique, we generated novel adaptive multi-frequency flow visualizations that, according to our evaluation, feature a higher level of frequency control and higher quality scores than traditional approaches in texture-based flow visualization.
V. Matvienko;J. Krüger
Saarland University|c|;Matvienko, V.;Kruger, J.
10.1109/VISUAL.2005.1532853;10.1109/TVCG.2007.70595;10.1109/TVCG.2006.161;10.1109/VISUAL.1994.346313;10.1109/VISUAL.1996.567784;10.1109/VISUAL.2001.964505;10.1109/TVCG.2009.126;10.1109/VISUAL.1999.809892;10.1109/VISUAL.2003.1250362;10.1109/VISUAL.2005.1532781
flow visualization, texture-based visualization, LIC, Gabor filter, spatial frequency, image contrast7429490
1532853;4376173;4015421;346313;567784;964505;5290732;809892;1250362;1532781
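Editor's note: the method above applies a one-dimensional Gabor filter orthogonal to the flow after line integral convolution. The sketch below only constructs such a 1D Gabor kernel; it is an editor-added illustration, and the function name, parameterization, and normalization are assumptions rather than the authors' formulation.

```python
# Illustrative sketch (editor-added): a cosine-phase 1D Gabor kernel of the
# kind applied orthogonally to the flow direction after LIC.
import numpy as np

def gabor_kernel_1d(frequency: float, sigma: float, radius: int) -> np.ndarray:
    """frequency: target spatial frequency in cycles per sample; sigma: width
    of the Gaussian envelope; radius: half-width of the kernel in samples."""
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * frequency * x)
    return kernel / np.abs(kernel).sum()   # normalize to bounded response
```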
62
SciVis2015
Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data
10.1109/TVCG.2015.2466838
http://dx.doi.org/10.1109/TVCG.2015.2466838
827836J
We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
Hanqi Guo;Phillips, C.L.;Peterka, T.;Karpeyev, D.;Glatz, A.
Math. & Comput. Sci. Div., Argonne Nat. Lab., Argonne, IL, USA|c|;;;;
;;;;Hanqi Guo;Phillips, C.L.;Peterka, T.;Karpeyev, D.;Glatz, A.
10.1109/VISUAL.1994.346327;10.1109/VISUAL.2005.1532795;10.1109/TVCG.2011.249;10.1109/VISUAL.1999.809896;10.1109/VISUAL.1996.568137;10.1109/VISUAL.1998.745288;10.1109/VISUAL.2004.3;10.1109/TVCG.2012.212;10.1109/VISUAL.2005.1532851;10.1109/TVCG.2007.70545
Superconductor, Vortex extraction, Feature tracking, Unstructured grid7192679
346327;1532795;6064972;809896;568137;745288;1372197;6327274;1532851;4376212
63
SciVis2015
Feature-Based Tensor Field Visualization for Fiber Reinforced Polymers
10.1109/SciVis.2015.7429491
http://dx.doi.org/10.1109/SciVis.2015.7429491
4956C
Virtual testing is an integral part of modern product development in mechanical engineering. Numerical structure simulations allow the computation of local stresses which are given as tensor fields. For homogeneous materials, the tensor information is usually reduced to a scalar field like the von Mises stress. A material-dependent threshold defines the material failure answering the key question of engineers. This leads to a rather simple feature-based visualisation. For composite materials like short fiber reinforced polymers, the situation is much more complex. The material property is determined by the fiber distribution at every position, often described as fiber orientation tensor field. Essentially, the material's ability to cope with stress becomes anisotropic and inhomogeneous. We show how to combine the stress field and the fiber orientation field in such cases, leading to a feature-based visualization of tensor fields for composite materials. The resulting features inform the engineer about potential improvements in the product development.
V. Zobel;M. Stommel;G. Scheuermann
Leipzig University|c|;;Zobel, V.;Stommel, M.;Scheuermann, G.
10.1109/VISUAL.1994.346326;10.1109/TVCG.2009.184;10.1109/VISUAL.1995.485141;10.1109/TVCG.2010.199;10.1109/VISUAL.2004.105
tensor visualization, feature-based visualisation, composite materials, structural mechanics7429491346326;5290754;485141;5613502;1372212
64
SciVis2015
Gaze Stripes: Image-Based Visualization of Eye Tracking Data
10.1109/TVCG.2015.2468091
http://dx.doi.org/10.1109/TVCG.2015.2468091
10051014J
We present a new visualization approach for displaying eye tracking data from multiple participants. We aim to show the spatio-temporal data of the gaze points in the context of the underlying image or video stimulus without occlusion. Our technique, denoted as gaze stripes, does not require the explicit definition of areas of interest but directly uses the image data around the gaze points, similar to thumbnails for images. A gaze stripe consists of a sequence of such gaze point images, oriented along a horizontal timeline. By displaying multiple aligned gaze stripes, it is possible to analyze and compare the viewing behavior of the participants over time. Since the analysis is carried out directly on the image data, expensive post-processing or manual annotation are not required. Therefore, not only patterns and outliers in the participants' scanpaths can be detected, but the context of the stimulus is available as well. Furthermore, our approach is especially well suited for dynamic stimuli due to the non-aggregated temporal mapping. Complementary views, i.e., markers, notes, screenshots, histograms, and results from automatic clustering, can be added to the visualization to display analysis results. We illustrate the usefulness of our technique on static and dynamic stimuli. Furthermore, we discuss the limitations and scalability of our approach in comparison to established visualization techniques.
Kurzhals, K.;Hlawatsch, M.;Heimerl, F.;Burch, M.;Ertl, T.;Weiskopf, D.
;;;;;;;;;;Kurzhals, K.;Hlawatsch, M.;Heimerl, F.;Burch, M.;Ertl, T.;Weiskopf, D.
10.1109/TVCG.2011.232;10.1109/TVCG.2012.276;10.1109/INFVIS.2002.1173156;10.1109/TVCG.2013.194;10.1109/TVCG.2008.125
Eye tracking, time-dependent data, spatio-temporal visualization71948516065006;6327295;1173156;6634139;4658146
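Editor's note: the core construction in the abstract above, cropping the stimulus around each gaze point and laying the crops along a horizontal timeline, is simple enough to sketch. The code below is an editor-added illustration only; the frame array layout, the gaze-sample tuples, and the patch size are assumptions, not the authors' data structures.

```python
# Illustrative sketch (editor-added): assemble one gaze stripe by cropping a
# fixed-size patch around each gaze point and concatenating the patches
# horizontally along the timeline.
import numpy as np

def gaze_stripe(frames: np.ndarray, gaze: list[tuple[int, int, int]], size: int = 32) -> np.ndarray:
    """frames: (T, H, W, 3) stimulus video, assumed larger than `size` in both
    dimensions; gaze: chronologically ordered (frame_index, x, y) samples.
    Returns an image strip of shape (2*(size//2), 2*(size//2)*len(gaze), 3)."""
    half = size // 2
    patches = []
    for t, x, y in gaze:
        frame = frames[t]
        h, w = frame.shape[:2]
        x = int(np.clip(x, half, w - half))   # keep the crop inside the frame
        y = int(np.clip(y, half, h - half))
        patches.append(frame[y - half:y + half, x - half:x + half])
    return np.concatenate(patches, axis=1)
```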
65
SciVis2015
Glyph-Based Comparative Visualization for Diffusion Tensor Fields
10.1109/TVCG.2015.2467435
http://dx.doi.org/10.1109/TVCG.2015.2467435
797806J
Diffusion Tensor Imaging (DTI) is a magnetic resonance imaging modality that enables the in-vivo reconstruction and visualization of fibrous structures. To inspect the local and individual diffusion tensors, glyph-based visualizations are commonly used since they are able to effectively convey full aspects of the diffusion tensor. For several applications it is necessary to compare tensor fields, e.g., to study the effects of acquisition parameters, or to investigate the influence of pathologies on white matter structures. This comparison is commonly done by extracting scalar information out of the tensor fields and then comparing these scalar fields, which leads to a loss of information. If the glyph representation is kept, simple juxtaposition or superposition can be used. However, neither facilitates the identification and interpretation of the differences between the tensor fields. Inspired by the checkerboard style visualization and the superquadric tensor glyph, we design a new glyph to locally visualize differences between two diffusion tensors by combining juxtaposition and explicit encoding. Because tensor scale, anisotropy type, and orientation are related to anatomical information relevant for DTI applications, we focus on visualizing tensor differences in these three aspects. As demonstrated in a user study, our new glyph design allows users to efficiently and effectively identify the tensor differences. We also apply our new glyphs to investigate the differences between DTI datasets of the human brain in two different contexts using different b-values, and to compare datasets from a healthy and HIV-infected subject.
Changgong Zhang;Schultz, T.;Lawonn, K.;Eisemann, E.;Vilanova, A.
;;;;;;;;Changgong Zhang;Schultz, T.;Lawonn, K.;Eisemann, E.;Vilanova, A.
10.1109/TVCG.2015.2467031;10.1109/TVCG.2006.134;10.1109/TVCG.2010.134;10.1109/VISUAL.1998.745294;10.1109/VAST.2014.7042491;10.1109/TVCG.2010.199
Glyph Design, Comparative Visualization, Diffusion Tensor Field71927227192624;4015499;5613480;745294;7042491;5613502
66
SciVis2015
High performance flow field visualization with high-order access dependencies
10.1109/SciVis.2015.7429515
http://dx.doi.org/10.1109/SciVis.2015.7429515
165166M
We present a novel model based on high-order access dependencies for high-performance pathline computation in flow fields. The high-order access dependencies are defined as transition probabilities from one data block to other blocks based on a few historical data accesses. Compared with existing methods that employ first-order access dependencies, our approach takes advantage of high-order access dependencies, which offer higher accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing densely-seeded pathlines. The efficiency of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method can achieve higher data locality than the method based on first-order access dependencies, thereby reducing I/O requests and improving the efficiency of pathline computation in various applications.
J. Zhang;H. Guo;X. Yuan
Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University|c|;;
Zhang, J.;Guo, H.;Xiaoru Yuan7429515
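The abstract above frames high-order access dependencies as transition probabilities conditioned on the last few block accesses. A minimal sketch of such a k-th order transition table, built from traced block-access sequences and queried for prefetch candidates, might look as follows; the data layout and function names are assumptions, not the paper's code.

```python
from collections import Counter, defaultdict

def build_access_dependencies(traces, order=2):
    """Estimate k-th order transition probabilities P(next block | last k blocks)
    from traced block-access sequences (e.g., from densely seeded pathlines)."""
    counts = defaultdict(Counter)
    for trace in traces:                      # each trace: list of block ids
        for i in range(len(trace) - order):
            history = tuple(trace[i:i + order])
            counts[history][trace[i + order]] += 1
    return {h: {b: n / sum(c.values()) for b, n in c.items()}
            for h, c in counts.items()}

def prefetch_candidates(model, history, top=4):
    """Return the most likely next blocks for the given access history."""
    probs = model.get(tuple(history), {})
    return sorted(probs, key=probs.get, reverse=True)[:top]

# Toy example: two short traces over block ids.
model = build_access_dependencies([[0, 1, 2, 3], [0, 1, 2, 5]], order=2)
print(prefetch_candidates(model, [1, 2]))   # blocks 3 and 5, each with p = 0.5
```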
67
SciVis2015
In Situ Eddy Analysis in a High-Resolution Ocean Climate Model
10.1109/TVCG.2015.2467411
http://dx.doi.org/10.1109/TVCG.2015.2467411
857866J
An eddy is a feature associated with a rotating body of fluid, surrounded by a ring of shearing fluid. In the ocean, eddies are 10 to 150 km in diameter, are spawned by boundary currents and baroclinic instabilities, may live for hundreds of days, and travel for hundreds of kilometers. Eddies are important in climate studies because they transport heat, salt, and nutrients through the world's oceans and are vessels of biological productivity. The study of eddies in global ocean-climate models requires large-scale, high-resolution simulations. This poses a problem for feasible (timely) eddy analysis, as ocean simulations generate massive amounts of data, causing a bottleneck for traditional analysis workflows. To enable eddy studies, we have developed an in situ workflow for the quantitative and qualitative analysis of MPAS-Ocean, a high-resolution ocean climate model, in collaboration with the ocean model research and development process. Planned eddy analysis at high spatial and temporal resolutions will not be possible with a postprocessing workflow due to various constraints, such as storage size and I/O time, but the in situ workflow enables it and scales well to ten-thousand processing elements.
Woodring, J.;Petersen, M.;Schmeisser, A.;Patchett, J.;Ahrens, J.;Hagen, H.
Los Alamos Nat. Lab., Los Alamos, NM, USA|c|;;;;;;;;;;Woodring, J.;Petersen, M.;Schmeisser, A.;Patchett, J.;Ahrens, J.;Hagen, H.
10.1109/TVCG.2008.143;10.1109/VISUAL.2005.1532830;10.1109/TVCG.2010.215;10.1109/TVCG.2011.162
In situ analysis, online analysis, mesoscale eddies, ocean modeling, climate modeling, simulation, feature extraction,feature analysis, high performance computing, supercomputing, software engineering, collaborative development, revision control71927234658165;1532830;5613497;6064973
68
SciVis2015
Interactive Visualization for Singular Fibers of Functions f : R^3 -> R^2
10.1109/TVCG.2015.2467433
http://dx.doi.org/10.1109/TVCG.2015.2467433
945954J
Scalar topology in the form of Morse theory has provided computational tools that analyze and visualize data from scientific and engineering tasks. Contracting isocontours to single points encapsulates variations in isocontour connectivity in the Reeb graph. For multivariate data, isocontours generalize to fibers, that is, inverse images of points in the range, and this area is therefore known as fiber topology. However, fiber topology is less fully developed than Morse theory, and current efforts rely on manual visualizations. This paper presents how to accelerate and semi-automate this task through an interface for visualizing fiber singularities of multivariate functions R^3 -> R^2. This interface exploits existing conventions of fiber topology, but also introduces a 3D view based on the extension of Reeb graphs to Reeb spaces. Using the Joint Contour Net, a quantized approximation of the Reeb space, this accelerates topological visualization and permits online perturbation to reduce or remove degeneracies in functions under study. Validation of the interface is performed by assessing whether the interface supports the mathematical workflow both of experts and of less experienced mathematicians.
Sakurai, D.;Saeki, O.;Carr, H.;Hsiang-Yun Wu;Yamamoto, T.;Duke, D.;Takahashi, S.
Univ. of Tokyo & Japan Atomic Energy Agency, Kashiwa, Japan|c|;;;;;;
;;;;;;Sakurai, D.;Saeki, O.;Carr, H.;Hsiang-Yun Wu;Yamamoto, T.;Duke, D.;Takahashi, S.
10.1109/TVCG.2008.119;10.1109/VISUAL.1997.663875;10.1109/TVCG.2012.287;10.1109/TVCG.2010.213;10.1109/TVCG.2014.2346447;10.1109/TVCG.2010.146;10.1109/VISUAL.2002.1183774;10.1109/TVCG.2008.143;10.1109/TVCG.2009.119;10.1109/TVCG.2007.70601
Singular fibers, fiber topology, mathematical visualization, design study7192700
4658159;663875;6327207;5613467;6875963;5613469;1183774;4658165;5290728;4376169
69
SciVis2015
Interstitial and Interlayer Ion Diffusion Geometry Extraction in Graphitic Nanosphere Battery Materials
10.1109/TVCG.2015.2467432
http://dx.doi.org/10.1109/TVCG.2015.2467432
916925J
Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ions or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon-based battery materials.
Gyulassy, A.;Knoll, A.;Chun Lau;Bei Wang
;;;;;;Gyulassy, A.;Knoll, A.;Chun Lau;Bei Wang
10.1109/VISUAL.2005.1532795;10.1109/TVCG.2011.244;10.1109/TVCG.2014.2346403;10.1109/VISUAL.2005.1532839;10.1109/TVCG.2011.259
materials science, morse-smale, topology, Delaunay, computational geometry71926741532795;6064947;6875922;1532839;6064966
70
SciVis2015
Intuitive Exploration of Volumetric Data Using Dynamic Galleries
10.1109/TVCG.2015.2467294
http://dx.doi.org/10.1109/TVCG.2015.2467294
896905J
In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.
Jonsson, D.;Falk, M.;Ynnerman, A.
Linkoping Univ., Linkoping, Sweden|c|;;;;Jonsson, D.;Falk, M.;Ynnerman, A.
10.1109/TVCG.2008.162;10.1109/TVCG.2011.261;10.1109/VISUAL.1996.568113;10.1109/TVCG.2012.231;10.1109/TVCG.2010.195;10.1109/TVCG.2011.224;10.1109/TVCG.2006.148;10.1109/TVCG.2011.218
Transfer function, scalar fields, volume rendering, touch interaction, visualization, user interfaces71926824658153;6064975;568112;6327240;5613470;6064940;4015460;6064959
71
SciVis2015
Inviwo - An extensible, multi-purpose visualization framework
10.1109/SciVis.2015.7429514
http://dx.doi.org/10.1109/SciVis.2015.7429514
163164M
To enable visualization research impacting other scientific domains, the availability of easy-to-use visualization frameworks is essential. Nevertheless, an easy-to-use system also has to be adapted to the capabilities of modern hardware architectures, as only this allows for realizing interactive visualizations. With this trade-off in mind, we have designed and realized the cross-platform Inviwo (Interactive Visualization Workshop) visualization framework, which supports both interactive visualization research and efficient visualization application development and deployment. In this poster we give an overview of the architecture behind Inviwo, and show how its design enables us and other researchers to realize their visualization ideas efficiently. Inviwo consists of a modern and lightweight, graphics independent core, which is extended by optional modules that encapsulate visualization algorithms, well-known utility libraries and commonly used parallel-processing APIs (such as OpenGL and OpenCL). The core enables a simple structure for creating bridges between the different modules for data transfer across architectures and devices with an easy-to-use screen graph and minimalistic programming. By building the base structures in a modern way and providing intuitive methods for extending the functionality and creating modules based on other modules, we hope that Inviwo can help the visualization community to perform research through a rapid-prototyping design and GUI, while at the same time allowing users to take advantage of the results implemented in the system in any way they desire later on. Inviwo is publicly available at www.inviwo.org, and can be used freely by anyone under a permissive free software license (Simplified BSD).
E. Sundén;P. Steneteg;S. Kottravel;D. Jönsson;R. Englund;M. Falk;T. Ropinski
Linkoping University|c|;;;;;;Sunden, E.;Steneteg, P.;Kottravel, S.;Jonsson, D.;Englund, R.;Falk, M.;Ropinski, T.7429514
72
SciVis2015
Isosurface Visualization of Data with Nonparametric Models for Uncertainty
10.1109/TVCG.2015.2467958
http://dx.doi.org/10.1109/TVCG.2015.2467958
777786J
The problem of isosurface extraction in uncertain data is an important research problem and may be approached in two ways. One can extract statistics (e.g., mean) from uncertain data points and visualize the extracted field. Alternatively, data uncertainty, characterized by probability distributions, can be propagated through the isosurface extraction process. We analyze the impact of data uncertainty on topology and geometry extraction algorithms. A novel, edge-crossing probability based approach is proposed to predict underlying isosurface topology for uncertain data. We derive a probabilistic version of the midpoint decider that resolves ambiguities that arise in identifying topological configurations. Moreover, the probability density function characterizing positional uncertainty in isosurfaces is derived analytically for a broad class of nonparametric distributions. This analytic characterization can be used for efficient closed-form computation of the expected value and variation in geometry. Our experiments show the computational advantages of our analytic approach over Monte-Carlo sampling for characterizing positional uncertainty. We also show the advantage of modeling underlying error densities in a nonparametric statistical framework as opposed to a parametric statistical framework through our experiments on ensemble datasets and uncertain scalar fields.
Athawale, T.;Sakhaee, E.;Entezari, A.
Dept. of Comput. & Inf. Sci. & Eng., Univ. of Florida, Gainesville, FL, USA|c|;;
;;Athawale, T.;Sakhaee, E.;Entezari, A.
10.1109/TVCG.2013.208;10.1109/VISUAL.2002.1183769;10.1109/TVCG.2013.152;10.1109/TVCG.2007.70518;10.1109/TVCG.2012.249;10.1109/TVCG.2013.143
Uncertainty quantification, linear interpolation, isosurface extraction, marching cubes, nonparametric statistics71926296634171;1183769;6634159;4376198;6327235;6634129
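One ingredient mentioned in the abstract above, the edge-crossing probability, can be checked numerically even without the paper's closed-form derivation: given samples of the two uncertain endpoint values of a cell edge, estimate the probability that the isovalue lies between them. The Monte-Carlo sketch below is the kind of sampling baseline the authors compare their analytic approach against; variable names are illustrative.

```python
import numpy as np

def edge_crossing_probability(samples_a, samples_b, isovalue):
    """Monte-Carlo estimate of the probability that an isosurface crosses the
    edge between two uncertain scalar values, given paired samples of both
    endpoints (e.g., drawn from an ensemble)."""
    a = np.asarray(samples_a, dtype=float)
    b = np.asarray(samples_b, dtype=float)
    crosses = (np.minimum(a, b) <= isovalue) & (isovalue <= np.maximum(a, b))
    return crosses.mean()

# Toy example: endpoint A ~ N(0, 1), endpoint B ~ N(1, 1), isovalue 0.5.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)
b = rng.normal(1.0, 1.0, 10000)
print(edge_crossing_probability(a, b, 0.5))
```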
73
SciVis2015
JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure
10.1109/TVCG.2015.2467331
http://dx.doi.org/10.1109/TVCG.2015.2467331
10251034J
Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
Labschütz, M.;Bruckner, S.;Gröller, M.E.;Hadwiger, M.;Rautek, P.
;;;;;;;;Labschütz, M.;Bruckner, S.;Groller, E.;Hadwiger, M.;Rautek, P.10.1109/TVCG.2012.240Data Transformation and Representation, GPUs and Multi-core Architectures, Volume Rendering71926866327233
74
SciVis2015
Mining Graphs for Understanding Time-Varying Volumetric Data
10.1109/TVCG.2015.2468031
http://dx.doi.org/10.1109/TVCG.2015.2468031
965974J
A notable recent trend in time-varying volumetric data analysis and visualization is to extract data relationships and represent them in a low-dimensional abstract graph view for visual understanding and making connections to the underlying data. Nevertheless, the ever-growing size and complexity of data demands novel techniques that go beyond standard brushing and linking to allow significant reduction of cognition overhead and interaction cost. In this paper, we present a mining approach that automatically extracts meaningful features from a graph-based representation for exploring time-varying volumetric data. This is achieved through the utilization of a series of graph analysis techniques including graph simplification, community detection, and visual recommendation. We investigate the most important transition relationships for time-varying data and evaluate our solution with several time-varying data sets of different sizes and characteristics. For gaining insights from the data, we show that our solution is more efficient and effective than simply asking users to extract relationships via standard interaction techniques, especially when the data set is large and the relationships are complex. We also collect expert feedback to confirm the usefulness of our approach.
Yi Gu;Chaoli Wang;Peterka, T.;Jacob, R.;Seung Hyun Kim
Dept. Comput. Sci. & Eng., Univ. of Notre Dame, Notre Dame, IN, USA|c|;;;;
;;;;Yi Gu;Chaoli Wang;Peterka, T.;Jacob, R.;Seung Hyun Kim
10.1109/TVCG.2009.122;10.1109/TVCG.2013.151;10.1109/TVCG.2011.246;10.1109/TVCG.2008.116;10.1109/VISUAL.1999.809871;10.1109/TVCG.2006.165;10.1109/TVCG.2009.165;10.1109/TVCG.2006.159
Time-varying data visualization, graph simplification, community detection, visual recommendation71948535290706;6634098;6064965;4658163;809871;4015447;5290726;4015461
75
SciVis2015
Multi-field Pattern Matching based on Sparse Feature Sampling
10.1109/TVCG.2015.2467292
http://dx.doi.org/10.1109/TVCG.2015.2467292
807816J
We present an approach to pattern matching in 3D multi-field scalar data. Existing pattern matching algorithms work on single scalar or vector fields only, yet many numerical simulations output multi-field data where only a joint analysis of multiple fields describes the underlying phenomenon fully. Our method takes this into account by bundling information from multiple fields into the description of a pattern. First, we extract a sparse set of features for each 3D scalar field using the 3D SIFT algorithm (Scale-Invariant Feature Transform). This allows for a memory-saving description of prominent features in the data with invariance to translation, rotation, and scaling. Second, the user defines a pattern as a set of SIFT features in multiple fields by e.g. brushing a region of interest. Third, we locate and rank matching patterns in the entire data set. Experiments show that our algorithm is efficient in terms of required memory and computational efforts.
Zhongjie Wang;Seidel, H.-P.;Weinkauf, T.
MPI for Inf., Saarbrucken, Germany|c|;;;;Zhongjie Wang;Seidel, H.-P.;Weinkauf, T.
10.1109/VISUAL.2003.1250372;10.1109/TVCG.2009.141;10.1109/TVCG.2006.165;10.1109/TVCG.2007.70579;10.1109/TVCG.2014.2346332;10.1109/TVCG.2011.236
Pattern matching, multi-field visualization71927211250372;5290760;4015447;4376210;6875976;6064967
76
SciVis2015
Multiresolution visualization of digital earth data via hexagonal box-spline wavelets
10.1109/SciVis.2015.7429508
http://dx.doi.org/10.1109/SciVis.2015.7429508
151152M
Multiresolution analysis is an important tool for exploring large-scale data sets. Such analysis provides facilities to visualize data at different levels of detail while providing the advantages of efficient data compression and transmission. In this work, an approach is presented to apply multiresolution analysis to digital Earth data where each resolution describes data at a specific level of detail. Geospatial data at a fine level is taken as the input and a hierarchy of approximation and detail coefficients is built by applying a hexagonal discrete wavelet transform. Multiresolution filters are designed for hexagonal cells based on the three directional linear box spline which is natively supported by modern GPUs.
M. I. Jubair;U. Alim;N. Roeber;J. Clyne;A. Mahdavi-Amiri;F. Samavati
University of Calgary|c|;;;;;Jubair, M.I.;Alim, U.;Roeber, N.;Clyne, J.;Mahdavi-Amiri, A.;Samavati, F.7429508
77
SciVis2015
NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects
10.1109/TVCG.2015.2467441
http://dx.doi.org/10.1109/TVCG.2015.2467441
738746J
In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.
Al-Awami, A.K.;Beyer, J.;Haehn, D.;Kasthuri, N.;Lichtman, J.W.;Pfister, H.;Hadwiger, M.
;;;;;;;;;;;;Al-Awami, A.;Beyer, J.;Haehn, D.;Kasthuri, N.;Lichtman, J.;Pfister, H.;Hadwiger, M.
10.1109/TVCG.2014.2346312;10.1109/VISUAL.2005.1532788;10.1109/TVCG.2013.142;10.1109/TVCG.2009.121;10.1109/TVCG.2012.240;10.1109/TVCG.2014.2346371;10.1109/TVCG.2013.174;10.1109/TVCG.2014.2346249;10.1109/TVCG.2007.70584
Neuroscience, Segmentation, Proofreading, Data and Provenance Tracking7192653
6875935;1532788;6634132;5290766;6327233;6875931;6634144;6876026;4376187
78
SciVis2015
Occlusion-free Blood Flow Animation with Wall Thickness Visualization
10.1109/TVCG.2015.2467961
http://dx.doi.org/10.1109/TVCG.2015.2467961
728737J
We present the first visualization tool that combines pathlines from blood flow and wall thickness information. Our method uses illustrative techniques to provide occlusion-free visualization of the flow. We thus offer medical researchers an effective visual analysis tool for aneurysm treatment risk assessment. Such aneurysms bear a high risk of rupture and significant treatment-related risks. Therefore, to get a fully informed decision it is essential to both investigate the vessel morphology and the hemodynamic data. Ongoing research emphasizes the importance of analyzing the wall thickness in risk assessment. Our combination of blood flow visualization and wall thickness representation is a significant improvement for the exploration and analysis of aneurysms. As all presented information is spatially intertwined, occlusion problems occur. We solve these occlusion problems by dynamic cutaway surfaces. We combine this approach with a glyph-based blood flow representation and a visual mapping of wall thickness onto the vessel surface. We developed a GPU-based implementation of our visualizations which facilitates wall thickness analysis through real-time rendering and flexible interactive data exploration mechanisms. We designed our techniques in collaboration with domain experts, and we provide details about the evaluation of the technique and tool.
Lawonn, K.;Glaßer, S.;Vilanova, A.;Preim, B.;Isenberg, T.
Univ. of Magdeburg, Magdeburg, Germany|c|;;;;;;;;Lawonn, K.;Glaßer, S.;Vilanova, A.;Preim, B.;Isenberg, T.
10.1109/TVCG.2009.138;10.1109/TVCG.2011.243;10.1109/TVCG.2014.2346406;10.1109/TVCG.2010.153;10.1109/TVCG.2011.215;10.1109/VISUAL.2004.48
Medical visualization, aneurysms, blood flow, wall thickness, illustrative visualization71948395290742;6064983;6877722;5613474;6064980;1372190
79
SciVis2015
OpenSpace: Public dissemination of space mission profiles
10.1109/SciVis.2015.7429503
http://dx.doi.org/10.1109/SciVis.2015.7429503
141142M
This work presents a visualization system and its application to space missions. The system enables the public dissemination of the scientific findings of spacecraft missions and fosters a greater understanding thereof. Instruments' fields of view and their measurements are embedded in an accurate three-dimensional rendering of the solar system to provide context to past measurements or the planning of future events. We tested our system with NASA's New Horizons at the Pluto Pallooza event in New York and will present it to the greater public during the upcoming July 14th Pluto flyby.
A. Bock;M. Marcinkowski;J. Kilby;C. Emmart;A. Ynnerman
Linköping University|c|;;;;Bock, A.;Marcinkowski, M.;Kilby, J.;Emmart, C.;Ynnerman, A.7429503
80
SciVis2015
PathlinesExplorer - Image-based exploration of large-scale pathline fields
10.1109/SciVis.2015.7429512
http://dx.doi.org/10.1109/SciVis.2015.7429512
159160M
PathlinesExplorer is a novel image-based tool, which has been designed to visualize large scale pathline fields on a single computer [7]. PathlinesExplorer integrates explorable images (EI) technique [4] with order-independent transparency (OIT) method [2]. What makes this method different is that it allows users to handle large data on a single workstation. Although it is a view-dependent method, PathlinesExplorer combines both exploration and modification of visual aspects without re-accessing the original huge data. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method, it is possible to filter, color-code, and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
O. H. Nagoor;M. Hadwiger;M. Srinivasan
KAUST|c|;;Nagoor, O.H.;Hadwiger, M.;Srinivasan, M.7429512
81
SciVis2015Planar Visualization of Treelike Structures10.1109/TVCG.2015.2467413
http://dx.doi.org/10.1109/TVCG.2015.2467413
906915J
We present a novel method to create planar visualizations of treelike structures (e.g., blood vessels and airway trees) where the shape of the object is well preserved, allowing for easy recognition by users familiar with the structures. Based on the extracted skeleton within the treelike object, a radial planar embedding is first obtained such that there are no self-intersections of the skeleton which would have resulted in occlusions in the final view. An optimization procedure which adjusts the angular positions of the skeleton nodes is then used to reconstruct the shape as closely as possible to the original, according to a specified view plane, which thus preserves the global geometric context of the object. Using this shape recovered embedded skeleton, the object surface is then flattened to the plane without occlusions using harmonic mapping. The boundary of the mesh is adjusted during the flattening step to account for regions where the mesh is stretched over concavities. This parameterized surface can then be used either as a map for guidance during endoluminal navigation or directly for interrogation and decision making. Depth cues are provided with a grayscale border to aid in shape understanding. Examples are presented using bronchial trees, cranial and lower limb blood vessels, and upper aorta datasets, and the results are evaluated quantitatively and with a user study.
Marino, J.;Kaufman, A.;;Marino, J.;Kaufman, A.
10.1109/TVCG.2011.235;10.1109/VISUAL.2001.964540;10.1109/TVCG.2011.192;10.1109/TVCG.2014.2346406;10.1109/VISUAL.2001.964538;10.1109/VISUAL.2004.75;10.1109/VISUAL.2002.1183754;10.1109/VISUAL.2003.1250353;10.1109/TVCG.2011.182;10.1109/TVCG.2006.172
Transfer function, scalar fields, volume rendering, touch interaction, visualization, user interfaces7192698
6064970;964540;6065015;6877722;964538;1372206;1183754;1250353;6064963;4015442
82
SciVis2015
Real-time interactive time correction on the GPU
10.1109/SciVis.2015.7429505
http://dx.doi.org/10.1109/SciVis.2015.7429505
145146M
The study of physical phenomena and their dynamic evolution is supported by the analysis and visualization of time-enabled data. In many applications, available data are sparsely distributed in the space-time domain, which leads to incomprehensible visualizations. We present an interactive approach for the dynamic tracking and visualization of measured data particles through advection in a simulated flow. We introduce a fully GPU-based technique for efficient spatio-temporal interpolation, using a kd-tree forest for acceleration. As the user interacts with the system using a time slider, particle positions are reconstructed for the time selected by the user. Our results show that the proposed technique achieves highly accurate parallel tracking for thousands of particles. The rendering performance is mainly affected by the size of the query set.
M. Elshehaly;D. Gračanin;M. Gad;J. Wang;H. G. Elmongui
Virginia Tech|c|;;;;Elshehaly, M.;Gracanin, D.;Gad, M.;Wang, J.;Elmongui, H.G.7429505
83
SciVis2015
Real-Time Molecular Visualization Supporting Diffuse Interreflections and Ambient Occlusion
10.1109/TVCG.2015.2467293
http://dx.doi.org/10.1109/TVCG.2015.2467293
718727J
Today molecular simulations produce complex data sets capturing the interactions of molecules in detail. Due to the complexity of this time-varying data, advanced visualization techniques are required to support its visual analysis. Current molecular visualization techniques utilize ambient occlusion as a global illumination approximation to improve spatial comprehension. Besides these shadow-like effects, interreflections are also known to improve the spatial comprehension of complex geometric structures. Unfortunately, the inherent computational complexity of interreflections would forbid interactive exploration, which is mandatory in many scenarios dealing with static and time-varying data. In this paper, we introduce a novel analytic approach for capturing interreflections of molecular structures in real-time. By exploiting the knowledge of the underlying space filling representations, we are able to reduce the required parameters and can thus apply symbolic regression to obtain an analytic expression for interreflections. We show how to obtain the data required for the symbolic regression analysis, and how to exploit our analytic solution to enhance interactive molecular visualizations.
Skanberg, R.;Vazquez, P.-P.;Guallar, V.;Ropinski, T.
;;;;;;Skanberg, R.;Vazquez, P.-P.;Guallar, V.;Ropinski, T.
10.1109/TVCG.2007.70578;10.1109/TVCG.2009.168;10.1109/TVCG.2007.70517;10.1109/TVCG.2012.282;10.1109/TVCG.2009.157;10.1109/TVCG.2014.2346404;10.1109/TVCG.2006.115
Molecular visualization, diffuse interreflections, ambient occlusion71927024376193;5290730;4376194;6327210;5290753;6876051;4015487
84
SciVis2015
Real-time Uncertainty Visualization for B-Mode Ultrasound
10.1109/SciVis.2015.7429489
http://dx.doi.org/10.1109/SciVis.2015.7429489
3340C
B-mode ultrasound is a very well established imaging modality and is widely used in many of today's clinical routines. However, acquiring good images and interpreting them correctly is a challenging task due to the complex ultrasound image formation process depending on a large number of parameters. To facilitate ultrasound acquisitions, we introduce a novel framework for real-time uncertainty visualization in B-mode images. We compute real-time per-pixel ultrasound Confidence Maps, which we fuse with the original ultrasound image in order to provide the user with interactive feedback on the quality and credibility of the image. In addition to a standard color overlay mode, primarily intended for educational purposes, we propose two perceptual visualization schemes to be used in clinical practice. Our mapping of uncertainty to chroma uses the perceptually uniform L*a*b* color space to ensure that the perceived brightness of B-mode ultrasound remains the same. The alternative mapping of uncertainty to fuzziness keeps the B-mode image in its original grayscale domain and locally blurs or sharpens the image based on the uncertainty distribution. An elaborate evaluation of our system and user studies on both medical students and expert sonographers demonstrate the usefulness of our proposed technique. In particular for ultrasound novices, such as medical students, our technique yields powerful visual cues to evaluate the image quality and thereby learn the ultrasound image formation process. Furthermore, seeing the distribution of uncertainty adjust to the transducer positioning in real time also provides expert clinicians with strong visual feedback on their actions. This helps them to optimize the acoustic window and can improve the general clinical value of ultrasound.
C. S. Z. Berge;D. Declara;C. Hennersperger;M. Baust;N. Navab
;;;;Berge, C.S.Z.;Declara, D.;Hennersperger, C.;Baust, M.;Navab, N.
10.1109/VISUAL.2001.964550;10.1109/TVCG.2006.134;10.1109/TVCG.2007.70518;10.1109/TVCG.2012.279;10.1109/TVCG.2009.114
Ultrasound, Uncertainty Visualization, Confidence Maps, Real-time7429489964550;4015499;4376198;6327255;5290731
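The chroma mapping described above (keep perceived brightness, encode uncertainty in the chroma of CIE L*a*b*) can be approximated in a few lines. The sketch below keeps L* proportional to the B-mode intensity and pushes a*/b* along a fixed hue direction scaled by the per-pixel uncertainty; it is a rough reading of the idea, not the authors' calibrated mapping, and the chroma and hue constants are assumptions.

```python
import numpy as np
from skimage.color import lab2rgb

def uncertainty_to_chroma(bmode, uncertainty, max_chroma=60.0, hue_deg=40.0):
    """Fuse a grayscale B-mode image (values in [0, 1]) with a per-pixel
    uncertainty map (values in [0, 1]) by modulating chroma in CIE L*a*b*."""
    hue = np.deg2rad(hue_deg)
    L = 100.0 * bmode                       # lightness carries the B-mode signal
    chroma = max_chroma * uncertainty       # chroma magnitude carries uncertainty
    a = chroma * np.cos(hue)
    b = chroma * np.sin(hue)
    lab = np.stack([L, a, b], axis=-1)
    return lab2rgb(lab)                     # back to displayable RGB
```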
85
SciVis2015
Reconstruction and Visualization of Coordinated 3D Cell Migration Based on Optical Flow
10.1109/TVCG.2015.2467291
http://dx.doi.org/10.1109/TVCG.2015.2467291
9951004J
Animal development is marked by the repeated reorganization of cells and cell populations, which ultimately determine form and shape of the growing organism. One of the central questions in developmental biology is to understand precisely how cells reorganize, as well as how and to what extent this reorganization is coordinated. While modern microscopes can record video data for every cell during animal development in 3D+t, analyzing these videos remains a major challenge: reconstruction of comprehensive cell tracks turned out to be very demanding especially with decreasing data quality and increasing cell densities. In this paper, we present an analysis pipeline for coordinated cellular motions in developing embryos based on the optical flow of a series of 3D images. We use numerical integration to reconstruct cellular long-term motions in the optical flow of the video, we take care of data validation, and we derive a LIC-based, dense flow visualization for the resulting pathlines. This approach allows us to handle low video quality such as noisy data or poorly separated cells, and it allows the biologists to get a comprehensive understanding of their data by capturing dynamic growth processes in stills. We validate our methods using three videos of growing fruit fly embryos.
Kappe, C.P.;Schutz, L.;Gunther, S.;Hufnagel, L.;Lemke, S.;Leitte, H.
IWR, Heidelberg Univ., Heidelberg, Germany|c|;;;;;;;;;;Kappe, C.P.;Schutz, L.;Gunther, S.;Hufnagel, L.;Lemke, S.;Leitte, H.
10.1109/TVCG.2010.169;10.1109/VISUAL.1996.567784;10.1109/TVCG.2009.190;10.1109/VISUAL.2003.1250364;10.1109/VISUAL.1997.663898;10.1109/VISUAL.2003.1250363
Cell migration, vector field, 3D, timedependent,LIC, tracking, validation72102135613499;567784;5290738;1250364;663898;1250363
86
SciVis2015Rotation Invariant Vortices for Flow Visualization10.1109/TVCG.2015.2467200
http://dx.doi.org/10.1109/TVCG.2015.2467200
817826J
We propose a new class of vortex definitions for flows that are induced by rotating mechanical parts, such as stirring devices, helicopters, hydrocyclones, centrifugal pumps, or ventilators. Instead of a Galilean invariance, we enforce a rotation invariance, i.e., the invariance of a vortex under a uniform-speed rotation of the underlying coordinate system around a fixed axis. We provide a general approach to transform a Galilean invariant vortex concept to a rotation invariant one by simply adding a closed form matrix to the Jacobian. In particular, we present rotation invariant versions of the well-known Sujudi-Haimes, Lambda-2, and Q vortex criteria. We apply them to a number of artificial and real rotating flows, showing that for these cases rotation invariant vortices give better results than their Galilean invariant counterparts.
Gunther, T.;Schulze, M.;Theisel, H.;;;;Gunther, T.;Schulze, M.;Theisel, H.
10.1109/TVCG.2014.2346415;10.1109/VISUAL.2002.1183789;10.1109/TVCG.2014.2346412;10.1109/TVCG.2011.249;10.1109/TVCG.2013.189;10.1109/VISUAL.1999.809917;10.1109/VISUAL.1999.809896;10.1109/VISUAL.1998.745296;10.1109/VISUAL.2005.1532851;10.1109/TVCG.2007.70545;10.1109/TVCG.2010.198
Vortex cores, rotation invariance, Galilean invariance, scientific visualization, flow visualization, line fields7192689
6875993;1183789;6875965;6064972;6634153;809917;809896;745296;1532851;4376212;5613462
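For context on the criteria modified in the entry above, the classical (Galilean invariant) Q criterion is computed from the velocity Jacobian as Q = 1/2 (||Omega||^2 - ||S||^2), with S and Omega the symmetric and antisymmetric parts. The sketch below shows only this standard baseline; the paper's rotation-invariant variant adds a closed-form matrix to the Jacobian, which is not reproduced here.

```python
import numpy as np

def q_criterion(J):
    """Classical Q vortex criterion for one point, given the 3x3 velocity
    Jacobian J = grad(u). Positive Q indicates rotation dominating strain."""
    S = 0.5 * (J + J.T)          # strain-rate tensor (symmetric part)
    O = 0.5 * (J - J.T)          # vorticity tensor (antisymmetric part)
    return 0.5 * (np.linalg.norm(O, 'fro') ** 2 - np.linalg.norm(S, 'fro') ** 2)

# A pure rotation around the z-axis yields Q > 0 (a vortex).
J = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
print(q_criterion(J))   # 1.0
```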
87
SciVis2015
Streamline Variability Plots for Characterizing the Uncertainty in Vector Field Ensembles
10.1109/TVCG.2015.2467204
http://dx.doi.org/10.1109/TVCG.2015.2467204
767776J
We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.
Ferstl, F.;Bürger, K.;Westermann, R.
Comput. Graphics & Visualization Group, Tech. Univ. Munchen, Munich, Germany|c|;;
;;Ferstl, F.;Bürger, K.;Westermann, R.
10.1109/TVCG.2007.70595;10.1109/VISUAL.2000.885715;10.1109/VISUAL.1999.809863;10.1109/TVCG.2013.141;10.1109/TVCG.2007.70518;10.1109/TVCG.2014.2346455;10.1109/VISUAL.2005.1532779;10.1109/TVCG.2010.181;10.1109/VISUAL.1999.809865;10.1109/TVCG.2013.143
Ensemble visualization, uncertainty visualization, flow visualization, streamlines, statistical modeling7192675
4376173;885715;809863;6634122;4376198;6875964;1532779;5613483;809865;6634129
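The pipeline sketched in the abstract above (resample streamlines to a common parameterization, project with PCA, fit a Gaussian mixture over the coefficients, read off clusters and confidence regions) maps fairly directly onto off-the-shelf tools. A minimal sketch under those assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def streamline_mixture(streamlines, n_points=50, n_components=10, n_clusters=3):
    """Cluster an ensemble of streamlines through a common seed point.

    streamlines -- list of (m_i, 3) arrays; each is resampled to n_points
    Returns the fitted PCA, the Gaussian mixture over PCA coefficients,
    and the cluster label of every streamline.
    """
    resampled = []
    for s in streamlines:
        s = np.asarray(s, dtype=float)
        t = np.linspace(0.0, 1.0, len(s))
        tt = np.linspace(0.0, 1.0, n_points)
        # Resample each coordinate over a normalized parameter.
        resampled.append(np.column_stack(
            [np.interp(tt, t, s[:, d]) for d in range(3)]).ravel())
    X = np.asarray(resampled)                       # one row per streamline
    pca = PCA(n_components=min(n_components, len(X) - 1)).fit(X)
    Z = pca.transform(X)                            # low-dimensional coefficients
    gmm = GaussianMixture(n_components=n_clusters).fit(Z)
    return pca, gmm, gmm.predict(Z)
```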
88
SciVis2015
TelCoVis: Visual Exploration of Co-occurrence in Urban Human Mobility Based on Telco Data
10.1109/TVCG.2015.2467194
http://dx.doi.org/10.1109/TVCG.2015.2467194
935944J
Understanding co-occurrence in urban human mobility (i.e. people from two regions visit an urban place during the same time span) is of great value in a variety of applications, such as urban planning, business intelligence, social behavior analysis, as well as containing contagious diseases. In recent years, the widespread use of mobile phones brings an unprecedented opportunity to capture large-scale and fine-grained data to study co-occurrence in human mobility. However, due to the lack of systematic and efficient methods, it is challenging for analysts to carry out in-depth analyses and extract valuable information. In this paper, we present TelCoVis, an interactive visual analytics system, which helps analysts leverage their domain knowledge to gain insight into the co-occurrence in urban human mobility based on telco data. Our system integrates visualization techniques with new designs and combines them in a novel way to enhance analysts' perception for a comprehensive exploration. In addition, we propose to study the correlations in co-occurrence (i.e. people from multiple regions visit different places during the same time span) by means of biclustering techniques that allow analysts to better explore coordinated relationships among different regions and identify interesting patterns. The case studies based on a real-world dataset and interviews with domain experts have demonstrated the effectiveness of our system in gaining insights into co-occurrence and facilitating various analytical tasks.
Wenchao Wu;Jiayi Xu;Haipeng Zeng;Yixian Zheng;Huamin Qu;Bing Ni;Mingxuan Yuan;Ni, L.M.
;;;;;;;;;;;;;;Wenchao Wu;Jiayi Xu;Haipeng Zeng;Yixian Zheng;Huamin Qu;Bing Ni;Mingxuan Yuan;Ni, L.M.
10.1109/VAST.2010.5652478;10.1109/TVCG.2013.193;10.1109/TVCG.2014.2346276;10.1109/TVCG.2013.226;10.1109/TVCG.2011.166;10.1109/TVCG.2013.173;10.1109/TVCG.2014.2346271;10.1109/VAST.2011.6102455;10.1109/INFVIS.2000.885091;10.1109/TVCG.2014.2346665;10.1109/TVCG.2012.265;10.1109/TVCG.2013.228;10.1109/VAST.2014.7042490;10.1109/TVCG.2014.2346922
Co-occurrence, human mobility, telco data, bicluster, visual analytics7192730
5652478;6634194;6876012;6634127;6065025;6634146;6875983;6102455;885091;6875974;6327262;6634174;7042490;6876013
89
SciVis2015
Using Maximum Topology Matching to Explore Differences in Species Distribution Models
10.1109/SciVis.2015.7429486
http://dx.doi.org/10.1109/SciVis.2015.7429486
916C
Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topological matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.
J. Poco;H. Doraiswamy;M. Talbert;J. Morisette;C. T. Silva
New York University|c|;;;;Poco, J.;Doraiswamy, H.;Talbert, M.;Morisette, J.;Silva, C.T.
10.1109/TVCG.2011.244;10.1109/TVCG.2010.213;10.1109/TVCG.2008.145;10.1109/TVCG.2009.155;10.1109/TVCG.2013.125;10.1109/TVCG.2008.143;10.1109/TVCG.2011.236;10.1109/TVCG.2013.148;10.1109/TVCG.2014.2346332;10.1109/TVCG.2011.248;10.1109/TVCG.2007.70601
Function similarity, computational topology, species distribution models, persistence, high dimensional visualization7429486
6064947;5613467;4658193;5290748;6634169;4658165;6064967;6634095;6875976;6064952;4376169
90
SciVis2015
Visual Verification of Space Weather Ensemble Simulations
10.1109/SciVis.2015.7429487
http://dx.doi.org/10.1109/SciVis.2015.7429487
1724C
We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.
A. Bock;A. Pembroke;M. L. Mays;L. Rastaetter;T. Ropinski;A. Ynnerman
Linkoping University|c|;;;;;Bock, A.;Pembroke, A.;Mays, M.L.;Rastaetter, L.;Ropinski, T.;Ynnerman, A.
10.1109/TVCG.2010.190;10.1109/TVCG.2010.181;10.1109/TVCG.2013.143
Visual Verification, Space Weather, Coronal Mass Ejections, Ensemble74294875613488;5613483;6634129
91
SciVis2015
Visualization and Analysis of Rotating Stall for Transonic Jet Engine Simulation
10.1109/TVCG.2015.2467952
http://dx.doi.org/10.1109/TVCG.2015.2467952
847856J
Identification of early signs of rotating stall is essential for the study of turbine engine stability. With recent advancements of high performance computing, high-resolution unsteady flow fields allow in-depth exploration of rotating stall and its possible causes. Performing stall analysis, however, involves significant effort to process large amounts of simulation data, especially when investigating abnormalities across many time steps. In order to assist scientists during the exploration process, we present a visual analytics framework to identify suspected spatiotemporal regions through a comparative visualization so that scientists are able to focus on relevant data in more detail. To achieve this, we propose efficient stall analysis algorithms derived from domain knowledge and convey the analysis results through juxtaposed interactive plots. Using our integrated visualization system, scientists can visually investigate the detected regions for potential stall initiation and further explore these regions to enhance the understanding of this phenomenon. Positive feedback from scientists demonstrates the efficacy of our system in analyzing rotating stall.
Chun-Ming Chen;Dutta, S.;Xiaotong Liu;Heinlein, G.;Han-Wei Shen;Jen-Ping Chen
Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH, USA|c|;;;;;
;;;;;Chun-Ming Chen;Dutta, S.;Xiaotong Liu;Heinlein, G.;Han-Wei Shen;Jen-Ping Chen
10.1109/VISUAL.1991.175794;10.1109/TVCG.2007.70599;10.1109/VISUAL.2000.885739;10.1109/TVCG.2013.122;10.1109/TVCG.2013.189;10.1109/VISUAL.2004.128;10.1109/VISUAL.2005.1532830;10.1109/TVCG.2014.2346265
Turbine flow visualization, vortex extraction, anomaly detection, juxtaposition, brushing and linking, time series7192672175794;4376176;885739;6634111;6634153;1372195;1532830;6875987
92
SciVis2015
Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations
10.1109/TVCG.2015.2467153
http://dx.doi.org/10.1109/TVCG.2015.2467153
877885J
We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.
Schroeder, D.;Keefe, D.F.;;Schroeder, D.;Keefe, D.F.
10.1109/VAST.2008.4677356;10.1109/TVCG.2009.181;10.1109/TVCG.2013.124;10.1109/TVCG.2011.202;10.1109/TVCG.2008.153;10.1109/TVCG.2013.226;10.1109/TVCG.2014.2346271;10.1109/INFVIS.2002.1173157;10.1109/TVCG.2009.145;10.1109/TVCG.2010.162;10.1109/INFVIS.2001.963286;10.1109/TVCG.2011.181;10.1109/TVCG.2012.265;10.1109/TVCG.2014.2346441
Visualization design, multivariate, art, sketch, color map, glyph7185456
4677356;5290701;6634168;6065021;4658123;6634127;6875983;1173157;5290707;5613429;963286;6065019;6327262;6875972
93
SciVis2015Visualizing 3D flow through cutting planes
10.1109/SciVis.2015.7429513
http://dx.doi.org/10.1109/SciVis.2015.7429513
161162M
Studies have found conflicting results regarding the effectiveness of tube-like structures for representing 3D flow data. This paper presents the findings of a small-scale pilot study contrasting static monoscopic depth cues to ascertain their importance in perceiving the orientation of a three-dimensional glyph with respect to a cutting plane. A simple striped texture and shading were found to reduce judgement errors when used with a 3D tube glyph as compared to plain or shaded line glyphs. A discussion of considerations for a full-scale study and possible future work follows.
C. Ware;A. H. StevensUniversity of New Hampshire|c|;Ware, C.;Stevens, A.H.7429513
94
SciVis2015Visualizing crossing probabilistic tracts
10.1109/SciVis.2015.7429506
http://dx.doi.org/10.1109/SciVis.2015.7429506
147148M
Diffusion weighted magnetic resonance imaging (dMRI) together with tractography algorithms allows probing for principal white matter tracts in the living human brain. Specifically, probabilistic tractography quantifies the existence of physical connections to a given seed region as a 3D scalar map of confidence scores. Fiber-Stippling is a visualization for probabilistic tracts that effectively communicates the diffusion pattern, connectivity score, and anatomical context. Unfortunately, it cannot handle multiple diffusion orientations per voxel, which exist in high angular resolution diffusion imaging (HARDI) data. Such data is needed to resolve tracts in complex configurations, such as crossings. In this work, we suggest a visualization based on Fiber-Stippling but sensitive to multiple diffusion orientations from HARDI-based diffusion models. With such a technique, it is now possible to visualize probabilistic tracts from HARDI-based tractography algorithms. This implies that tract crossings may now be visualized as crossing stipples, which is an essential step towards an accurate visualization of the neuroanatomy, as crossing tracts are widespread phenomena in the brain.
M. Goldau;A. Reichenbach;M. Hlawitschka
Leipzig University|c|;;Goldau, M.;Reichenbach, A.;Hlawitschka, M.7429506
95
SciVis2015
Visualizing Tensor Normal Distributions at Multiple Levels of Detail
10.1109/TVCG.2015.2467031
http://dx.doi.org/10.1109/TVCG.2015.2467031
975984J
Despite the widely recognized importance of symmetric second order tensor fields in medicine and engineering, the visualization of data uncertainty in tensor fields is still in its infancy. A recently proposed tensorial normal distribution, involving a fourth order covariance tensor, provides a mathematical description of how different aspects of the tensor field, such as trace, anisotropy, or orientation, vary and covary at each point. However, this wealth of information is far too rich for a human analyst to take in at a single glance, and no suitable visualization tools are available. We propose a novel approach that facilitates visual analysis of tensor covariance at multiple levels of detail. We start with a visual abstraction that uses slice views and direct volume rendering to indicate large-scale changes in the covariance structure, and locations with high overall variance. We then provide tools for interactive exploration, making it possible to drill down into different types of variability, such as in shape or orientation. Finally, we allow the analyst to focus on specific locations of the field, and provide tensor glyph animations and overlays that intuitively depict confidence intervals at those points. Our system is demonstrated by investigating the effects of measurement noise on diffusion tensor MRI, and by analyzing two ensembles of stress tensor fields from solid mechanics.
Abbasloo, A.;Wiens, V.;Hermann, M.;Schultz, T.
Univ. of Bonn, Bonn, Germany|c|;;;;;;Abbasloo, A.;Wiens, V.;Hermann, M.;Schultz, T.
10.1109/TVCG.2009.170;10.1109/TVCG.2009.184;10.1109/VISUAL.2005.1532773;10.1109/TVCG.2006.181;10.1109/TVCG.2006.134;10.1109/TVCG.2010.199;10.1109/TVCG.2008.128;10.1109/TVCG.2007.70602;10.1109/TVCG.2015.2467435
Uncertainty visualization, tensor visualization, direct volume rendering, interaction, glyph based visualization7192624
5290759;5290754;1532773;4015482;4015499;5613502;4658185;4376179;7192722
96
VAST2015
3D Regression Heat Map Analysis of Population Study Data
10.1109/TVCG.2015.2468291
http://dx.doi.org/10.1109/TVCG.2015.2468291
8190J
Epidemiological studies comprise heterogeneous data about a subject group to define disease-specific risk factors. These data contain information (features) about a subject's lifestyle, medical status as well as medical image data. Statistical regression analysis is used to evaluate these features and to identify feature combinations indicating a disease (the target feature). We propose an analysis approach of epidemiological data sets by incorporating all features in an exhaustive regression-based analysis. This approach combines all independent features w.r.t. a target feature. It provides a visualization that reveals insights into the data by highlighting relationships. The 3D Regression Heat Map, a novel 3D visual encoding, acts as an overview of the whole data set. It shows all combinations of two to three independent features with a specific target disease. Slicing through the 3D Regression Heat Map allows for the detailed analysis of the underlying relationships. Expert knowledge about disease-specific hypotheses can be included into the analysis by adjusting the regression model formulas. Furthermore, the influences of features can be assessed using a difference view comparing different calculation results. We applied our 3D Regression Heat Map method to a hepatic steatosis data set to reproduce results from a data mining-driven analysis. A qualitative analysis was conducted on a breast density data set. We were able to derive new hypotheses about relations between breast density and breast lesions with breast cancer. With the 3D Regression Heat Map, we present a visual overview of epidemiological data that allows for the first time an interactive regression-based analysis of large feature sets with respect to a disease.
Klemm, P.;Lawonn, K.;Glaßer, S.;Niemann, U.;Hegenscheid, K.;Völzke, H.;Preim, B.
Otto-von-Guericke Univ. Magdeburg, Magdeburg, Germany
;;;;;;Klemm, P.;Lawonn, K.;Glaßer, S.;Niemann, U.;Hegenscheid, K.;Völzke, H.;Preim, B.
10.1109/TVCG.2011.229;10.1109/TVCG.2011.185;10.1109/VAST.2009.5333431;10.1109/TVCG.2013.160;10.1109/TVCG.2014.2346591;10.1109/TVCG.2013.161;10.1109/TVCG.2013.125;10.1109/TVCG.2014.2346321
Interactive Visual Analysis, Regression Analysis, Heat Map, Epidemiology, Breast Cancer, Hepatic Steatosis71948476064985;6064996;5333431;6634192;6876009;6634119;6634169;6876043
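The exhaustive part of the analysis described above (fit a regression model for every combination of two to three independent features against the target disease and record a quality score per cell of the heat map) can be prototyped with standard tooling. A sketch under assumed inputs (a pandas DataFrame with a binary target column), not the authors' system, and with an AUC score standing in for whatever model quality measure the paper uses:

```python
from itertools import combinations
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def exhaustive_regression_scores(df, target, sizes=(2, 3)):
    """Fit one logistic regression per feature combination and score it,
    yielding the raw numbers behind a regression heat map."""
    features = [c for c in df.columns if c != target]
    y = df[target]
    scores = {}
    for k in sizes:
        for combo in combinations(features, k):
            X = df[list(combo)]
            model = LogisticRegression(max_iter=1000).fit(X, y)
            scores[combo] = roc_auc_score(y, model.predict_proba(X)[:, 1])
    return pd.Series(scores).sort_values(ascending=False)
```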
97
VAST2015
A Case Study Using Visualization Interaction Logs and Insight Metrics to Understand How Analysts Arrive at Insights
10.1109/TVCG.2015.2467613
http://dx.doi.org/10.1109/TVCG.2015.2467613
5160J
We present results from an experiment aimed at using logs of interactions with a visual analytics application to better understand how interactions lead to insight generation. We performed an insight-based user study of a visual analytics application and ran post hoc quantitative analyses of participants' measured insight metrics and interaction logs. The quantitative analyses identified features of interaction that were correlated with insight characteristics, and we confirmed these findings using a qualitative analysis of video captured during the user study. Results of the experiment include design guidelines for the visual analytics application aimed at supporting insight generation. Furthermore, we demonstrated an analysis method using interaction logs that identified which interaction patterns led to insights, going beyond insight-based evaluations that only quantify insight characteristics. We also discuss choices and pitfalls encountered when applying this analysis method, such as the benefits and costs of applying an abstraction framework to application-specific actions before further analysis. Our method can be applied to evaluations of other visualization tools to inform the design of insight-promoting interactions and to better understand analyst behaviors.
Hua Guo;Gomez, S.R.;Ziemkiewicz, C.;Laidlaw, D.H.
;;;;;;Hua Guo;Gomez, S.R.;Ziemkiewicz, C.;Laidlaw, D.H.
10.1109/INFVIS.2005.1532136;10.1109/TVCG.2014.2346575;10.1109/VAST.2014.7042482;10.1109/VAST.2008.4677365;10.1109/TVCG.2008.137;10.1109/VAST.2009.5333878;10.1109/TVCG.2014.2346452;10.1109/TVCG.2012.221;10.1109/TVCG.2007.70515
Evaluation, visual analytics, interaction, intelligence analysis, insight-based evaluation7192662
1532136;6875913;7042482;4677365;4658129;5333878;6876022;6327280;4376144
98
VAST2015
A software developer's guide to informal evaluation of Visual Analytics environments using VAST Challenge information
10.1109/VAST.2015.7347674
http://dx.doi.org/10.1109/VAST.2015.7347674
193194xM
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is spent discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. In this poster we describe how developers can perform informal evaluations of various aspects of their visual analytics environments using VAST Challenge information.
Cook, K.A.;Scholtz, J.;Whiting, M.A.
;;;;Cook, K.A.;Scholtz, J.;Whiting, M.7347674
99
VAST2015
A System for visual exploration of caution spots from vehicle recorder data
10.1109/VAST.2015.7347677
http://dx.doi.org/10.1109/VAST.2015.7347677
199200xM
It is vital for the transportation industry, which carries out most of its work by automobile, to reduce its accident rate. This paper proposes a 3D visual interaction method for exploring caution areas from large-scale vehicle recorder data. Our method provides (i) a flexible filtering interface for driving operations, such as braking or handling, based on various combinations of their attribute values, such as velocity and acceleration, and (ii) a 3D visual environment for the spatio-temporal exploration of caution areas. Using real data provided by one of the largest transportation companies in Japan, the proposed method was able to extract caution areas where accidents have actually occurred, as well as areas on very narrow roads with poor visibility.
Itoh, M.;Yokoyama, D.;Toyoda, M.;Kitsuregawa, M.
;;;;;;Itoh, M.;Yokoyama, D.;Toyoda, M.;Kitsuregawa, M.7347677
100
VAST2015
An Uncertainty-Aware Approach for Exploratory Microblog Retrieval
10.1109/TVCG.2015.2467554
http://dx.doi.org/10.1109/TVCG.2015.2467554
250259J
Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.
Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan
;;;;;;;;;;Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan
10.1109/TVCG.2013.186;10.1109/TVCG.2012.291;10.1109/VAST.2009.5332611;10.1109/TVCG.2013.223;10.1109/TVCG.2011.233;10.1109/VAST.2014.7042494;10.1109/VISUAL.1996.568116;10.1109/INFVIS.2005.1532150;10.1109/VAST.2010.5652931;10.1109/TVCG.2011.197;10.1109/TVCG.2014.2346919;10.1109/TVCG.2013.232;10.1109/TVCG.2011.202;10.1109/TVCG.2014.2346920;10.1109/TVCG.2010.183;10.1109/TVCG.2012.285;10.1109/TVCG.2013.221;10.1109/TVCG.2014.2346922
microblog data, mutual reinforcement model, uncertainty modeling, uncertainty visualization, uncertainty propagation7192694
6634195;6327271;5332611;6634091;6065003;7042494;568116;1532150;5652931;6065022;6875992;6634179;6065021;6876032;5613449;6327258;6634134;6876013