-
pdf Integrating Annotations into Multidimensional Visual Dashboards ↗
Multidimensional data is often visualized using coordinated multiple views in an interactive dashboard. However, unlike in infographics, where text is often a central part of the presentation, there is currently little knowledge of how best to integrate text and annotations into a visualization dashboard. In this paper, we explore a technique called FacetNotes for presenting these textual annotations on top of any visualization within a dashboard, irrespective of the scale of the data shown or the design of the visual representation itself. FacetNotes does so by grouping and ordering the textual annotations based on properties of (1) the individual data points associated with the annotations, and (2) the target visual representation on which they should be shown. We present this technique along with a set of user interface features and guidelines for applying it to visualization interfaces. We also demonstrate FacetNotes in a custom visual dashboard interface. Finally, results from a user study of FacetNotes show that the technique improves the scope and complexity of insights developed during visual exploration.
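Read as an algorithm, the grouping-and-ordering step might look like the following sketch (all type and function names are hypothetical, not from the paper): annotations are bucketed by a categorical facet of their data points, then ordered within each bucket to match the target view's layout.

```typescript
// Hypothetical sketch of FacetNotes-style grouping: annotations are bucketed
// by a categorical facet of their data points, then ordered by the position
// the target visualization assigns to each point.
interface Annotation {
  text: string;
  point: { facet: string; value: number };
}

function groupAndOrder(
  annotations: Annotation[],
  // Position of a data point in the target view (e.g., a bar's x coordinate).
  position: (point: Annotation["point"]) => number
): Map<string, Annotation[]> {
  const groups = new Map<string, Annotation[]>();
  for (const a of annotations) {
    const bucket = groups.get(a.point.facet) ?? [];
    bucket.push(a);
    groups.set(a.point.facet, bucket);
  }
  // Within each facet, order notes to match the view's visual order.
  for (const bucket of groups.values()) {
    bucket.sort((a, b) => position(a.point) - position(b.point));
  }
  return groups;
}
```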
-
pdf Topology-Aware Space Distortion for Structured Visualization Spaces ↗
We propose topology-aware space distortion (TASD), a family of interactive layout techniques for non-linearly distorting geometric space based on user attention and on the structure of the visual representation. TASD seamlessly adapts the visual substrate of any visualization to give more screen real estate to important regions of the representation at the expense of less important regions. In this paper, we present a concrete TASD technique that we call ZoomHalo for interactively distorting a two-dimensional space based on a degree-of-interest (DOI) function defined for the space. Using this DOI function, ZoomHalo derives several areas of interest, computes the available space around each area in relation to other areas and the current viewport extents, and then dynamically expands (or shrinks) each area given user input. We use our prototype to evaluate the technique in two user studies, as well as showcase examples of TASD for node-link diagrams, word clouds, and geographical maps.
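As a rough illustration of the idea (a minimal sketch assuming a radial, fisheye-style magnification profile; ZoomHalo itself is more sophisticated), a DOI-weighted area of interest can expand the space near its center while compressing toward its rim:

```typescript
// Hedged sketch of a DOI-driven distortion in the spirit of TASD (not the
// paper's ZoomHalo implementation): each area of interest magnifies the
// space near its center while compressing toward its rim, leaving the
// space outside the halo untouched.
interface AreaOfInterest { cx: number; cy: number; doi: number; radius: number }

function distort(x: number, y: number, areas: AreaOfInterest[]): [number, number] {
  for (const a of areas) {
    const dx = x - a.cx, dy = y - a.cy;
    const d = Math.hypot(dx, dy);
    if (d === 0 || d >= a.radius) continue; // outside this halo: leave as-is
    // Power-scale profile: maps [0, radius] onto itself monotonically,
    // with stronger center magnification for higher DOI.
    const nd = a.radius * Math.pow(d / a.radius, 1 / (1 + a.doi));
    x = a.cx + (dx / d) * nd;
    y = a.cy + (dy / d) * nd;
  }
  return [x, y];
}
```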
-
pdf Effects of Screen-Responsive Visualization on Data Comprehension ↗
Visualization interfaces designed for heterogeneous devices such as wall displays and mobile screens must be responsive to varying display dimensions, resolutions, and interaction capabilities. In this paper, we report on two user studies of visual representations for large versus small displays. The goal of our experiments was to investigate differences between a large vertical display and a mobile handheld display in terms of data comprehension and the quality of resulting insights. To this end, we developed a visual interface with a coordinated multiple view layout for the large display and two alternative designs of the same interface---a space-saving boundary visualization layout and an overview layout---for the mobile condition. The first experiment was a controlled laboratory study designed to evaluate the effect of display size on the perception of changes in a visual representation; it yielded significant differences in correctness even while completion time remained similar. The second evaluation was a qualitative study in a practical setting, which showed that participants were able to easily make sense of and work with the responsive visualizations. Based on the results, we conclude the paper by providing new guidelines for screen-responsive visualization interfaces.
-
pdf Elastic Documents: Coupling Text and Tables through Contextual Visualizations for Enhanced Document Reading ↗
Today's data-rich documents are often complex datasets in themselves, consisting of information in different formats such as text, figures, and data tables. These additional media augment the textual narrative in the document. However, the static layout of a traditional for-print document often impedes deep understanding of its content, because readers must navigate back and forth to reach content scattered throughout the text. In this paper, we seek to facilitate enhanced comprehension of such documents through a contextual visualization technique that couples text content with data tables contained in the document. We parse the text content and data tables, cross-link the components using a keyword-based matching algorithm, and generate on-demand visualizations based on the reader's current focus within a document. We evaluate this technique in a user study comparing our approach to a traditional reading experience. Results from our study show that (1) participants comprehend the content better with tighter coupling of text and data, (2) the contextual visualizations enable participants to develop better summaries that capture the main data-rich insights within the document, and (3) overall, our method enables participants to develop a more detailed understanding of the document content.
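The cross-linking step can be pictured with a toy version of keyword matching (an assumed simplification, not the paper's exact algorithm):

```typescript
// Sketch of the keyword-based cross-linking step (an assumed simplification,
// not the paper's exact algorithm): a paragraph is linked to table rows
// whose label cell appears among the paragraph's tokens.
interface Table { headers: string[]; rows: string[][] }

const tokenize = (s: string): Set<string> =>
  new Set(s.toLowerCase().match(/[a-z][a-z0-9-]*/g) ?? []);

function linkParagraph(paragraph: string, table: Table): number[] {
  const words = tokenize(paragraph);
  return table.rows.flatMap((row, i) => {
    const label = [...tokenize(row[0])]; // match on the row-label column
    return label.length > 0 && label.every((t) => words.has(t)) ? [i] : [];
  });
}
```

The returned row indices drive the on-demand visualization for the reader's current focus.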
-
pdf Vistrates: A Component Model for Ubiquitous Analytics ↗
Visualization tools are often specialized for specific tasks, which turns the user's analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components—the building blocks of this model—can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository. From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities. Finally, we present several Vistrates use cases that span the full range of the classic "anytime" and "anywhere" motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices.
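The composition idea can be sketched minimally as typed building blocks assembled into pipelines (a hypothetical API, not the actual Vistrates interface):

```typescript
// Minimal sketch of a component model in the spirit of Vistrates (hypothetical
// API): components declare inputs and outputs and compose into pipelines.
interface Component<I, O> {
  name: string;
  run: (input: I) => O;
}

// Compose two components so the output of one feeds the next.
function pipe<A, B, C>(f: Component<A, B>, g: Component<B, C>): Component<A, C> {
  return { name: `${f.name} -> ${g.name}`, run: (a) => g.run(f.run(a)) };
}

// Example: a data-source component feeding an aggregation component.
const source: Component<void, number[]> = { name: "csv", run: () => [3, 1, 4, 1, 5] };
const mean: Component<number[], number> = {
  name: "mean",
  run: (xs) => xs.reduce((s, x) => s + x, 0) / xs.length,
};
console.log(pipe(source, mean).run()); // 2.8
```

In Vistrates itself, components also carry views and live in a shared collaborative document; this sketch only captures the pipeline composition.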
-
pdf DataSite: Proactive Visual Data Exploration with Computation of Insight-based Recommendations ↗
Effective data analysis ideally requires the analyst to have high expertise as well as deep knowledge of the data. Even with such familiarity, manually pursuing all potential hypotheses and exploring all possible views is impractical. We present DataSite, a proactive visual analytics system where the burden of selecting and executing appropriate computations is shared by an automatic server-side computation engine. Salient features identified by these automatic background processes are surfaced as notifications in a feed timeline. DataSite effectively turns data analysis into a conversation between analyst and computer, thereby reducing the cognitive load and domain knowledge requirements. We validate the system in a user study comparing it to a recent visualization recommendation system, which yielded significant improvements, particularly for complex analyses that existing analytics systems do not support well.
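One way to picture the proactive engine (an assumed simplification, not DataSite's actual implementation): background jobs sweep the data for salient statistics, such as strong pairwise correlations, and surface hits as feed notifications.

```typescript
// Sketch of a DataSite-style background scan (assumed simplification):
// check every column pair for strong correlation and yield a notification
// for each salient finding.
type Column = { name: string; values: number[] };

function pearson(a: number[], b: number[]): number {
  const n = a.length;
  const ma = a.reduce((s, x) => s + x, 0) / n;
  const mb = b.reduce((s, x) => s + x, 0) / n;
  let num = 0, da = 0, db = 0;
  for (let i = 0; i < n; i++) {
    num += (a[i] - ma) * (b[i] - mb);
    da += (a[i] - ma) ** 2;
    db += (b[i] - mb) ** 2;
  }
  return num / Math.sqrt(da * db);
}

function* scanCorrelations(cols: Column[], threshold = 0.8) {
  for (let i = 0; i < cols.length; i++)
    for (let j = i + 1; j < cols.length; j++) {
      const r = pearson(cols[i].values, cols[j].values);
      if (Math.abs(r) >= threshold)
        yield `${cols[i].name} and ${cols[j].name} are correlated (r = ${r.toFixed(2)})`;
    }
}
```

Each yielded string would become one notification in the feed timeline.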
-
pdf VisHive: Supporting Web-based Visualization through Ad-hoc Computational Clusters of Mobile Devices ↗
Current web-based visualizations are designed for single computers and cannot make use of additional devices on the client side, even though today's users often have access to several, such as a tablet, a smartphone, and a smartwatch. We present a framework for ad-hoc computational clusters that leverage these local devices for visualization computations. Furthermore, we present a JavaScript toolkit called VisHive that instantiates this framework for constructing web-based visualization applications; it transparently connects multiple devices---called cells---into such ad-hoc clusters---called a hive---for local computation. Hives are formed either using a matchmaking service or through manual configuration. Cells are organized into a master-slave architecture, where the master provides the visual interface to the user and controls the slaves, and the slaves perform computation. VisHive is built entirely using current web technologies, runs in the native browser of each cell, and requires no specific software to be downloaded on the involved devices. We demonstrate VisHive using four distributed examples, among them a text analytics visualization and a database query for exploratory visualization.
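The master-slave split can be sketched as follows, using standard browser WebSockets for illustration (not the VisHive API itself): the master partitions a computation, farms chunks out to connected cells, and reassembles the partial results.

```typescript
// Hedged sketch of master-side task distribution (not the actual VisHive
// API): split the data evenly among connected slave cells, send each its
// chunk, and reassemble the results in order.
function distribute(slaves: WebSocket[], data: number[]): Promise<number[]> {
  const chunk = Math.ceil(data.length / slaves.length);
  const parts = slaves.map((ws, i) =>
    new Promise<number[]>((resolve) => {
      ws.onmessage = (e) => resolve(JSON.parse(e.data)); // slave returns its result
      ws.send(JSON.stringify(data.slice(i * chunk, (i + 1) * chunk)));
    })
  );
  return Promise.all(parts).then((rs) => rs.flat());
}
```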
-
pdf When David Meets Goliath: Combining Smartwatches with a Large Vertical Display for Visual Data Exploration ↗
We explore the combination of smartwatches and a large interactive display to support visual data analysis. These two extremes of interactive surfaces are increasingly popular, but feature different characteristics—display and input modalities, personal/public use, performance, and portability. In this paper, we first identify possible roles for both devices and the interplay between them through an example scenario. We then propose a conceptual framework to enable analysts to explore data items, track interaction histories, and alter visualization configurations through mechanisms using both devices in combination. We validate an implementation of our framework through a formative evaluation and a user study. The results show that this device combination, compared to just a large display, allows users to develop complex insights more fluidly by leveraging the roles of the two devices. Finally, we report on the interaction patterns and interplay between the devices for visual exploration as observed during our study.
-
pdf Visfer: Camera-based Visual Data Transfer for Cross-Device Visualization ↗
Going beyond the desktop to leverage novel devices—such as smartphones, tablets, or large displays—for visual sensemaking typically requires supporting extraneous operations for device discovery, interaction sharing, and view management. Such operations can be time-consuming and tedious, and distract the user from the actual analysis. Embodied interaction models in these multi-device environments can take advantage of the natural interaction and physicality afforded by multimodal devices and help carry out these operations effectively in visual sensemaking. In this paper, we present embodied cross-device interaction models for visualization spaces, derived from a user study that elicited actions from participants for triggering a portrayed effect: sharing visualizations (and therefore information) across devices. We then explore one common interaction style from this design elicitation, called Visfer, a technique for effortlessly sharing visualizations across devices using the visual medium. More specifically, this technique involves taking pictures of visualizations, or rather the QR codes augmenting them, on a display using the built-in camera on a handheld device. Our contributions include a conceptual framework for cross-device interaction and the Visfer technique itself, as well as transformation guidelines to exploit the capabilities of each specific device and a web framework for encoding visualization components into animated QR codes, which capture multiple frames of QR codes to embed more information. Beyond this, we also present the results from a performance evaluation of the visual data transfer enabled by Visfer. We end the paper by presenting application examples of our Visfer framework.
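The animated-QR idea boils down to chunking a payload into indexed frames that the receiving camera reassembles. The sketch below uses a hypothetical framing format and an assumed per-frame capacity, not Visfer's actual encoding.

```typescript
// Sketch of frame splitting for animated QR transfer (hypothetical format):
// a payload too large for one QR code is split into indexed frames that the
// receiver can reassemble regardless of scan order.
const FRAME_CAPACITY = 800; // characters per QR frame, an assumed capacity

function toFrames(payload: string): string[] {
  const total = Math.ceil(payload.length / FRAME_CAPACITY);
  return Array.from({ length: total }, (_, i) =>
    // Header "i/total|" lets frames arrive in any order.
    `${i}/${total}|` + payload.slice(i * FRAME_CAPACITY, (i + 1) * FRAME_CAPACITY)
  );
}

function fromFrames(frames: string[]): string {
  return frames
    .map((f) => {
      const sep = f.indexOf("|");
      const [i] = f.slice(0, sep).split("/").map(Number);
      return { i, body: f.slice(sep + 1) };
    })
    .sort((a, b) => a.i - b.i)
    .map((f) => f.body)
    .join("");
}
```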
-
pdf Merging Sketches for Creative Design Exploration: An Evaluation of Physical and Cognitive Operations ↗
Despite its grounding in creativity techniques, merging multiple source sketches to create new ideas has received scant attention in design literature. In this paper, we identify the physical operations involved in merging sketch components. We also introduce the cognitive operations of reuse, repurpose, refactor, and reinterpret, and explore their relevance to creative design. To examine the relationship of cognitive operations, physical techniques, and creative sketch outcomes, we conducted a qualitative user study where student designers merged existing sketches to generate either an alternative design or an unrelated new design. We compared two digital selection techniques: freeform selection, and a stroke-cluster-based "object select" technique. The resulting merged sketches were subjected to crowdsourced evaluation and manually coded for the use of cognitive operations. Our findings establish a firm connection between the proposed cognitive operations and the context and outcome of creative tasks. Key findings indicate that reinterpret operations correlate strongly with creativity in merged sketches, while reuse operations correlate negatively with creativity. Furthermore, designers significantly preferred the freeform selection technique. We discuss the empirical contributions of understanding the use of cognitive operations during design exploration, and the practical implications for designing interfaces in digital tools that facilitate creativity in merging sketches.
-
pdf Supporting Team-First Visual Analytics through Group Activity Representations ↗
Collaborative visual analytics (CVA) involves sensemaking activities within teams of analysts based on coordination of work across team members, awareness of team activity, and communication of hypotheses, observations, and insights. We introduce a new type of CVA tool based on the notion of "team-first" visual analytics, where supporting the analytical process and needs of the entire team is the primary focus of the graphical user interface, before that of the individual analysts. To this end, we present the design space and guidelines for team-first tools in terms of conveying analyst presence, focus, and activity within the interface. We then introduce InsightsDrive, a CVA tool for multidimensional data that integrates team-first features into the interface through group activity visualizations. These include (1) in-situ representations that show the focus regions of all users directly in the data visualizations themselves using color-coded selection shadows, as well as (2) ex-situ representations showing the data coverage of each analyst using multidimensional visual representations. We conducted two user studies: one with individual analysts to identify the affordances of different visual representations for conveying data coverage, and the other to evaluate the performance of our team-first design with ex-situ and in-situ awareness for visual analytic tasks. Our results give an understanding of the performance of our team-first features and clarify their advantages for team coordination.
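An ex-situ coverage measure might be computed along these lines (an assumed formulation for illustration, not the paper's metric):

```typescript
// Sketch of an ex-situ coverage metric (assumed formulation): track which
// bins of each dimension an analyst has visited and report the fraction of
// all bins covered per analyst.
type Visit = { analyst: string; dimension: string; bin: number };

function coverage(visits: Visit[], binsPerDimension: Map<string, number>): Map<string, number> {
  const seen = new Map<string, Set<string>>(); // analyst -> visited "dim:bin" keys
  for (const v of visits) {
    const s = seen.get(v.analyst) ?? new Set<string>();
    s.add(`${v.dimension}:${v.bin}`);
    seen.set(v.analyst, s);
  }
  const totalBins = [...binsPerDimension.values()].reduce((sum, n) => sum + n, 0);
  const result = new Map<string, number>();
  for (const [analyst, s] of seen) result.set(analyst, s.size / totalBins);
  return result;
}
```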
-
pdf Steering the Craft: UI Elements and Visualizations for Supporting Progressive Visual Analytics ↗
Progressive visual analytics (PVA) has emerged in recent years to manage the latency of data analysis systems. When analysis is performed progressively, rough estimates of the results are generated quickly and then improved over time. Analysts can therefore monitor the progression of the results, steer the analysis algorithms, and make early decisions if the estimates provide a convincing picture. In this article, we describe interface design guidelines for helping users understand progressively updating results and make early decisions based on progressive estimates. To illustrate our ideas, we present a prototype PVA tool called InsightsFeed for exploring Twitter data at scale. As validation, we investigate the tradeoffs of our tool in a user study of exploring a Twitter dataset. We report usage patterns in making early decisions using the user interface, guiding computational methods, and exploring different subsets of the dataset, compared to sequential analysis without progression.
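The core PVA mechanic, estimates that tighten as data streams in, can be sketched with a running mean and confidence interval (illustrative only, not InsightsFeed code):

```typescript
// Sketch of a progressive estimate: a running mean whose confidence interval
// tightens as chunks arrive, letting the UI show "good enough" results
// before the data is exhausted. Uses Welford's online algorithm.
class ProgressiveMean {
  private n = 0;
  private mean = 0;
  private m2 = 0; // running sum of squared deviations

  update(chunk: number[]) {
    for (const x of chunk) {
      this.n++;
      const d = x - this.mean;
      this.mean += d / this.n;
      this.m2 += d * (x - this.mean);
    }
  }

  // Approximate 95% confidence interval around the current estimate.
  estimate(): { mean: number; ci: number; n: number } {
    const variance = this.n > 1 ? this.m2 / (this.n - 1) : 0;
    return {
      mean: this.mean,
      ci: 1.96 * Math.sqrt(variance / Math.max(this.n, 1)),
      n: this.n,
    };
  }
}
```

A UI polling estimate() can render the mean with an error bar that shrinks over time, letting analysts stop early once the interval is convincing.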
-
pdf Integrating Visual Analytics Support for Grounded Theory Practice in Qualitative Text Analysis ↗
We present an argument for using visual analytics to aid grounded theory methodologies in qualitative data analysis. Grounded theory methods involve the inductive analysis of data to generate novel insights and theoretical constructs. Visual analytics is uniquely suited to making sense of unstructured text data. Using natural language processing techniques such as part-of-speech tagging, retrieving information content, and topic modeling, different parts of the data can be structured, semantically associated, and interactively explored, thereby providing conceptual depth to the guided discovery process. We review grounded theory methods and identify processes that can be enhanced through visual analytic techniques. Next, we develop an interface for qualitative text analysis, and evaluate our design with qualitative research practitioners who analyze texts with and without visual analytics support. The results of our study suggest how visual analytics can be incorporated into qualitative data analysis tools, and the analytic and interpretive benefits that can result.
-
pdf VizScribe: A Visual Analytics Approach to Understand Designer Behavior ↗
Design protocol analysis is a technique for understanding designers' cognitive processes by analyzing sequences of observations of their behavior. Such analysis typically draws on audio, video, and transcript data to gain insights into the designer's behavior and the design process. The recent availability of sophisticated sensing technology has made such data highly multimodal, requiring more flexible protocol analysis tools. To address this need, we present VizScribe, a visual analytics framework that employs coordinated multiple views to enable viewing such data from different perspectives. VizScribe allows designers to create, customize, and extend interactive visualizations for design protocol data such as video, transcripts, sketches, sensor data, and user logs. User studies where design researchers used VizScribe for protocol analysis indicated that the linked views and interactive navigation offered by VizScribe afforded the researchers multiple useful ways to approach and interpret such multimodal data.
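The linking across views reduces to a time-indexed lookup over each stream, as in this sketch of a hypothetical data model:

```typescript
// Sketch of linked-view lookup (hypothetical data model): given a time point
// selected in one view (say, the video), return the matching entries from
// every other timestamped stream.
interface Timed { start: number; end: number }

function atTime<T extends Timed>(stream: T[], t: number): T[] {
  return stream.filter((e) => e.start <= t && t < e.end);
}

// Example: clicking t = 72s in the video highlights the transcript line and
// any log events overlapping that moment.
const transcript = [{ start: 70, end: 75, text: "I'd move this part here" }];
const logs = [{ start: 71, end: 73, event: "stroke-erase" }];
console.log(atTime(transcript, 72), atTime(logs, 72));
```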
-
pdf Supporting Visual Exploration for Multiple Users in Large Display Environments ↗
We present a design space exploration of interaction techniques for supporting multiple collaborators exploring data on a shared large display. Our proposed solution is based on users controlling individual lenses using both explicit gestures as well as proxemics: the spatial relations between people and physical artifacts such as their distance, orientation, and movement. We discuss different design considerations for implicit and explicit interactions through the lens, and evaluate the user experience to find a balance between the implicit and explicit interaction styles. Our findings indicate that users favor implicit interaction through proxemics for navigation and collaboration, but prefer using explicit mid-air gestures to perform actions that are perceived to be direct, such as terminating a lens composition. Based on these results, we propose a hybrid technique utilizing both proxemics and mid-air gestures, along with examples applying this technique to other datasets. Finally, we performed a usability evaluation of the hybrid technique and observed user performance improvements in the presence of both implicit and explicit interaction styles.
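The implicit, proxemics-driven side might map a user's position to lens parameters roughly like this (assumed ranges and parameterization, for illustration only):

```typescript
// Sketch of a proxemics-to-lens mapping (assumed parameterization): a user's
// distance from the display controls their lens's zoom, and their position
// along the display controls its center.
interface Proxemics { distance: number; xAlongDisplay: number } // meters / px

function lensFromProxemics(p: Proxemics) {
  // Normalize distance 0.5m..3m into [0, 1]; closer users get more detail.
  const t = Math.min(Math.max((p.distance - 0.5) / 2.5, 0), 1);
  return {
    centerX: p.xAlongDisplay,
    zoom: 4 - 3 * t,       // 4x when near, 1x (overview) when far
    radius: 200 + 400 * t, // px: small focused lens near, wide lens far
  };
}
```

Explicit mid-air gestures would then handle the discrete actions that this continuous mapping cannot express, such as terminating a lens composition.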
-
pdf TimeFork: Interactive Prediction of Time Series ↗
We present TimeFork, an interactive prediction technique to support users predicting the future of time-series data, such as in financial, scientific, or medical domains. TimeFork combines visual representations of multiple time series with prediction information generated by computational models. Using this method, analysts engage in a back-and-forth dialogue with the computational model by alternating between manually predicting future changes through interaction and letting the model automatically determine the most likely outcomes, to eventually come to a common prediction using the model. This computer-supported prediction approach allows for harnessing the user’s knowledge of factors influencing future behavior, as well as sophisticated computational models drawing on past performance. To validate the TimeFork technique, we conducted a user study in a stock market prediction game. We present evidence of improved performance for participants using TimeFork compared to fully manual or fully automatic predictions, and characterize qualitative usage patterns observed during the user study.
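The dialogue between analyst and model can be pictured with a toy forecaster in which user-pinned values re-anchor subsequent steps (an assumed drift model, not the computational models from the paper):

```typescript
// Sketch of the TimeFork-style back-and-forth (toy drift model): the model
// forecasts forward, but any user-pinned future value overrides the estimate
// and re-anchors the steps that follow it.
function forecast(
  history: number[], // assumed non-empty
  steps: number,
  userPins: Map<number, number> // step index -> value the analyst insists on
): number[] {
  const n = history.length;
  // Estimate a simple linear drift from history (toy model).
  const drift = (history[n - 1] - history[0]) / Math.max(n - 1, 1);
  const out: number[] = [];
  let last = history[n - 1];
  for (let i = 0; i < steps; i++) {
    last = userPins.get(i) ?? last + drift; // user input re-anchors the model
    out.push(last);
  }
  return out;
}
```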
-
pdf Munin: A Peer-to-Peer Middleware for Ubiquitous Analytics and Visualization Spaces ↗
We present Munin, a software framework for building ubiquitous analytics environments consisting of multiple input and output surfaces, such as tabletop displays, wall-mounted displays, and mobile devices. Munin utilizes a service-based model where each device provides one or more dynamically loaded services for input, display, or computation. Using a peer-to-peer model for communication, it leverages IP multicast to replicate the shared state among the peers. Input is handled through a shared event channel that lets input and output devices be fully decoupled. It also provides a data-driven scene graph to delegate rendering to peers, thus creating a robust, fault-tolerant, decentralized system. In this paper, we describe Munin's general design and architecture, provide several examples of how we are using the framework for ubiquitous analytics and visualization, and present a case study on building a Munin assembly for multidimensional visualization. We also present performance results and anecdotal user feedback for the framework that suggests that combining a service-oriented, data-driven model with middleware support for data sharing and event handling eases the design and execution of high performance distributed visualizations.
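The replication pattern, in which every peer multicasts updates and applies updates it hears, can be illustrated in a few lines with Node's dgram module (only a sketch of the approach, not Munin's implementation; the group address is an assumption):

```typescript
// Sketch of peer-to-peer state replication over IP multicast: every peer
// both publishes its updates to the multicast group and applies updates
// it receives, so the shared dictionary converges without a central server.
import * as dgram from "node:dgram";

const GROUP = "239.255.42.99"; // assumed multicast group address
const PORT = 41234;
const state = new Map<string, unknown>();

const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });
socket.bind(PORT, () => socket.addMembership(GROUP));

socket.on("message", (msg) => {
  const { key, value } = JSON.parse(msg.toString());
  state.set(key, value); // replicate the shared dictionary entry
});

function publish(key: string, value: unknown) {
  const msg = Buffer.from(JSON.stringify({ key, value }));
  socket.send(msg, PORT, GROUP);
}
```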
-
pdf Visualization Beyond the Desktop --- The Next Big Thing ↗
Visualization is coming of age: with visual depictions being seamlessly integrated into documents, and data visualization techniques being used to understand datasets that are ever-growing in size and complexity, the term visualization is entering everyday conversation. But we are on a cusp; visualization researchers need to develop and adapt to today's new devices and tomorrow's technology. Today, we interact with visual depictions through a mouse. Tomorrow, we will be touching, swiping, grasping, feeling, hearing, smelling, and even tasting our data. The next big thing is multi-sensory visualization that goes beyond the desktop.
-
pdf Tracing and Sketching Performance using Blunt-Tipped Styli on Direct-Touch Tablets ↗
Direct-touch tablets are quickly replacing traditional pen-and-paper tools in many applications, but not in the case of the designer's sketchbook. In this paper, we explore the tradeoffs inherent in replacing such paper sketchbooks with digital tablets in terms of two major tasks: tracing and free-hand sketching. Given the importance of the pen for sketching, we also study the impact of using a blunt-and-soft-tipped capacitive stylus in tablet settings. We thus conducted experiments to evaluate three sketch media on the above tasks: pen-paper, finger-tablet, and stylus-tablet. We analyzed the tracing data with respect to speed and accuracy, and the quality of the free-hand sketches through a crowdsourced survey. The pen-paper and stylus-tablet media both performed significantly better than the finger-tablet medium in accuracy, while the pen-paper sketches were rated significantly higher in quality compared to both tablet interfaces. A follow-up study comparing the performance of this stylus with a sharp, hard-tipped version showed no significant difference in tracing performance, though participants preferred the sharp tip for sketching.
-
pdf PolyChrome: A Cross-Device Framework for Collaborative Web Visualization ↗
We present PolyChrome, an application framework for creating web-based collaborative visualizations that can span multiple devices. The framework supports (1) co-browsing new web applications as well as legacy websites with no migration costs (i.e., a distributed web browser); (2) an API to develop new web applications that can synchronize the UI state on multiple devices to support synchronous and asynchronous collaboration; and (3) maintenance of state and input events on a server to handle common issues with distributed applications such as consistency management, conflict resolution, and undo operations. We describe PolyChrome's general design, architecture, and implementation followed by application examples showcasing collaborative web visualizations created using the framework. Finally, we present performance results that suggest that PolyChrome adds minimal overhead compared to single-device applications.
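A minimal sketch of the server-side event maintenance (hypothetical message shapes, not the PolyChrome API): the server orders events into a log, replays the log to late joiners, and broadcasts new events to every connected device.

```typescript
// Sketch of server-side event maintenance for cross-device sync
// (hypothetical message shapes): a total order over UI events gives simple
// consistency, and replaying the log brings late-joining devices up to date.
interface UIEventMsg { device: string; seq?: number; type: string; payload: unknown }

class SyncServer {
  private log: UIEventMsg[] = [];
  private listeners = new Set<(e: UIEventMsg) => void>();

  // A new device registers a callback and receives the history to replay.
  connect(onEvent: (e: UIEventMsg) => void): UIEventMsg[] {
    this.listeners.add(onEvent);
    return [...this.log];
  }

  // A device submits an event; the server stamps it and broadcasts it.
  submit(e: UIEventMsg) {
    e.seq = this.log.length; // total order enables simple conflict resolution
    this.log.push(e);
    for (const l of this.listeners) l(e);
  }
}
```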
-
pdf skWiki: A Multimedia Sketching System for Collaborative Creativity ↗
We present skWiki, a web application framework for collaborative creativity in digital multimedia projects, including text, hand-drawn sketches, and photographs. skWiki overcomes common drawbacks of existing wiki software by providing a rich viewer/editor architecture for all media types that is integrated into the web browser itself, thus avoiding dependence on client-side editors. Instead of files, skWiki uses the concept of paths as trajectories of persistent state over time. This model has intrinsic support for collaborative editing, including cloning, branching, and merging paths edited by multiple contributors. We demonstrate skWiki's utility using a qualitative, sketching-based user study.
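The path model is essentially a directed acyclic graph of immutable states, which a few lines can sketch (assumed data structure, not skWiki's code):

```typescript
// Sketch of the "paths" model (assumed data structure): a path is a
// trajectory of immutable states; cloning branches from any state, and
// merging combines two paths' heads into a new state with two parents.
interface PathState<T> { value: T; parents: PathState<T>[] }

function commit<T>(value: T, ...parents: PathState<T>[]): PathState<T> {
  return { value, parents };
}

// Example: two contributors branch from a shared sketch and merge later.
const base = commit("initial sketch");
const alice = commit("added chair", base);          // branch 1
const bob = commit("added table", base);            // branch 2
const merged = commit("chair + table", alice, bob); // merge of both paths
```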
-
pdf Designing Peer-to-Peer Distributed User Interfaces: Case Studies on Building Distributed Applications ↗
Building a distributed user interface (DUI) application should ideally not require any additional effort beyond that necessary to build a non-distributed interface. In practice, however, DUI development is fraught with several technical challenges such as synchronization, resource management, and data transfer. In this paper, we present three case studies on building distributed user interface applications: a distributed media player for multiple displays and controls, a collaborative search system integrating a tabletop and mobile devices, and a multiplayer Tetris game for multi-surface use. While there exist several possible network architectures for such applications, our particular approach focuses on peer-to-peer (P2P) architectures. This focus leads to a number of challenges and opportunities. Drawing from these studies, we derive general challenges for P2P DUI development in terms of design, architecture, and implementation. We conclude with some general guidelines for practical DUI application development using peer-to-peer architectures.