-
Gesture and Action Discovery for Evaluating Virtual Environments with Semi-Supervised Segmentation of Telemetry Records
In this paper, we propose a novel pipeline for semi-supervised behavioral coding of videos of users testing a device or interface, with an eye toward human-computer interaction evaluation for virtual reality. Our system applies existing statistical techniques for time-series classification, including e-divisive change point detection and "Symbolic Aggregate approXimation" (SAX) with agglomerative hierarchical clustering, to 3D pose telemetry data. These techniques create classes of short segments of single-person video data: short actions of potential interest called "micro-gestures." A long short-term memory (LSTM) layer then learns these micro-gestures from pose features generated purely from video via a pretrained OpenPose convolutional neural network (CNN) to predict their occurrence in unlabeled test videos. We present and discuss the results from testing our system on the single-user pose videos of the CMU Panoptic Dataset.
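The abstract describes the pipeline only in prose. As a rough illustration of its unsupervised stage, here is a minimal Python sketch of SAX symbolization over fixed-length telemetry segments followed by agglomerative hierarchical clustering; it is not the authors' code, the segments stand in for the output of a change-point detector such as e-divisive, and every parameter value is an assumption.

```python
# Minimal sketch (not the paper's code): SAX symbolization of fixed-length
# pose-telemetry segments followed by agglomerative hierarchical clustering.
# Segment boundaries would come from a change-point detector (e.g. e-divisive);
# here the segments are assumed to be given as equal-length 1-D arrays.
import numpy as np
from scipy.stats import norm
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def sax_word(segment, n_symbols=8, alphabet_size=4):
    """Z-normalize, reduce with piecewise aggregate approximation, map to symbols."""
    x = (segment - segment.mean()) / (segment.std() + 1e-8)
    paa = x[: len(x) // n_symbols * n_symbols].reshape(n_symbols, -1).mean(axis=1)
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])  # e.g. [-0.67, 0, 0.67]
    return np.digitize(paa, breakpoints)            # one integer symbol per PAA frame

def cluster_segments(segments, n_clusters=10):
    """Group SAX words with agglomerative clustering (Hamming distance, average linkage)."""
    words = np.array([sax_word(s) for s in segments])
    Z = linkage(pdist(words, metric="hamming"), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")   # candidate micro-gesture labels

# Hypothetical usage: segments extracted from one joint's telemetry stream.
segments = [np.sin(np.linspace(0, k, 64)) + 0.1 * np.random.randn(64) for k in range(2, 12)]
print(cluster_segments(segments, n_clusters=3))
```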
-
Clinical Concept Value Sets and Interoperability in Health Data Analytics
This paper focuses on value sets as an essential component in the health analytics ecosystem. We discuss shared repositories of reusable value sets and offer recommendations for their further development and adoption. To motivate these contributions, we explain how value sets fit into specific analytic tasks and the health analytics landscape more broadly; their growing importance and ubiquity with the advent of Common Data Models, Distributed Research Networks, and the availability of higher-order, reusable analytic resources like electronic phenotypes and electronic clinical quality measures; the formidable barriers to value set reuse; and our introduction of a concept-agnostic orientation to vocabulary collections. The costs of ad hoc value set management and the benefits of value set reuse are described or implied throughout. Our standards, infrastructure, and design recommendations are not systematic or comprehensive but invite further work to support value set reuse for health analytics.
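To make the central notion concrete, here is a minimal sketch of a value set as a named, versioned collection of (vocabulary, code) pairs used to select records for an analytic task. This is not drawn from the paper, and the codes shown are illustrative placeholders rather than a vetted clinical value set.

```python
# Minimal sketch (not from the paper): a value set as a named, versioned
# collection of (vocabulary, code) pairs, used to select matching records.
# The codes below are illustrative placeholders, not a vetted clinical value set.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    vocabulary: str   # e.g. "ICD10CM", "SNOMED"
    code: str

@dataclass
class ValueSet:
    name: str
    version: str
    concepts: frozenset = field(default_factory=frozenset)

    def matches(self, vocabulary: str, code: str) -> bool:
        return Concept(vocabulary, code) in self.concepts

diabetes = ValueSet(
    name="Diabetes (illustrative)",
    version="2024-01",
    concepts=frozenset({Concept("ICD10CM", "E11.9"), Concept("SNOMED", "44054006")}),
)

# Hypothetical analytic use: build a cohort by filtering condition records.
records = [{"patient": 1, "vocabulary": "ICD10CM", "code": "E11.9"},
           {"patient": 2, "vocabulary": "ICD10CM", "code": "I10"}]
cohort = [r["patient"] for r in records if diabetes.matches(r["vocabulary"], r["code"])]
print(cohort)  # -> [1]
```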
-
How Do Sketching and Non-Sketching Actions Convey Design Intent?
Sketches are much more than marks on paper; they play a key role for designers both in ideation and problem-solving and in communication with other designers. Thus, the act of sketching is often enriched with annotations, references, and physical actions, such as gestures or speech—all of which constitute metadata about the designer’s reasoning. Conventional paper-based design notebooks cannot capture this rich metadata, but digital design notebooks can. To understand how and what data to capture, we conducted an observational study of design practitioners in which they explored design solutions for a set of problems. We recorded and coded their sketching and non-sketching actions that reflect their exploration of the design space. We then categorized the captured metadata and mapped observed physical actions to design intent. These findings inform the creation of future digital design notebooks that can better capture designers’ reasoning during sketching.
-
Observations and Reflections on Visualization Literacy at the Elementary School Level
In this article, we share our reflections on visualization literacy and how it might be better developed in early education. We base this on lessons we learned while studying how teachers instruct, and how students acquire, basic visualization principles and skills in elementary school. We use these findings to propose directions for future research on visualization literacy.
-
Metaviz: interactive statistical and visual analysis of metagenomic data
Large studies profiling microbial communities and their association with healthy or disease phenotypes are now commonplace. Processed data from many of these studies are publicly available, but significant effort is required for users to effectively organize, explore, and integrate them, limiting the utility of these rich data resources. Effective integrative and interactive visual and statistical tools to analyze many metagenomic samples can greatly increase the value of these data for researchers. We present Metaviz, a tool for interactive exploratory data analysis of annotated microbiome taxonomic community profiles derived from marker gene or whole metagenome shotgun sequencing. Metaviz is uniquely designed to address the challenge of browsing the hierarchical structure of metagenomic data features while rendering visualizations of data values that are dynamically updated in response to user navigation. We use Metaviz to provide the UMD Metagenome Browser web service, allowing users to browse and explore data for more than 7000 microbiomes from published studies. Users can also deploy Metaviz as a web service, or use it to analyze data through the metavizr package to interoperate with state-of-the-art analysis tools available through Bioconductor. Metaviz is free and open source with the code, documentation and tutorials publicly accessible.
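Metaviz and the metavizr package live in the R/Bioconductor ecosystem; as a language-neutral illustration of the navigation problem the abstract describes, the Python sketch below aggregates per-sample abundance counts up a taxonomic hierarchy to whatever level a user has navigated to. The taxa and counts are invented for illustration.

```python
# Minimal sketch (not Metaviz code): aggregating per-sample abundance counts
# up a taxonomic hierarchy to the level a user has navigated to.
from collections import defaultdict

# Each feature is a path through the taxonomy plus per-sample counts (invented).
features = [
    (("Bacteria", "Firmicutes", "Clostridia"),     {"s1": 120, "s2": 40}),
    (("Bacteria", "Firmicutes", "Bacilli"),        {"s1": 15,  "s2": 60}),
    (("Bacteria", "Bacteroidetes", "Bacteroidia"), {"s1": 200, "s2": 180}),
]

def aggregate_to_level(features, level):
    """Sum counts for all features sharing the same ancestor at `level` (0-based depth)."""
    totals = defaultdict(lambda: defaultdict(int))
    for path, counts in features:
        node = path[min(level, len(path) - 1)]
        for sample, n in counts.items():
            totals[node][sample] += n
    return {node: dict(samples) for node, samples in totals.items()}

# Navigating from phylum level (1) down to class level (2) re-aggregates the
# values that a linked visualization would then redraw.
print(aggregate_to_level(features, level=1))
# {'Firmicutes': {'s1': 135, 's2': 100}, 'Bacteroidetes': {'s1': 200, 's2': 180}}
print(aggregate_to_level(features, level=2))
```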
-
VisHive: Supporting Web-based Visualization through Ad-hoc Computational Clusters of Mobile Devices
Current web-based visualizations are designed for single computers and cannot make use of additional devices on the client side, even if today’s users often have access to several, such as a tablet, a smartphone, and a smartwatch. We present a framework for ad-hoc computational clusters that leverage these local devices for visualization computations. Furthermore, we present VisHive, a JavaScript toolkit that instantiates this framework for constructing web-based visualization applications that can transparently connect multiple devices---called cells---into such ad-hoc clusters---called hives---for local computation. Hives are formed either using a matchmaking service or through manual configuration. Cells are organized into a master-slave architecture, where the master provides the visual interface to the user and controls the slaves, and the slaves perform computation. VisHive is built entirely using current web technologies, runs in the native browser of each cell, and requires no specific software to be downloaded on the involved devices. We demonstrate VisHive using four distributed examples: a text analytics visualization, a database query for exploratory visualization, a
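VisHive itself is a JavaScript toolkit running in each cell's browser; the Python sketch below is only a language-neutral illustration of the master-slave division of a data-parallel computation across cells (shard the data, compute locally on each cell, merge at the master). The names and the word-count task are invented stand-ins.

```python
# Minimal sketch (not VisHive, which is a JavaScript toolkit): the master cell
# partitions a data-parallel visualization computation, each worker cell
# processes its shard, and the master merges the partial results.
from concurrent.futures import ProcessPoolExecutor  # stand-in for remote cells
from collections import Counter

def shard(data, n_cells):
    """Split the dataset into one shard per connected cell."""
    return [data[i::n_cells] for i in range(n_cells)]

def worker_task(shard_of_docs):
    """What a slave cell would run locally, e.g. word counts for a text-analytics view."""
    counts = Counter()
    for doc in shard_of_docs:
        counts.update(doc.lower().split())
    return counts

def master_merge(partials):
    """The master cell merges partial results and hands them to the visualization."""
    merged = Counter()
    for p in partials:
        merged.update(p)
    return merged

if __name__ == "__main__":
    docs = ["ad hoc clusters of devices", "clusters of mobile devices", "ad hoc computation"]
    with ProcessPoolExecutor(max_workers=2) as pool:   # two "cells"
        partials = list(pool.map(worker_task, shard(docs, 2)))
    print(master_merge(partials).most_common(3))
```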
-
ATOM: A Grammar for Unit Visualization
Unit visualizations are a family of visualizations where every data item is represented by a unique visual mark---a visual unit---during visual encoding. For certain datasets and tasks, unit visualizations can provide more information, better match the user's mental model, and enable novel interactions compared to traditional aggregated visualizations. Current visualization grammars cannot fully describe the unit visualization family. In this paper, we characterize the design space of unit visualizations to derive a grammar that can express them. The resulting grammar is called ATOM, and is based on passing data through a series of layout operations that recursively divide the output of previous operations until the size and position of every data point can be determined. We evaluate the expressive power of the grammar by using it both to describe existing unit visualizations and to suggest new ones.
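The paper defines ATOM's actual operators and syntax; the following is a simplified, hypothetical Python rendition of the core idea only: data flows through a chain of layout operators that recursively subdivide the parent container until every data item (unit) has its own position and size.

```python
# Simplified, hypothetical rendition of the core idea (not the paper's syntax):
# layout operators recursively divide the parent container until each unit is placed.
def group_by(rows, key):
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r)
    return groups

def flow_layout(container, items, cols=5, pad=1):
    """Place each item on a grid inside `container` = (x, y, w, h)."""
    x, y, w, h = container
    n_rows = -(-len(items) // cols)                 # ceiling division
    cw, ch = w / cols, h / n_rows
    return [dict(item, x=x + (i % cols) * cw + pad, y=y + (i // cols) * ch + pad,
                 w=cw - 2 * pad, h=ch - 2 * pad)
            for i, item in enumerate(items)]

def unit_layout(container, rows, spec):
    """Apply the first operator in `spec`; recurse into each sub-container."""
    if not spec:                                    # base case: lay out individual units
        return flow_layout(container, rows)
    op = spec[0]
    x, y, w, h = container
    groups = group_by(rows, op["groupby"])
    band = w / len(groups)                          # divide space horizontally per group
    units = []
    for i, (_, members) in enumerate(sorted(groups.items())):
        units += unit_layout((x + i * band, y, band, h), members, spec[1:])
    return units

data = [{"id": i, "category": "A" if i % 2 else "B"} for i in range(10)]
spec = [{"groupby": "category"}]                    # one grouping pass, then units
for unit in unit_layout((0, 0, 100, 50), data, spec)[:3]:
    print(unit)
```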
-
TopoText: Context-Preserving Semantic Exploration Across Multiple Spatial Scales
TopoText is a context-preserving technique for visualizing semantic data over multi-scale spatial aggregates to gain insight into spatial phenomena. Conventional exploration requires users to navigate across multiple scales but presents only the information related to the current scale. This limitation potentially adds interaction steps and cognitive load for users. TopoText renders multi-scale aggregates in a single visual display, combining novel text-based encoding and layout methods that draw labels along the boundary of, or filled within, the aggregates. The text itself not only summarizes the semantics at each individual scale, but also indicates the spatial coverage of the aggregates and their underlying hierarchical structure.
-
When David Meets Goliath: Combining Smartwatches with a Large Vertical Display for Visual Data Exploration
We explore the combination of smartwatches and a large interactive display to support visual data analysis. These two extremes of interactive surfaces are increasingly popular, but feature different characteristics—display and input modalities, personal/public use, performance, and portability. In this paper, we first identify possible roles for both devices and the interplay between them through an example scenario. We then propose a conceptual framework to enable analysts to explore data items, track interaction histories, and alter visualization configurations through mechanisms using both devices in combination. We validate an implementation of our framework through a formative evaluation and a user study. The results show that this device combination, compared to just a large display, allows users to develop complex insights more fluidly by leveraging the roles of the two devices. Finally, we report on the interaction patterns and interplay between the devices for visual exploration as observed during our study.
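The paper presents the framework conceptually; as one hypothetical way to picture the device interplay, the sketch below shows the kind of messages a smartwatch client might send to the large-display client to store and recall selections. The event names and fields are invented, not taken from the paper.

```python
# Hypothetical sketch (not the paper's implementation): messages a smartwatch
# might exchange with a large-display visualization client. Event names and
# fields are invented for illustration.
import json
from dataclasses import dataclass, asdict

@dataclass
class WatchEvent:
    source: str        # "watch" or "display"
    action: str        # e.g. "store_selection", "recall_selection", "apply_filter"
    payload: dict

def handle_on_display(event: WatchEvent, state: dict) -> dict:
    """Large-display side: update visualization state from a watch event."""
    if event.action == "store_selection":
        state.setdefault("history", []).append(event.payload["items"])
    elif event.action == "recall_selection":
        state["selection"] = state.get("history", [[]])[event.payload["index"]]
    elif event.action == "apply_filter":
        state["filter"] = event.payload["predicate"]
    return state

# A watch stores a selection of data items, then recalls it later on the display.
state = {}
for msg in [WatchEvent("watch", "store_selection", {"items": [3, 7, 9]}),
            WatchEvent("watch", "recall_selection", {"index": 0})]:
    state = handle_on_display(msg, state)
    print(json.dumps(asdict(msg)))
print(state)   # {'history': [[3, 7, 9]], 'selection': [3, 7, 9]}
```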