Publications from members of the Electronic Visualization Laboratory ...
2025
Sai Priya Jyothula, and Andrew E. Johnson,
Enhancing Consumer Insights through VR Metaphor Elicitation,
IEEE Transactions on Visualization and Computer Graphics,
2025
@article{JyJo25,title={Enhancing Consumer Insights through VR Metaphor Elicitation},author={Jyothula, Sai Priya and Johnson, Andrew E.},year={2025},journal={IEEE Transactions on Visualization and Computer Graphics},pages={1--10},doi={10.1109/tvcg.2025.3549905},keywords={Interviews;Virtual environments;Cognitive science;Visualization;Three-dimensional displays;Cognition;Probes;Faces;Affordances;Training;Human computer interaction (HCI);virtual reality (VR);collaborative virtual environments;asymmetric VR;VR applications;embodied interaction;presence;immersion;perception and cognition;deep metaphors;consumer research;Zaltman's metaphor elicitation technique (ZMET)},}
Luc Renambot, G. Elisabeta Marai, Daria Tsoupikova, Jonas Talandis, Michael E. Papka, Fabio Miranda, Nikita Soni, Dana Plepys, Lance Long, Maxine Brown, Andrew Johnson, Daniel Sandin, Thomas DeFanti, and Jason Leigh,
Immersive Analytics at the Electronic Visualization Laboratory,
In IEEE VR 2025 Workshop: Immersive Visualization Laboratory (IVL),
Mar,
2025
@inproceedings{ReMaTs25,author={Renambot, Luc and Marai, G. Elisabeta and Tsoupikova, Daria and Talandis, Jonas and Papka, Michael E. and Miranda, Fabio and Soni, Nikita and Plepys, Dana and Long, Lance and Brown, Maxine and Johnson, Andrew and Sandin, Daniel and DeFanti, Thomas and Leigh, Jason},title={Immersive Analytics at the Electronic Visualization Laboratory},booktitle={IEEE VR 2025 Workshop: Immersive Visualization Laboratory (IVL)},address={Saint-Malo, France},year={2025},month=mar,url={https://sites.google.com/view/ivl-workshop3},}
Andrew Wentzel, Serageldin Attia, Xinhua Zhang, Guadalupe Canahuate, Clifton David Fuller, and G. Elisabeta Marai,
DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer,
IEEE Transactions on Visualization and Computer Graphics,
Jan,
2025
Digital twin models are of high interest to Head and Neck Cancer (HNC) oncologists, who have to navigate a series of complex treatment decisions that weigh the efficacy of tumor control against toxicity and mortality risks. Evaluating individual risk profiles necessitates a deeper understanding of the interplay between different factors such as patient health, spatial tumor location and spread, and risk of subsequent toxicities that cannot be adequately captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze detailed risk profiles for each patient and decide on a treatment plan. DITTO relies on a sequential Deep Reinforcement Learning digital twin (DT) to deliver personalized estimates of both long-term and short-term disease outcome and toxicity risk for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several visual explainability methods to promote clinical trust and encourage healthy skepticism when using our system. We evaluate the efficacy of DITTO through quantitative evaluation of performance and case studies with qualitative feedback. Finally, we discuss design lessons for developing visual XAI applications for clinical end users.
@article{WeAtZh25,title={{DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer}},author={Wentzel, Andrew and Attia, Serageldin and Zhang, Xinhua and Canahuate, Guadalupe and Fuller, Clifton David and Marai, G. Elisabeta},year={2025},month=jan,journal={IEEE Transactions on Visualization \& Computer Graphics},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},volume={31},number={01},pages={65--75},doi={10.1109/tvcg.2024.3456160},issn={1941-0506},url={https://doi.ieeecomputersociety.org/10.1109/TVCG.2024.3456160},keywords={Digital twins;Visualization;Computational modeling;Data visualization;Data models;Tumors;Chemotherapy},}
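To make the digital-twin idea concrete, here is a minimal sketch of the query pattern such a system supports; the patient features, treatment options, and untrained stand-in network below are all hypothetical and do not reflect DITTO's actual model:

```python
# Illustrative only: a learned head maps an encoded patient state to
# per-treatment outcome and toxicity risks so tradeoffs can be compared.
import torch
import torch.nn as nn

TREATMENTS = ["surgery", "chemo+RT", "RT alone"]   # hypothetical options

class TwinHead(nn.Module):
    # maps a patient-state vector to [tumor-control, toxicity] risk
    # estimates for each treatment option
    def __init__(self, n_features=12):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                 nn.Linear(32, len(TREATMENTS) * 2))

    def forward(self, state):
        out = torch.sigmoid(self.net(state))        # risks in [0, 1]
        return out.reshape(-1, len(TREATMENTS), 2)

twin = TwinHead()                        # untrained stand-in for the DT
patient = torch.randn(1, 12)             # encoded health + tumor features
risks = twin(patient)[0]
for name, (control, toxicity) in zip(TREATMENTS, risks.tolist()):
    print(f"{name:10s} control={control:.2f} toxicity={toxicity:.2f}")
```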
Dian Jia, Xiaoqian Ruan, Kun Xia, Zhiming Zou, Le Wang, and Wei Tang,
Analysis-by-Synthesis Transformer for Single-View 3D Reconstruction,
In Computer Vision – ECCV 2024,
2025
Deep learning approaches have made significant success in single-view 3D reconstruction, but they often rely on expensive 3D annotations for training. Recent efforts tackle this challenge by adopting an analysis-by-synthesis paradigm to learn 3D reconstruction with only 2D annotations. However, existing methods face limitations in both shape reconstruction and texture generation. This paper introduces an innovative Analysis-by-Synthesis Transformer that addresses these limitations in a unified framework by effectively modeling pixel-to-shape and pixel-to-texture relationships. It consists of a Shape Transformer and a Texture Transformer. The Shape Transformer employs learnable shape queries to fetch pixel-level features from the image, thereby achieving high-quality mesh reconstruction and recovering occluded vertices. The Texture Transformer employs texture queries for non-local gathering of texture information and thus eliminates the incorrect inductive bias. Experimental results on CUB-200-2011 and ShapeNet datasets demonstrate superior performance in shape reconstruction and texture generation compared to previous methods. The code is available at https://github.com/DianJJ/AST.
@inproceedings{JiRuXi24,title={Analysis-by-Synthesis Transformer for Single-View 3D Reconstruction},author={Jia, Dian and Ruan, Xiaoqian and Xia, Kun and Zou, Zhiming and Wang, Le and Tang, Wei},year={2025},booktitle={Computer Vision -- ECCV 2024},publisher={Springer Nature Switzerland},address={Cham},pages={259--277},isbn={978-3-031-72664-4},editor={Leonardis, Ale{\v{s}} and Ricci, Elisa and Roth, Stefan and Russakovsky, Olga and Sattler, Torsten and Varol, G{\"u}l},}
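A minimal sketch of the Shape Transformer's central mechanism as the abstract describes it: learnable per-vertex shape queries gather pixel-level image features via cross-attention, then regress 3D vertex positions. Dimensions, layer counts, and names are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class ShapeQueryDecoder(nn.Module):
    def __init__(self, num_vertices=642, dim=256):
        super().__init__()
        # one learnable query per mesh vertex
        self.queries = nn.Parameter(torch.randn(num_vertices, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.to_xyz = nn.Linear(dim, 3)   # regress a 3D position per vertex

    def forward(self, pixel_feats):
        # pixel_feats: (B, H*W, dim) flattened CNN/ViT feature map
        b = pixel_feats.shape[0]
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # each vertex query fetches the pixels relevant to it, including
        # evidence for occluded vertices
        out, _ = self.attn(q, pixel_feats, pixel_feats)
        return self.to_xyz(out)           # (B, num_vertices, 3)

feats = torch.randn(2, 32 * 32, 256)      # dummy image features
verts = ShapeQueryDecoder()(feats)
print(verts.shape)                        # torch.Size([2, 642, 3])
```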
Gustavo Moreira, Maryam Hosseini, Carolina Veiga, Lucas Alexandre, Nicola Colaninno, Daniel Oliveira, Nivan Ferreira, Marcos Lage, and Fabio Miranda,
Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics,
IEEE Transactions on Visualization and Computer Graphics,
Jan,
2025
Over the past decade, several urban visual analytics systems and tools have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these tools have been designed through collaborations with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. However, the design, implementation, and practical use of these tools still rely on siloed approaches, resulting in bespoke systems that are difficult to reproduce and extend. At the design level, these tools undervalue rich data workflows from urban experts, typically treating them only as data providers and evaluators. At the implementation level, they lack interoperability with other technical frameworks. At the practical use level, they tend to be narrowly focused on specific fields, inadvertently creating barriers to cross-domain collaboration. To address these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine data preprocessing, management, and visualization stages while tracking the provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse set of usage scenarios targeting urban accessibility, urban microclimate, and sunlight access. These scenarios use different types of data and domain methodologies to illustrate Curio’s flexibility in tackling pressing societal challenges. Curio is available at urbantk.org/curio.
@article{MoHoVe24,title={Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics},author={Moreira, Gustavo and Hosseini, Maryam and Veiga, Carolina and Alexandre, Lucas and Colaninno, Nicola and de Oliveira, Daniel and Ferreira, Nivan and Lage, Marcos and Miranda, Fabio},year={2025},month=jan,journal={IEEE Transactions on Visualization and Computer Graphics},volume={31},number={1},pages={1224--1234},doi={10.1109/tvcg.2024.3456353},issn={1941-0506},keywords={Data visualization;Visual analytics;Collaboration;Three-dimensional displays;Urban areas;Codes;Pipelines;Urban analytics;urban data;spatial data;dataflow;provenance;visualization framework;visualization system},}
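A toy sketch of the dataflow model the abstract describes, with a hypothetical API rather than Curio's actual interface: nodes wrap processing steps, edges pass data downstream, and each run is logged for provenance:

```python
import datetime

class Node:
    def __init__(self, name, fn):
        self.name, self.fn, self.downstream = name, fn, []

    def __rshift__(self, other):          # a >> b connects two nodes
        self.downstream.append(other)
        return other

    def run(self, data, provenance):
        out = self.fn(data)
        provenance.append((datetime.datetime.now().isoformat(), self.name))
        for node in self.downstream:
            out = node.run(out, provenance)
        return out

# a toy urban pipeline: load -> filter -> aggregate
load = Node("load", lambda _: [{"zone": "A", "temp": 31}, {"zone": "B", "temp": 36}])
hot = Node("filter_hot", lambda rows: [r for r in rows if r["temp"] > 32])
count = Node("count", len)
load >> hot >> count

prov = []
result = load.run(None, prov)
print(result)    # 1
print(prov)      # ordered record of which stage ran when
```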
Haoyue Shi, Le Wang, Sanping Zhou, Gang Hua, and Wei Tang,
Learning Anomalies with Normality Prior for Unsupervised Video Anomaly Detection,
In Computer Vision – ECCV 2024,
2025
Unsupervised video anomaly detection (UVAD) aims to detect abnormal events in videos without any annotations. It remains challenging because anomalies are rare, diverse, and usually not well-defined. Existing UVAD methods are purely data-driven and perform unsupervised learning by identifying various abnormal patterns in videos. Since these methods largely rely on the feature representation and data distribution, they can only learn salient anomalies that are substantially different from normal events but ignore the less distinct ones. To address this challenge, this paper pursues a different approach that leverages data-irrelevant prior knowledge about normal and abnormal events for UVAD. We first propose a new normality prior for UVAD, suggesting that the start and end of a video are predominantly normal. We then propose normality propagation, which propagates normal knowledge based on relationships between video snippets to estimate the normal magnitudes of unlabeled snippets. Finally, unsupervised learning for anomaly detection is performed based on the propagated labels and a new loss re-weighting method. These components are complementary to normality propagation and mitigate the negative impact of incorrectly propagated labels. Extensive experiments on the ShanghaiTech and UCF-Crime benchmarks demonstrate the superior performance of our method. The code is available at https://github.com/shyern/LANP-UVAD.git.
@inproceedings{SiWaZh24,title={Learning Anomalies with Normality Prior for Unsupervised Video Anomaly Detection},author={Shi, Haoyue and Wang, Le and Zhou, Sanping and Hua, Gang and Tang, Wei},year={2025},booktitle={Computer Vision -- ECCV 2024},publisher={Springer Nature Switzerland},address={Cham},pages={163--180},isbn={978-3-031-72658-3},editor={Leonardis, Ale{\v{s}} and Ricci, Elisa and Roth, Stefan and Russakovsky, Olga and Sattler, Torsten and Varol, G{\"u}l},}
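A small sketch of the normality-prior idea, under assumptions: snippets at the start and end of a video are seeded as normal, and normality scores diffuse to unlabeled snippets through feature similarity. The features and propagation rule are illustrative, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 20, 8
feats = rng.normal(size=(T, D))            # per-snippet features

# prior: first/last snippets of the video are predominantly normal
normality = np.zeros(T)
normality[:2] = 1.0
normality[-2:] = 1.0

# similarity-weighted propagation (a few diffusion steps)
sim = feats @ feats.T
np.fill_diagonal(sim, -np.inf)             # no self-similarity
W = np.exp(sim / sim[np.isfinite(sim)].std())
W /= W.sum(axis=1, keepdims=True)
for _ in range(10):
    propagated = W @ normality
    normality = np.maximum(normality, 0.5 * propagated)  # keep seeds strong

print(np.round(normality, 2))   # estimated normal magnitude per snippet
```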
2024
Idunnuoluwa A. Adeniji, Joseph A. Insley, David Joiner, Victor A. Mateevitsi, Michael E. Papka, and Silvio Rizzi,
Exploring Large-Scale Scientific Data in Virtual Reality,
In 2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV),
2024
@inproceedings{AdInJo24,title={Exploring Large-Scale Scientific Data in Virtual Reality},author={Adeniji, Idunnuoluwa A. and Insley, Joseph A. and Joiner, David and Mateevitsi, Victor A. and Papka, Michael E. and Rizzi, Silvio},year={2024},booktitle={2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV)},pages={75--76},doi={10.1109/ldav64567.2024.00019},keywords={Three-dimensional displays;Pipelines;Data visualization;Focusing;Virtual reality;Games;Software;Hardware;Streams;Engines},}
Gautham Dharuman, Kyle Hippe, Alexander Brace, Sam Foreman, Väinä Hatanpää, Varuni K. Sastry, Huihuo Zheng, Logan Ward, Servesh Muralidharan, Archit Vasan, Bharat Kale, Carla M. Mann, Heng Ma, Yun-Hsuan Cheng, Yuliana Zamora, Shengchao Liu, Chaowei Xiao, Murali Emani, Tom Gibbs, Mahidhar Tatineni, Deepak Canchi, Jerome Mitchell, Koichi Yamada, Maria Garzaran, Michael E. Papka, Ian Foster, Rick Stevens, Anima Anandkumar, Venkatram Vishwanath, and Arvind Ramanathan,
MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design Workflows with Direct Preference Optimization,
In SC24: International Conference for High Performance Computing, Networking, Storage and Analysis,
Nov,
2024
We present a scalable, end-to-end workflow for protein design. By augmenting protein sequences with natural language descriptions of their biochemical properties, we train generative models that can be preferentially aligned with protein fitness landscapes. Through complex experimental- and simulation-based observations, we integrate these measures as preferred parameters for generating new protein variants and demonstrate our workflow on five diverse supercomputers. We achieve >1 ExaFLOPS sustained performance in mixed precision on each supercomputer and a maximum sustained performance of 4.11 ExaFLOPS and peak performance of 5.57 ExaFLOPS. We establish the scientific performance of our model on two tasks: (1) across a predetermined benchmark dataset of deep mutational scanning experiments to optimize the fitness-determining mutations in the yeast protein HIS7, and (2) in optimizing the design of the enzyme malate dehydrogenase to achieve lower activation barriers (and therefore increased catalytic rates) using simulation data. Our implementation thus sets high watermarks for multimodal protein design workflows.
@inproceedings{DhHiBr24,title={{MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design Workflows with Direct Preference Optimization}},author={Dharuman, Gautham and Hippe, Kyle and Brace, Alexander and Foreman, Sam and Hatanpää, Väinä and Sastry, Varuni K. and Zheng, Huihuo and Ward, Logan and Muralidharan, Servesh and Vasan, Archit and Kale, Bharat and Mann, Carla M. and Ma, Heng and Cheng, Yun-Hsuan and Zamora, Yuliana and Liu, Shengchao and Xiao, Chaowei and Emani, Murali and Gibbs, Tom and Tatineni, Mahidhar and Canchi, Deepak and Mitchell, Jerome and Yamada, Koichi and Garzaran, Maria and Papka, Michael E. and Foster, Ian and Stevens, Rick and Anandkumar, Anima and Vishwanath, Venkatram and Ramanathan, Arvind},year={2024},month=nov,booktitle={SC24: International Conference for High Performance Computing, Networking, Storage and Analysis},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},pages={74--86},doi={10.1109/sc41406.2024.00013},url={https://doi.ieeecomputersociety.org/10.1109/SC41406.2024.00013},}
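For reference, the Direct Preference Optimization objective at the heart of the workflow, sketched for a pair of protein sequences where the chosen variant has higher measured fitness; the log-probabilities below stand in for per-sequence sums of token log-likelihoods from the generative model and a frozen reference copy:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # implied rewards: how much the policy has moved away from the
    # reference model on each sequence
    chosen_rewards = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_rewards = beta * (policy_rejected_logp - ref_rejected_logp)
    # maximize the likelihood that the fitter variant is preferred
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

loss = dpo_loss(torch.tensor([-10.2]), torch.tensor([-11.5]),
                torch.tensor([-10.8]), torch.tensor([-11.0]))
print(float(loss))
```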
Leonardo Ferreira, Gustavo Moreira, Maryam Hosseini, Marcos Lage, Nivan Ferreira, and Fabio Miranda,
Assessing the landscape of toolkits, frameworks, and authoring tools for urban visual analytics systems,
Computers & Graphics,
2024
Over the past decade, there has been a significant increase in the development of visual analytics systems dedicated to addressing urban issues. These systems distill intricate urban analysis workflows into intuitive, interactive visual representations and interfaces, enabling users to explore, understand, and derive insights from large and complex data, including street-level imagery, street networks, and building geometries. Developing urban visual analytics systems, however, is a challenging endeavor that requires considerable programming expertise and interaction between various multidisciplinary stakeholders. This situation often leads to monolithic and isolated prototypes that are hard to reproduce, combine, or extend. Concurrently, there has been an increase in the availability of general and urban-specific toolkits, frameworks, and authoring tools that are open source and abstract away the need to implement low-level visual analytics functionalities. This paper provides a hierarchical taxonomy of urban visual analytics systems to contextualize how they are usually designed, implemented, and evaluated. We develop this taxonomy across three distinct levels (i.e., dimensions, categories, and tags), juxtaposing visualization with analytics, data, and system dimensions. We then assess the extent to which current open-source toolkits, frameworks, and authoring tools can effectively support the development of components tailored to urban visual analytics, identifying their strengths and limitations in addressing the unique challenges posed by urban data. In doing so, we offer a roadmap that can guide the effective employment of existing resources and chart a pathway for developing and refining future systems.
@article{FeMoHo24,title={Assessing the landscape of toolkits, frameworks, and authoring tools for urban visual analytics systems},author={Ferreira, Leonardo and Moreira, Gustavo and Hosseini, Maryam and Lage, Marcos and Ferreira, Nivan and Miranda, Fabio},year={2024},journal={Computers \& Graphics},volume={123},pages={104013},doi={10.1016/j.cag.2024.104013},issn={0097-8493},url={https://www.sciencedirect.com/science/article/pii/S0097849324001481},keywords={Visual analytics, Visualization toolkits, Visualization grammars, Visualization authoring, Urban visual analytics},}
Colleen Heinemann, Jefferson Amstutz, Joseph A. Insley, Victor A. Mateevitsi, Michael E. Papka, and Silvio Rizzi,
Graphical Representation Through a User Interface for In Situ Scientific Visualization with Ascent,
In 2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV),
2024
@inproceedings{HeJeIn24,title={Graphical Representation Through a User Interface for In Situ Scientific Visualization with Ascent},author={Heinemann, Colleen and Amstutz, Jefferson and Insley, Joseph A. and Mateevitsi, Victor A. and Papka, Michael E. and Rizzi, Silvio},year={2024},booktitle={2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV)},pages={71--72},doi={10.1109/ldav64567.2024.00017},keywords={Data analysis;Pipelines;Data visualization;Organizations;Software;Graphical user interfaces;In situ visualization;graphical user interface (GUI);HPC simulations;scientific visualization;Ascent},}
Andrew E. Johnson, Luc Renambot, G. Elisabeta Marai, Daria Tsoupikova, Michael E. Papka, Lance Long, Dana Plepys, Jonas Talandis, Maxine D. Brown, Jason Leigh, Daniel J. Sandin, and Thomas A. DeFanti,
Electronic Visualization Laboratory’s 50th Anniversary Retrospective: Look to the Future, Build on the Past,
PRESENCE: Virtual and Augmented Reality,
Jul,
2024
September 2023 marks the 50th anniversary of the Electronic Visualization Laboratory (EVL) at the University of Illinois Chicago (UIC). EVL’s introduction of the CAVE Automatic Virtual Environment in 1992, the first widely replicated, projection-based, walk-in, virtual-reality (VR) system in the world, put EVL at the forefront of collaborative, immersive data exploration and analytics. However, the journey did not begin then. Since its founding in 1973, EVL has been developing tools and techniques for real-time, interactive visualizations—pillars of VR. But EVL’s culture is also relevant to its successes, as it has always been an interdisciplinary lab that fosters teamwork, where each person’s expertise contributes to the development of the necessary tools, hardware, system software, applications, and human interface models to solve problems. Over the years, as multidisciplinary collaborations evolved and advanced scientific instruments and data resources were distributed globally, the need to access and share data and visualizations while working with colleagues, local and remote, synchronous and asynchronous, also became important fields of study. This paper is a retrospective of EVL’s past 50 years that surveys the many networked, immersive, collaborative visualization and VR systems and applications it developed and deployed, as well as lessons learned and future plans.
@article{JoReMa24,title={Electronic {Visualization} {Laboratory}'s 50th {Anniversary} {Retrospective}: {Look} to the {Future}, {Build} on the {Past}},author={Johnson, Andrew E. and Renambot, Luc and Marai, G. Elisabeta and Tsoupikova, Daria and Papka, Michael E. and Long, Lance and Plepys, Dana and Talandis, Jonas and Brown, Maxine D. and Leigh, Jason and Sandin, Daniel J. and DeFanti, Thomas A.},year={2024},month=jul,journal={PRESENCE: Virtual and Augmented Reality},volume={33},pages={77--127},doi={10.1162/pres_a_00421},issn={1054-7460},url={https://doi.org/10.1162/pres\_a\_00421},note={\_eprint: https://direct.mit.edu/pvar/article-pdf/doi/10.1162/pres\_a\_00421/2467669/pres\_a\_00421.pdf},}
Ratanond Koonchanok, Michael E. Papka, and Khairi Reda,
Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations,
IEEE Transactions on Visualization and Computer Graphics,
2024
People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Prior research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals’ accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making.
@article{KoPaRe24,title={Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations},author={Koonchanok, Ratanond and Papka, Michael E. and Reda, Khairi},year={2024},journal={IEEE Transactions on Visualization and Computer Graphics},pages={1--11},doi={10.1109/tvcg.2024.3456182},keywords={Bayes methods;Data visualization;Uncertainty;Correlation;Accuracy;Decision making;Biological system modeling;Visual inference;statistical rationality;human-machine collaboration},}
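A small sketch of the comparison the study sets up: an idealized Bayesian agent with a prior over the data-generating model's mean shrinks its estimate away from a spuriously extreme sample, the kind of "internal model" filtering the paper attributes to human viewers. The numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
true_mu, sigma = 0.0, 1.0
prior_mu, prior_sd = 0.0, 0.5          # agent's prior over the mean

sample = rng.normal(true_mu, sigma, size=8)
sample += 1.5                          # make the sample spuriously extreme

# conjugate normal-normal posterior mean
n = len(sample)
post_prec = 1 / prior_sd**2 + n / sigma**2
post_mu = (prior_mu / prior_sd**2 + sample.sum() / sigma**2) / post_prec

print(f"sample mean: {sample.mean():+.2f}")   # fooled by the extreme draw
print(f"posterior mean: {post_mu:+.2f}")      # shrunk toward the prior
```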
Mingyuan Liu, Jicong Zhang, and Wei Tang,
Imbalance-Aware Discriminative Clustering for Unsupervised Semantic Segmentation,
International Journal of Computer Vision,
Oct,
2024
Unsupervised semantic segmentation (USS) aims at partitioning an image into semantically meaningful segments by learning from a collection of unlabeled images. The effectiveness of current approaches is plagued by difficulties in coordinating representation learning and pixel clustering, modeling the varying feature distributions of different classes, handling outliers and noise, and addressing the pixel class imbalance problem. This paper introduces a novel approach, termed Imbalance-Aware Dense Discriminative Clustering (IDDC), for USS, which addresses all these difficulties in a unified framework. Different from existing approaches, which learn USS in two stages (i.e., generating and updating pseudo masks, or refining and clustering embeddings), IDDC learns pixel-wise feature representation and dense discriminative clustering in an end-to-end and self-supervised manner, through a novel objective function that transfers the manifold structure of pixels in the embedding space of a vision Transformer (ViT) to the label space while tolerating the noise in pixel affinities. During inference, the trained model directly outputs the classification probability of each pixel conditioned on the image. In addition, this paper proposes a new regularizer, based on the Weibull function, to handle pixel class imbalance and cluster degeneration in a single shot. Experimental results demonstrate that IDDC significantly outperforms all previous USS methods on three real-world datasets, COCO-Stuff-27, COCO-Stuff-171, and Cityscapes. Extensive ablation studies validate the effectiveness of each design. Our code is available at https://github.com/MY-LIU100101/IDDC.
@article{LiZhTa24,title={Imbalance-Aware Discriminative Clustering for Unsupervised Semantic Segmentation},author={Liu, Mingyuan and Zhang, Jicong and Tang, Wei},year={2024},month=oct,journal={International Journal of Computer Vision},volume={132},number={10},pages={4362--4378},doi={10.1007/s11263-024-02083-x},issn={1573-1405},url={https://doi.org/10.1007/s11263-024-02083-x},}
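A sketch, not the paper's exact objective, of the core idea of transferring manifold structure from the embedding space to the label space: pixel pairs with high feature affinity are pushed to agree in their cluster assignments, training a dense clustering head end to end:

```python
import torch
import torch.nn.functional as F

N, D, K = 256, 64, 27                 # pixels, feature dim, clusters
feats = torch.randn(N, D)             # frozen ViT pixel embeddings
head = torch.nn.Linear(D, K)          # dense clustering head

affinity = F.normalize(feats, dim=1) @ F.normalize(feats, dim=1).T
target = (affinity > 0.4).float()     # noisy "same class" evidence

p = F.softmax(head(feats), dim=1)
agreement = p @ p.T                   # probability two pixels share a label
loss = F.binary_cross_entropy(agreement.clamp(1e-6, 1 - 1e-6), target)
loss.backward()                       # trains the head from pixel affinities
print(float(loss))
```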
Thomas Marrinan, Ethan Honzik, Hal L. N. Brynteson, and Michael E. Papka,
Image Synthesis from a Collection of Depth Enhanced Panoramas: Creating Interactive Extended Reality Experiences from Static Images,
In Proceedings of the 2024 ACM International Conference on Interactive Media Experiences,
2024
Stereoscopic 360° panoramas are a popular modality for creating cinematic virtual reality experiences. However, media in this format are typically static entities that consumers passively view. This is because objects visible in the scene, and camera properties such as depth of field, must be determined at capture time. We propose a real-time technique for dynamically synthesizing stereoscopic 360° panoramas from a collection of depth enhanced monoscopic panoramas. Using depth information, pixels can be transformed into three-dimensional space and re-projected to a different camera location. Our technique allows for head-motion parallax, dynamic depth of field, and integration of properly occluded virtual objects into the captured scene. Our technique shows minimal discrepancies compared to ground truth captures and reduces error when compared to existing stereoscopic 360° panoramic synthesis techniques. Additionally, our technique makes creating interactive extended reality experiences more accessible since monoscopic 360° cameras are much more common than their stereoscopic counterparts.
@inproceedings{MaHoBr24,title={Image Synthesis from a Collection of Depth Enhanced Panoramas: Creating Interactive Extended Reality Experiences from Static Images},author={Marrinan, Thomas and Honzik, Ethan and Brynteson, Hal L. N. and Papka, Michael E.},year={2024},booktitle={Proceedings of the 2024 ACM International Conference on Interactive Media Experiences},location={Stockholm, Sweden},publisher={Association for Computing Machinery},address={New York, NY, USA},series={IMX '24},pages={64–74},doi={10.1145/3639701.3656312},isbn={9798400705038},url={https://doi.org/10.1145/3639701.3656312},numpages={11},keywords={360° panoramas, Cinematic virtual reality, Image-based rendering, Omni-directional stereo},}
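The core reprojection step, sketched with NumPy under simplifying assumptions (low resolution, nearest-pixel coordinates, no occlusion handling): lift each equirectangular pixel to a 3D point using its depth, offset to a new eye position, and re-project:

```python
import numpy as np

H, W = 64, 128
rng = np.random.default_rng(1)
depth = 2.0 + rng.random((H, W))                 # meters, per pixel

# pixel grid -> spherical directions
lon = (np.arange(W) + 0.5) / W * 2 * np.pi - np.pi
lat = np.pi / 2 - (np.arange(H) + 0.5) / H * np.pi
lon, lat = np.meshgrid(lon, lat)
dirs = np.stack([np.cos(lat) * np.sin(lon),
                 np.sin(lat),
                 np.cos(lat) * np.cos(lon)], axis=-1)

points = dirs * depth[..., None]                 # 3D scene points
eye = np.array([0.032, 0.0, 0.0])                # ~half an IPD, for one eye
rel = points - eye                               # points seen from new eye

# re-project to the new panorama
r = np.linalg.norm(rel, axis=-1)
new_lon = np.arctan2(rel[..., 0], rel[..., 2])
new_lat = np.arcsin(np.clip(rel[..., 1] / r, -1, 1))
u = ((new_lon + np.pi) / (2 * np.pi) * W).astype(int) % W
v = ((np.pi / 2 - new_lat) / np.pi * H).astype(int).clip(0, H - 1)
print(u.shape, v.shape)   # target pixel coordinates for each source pixel
```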
Thomas Marrinan, Victor A. Mateevitsi, Madeleine Moeller, Alina Kanayinkal, and Michael E. Papka,
2023 IEEE Scientific Visualization Contest Winner: VisAnywhere: Developing Multiplatform Scientific Visualization Applications,
IEEE Computer Graphics and Applications,
Sep,
2024
Scientists often explore and analyze large-scale scientific simulation data by leveraging 2-D and 3-D visualizations. The data and tasks can be complex and therefore best supported using myriad display technologies, from mobile devices to large high-resolution display walls to virtual reality headsets. Using a simulation of neuron connections in the human brain provided for the 2023 IEEE Scientific Visualization Contest, we present our work leveraging various web technologies to create a multiplatform scientific visualization application. Users can spread visualization and interaction across multiple devices to support flexible user interfaces and both colocated and remote collaboration. Drawing inspiration from responsive web design principles, this work demonstrates that a single codebase can be adapted to develop scientific visualization applications that operate everywhere.
@article{MaMaMo24,title={{ 2023 IEEE Scientific Visualization Contest Winner: VisAnywhere: Developing Multiplatform Scientific Visualization Applications }},author={Marrinan, Thomas and Mateevitsi, Victor A. and Moeller, Madeleine and Kanayinkal, Alina and Papka, Michael E.},year={2024},month=sep,journal={IEEE Computer Graphics and Applications},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},volume={44},number={05},pages={93--103},doi={10.1109/mcg.2024.3444460},issn={1558-1756},url={https://doi.ieeecomputersociety.org/10.1109/MCG.2024.3444460},keywords={Neurons;Data visualization;Collaboration;Three-dimensional displays;Calcium;Web design;Task analysis},}
Victor A. Mateevitsi, Michael E. Papka, and Khairi Reda,
Science in a Blink: Supporting Ensemble Perception in Scalar Fields,
In 2024 IEEE Visualization and Visual Analytics (VIS),
Oct,
2024
Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within split-seconds. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people’s ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic, luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people’s summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.
@inproceedings{MaPaKh24,title={{Science in a Blink: Supporting Ensemble Perception in Scalar Fields}},author={Mateevitsi, Victor A. and Papka, Michael E. and Reda, Khairi},year={2024},month=oct,booktitle={2024 IEEE Visualization and Visual Analytics (VIS)},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},pages={216--220},doi={10.1109/vis55277.2024.00051},url={https://doi.ieeecomputersociety.org/10.1109/VIS55277.2024.00051},keywords={Image color analysis;Visual analytics;Aggregates;Data visualization;Spatial databases;Reliability},}
Victor A. Mateevitsi, Andres Sewell, Mathis Bode, Paul Fischer, Jens Henrik Göbbert, Joseph A. Insley, Ioannis Kavroulakis, Damaskinos Konioris, Yu-Hsiang Lan, Misun Min, Dimitrios Papageorgiou, Michael E. Papka, Steve Petruzza, Silvio Rizzi, and Ananias Tomboulides,
Visuals on the House: Optimizing HPC Workflows with No-Cost CPU Visualization,
In 2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV),
2024
@inproceedings{MaSeBo24,title={Visuals on the House: Optimizing HPC Workflows with No-Cost CPU Visualization},author={Mateevitsi, Victor A. and Sewell, Andres and Bode, Mathis and Fischer, Paul and Göbbert, Jens Henrik and Insley, Joseph A. and Kavroulakis, Ioannis and Konioris, Damaskinos and Lan, Yu-Hsiang and Min, Misun and Papageorgiou, Dimitrios and Papka, Michael E. and Petruzza, Steve and Rizzi, Silvio and Tomboulides, Ananias},year={2024},booktitle={2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV)},pages={69--70},doi={10.1109/ldav64567.2024.00016},keywords={Performance evaluation;Visualization;Leadership;Ion radiation effects;High performance computing;Data visualization;Propulsion;Rendering (computer graphics);Resource management;Magnetosphere;High Performance Computing (HPC);In Situ Visualization;Computational Fluid Dynamics (CFD);GPU Computing;Parallel Processing},}
Xiaolong Ma, Feng Yan, Lei Yang, Ian Foster, Michael E. Papka, Zhengchun Liu, and Rajkumar Kettimuthu,
MalleTrain: Deep Neural Networks Training on Unfillable Supercomputer Nodes,
In Proceedings of the 15th ACM/SPEC International Conference on Performance Engineering,
2024
First-come, first-served scheduling can leave a substantial fraction (up to 10%) of supercomputer nodes transiently idle. Recognizing that such unfilled nodes are well-suited for deep neural network (DNN) training, due to the flexible nature of DNN training tasks, Liu et al. proposed formulating the re-scaling of DNN training tasks to fit gaps in schedules as a mixed-integer linear programming (MILP) problem, and demonstrated via simulation the potential benefits of the approach. Here, we introduce MalleTrain, a system that provides the first practical implementation of this approach and furthermore generalizes it to DNN training applications for which model information is unknown before runtime. Key to this latter innovation is the use of a lightweight online job profiling advisor (JPA) to collect critical scalability information for DNN jobs—information that it then employs to optimize resource allocations dynamically, in real time. We describe the MalleTrain architecture and present the results of a detailed experimental evaluation on a supercomputer GPU cluster and several representative DNN training workloads, including neural architecture search and hyperparameter optimization. Our results not only confirm the practical feasibility of leveraging idle supercomputer nodes for DNN training but also improve significantly on prior results, raising training throughput by up to 22.3% without requiring users to provide job scalability information.
@inproceedings{MaYaYa24,title={MalleTrain: Deep Neural Networks Training on Unfillable Supercomputer Nodes},author={Ma, Xiaolong and Yan, Feng and Yang, Lei and Foster, Ian and Papka, Michael E. and Liu, Zhengchun and Kettimuthu, Rajkumar},year={2024},booktitle={Proceedings of the 15th ACM/SPEC International Conference on Performance Engineering},location={London, United Kingdom},publisher={Association for Computing Machinery},address={New York, NY, USA},series={ICPE '24},pages={190–200},doi={10.1145/3629526.3645035},isbn={9798400704444},url={https://doi.org/10.1145/3629526.3645035},keywords={deep neural network, distributed deep learning training, resource management, scheduling, supercomputer},numpages={11},}
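A toy illustration of the underlying scheduling problem, with hard-coded hypothetical scaling profiles in place of the profiles MalleTrain's job profiling advisor collects online, and brute force in place of the MILP solver:

```python
# profiles[job][n] = measured training throughput (samples/s) on n nodes
profiles = {
    "nas_search": {1: 90, 2: 170, 4: 300},
    "hparam_opt": {1: 110, 2: 200, 4: 330},
}

def best_allocation(profiles, free_nodes):
    # brute force over the tiny example; the paper solves this as a MILP
    jobs = list(profiles)
    best = (0.0, {})

    def rec(i, left, alloc, tput):
        nonlocal best
        if i == len(jobs):
            if tput > best[0]:
                best = (tput, dict(alloc))
            return
        job = jobs[i]
        for n in [0] + [k for k in profiles[job] if k <= left]:
            alloc[job] = n
            rec(i + 1, left - n, alloc, tput + profiles[job].get(n, 0))

    rec(0, free_nodes, {}, 0.0)
    return best

# a 5-node gap: sublinear scaling makes an uneven split the best choice
print(best_allocation(profiles, 5))  # (420, {'nas_search': 1, 'hparam_opt': 4})
```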
Xiaoqian Ruan, and Wei Tang,
Fully Test-time Adaptation for Object Detection,
In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),
2024
@inproceedings{RuTa24,title={Fully Test-time Adaptation for Object Detection},author={Ruan, Xiaoqian and Tang, Wei},year={2024},booktitle={2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},pages={1038--1047},doi={10.1109/cvprw63382.2024.00110},keywords={Training;Computer vision;Codes;Conferences;Training data;Detectors;Object detection;test-time adaptation;pseudo-label selection;IoU-based indicators;duplicate detections removal},}
Amey Salvi, Kecheng Lu, Michael E. Papka, Yunhai Wang, and Khairi Reda,
Color Maker: a Mixed-Initiative Approach to Creating Accessible Color Maps,
In Proceedings of the CHI Conference on Human Factors in Computing Systems,
2024
Quantitative data is frequently represented using color, yet designing effective color mappings is a challenging task, requiring one to balance perceptual standards with personal color preference. Current design tools either overwhelm novices with complexity or offer limited customization options. We present ColorMaker, a mixed-initiative approach for creating colormaps. ColorMaker combines fluid user interaction with real-time optimization to generate smooth, continuous color ramps. Users specify their loose color preferences while leaving the algorithm to generate precise color sequences, meeting both designer needs and established guidelines. ColorMaker can create new colormaps, including designs accessible for people with color-vision deficiencies, starting from scratch or with only partial input, thus supporting ideation and iterative refinement. We show that our approach can generate designs with similar or superior perceptual characteristics to standard colormaps. A user study demonstrates how designers of varying skill levels can use this tool to create custom, high-quality colormaps. ColorMaker is available at: colormaker.org
@inproceedings{SaLuPa24,title={Color Maker: a Mixed-Initiative Approach to Creating Accessible Color Maps},author={Salvi, Amey and Lu, Kecheng and Papka, Michael E. and Wang, Yunhai and Reda, Khairi},year={2024},booktitle={Proceedings of the CHI Conference on Human Factors in Computing Systems},location={Honolulu, HI, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={CHI '24},doi={10.1145/3613904.3642265},isbn={9798400703300},url={https://doi.org/10.1145/3613904.3642265},articleno={145},numpages={17},keywords={Mixed-initiative systems, color design, colormaps, simulated annealing.},}
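A sketch of the optimization loop behind such a tool, using simulated annealing (named in the paper's keywords) and luma as a crude lightness proxy; ColorMaker itself optimizes in a perceptual color space with richer constraints:

```python
import math, random

def luma(c):
    r, g, b = c
    return 0.299 * r + 0.587 * g + 0.114 * b

def cost(ramp):
    # penalize uneven lightness steps, reward overall lightness range
    lumas = [luma(c) for c in ramp]
    steps = [lumas[i + 1] - lumas[i] for i in range(len(lumas) - 1)]
    mean = sum(steps) / len(steps)
    return sum((s - mean) ** 2 for s in steps) - (lumas[-1] - lumas[0])

random.seed(3)
ramp = [[random.random() for _ in range(3)] for _ in range(8)]
cur = cost(ramp)
temp = 0.5
for _ in range(20000):
    i, ch = random.randrange(8), random.randrange(3)
    old = ramp[i][ch]
    ramp[i][ch] = min(1.0, max(0.0, old + random.gauss(0, 0.08)))
    new = cost(ramp)
    if new < cur or random.random() < math.exp((cur - new) / temp):
        cur = new                      # accept the perturbation
    else:
        ramp[i][ch] = old              # revert
    temp *= 0.9997                     # cool down

print([round(luma(c), 2) for c in ramp])  # lightness should rise smoothly
```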
Andres Sewell, Landon Dyken, Victor A. Mateevitsi, Will Usher, Jefferson Amstutz, Thomas Marrinan, Khairi Reda, Silvio Rizzi, Joseph A. Insley, Michael E. Papka, Sidharth Kumar, and Steve Petruzza,
High-quality Approximation of Scientific Data using 3D Gaussian Splatting,
In 2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV),
2024
@inproceedings{SeDyMa24,title={High-quality Approximation of Scientific Data using 3D Gaussian Splatting},author={Sewell, Andres and Dyken, Landon and Mateevitsi, Victor A. and Usher, Will and Amstutz, Jefferson and Marrinan, Thomas and Reda, Khairi and Rizzi, Silvio and Insley, Joseph A. and Papka, Michael E. and Kumar, Sidharth and Petruzza, Steve},year={2024},booktitle={2024 IEEE 14th Symposium on Large Data Analysis and Visualization (LDAV)},pages={73--74},doi={10.1109/ldav64567.2024.00018},keywords={Point cloud compression;Solid modeling;Technological innovation;Three-dimensional displays;Pipelines;Rendering (computer graphics);Data models;Real-time systems;Image reconstruction;Isosurfaces;3D Gaussian splatting;scientific data;data reconstruction;machine learning},}
Andres Sewell, Dimitrios K Fytanidis, Victor A Mateevitsi, Cyrus Harrison, Nicole Marsaglia, Thomas Marrinan, Silvio Rizzi, Joseph A Insley, Michael E. Papka, and Steve Petruzza,
Bridging Gaps in Simulation Analysis through a General Purpose, Bidirectional Steering Interface with Ascent,
In ISAV 2024: In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization,
Nov,
2024
@inproceedings{SeFyDi24,title={{Bridging Gaps in Simulation Analysis through a General Purpose, Bidirectional Steering Interface with Ascent}},author={Sewell, Andres and Fytanidis, Dimitrios K and Mateevitsi, Victor A and Harrison, Cyrus and Marsaglia, Nicole and Marrinan, Thomas and Rizzi, Silvio and Insley, Joseph A and Papka, Michael E. and Petruzza, Steve},year={2024},month=nov,booktitle={ISAV 2024: In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization},}
Shilpika Shilpika, Bethany Lusch, Murali Emani, Filippo Simini, Venkatram Vishwanath, Michael E. Papka, and Kwan-Liu Ma,
A Multi-Level, Multi-Scale Visual Analytics Approach to Assessment of Multifidelity HPC Systems,
In 2024 IEEE 24th International Symposium on Cluster, Cloud and Internet Computing (CCGrid),
May,
2024
The ability to monitor and interpret hardware system events and behaviors is crucial to improving the robustness and reliability of these systems, especially in a supercomputing facility. The growing complexity and scale of these systems demand an increase in monitoring data collected at multiple fidelity levels and varying temporal resolutions. In this work, we aim to build a holistic analytical system that helps make sense of such massive data, mainly the hardware logs, job logs, and environment logs collected from disparate subsystems and components of a supercomputer system. This end-to-end log analysis system, coupled with visual analytics support, allows users to glean and promptly extract supercomputer usage and error patterns at varying temporal and spatial resolutions. We use multi-resolution dynamic mode decomposition (mrDMD), a technique that depicts high-dimensional data as correlated spatio-temporal variation patterns, or modes, to extract variation patterns isolated at specified frequencies. Our improvements to the mrDMD algorithm help promptly reveal useful information in the massive environment log dataset, which is then associated with the processed hardware and job log datasets using our visual analytics system. Furthermore, our system can identify usage and error patterns filtered at user, project, and subcomponent levels. We exemplify the effectiveness of our approach with two usage scenarios on the Cray XC40 supercomputer.
@inproceedings{ShLuEm24,title={{A Multi-Level, Multi-Scale Visual Analytics Approach to Assessment of Multifidelity HPC Systems}},author={Shilpika, Shilpika and Lusch, Bethany and Emani, Murali and Simini, Filippo and Vishwanath, Venkatram and Papka, Michael E. and Ma, Kwan-Liu},year={2024},month=may,booktitle={2024 IEEE 24th International Symposium on Cluster, Cloud and Internet Computing (CCGrid)},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},pages={478--488},doi={10.1109/CCGrid59990.2024.00060},url={https://doi.ieeecomputersociety.org/10.1109/CCGrid59990.2024.00060},keywords={Visual analytics;Heuristic algorithms;Time series analysis;Supercomputers;Hardware;Robustness;Real-time systems;Spatial resolution;Monitoring;Multiresolution analysis},}
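For reference, plain dynamic mode decomposition, the building block that mrDMD applies recursively over successively shorter time windows to separate slow from fast patterns; the data here are synthetic and the rank truncation is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 200)
# synthetic "sensor" matrix: two oscillations + noise
X = (np.outer(rng.random(30), np.sin(t)) +
     np.outer(rng.random(30), np.sin(5 * t)) +
     0.01 * rng.normal(size=(30, 200)))

X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 4                                        # rank truncation
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # low-rank step-advance operator
eigvals, W = np.linalg.eig(Atilde)
modes = X2 @ Vh.conj().T / s @ W             # DMD modes (spatial patterns)

dt = t[1] - t[0]
freqs = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)
print(modes.shape)
print(np.round(freqs, 3))   # recovers the two oscillation frequencies
```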
Daria Tsoupikova, Sai Priya Jyothula, Arthur Nishimoto, Jo Cattell, Andrew Johnson, and Lance Long,
Hummingbird: Live Theater Adventure Empowering Collaboration in Virtual Reality,
In Proceedings of the 17th International Symposium on Visual Information Communication and Interaction,
2024
Hummingbird is an innovative, award-winning performance engaging participants in active storytelling that bridges live theater and collaborative interaction through virtual reality (VR). Hummingbird’s story celebrates courage and coming of age through the eyes of a gutsy teen who must outsmart her mother’s egotistic boss and survive a dangerous new technology in a live, immersive adventure. Developed at the University of Illinois Chicago by faculty and over 30 students from the departments of Computer Science and Design, in partnership with professional theater producers, directors, actors, videographers, and composers, this project advanced interdisciplinary collaboration, provided a unique learning environment, and broadened the research experience for several cohorts of students. Over 500 people attended Hummingbird’s performances at the Tony Award-winning Goodman Theatre’s New Stages Festival, Chicago Children’s Theater, and SIGGRAPH 2022 in Vancouver, Canada, with over 200 active VR participants. In each performance, five VR participants actively collaborate with each other and a lead actor within the VR adventure, contributing problem-solving, collaboration, and teamwork, while a larger audience simultaneously follows the virtual performance on a large video wall in real time. Discussion sessions and audience evaluations followed each performance and informed the Hummingbird team on script, design, and interactivity improvements for future performances. Through qualitative analysis of audience experiences and insights from our collaboration, we discuss key considerations and design recommendations for integrating VR with live theater. Hummingbird demonstrates how VR can revolutionize theatrical storytelling: by extending live theater and making VR accessible to a broader audience, it enables traditional theater to narrate epic stories once considered too ambitious for the stage. This project serves as a prototype for successful partnerships between nonprofit theaters and interdisciplinary research institutions to increase opportunities for cross-disciplinary student education.
@inproceedings{TsJyNi24,title={Hummingbird: Live Theater Adventure Empowering Collaboration in Virtual Reality},author={Tsoupikova, Daria and Jyothula, Sai Priya and Nishimoto, Arthur and Cattell, Jo and Johnson, Andrew and Long, Lance},year={2024},booktitle={Proceedings of the 17th International Symposium on Visual Information Communication and Interaction},publisher={Association for Computing Machinery},address={New York, NY, USA},series={VINCI '24},doi={10.1145/3678698.3687178},isbn={9798400709678},url={https://doi.org/10.1145/3678698.3687178},articleno={33},numpages={7},keywords={Design Process, Interdisciplinary Collaboration, Multi-user, Performance, Storytelling, Theater, Virtual reality},}
Juan Trelles, Andrew Wentzel, William Berrios, Hagit Shatkay, and G. Elisabeta Marai,
BI-LAVA: Biocuration With Hierarchical Image Labelling Through Active Learning and Visual Analytics,
Computer Graphics Forum,
2024
In the biomedical domain, taxonomies organize the acquisition modalities of scientific images in hierarchical structures. Such taxonomies leverage large sets of correct image labels and provide essential information about the importance of a scientific publication, which could then be used in biocuration tasks. However, the hierarchical nature of the labels, the overhead of processing images, the absence or incompleteness of labelled data and the expertise required to label this type of data impede the creation of useful datasets for biocuration. From a multi-year collaboration with biocurators and text-mining researchers, we derive an iterative visual analytics and active learning (AL) strategy to address these challenges. We implement this strategy in a system called BI-LAVA—Biocuration with Hierarchical Image Labelling through Active Learning and Visual Analytics. BI-LAVA leverages a small set of image labels, a hierarchical set of image classifiers and AL to help model builders deal with incomplete ground-truth labels, target a hierarchical taxonomy of image modalities and classify a large pool of unlabelled images. BI-LAVA’s front end uses custom encodings to represent data distributions, taxonomies, image projections and neighbourhoods of image thumbnails, which help model builders explore an unfamiliar image dataset and taxonomy and correct and generate labels. An evaluation with machine learning practitioners shows that our mixed human–machine approach successfully supports domain experts in understanding the characteristics of classes within the taxonomy, as well as validating and improving data quality in labelled and unlabelled collections.
@article{TrWeBe24,title={BI-LAVA: Biocuration With Hierarchical Image Labelling Through Active Learning and Visual Analytics},author={Trelles, Juan and Wentzel, Andrew and Berrios, William and Shatkay, Hagit and Marai, G. Elisabeta},year={2024},journal={Computer Graphics Forum},pages={e15261},doi={10.1111/cgf.15261},url={https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.15261},keywords={visualization, visual analytics, active learning, image labeling, biomedical images},eprint={https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.15261},}
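A minimal sketch of the active-learning step such a system builds on: surface the unlabeled images the current classifier is least certain about (smallest margin between its top two class probabilities) for the curator to label next. The classifier probabilities are faked here:

```python
import numpy as np

rng = np.random.default_rng(5)
pool_probs = rng.dirichlet(np.ones(6), size=100)  # fake per-image class probs

top2 = np.sort(pool_probs, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]                  # small margin = uncertain
query = np.argsort(margin)[:10]                   # next batch to label
print(query)
```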
Qi Wu, Joseph A. Insley, Victor A. Mateevitsi, Silvio Rizzi, Michael E. Papka, and Kwan-Liu Ma,
Distributed Neural Representation for Reactive In Situ Visualization,
IEEE Transactions on Visualization and Computer Graphics,
2024
Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volume data. This opens up new possibilities for in situ visualization. However, the efficient application of INRs to distributed data remains an underexplored area. In this work, we develop a distributed volumetric neural representation and optimize it for in situ visualization. Our technique eliminates data exchanges between processes, achieving state-of-the-art compression speed, quality and ratios. Our technique also enables the implementation of an efficient strategy for caching large-scale simulation data in high temporal frequencies, further facilitating the use of reactive in situ visualization in a wider range of scientific problems. We integrate this system with the Ascent infrastructure and evaluate its performance and usability using real-world simulations.
@article{WuInMa24,title={Distributed Neural Representation for Reactive In Situ Visualization},author={Wu, Qi and Insley, Joseph A. and Mateevitsi, Victor A. and Rizzi, Silvio and Papka, Michael E. and Ma, Kwan-Liu},year={2024},journal={IEEE Transactions on Visualization and Computer Graphics},pages={1--15},doi={10.1109/tvcg.2024.3432710},keywords={Data visualization;Data models;Computational modeling;Training;Adaptation models;Neural networks;Programming;Implicit neural representation;scientific visualization;in situ visualization;reactive programming},}
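A minimal implicit neural representation, the primitive the paper distributes across ranks so each process can compress its own block of the volume without data exchange; the network size and synthetic field below are illustrative:

```python
import torch
import torch.nn as nn

def make_block(n=16):
    # one rank's local block of a synthetic scalar field
    ax = torch.linspace(-1, 1, n)
    x, y, z = torch.meshgrid(ax, ax, ax, indexing="ij")
    coords = torch.stack([x, y, z], dim=-1).reshape(-1, 3)
    values = torch.sin(3 * x) * torch.cos(2 * y) * z
    return coords, values.reshape(-1, 1)

# small MLP: (x, y, z) -> scalar value
inr = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
coords, values = make_block()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), values)
    loss.backward()
    opt.step()
print(float(loss))   # the weights now stand in for the block's data
```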
2023
Mirko Mantovani, Andrew Wentzel, Juan Trelles Trabucco, Joseph Michaelis, and G. Elisabeta Marai,
Kiviat Defense: An Empirical Evaluation of Visual Encoding Effectiveness in Multivariate Data Similarity Detection,
Journal of Imaging Science and Technology,
2023
Similarity detection seeks to identify similar, but distinct items over multivariate datasets. Often, similarity cannot be defined computationally, leading to a need for visual analysis, such as in cases with ensemble, computational, patient cohort, or geospatial data. In this work, we empirically evaluate the effectiveness of common visual encodings for multivariate data in the context of visual similarity detection. We conducted a user study with 40 participants to measure similarity detection performance and response time under moderate scale (16 items) and large scale (36 items). Our analysis shows that there are significant differences in performance between encodings, especially as the number of items increases. Surprisingly, we found that juxtaposed star plots outperformed superposed parallel coordinate plots. Furthermore, color cues significantly improved response time and attenuated error at larger scales. In contrast to existing guidelines, we found that filled star plots (Kiviats) outperformed other encodings in terms of scalability and error.
@article{MaWeTr24,title={Kiviat Defense: An Empirical Evaluation of Visual Encoding Effectiveness in Multivariate Data Similarity Detection},author={Mantovani, Mirko and Wentzel, Andrew and Trabucco, Juan Trelles and Michaelis, Joseph and Marai, G. Elisabeta},year={2023},journal={Journal of Imaging Science and Technology},volume={67},number={6},pages={1--1},doi={10.2352/J.ImagingSci.Technol.2023.67.6.060406},url={https://library.imaging.org/jist/articles/67/6/060406},}
Mahdi Belcaid, Jason Leigh, Ryan Theriot, Nurit Kirshenbaum, Roderick Tabalba, Michael Rogers, Andrew Johnson, Maxine Brown, Luc Renambot, Lance Long, Arthur Nishimoto, Chris North, and Jesse Harden,
Reflecting on the Scalable Adaptive Graphics Environment Team’s 20-Year Translational Research Endeavor in Digital Collaboration Tools,
Computing in Science & Engineering,
Mar,
2023
Translational software research bridges the gap between scientific innovations and practical applications, driving impactful societal advancements. However, developing such software is challenging due to interdisciplinary collaboration, technology adoption, and postfunding sustainability. This article presents the experiences and insights of the Scalable Adaptive Graphics Environment (SAGE) team, which has spent two decades developing translational, cross-disciplinary, collaboration tools to benefit computational science research. With a focus on SAGE and its next-generation iterations, we explore the inherent challenges in translational research, such as fostering cross-disciplinary collaboration, motivating technology adoption, and ensuring postfunding product sustainability. We also discuss the roles of funding agencies, policymakers, and academic institutions in promoting translational research. Although the journey is fraught with challenges, the societal impact and satisfaction derived from translational research underscore its significance in the broader scientific landscape. This article aims to encourage further conversation and the development of effective models for translational software projects.
@article{BeLiTh23,title={{Reflecting on the Scalable Adaptive Graphics Environment Team’s 20-Year Translational Research Endeavor in Digital Collaboration Tools}},author={Belcaid, Mahdi and Leigh, Jason and Theriot, Ryan and Kirshenbaum, Nurit and Tabalba, Roderick and Rogers, Michael and Johnson, Andrew and Brown, Maxine and Renambot, Luc and Long, Lance and Nishimoto, Arthur and North, Chris and Harden, Jesse},year={2023},month=mar,journal={Computing in Science \& Engineering},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},volume={25},number={02},pages={50--56},doi={10.1109/mcse.2023.3297753},issn={1558-366x},url={https://doi.ieeecomputersociety.org/10.1109/MCSE.2023.3297753},keywords={Graphics;Translational research;Scientific computing;Computational modeling;Collaboration;Software packages;Technological innovation},}
Abbas Moradi Bilondi, Hal Brynteson, Parisa Mirbod, Michael E. Papka, Luca Brandt, and Nicolo Scapin,
Effects of employing liquid-liquid emulsions on heat transfer within a turbulent Rayleigh-Benard convection,
In 76th Annual Meeting of the APS Division of Fluid Dynamics,
2023
@inproceedings{BiBrMi23,title={Effects of employing liquid-liquid emulsions on heat transfer within a turbulent Rayleigh-Benard convection},author={Bilondi, Abbas Moradi and Brynteson, Hal and Mirbod, Parisa and Papka, Michael E. and Brandt, Luca and Scapin, Nicolo},year={2023},booktitle={76th Annual Meeting of the APS Division of Fluid Dynamics},doi={10.1103/APS.DFD.2023.GFM.P0060},}
Murali Emani, Sam Foreman, Varuni Sastry, Zhen Xie, Siddhisanket Raskar, William Arnold, Rajeev Thakur, Venkatram Vishwanath, and Michael E. Papka,
A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators,
arXiv preprint arXiv:2310.04607,
2023
Artificial intelligence (AI) methods have become critical in scientific applications to help accelerate scientific discovery. Large language models (LLMs) are being considered as a promising approach to address some of the challenging problems because of their superior generalization capabilities across domains. The effectiveness of the models and the accuracy of the applications are contingent upon their efficient execution on the underlying hardware infrastructure. Specialized AI accelerator hardware systems have recently become available for accelerating AI applications. However, the comparative performance of these AI accelerators on large language models has not been previously studied. In this paper, we systematically study LLMs on multiple AI accelerators and GPUs and evaluate their performance characteristics for these models. We evaluate these systems with (i) a micro-benchmark using a core transformer block, (ii) a GPT-2 model, and (iii) an LLM-driven science use case, GenSLM. We present our findings and analyses of the models’ performance to better understand the intrinsic capabilities of AI accelerators. Furthermore, our analysis takes into account key factors such as sequence lengths, scaling behavior, sparsity, and sensitivity to gradient accumulation steps.
@misc{EmFoSa23,title={A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators},author={Emani, Murali and Foreman, Sam and Sastry, Varuni and Xie, Zhen and Raskar, Siddhisanket and Arnold, William and Thakur, Rajeev and Vishwanath, Venkatram and Papka, Michael E.},year={2023},eprint={2310.04607},archiveprefix={arXiv},primaryclass={cs.PF},}
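A sketch of what the micro-benchmark level of such a study looks like in PyTorch, timing forward/backward passes of a core transformer block across sequence lengths; the dimensions are illustrative, not the paper's configuration:

```python
import time
import torch
import torch.nn as nn

block = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
for seq_len in (128, 512, 1024):
    x = torch.randn(8, seq_len, 512)
    # warm-up, then timed steps (gradients accumulate; fine for timing)
    for _ in range(3):
        block(x).sum().backward()
    start = time.perf_counter()
    for _ in range(10):
        block(x).sum().backward()
    elapsed = (time.perf_counter() - start) / 10
    print(f"seq_len={seq_len}: {elapsed * 1e3:.1f} ms/step")
```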
Jarrad Hampton-Marcell, Tasia Bryson, Jeffrey Larson, J. Taylor Childers, Spencer Pasero, Cortez Watkins, Thomas Reed, Dorletta Flucas-Payton, and Michael E. Papka,
Leveraging National Laboratories to Increase Black Representation in STEM: Recommendations within the Department of Energy,
International Journal of STEM Education,
2023
Increasing diversity in STEM disciplines has been a goal at scientific institutions for many decades. Black representation in STEM, however, has remained critically low at all levels (high school, undergraduate, graduate, and professional) for over 40 years, highlighting the need for innovative strategies that promote and retain Black students and professionals in STEM. We refocus efforts on increasing Black representation in STEM by promoting early exposure and continued engagement while leveraging national laboratories—an underutilized resource with immense potential to centralize diversity and inclusion efforts nationally.
@article{HaBrLa23,title={Leveraging National Laboratories to Increase Black Representation in STEM: Recommendations within the Department of Energy},author={Hampton-Marcell, Jarrad and Bryson, Tasia and Larson, Jeffrey and Childers, J. Taylor and Pasero, Spencer and Watkins, Cortez and Reed, Thomas and Flucas-Payton, Dorletta and Papka, Michael E.},year={2023},journal={International Journal of STEM Education},volume={10},number={1},pages={4},doi={10.1186/s40594-022-00394-4},isbn={2196-7822},url={https://doi.org/10.1186/s40594-022-00394-4},bdsk-url-1={https://doi.org/10.1186/s40594-022-00394-4},id={Hampton-Marcell2023},}
Jesse Harden, Nurit Kirshenbaum, Roderick S. Tabalba Jr., Jason Leigh, Luc Renambot, and Chris North,
SAGE3 for Interactive Collaborative Visualization, Analysis, and Storytelling,
In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces,
2023
SAGE3, the newest and most advanced generation of the Smart Amplified Group Environment, is open-source software designed to facilitate collaboration among scientists, researchers, students, and professionals across various fields. This tutorial aims to introduce attendees to the capabilities of SAGE3, demonstrating its ability to enhance collaboration and productivity in diverse settings, from co-located office collaboration to remote collaboration to both at once, with diverse displays, from personal laptops to large-scale display walls. Participants will learn how to effectively use SAGE3 for brainstorming, data analysis, and presentation purposes, as well as how to install private collaboration servers and develop custom applications.
@inproceedings{HaKiTa23,title={SAGE3 for Interactive Collaborative Visualization, Analysis, and Storytelling},author={Harden, Jesse and Kirshenbaum, Nurit and Tabalba Jr., Roderick S. and Leigh, Jason and Renambot, Luc and North, Chris},year={2023},booktitle={Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces},location={Pittsburgh, PA, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={ISS Companion '23},pages={50–52},doi={10.1145/3626485.3626541},isbn={9798400704253},url={https://doi.org/10.1145/3626485.3626541},numpages={3},keywords={Collaboration, Computational Narratives, Data Analysis, Data Science, Large Displays, Space to Think, Visualization},}
Bharat Kale, Austin Clyde, Maoyuan Sun, Arvind Ramanathan, Rick Stevens, and Michael E. Papka,
ChemoGraph: Interactive Visual Exploration of the Chemical Space,
Computer Graphics Forum,
2023
Exploratory analysis of the chemical space is an important task in the field of cheminformatics. For example, in drug discovery research, chemists investigate sets of thousands of chemical compounds in order to identify novel yet structurally similar synthetic compounds to replace natural products. Manually exploring the chemical space inhabited by all possible molecules and chemical compounds is impractical, and therefore presents a challenge. To fill this gap, we present ChemoGraph, a novel visual analytics technique for interactively exploring related chemicals. In ChemoGraph, we formalize a chemical space as a hypergraph and apply novel machine learning models to compute related chemical compounds. It uses a database to find related compounds from a known space and a machine learning model to generate new ones, which helps enlarge the known space. Moreover, ChemoGraph highlights interactive features that support users in viewing, comparing, and organizing computationally identified related chemicals. With a drug discovery usage scenario and initial expert feedback from a case study, we demonstrate the usefulness of ChemoGraph.
@article{KaClSu23,title={ChemoGraph: Interactive Visual Exploration of the Chemical Space},author={Kale, Bharat and Clyde, Austin and Sun, Maoyuan and Ramanathan, Arvind and Stevens, Rick and Papka, Michael E.},year={2023},journal={Computer Graphics Forum},volume={42},number={3},pages={13--24},doi={https://doi.org/10.1111/cgf.14807},url={https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14807},eprint={https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14807},keywords={chemical space exploration, cheminformatics, multipartite graphs, data visualization, CCS Concepts, • Applied computing → Chemistry},}
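A minimal sketch of the hypergraph formalization described in the ChemoGraph abstract above: compounds are vertices, and each hyperedge groups compounds sharing some property. The class and example data are illustrative assumptions, not ChemoGraph's actual data model.

from collections import defaultdict

class ChemicalHypergraph:
    def __init__(self):
        self.edges = defaultdict(set)   # hyperedge label -> member compounds

    def add(self, label, *compounds):
        self.edges[label].update(compounds)

    def related(self, compound):
        # All compounds sharing at least one hyperedge with `compound`.
        out = set()
        for members in self.edges.values():
            if compound in members:
                out |= members
        return out - {compound}

space = ChemicalHypergraph()
space.add("benzene-scaffold", "aspirin", "ibuprofen")
space.add("analgesic", "aspirin", "acetaminophen")
print(space.related("aspirin"))         # {'ibuprofen', 'acetaminophen'}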
Bharat Kale, Maoyuan Sun, and Michael E. Papka,
The State of the Art in Visualizing Dynamic Multivariate Networks,
Computer Graphics Forum,
2023
Most real-world networks are both dynamic and multivariate in nature, meaning that the network is associated with various attributes and both the network structure and attributes evolve over time. Visualizing dynamic multivariate networks is of great significance to the visualization community because of their wide applications across multiple domains. However, it remains challenging because the techniques should focus on representing the network structure, attributes and their evolution concurrently. Many real-world network analysis tasks require the concurrent usage of the three aspects of the dynamic multivariate networks. In this paper, we analyze current techniques and present a taxonomy to classify the existing visualization techniques based on three aspects: temporal encoding, topology encoding, and attribute encoding. Finally, we survey application areas and evaluation methods, and discuss challenges for future research.
@article{KaSuPa23,title={The State of the Art in Visualizing Dynamic Multivariate Networks},author={Kale, Bharat and Sun, Maoyuan and Papka, Michael E.},year={2023},journal={Computer Graphics Forum},volume={42},number={3},pages={471--490},doi={https://doi.org/10.1111/cgf.14856},url={https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14856},eprint={https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14856},keywords={CCS Concepts, • Human-centered computing —> Graph drawings},}
Boyang Li, Yuping Fan, Michael E. Papka, and Zhiling Lan,
Encoding for Reinforcement Learning Driven Scheduling,
In Job Scheduling Strategies for Parallel Processing,
2023
Reinforcement learning (RL) is exploited for cluster scheduling in the field of high-performance computing (HPC). One of the key challenges for RL driven scheduling is state representation for the RL agent (i.e., capturing essential features of the dynamic scheduling environment for decision making). Existing state encoding approaches either lack critical scheduling information or suffer from poor scalability. In this study, we present SEM (Scalable and Efficient encoding Model) for general RL driven scheduling in HPC. It captures system resource and waiting job state, both being critical information for scheduling. It encodes these pieces of information into a fixed-sized vector as an input to the agent. A typical agent is built on a deep neural network, and its training/inference cost grows exponentially with the size of its input. Production HPC systems contain a large number of computer nodes. As such, a direct encoding of each of the system resources would lead to poor scalability of the RL agent. SEM uses two techniques to transform the system resource state into a small-sized vector, hence being capable of representing a large number of system resources in a vector of size 100–200. Our trace-based simulations demonstrate that compared to the existing state encoding methods, SEM can achieve 9X training speedup and 6X inference speedup while maintaining comparable scheduling performance.
@inproceedings{LiFaPa23,title={Encoding for Reinforcement Learning Driven Scheduling},author={Li, Boyang and Fan, Yuping and Papka, Michael E. and Lan, Zhiling},year={2023},booktitle={Job Scheduling Strategies for Parallel Processing},publisher={Springer Nature Switzerland},address={Cham},pages={68--87},isbn={978-3-031-22698-4},editor={Klus{\'a}{\v{c}}ek, Dalibor and Julita, Corbal{\'a}n and Rodrigo, Gonzalo P.},}
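To make the scalability argument in the SEM abstract concrete, here is a hedged sketch of a fixed-size state encoding: node availability is summarized as a histogram rather than per node, so the vector length is independent of machine size. The binning scheme, ranges, and feature choices are assumptions, not the paper's exact model.

import numpy as np

def encode_state(node_free_times, waiting_jobs, n_bins=50, max_jobs=25):
    # Histogram of when nodes become free; caps resource features at n_bins.
    hist, _ = np.histogram(node_free_times, bins=n_bins, range=(0, 3600))
    resources = hist / max(len(node_free_times), 1)
    # Fixed-length window of waiting jobs: (requested nodes, runtime).
    jobs = np.zeros((max_jobs, 2))
    for i, (nodes, runtime) in enumerate(waiting_jobs[:max_jobs]):
        jobs[i] = (nodes, runtime)
    return np.concatenate([resources, jobs.ravel()])    # 50 + 50 = 100 values

state = encode_state(node_free_times=[0, 0, 1200, 3600],
                     waiting_jobs=[(2, 600), (1, 120)])
print(state.shape)   # (100,) regardless of how many nodes the system has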
Zhengchun Liu, Rajkumar Kettimuthu, Michael E. Papka, and Ian Foster,
FreeTrain: A Framework to Utilize Unused Supercomputer Nodes for Training Neural Networks,
In 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid),
2023
Supercomputer scheduling policies commonly result in many transient idle nodes, a phenomenon that is only partially alleviated by backfill scheduling methods that promote small jobs to run before large jobs. Here we describe how to realize a novel use for these otherwise wasted resources, namely, deep neural network (DNN) training. This important workload is easily organized as many small fragments that can be configured dynamically to fit essentially any node × time hole in a supercomputer’s schedule. We describe how the task of rescaling suitable DNN training tasks to fit dynamically changing holes can be formulated as a deterministic mixed integer linear programming (MILP)-based resource allocation algorithm, and show that this MILP problem can be solved efficiently at run time. We show further how this MILP problem can be adapted to optimize for administrator- or user-defined metrics. We validate our method with supercomputer scheduler logs and different DNN training scenarios, and demonstrate efficiencies of up to 93% compared with running the same training tasks on dedicated nodes. Our method thus enables substantial supercomputer resources to be allocated to DNN training with no impact on other applications.
@inproceedings{LiKePa23,title={FreeTrain: A Framework to Utilize Unused Supercomputer Nodes for Training Neural Networks},author={Liu, Zhengchun and Kettimuthu, Rajkumar and Papka, Michael E. and Foster, Ian},year={2023},booktitle={2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)},pages={299--310},doi={10.1109/CCGrid57682.2023.00036},}
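The core allocation idea in FreeTrain can be sketched as a small MILP, here written with the PuLP solver: binary variables assign training fragments to idle node-by-time holes so as to maximize utilized node-minutes. The hole and fragment sizes are invented for illustration, and the paper's full formulation (dynamic rescaling, administrator-defined metrics) is not modeled.

from pulp import LpProblem, LpVariable, LpMaximize, lpSum

holes = {"h1": (4, 30), "h2": (16, 10)}             # name -> (nodes, minutes)
frags = {"f1": (4, 25), "f2": (8, 10), "f3": (16, 10)}

prob = LpProblem("freetrain_sketch", LpMaximize)
x = {(f, h): LpVariable(f"x_{f}_{h}", cat="Binary")
     for f in frags for h in holes}

for (f, h), var in x.items():       # a fragment must fit its hole
    if frags[f][0] > holes[h][0] or frags[f][1] > holes[h][1]:
        prob += var == 0
for h in holes:                     # at most one fragment per hole
    prob += lpSum(x[f, h] for f in frags) <= 1
for f in frags:                     # place each fragment at most once
    prob += lpSum(x[f, h] for h in holes) <= 1

# Objective: maximize node-minutes recovered from idle holes.
prob += lpSum(x[f, h] * frags[f][0] * frags[f][1] for f in frags for h in holes)
prob.solve()
print([(f, h) for (f, h), var in x.items() if var.value() == 1])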
Boyang Li, Zhiling Lan, and Michael E. Papka,
Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling,
In 2023 31st International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS),
2023
@inproceedings{LiLaPa23,title={Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling},author={Li, Boyang and Lan, Zhiling and Papka, Michael E.},year={2023},booktitle={2023 31st International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)},pages={1--8},doi={10.1109/mascots59514.2023.10387651},keywords={Deep learning;Processor scheduling;Computational modeling;Closed box;Optimization methods;Reinforcement learning;Artificial neural networks;cluster scheduling;deep reinforcement learning;high-performance computing;interpretation;decision tree},}
Victor A. Mateevitsi, Mathis Bode, Nicola Ferrier, Paul Fischer, Jens Henrik Göbbert, Joseph A. Insley, Yu-Hsiang Lan, Misun Min, Michael E. Papka, Saumil Patel, Silvio Rizzi, and Jonathan Windgassen,
Scaling Computational Fluid Dynamics: In Situ Visualization of NekRS Using SENSEI,
In Proceedings of the SC ’23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis,
2023
In the realm of Computational Fluid Dynamics (CFD), the demand for memory and computation resources is extreme, necessitating the use of leadership-scale computing platforms for practical domain sizes. This intensive requirement renders traditional checkpointing methods ineffective due to the significant slowdown in simulations while saving state data to disk. As we progress towards exascale and GPU-driven High-Performance Computing (HPC) and confront larger problem sizes, the choice becomes increasingly stark: to compromise data fidelity or to reduce resolution. To navigate this challenge, this study advocates for the use of in situ analysis and visualization techniques. These allow more frequent data "snapshots" to be taken directly from memory, thus avoiding the need for disruptive checkpointing. We detail our approach of instrumenting NekRS, a GPU-focused thermal-fluid simulation code employing the spectral element method (SEM), and describe varied in situ and in transit strategies for data rendering. Additionally, we provide concrete scientific use-cases and report on runs performed on Polaris, Argonne Leadership Computing Facility’s (ALCF) 44 Petaflop supercomputer and Jülich Wizard for European Leadership Science (JUWELS) Booster, Jülich Supercomputing Centre’s (JSC) 71 Petaflop High Performance Computing (HPC) system, offering practical insight into the implications of our methodology.
@inproceedings{MaBoFe23,title={Scaling Computational Fluid Dynamics: In Situ Visualization of NekRS Using SENSEI},author={Mateevitsi, Victor A. and Bode, Mathis and Ferrier, Nicola and Fischer, Paul and G\"{o}bbert, Jens Henrik and Insley, Joseph A. and Lan, Yu-Hsiang and Min, Misun and Papka, Michael E. and Patel, Saumil and Rizzi, Silvio and Windgassen, Jonathan},year={2023},booktitle={Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis},location={Denver, CO, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={Sc-w '23},pages={862–867},doi={10.1145/3624062.3624159},isbn={9798400707858},url={https://doi.org/10.1145/3624062.3624159},numpages={6},}
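The in situ pattern the abstract advocates can be illustrated generically: the simulation hands in-memory snapshots to an analysis callback every few steps instead of checkpointing to disk. This is a schematic sketch only, not the SENSEI or NekRS API.

import numpy as np

def analyze(step, field):
    # Stand-in for an in situ rendering/analysis backend.
    print(f"step {step}: field mean = {field.mean():.3f}")

def simulate(n_steps=100, snapshot_every=10):
    field = np.random.rand(64, 64)
    for step in range(n_steps):
        field = 0.99 * field + 0.01 * np.random.rand(64, 64)  # toy update
        if step % snapshot_every == 0:
            analyze(step, field)    # snapshot taken from memory, no disk I/O

simulate()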
Roberta Mota, Nivan Ferreira, Julio Daniel Silva, Marius Horga, Marcos Lage, Luis Ceferino, Usman Alim, Ehud Sharlin, and Fabio Miranda,
A Comparison of Spatiotemporal Visualizations for 3D Urban Analytics,
IEEE Transactions on Visualization and Computer Graphics,
Jan,
2023
Recent technological innovations have led to an increase in the availability of 3D urban data, such as shadow, noise, solar potential, and earthquake simulations. These spatiotemporal datasets create opportunities for new visualizations to engage experts from different domains to study the dynamic behavior of urban spaces in this underexplored dimension. However, designing 3D spatiotemporal urban visualizations is challenging, as it requires visual strategies to support analysis of time-varying data referent to the city geometry. Although different visual strategies have been used in 3D urban visual analytics, the question of how effective these visual designs are at supporting spatiotemporal analysis on building surfaces remains open. To investigate this, in this paper we first contribute a series of analytical tasks elicited after interviews with practitioners from three urban domains. We also contribute a quantitative user study comparing the effectiveness of four representative visual designs used to visualize 3D spatiotemporal urban data: spatial juxtaposition, temporal juxtaposition, linked view, and embedded view. Participants performed a series of tasks that required them to identify extreme values on building surfaces over time. Tasks varied in granularity for both space and time dimensions. Our results demonstrate that participants were more accurate using plot-based visualizations (linked view, embedded view) but faster using color-coded visualizations (spatial juxtaposition, temporal juxtaposition). Our results also show that, with increasing task complexity, plot-based visualizations perform better in preserving efficiency (time, accuracy) compared to color-coded visualizations. Based on our findings, we present a set of takeaways with design recommendations for 3D spatiotemporal urban visualizations for researchers and practitioners. Lastly, we report on a series of interviews with four practitioners, and their feedback and suggestions for further work on the visualizations to support 3D spatiotemporal urban data analysis.
@article{Mota2023,title={A Comparison of Spatiotemporal Visualizations for 3D Urban Analytics},author={Mota, Roberta and Ferreira, Nivan and Silva, Julio Daniel and Horga, Marius and Lage, Marcos and Ceferino, Luis and Alim, Usman and Sharlin, Ehud and Miranda, Fabio},year={2023},month=jan,journal={IEEE Transactions on Visualization and Computer Graphics},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},volume={29},number={01},pages={1277--1287},doi={10.1109/tvcg.2022.3209474},issn={1941-0506},keywords={Data visualization;Three-dimensional displays;Urban areas;Spatiotemporal phenomena;Task analysis;Buildings;Spatial databases},}
Ashwini G. Naik, Robert V. Kenyon, Aynaz Taheri, Tanya Y. BergerWolf, Baher A. Ibrahim, Yoshitaka Shinagawa, and Daniel A. Llano,
V-NeuroStack: Open-source 3D time stack software for identifying patterns in neuronal data,
Journal of Neuroscience Research,
2023
Understanding functional correlations between the activities of neuron populations is vital for the analysis of neuronal networks. Analyzing large-scale neuroimaging data obtained from hundreds of neurons simultaneously poses significant visualization challenges. We developed V-NeuroStack, a novel network visualization tool to visualize data obtained using calcium imaging of spontaneous activity of neurons in a mouse brain slice as well as in vivo using two-photon imaging. V-NeuroStack creates 3D time stacks by stacking 2D time frames for a time-series dataset. It provides a web interface to explore and analyze data using both 3D and 2D visualization techniques. Previous attempts to analyze such data have been limited by the tools available to visualize large numbers of correlated activity traces. V-NeuroStack’s 3D view is used to explore patterns in dynamic large-scale correlations between neurons over time. The 2D view is used to examine any timestep of interest in greater detail. Furthermore, a dual-line graph provides the ability to explore the raw and first-derivative values of activity from an individual or a functional cluster of neurons. V-NeuroStack can scale to datasets with at least a few thousand temporal snapshots. It can potentially support future advancements in in vitro and in vivo data capturing techniques to bring forth novel hypotheses by allowing unambiguous visualization of massive patterns in neuronal activity data.
@article{Naik2023,title={V-NeuroStack: Open-source 3D time stack software for identifying patterns in neuronal data},author={Naik, Ashwini G. and Kenyon, Robert V. and Taheri, Aynaz and BergerWolf, Tanya Y. and Ibrahim, Baher A. and Shinagawa, Yoshitaka and Llano, Daniel A.},year={2023},journal={Journal of Neuroscience Research},volume={101},number={2},pages={217--231},doi={https://doi.org/10.1002/jnr.25139},url={https://onlinelibrary.wiley.com/doi/abs/10.1002/jnr.25139},keywords={3D visualization, calcium imaging, dynamic network, neural network, spatio-temporal data},eprint={https://onlinelibrary.wiley.com/doi/pdf/10.1002/jnr.25139},}
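The "3D time stack" construction is simple to sketch with NumPy: 2D activity frames are stacked along a time axis, and per-neuron traces (and their first derivatives, as in the dual-line graph) are slices of that stack. Shapes here are illustrative assumptions, not V-NeuroStack's data format.

import numpy as np

frames = [np.random.rand(128, 128) for _ in range(1000)]  # 2D time frames
stack = np.stack(frames, axis=0)            # shape (time, y, x)
print(stack.shape)                          # (1000, 128, 128)

trace = stack[:, 40, 52]                    # one pixel's activity over time
first_derivative = np.gradient(trace)       # raw vs. first-derivative views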
Nickolaus Saint, Ryan Chard, Rafael Vescovi, Jim Pruyne, Ben Blaiszik, Rachana Ananthakrishnan, Michael E. Papka, Rick Wagner, Kyle Chard, and Ian Foster,
Active Research Data Management with the Django Globus Portal Framework,
In Practice and Experience in Advanced Research Computing,
2023
Publishing and sharing data is critical to fostering collaboration and advancing scientific research. Data portals are commonly used to organize, publish, and securely disseminate data—a critical step toward making data findable, accessible, interoperable, and reusable (FAIR). However, the diversity of scientific data types, sizes, and locations presents significant challenges, e.g., it is difficult for portals to accommodate heterogeneous research products when using strict metadata schemas and rigid interfaces. Thus, there is a need for a user-customizable data portal solution that enables rapid creation of new portals that may be tailored to a researcher’s needs while accommodating distributed data sources and engaging advanced computing resources. In this paper, we present the Django Globus Portal Framework (DGPF), a tool designed to help users rapidly create secure, customizable, and extensible data portals. DGPF is a powerful and flexible framework that builds upon the Globus platform for authentication, data sharing, creation of automation flows, and search capabilities, allowing for seamless integration with existing research workflows. We present the design and implementation of the DGPF and describe our experiences operating the Argonne Community Data Co-op (ACDC)—a collection of DGPF portals with over 1M records and over 100 TB of published data that has been accessed by more than 300 users.
@inproceedings{SaChVe23,title={Active Research Data Management with the Django Globus Portal Framework},author={Saint, Nickolaus and Chard, Ryan and Vescovi, Rafael and Pruyne, Jim and Blaiszik, Ben and Ananthakrishnan, Rachana and Papka, Michael E. and Wagner, Rick and Chard, Kyle and Foster, Ian},year={2023},booktitle={Practice and Experience in Advanced Research Computing},location={Portland, OR, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={Pearc '23},pages={43–51},doi={10.1145/3569951.3593597},isbn={9781450399852},url={https://doi.org/10.1145/3569951.3593597},keywords={FAIR Data, Modern Research Data Portal, Globus},numpages={9},}
Shilpika, Bethany Lusch, Murali Emani, Filippo Simini, Venkatram Vishwanath, Michael E. Papka, and Kwan-Liu Ma,
A Multi-Level, Multi-Scale Visual Analytics Approach to Assessment of Multifidelity HPC Systems,
2023
The ability to monitor and interpret hardware system events and behaviors is crucial to improving the robustness and reliability of these systems, especially in a supercomputing facility. The growing complexity and scale of these systems demand an increase in monitoring data collected at multiple fidelity levels and varying temporal resolutions. In this work, we aim to build a holistic analytical system that helps make sense of such massive data, mainly the hardware logs, job logs, and environment logs collected from disparate subsystems and components of a supercomputer system. This end-to-end log analysis system, coupled with visual analytics support, allows users to glean and promptly extract supercomputer usage and error patterns at varying temporal and spatial resolutions. We use multiresolution dynamic mode decomposition (mrDMD), a technique that depicts high-dimensional data as correlated spatial-temporal variation patterns, or modes, to extract variation patterns isolated at specified frequencies. Our improvements to the mrDMD algorithm help promptly reveal useful information in the massive environment log dataset, which is then associated with the processed hardware and job log datasets using our visual analytics system. Furthermore, our system can identify the usage and error patterns filtered at user, project, and subcomponent levels. We exemplify the effectiveness of our approach with two use scenarios on the Cray XC40 supercomputer.
@misc{ShLuEm23,title={A Multi-Level, Multi-Scale Visual Analytics Approach to Assessment of Multifidelity HPC Systems},author={Shilpika and Lusch, Bethany and Emani, Murali and Simini, Filippo and Vishwanath, Venkatram and Papka, Michael E. and Ma, Kwan-Liu},year={2023},archiveprefix={arXiv},eprint={2306.09457},primaryclass={cs.HC},}
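For reference, one level of plain dynamic mode decomposition (the building block that mrDMD applies recursively over time windows and frequency bands) can be sketched in a few lines of NumPy; the rank and input data here are assumptions for illustration.

import numpy as np

def dmd(X, r=10):
    # X: (features, time) snapshot matrix; returns rank-r modes/eigenvalues.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ X2 @ V / S       # low-rank linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ V / S @ W                  # spatial-temporal variation modes
    return modes, eigvals

X = np.random.rand(500, 200)                # e.g., sensor channels x time
modes, eigvals = dmd(X)
print(modes.shape, np.abs(eigvals[:3]))     # mode shapes and growth rates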
Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael Irvin, J. Gregory Pauloski, Logan Ward, Valerie Hayot, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian Foster, James J. Davis, Michael E. Papka, Thomas Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi Hanson, Thomas E Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin Aji, Angela Dalton, Michael Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, and Rick Stevens,
DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies,
2023
In the upcoming decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences. This could herald a new era of scientific exploration, bringing significant advancements across sectors from drug development to renewable energy. To answer this call, we present the DeepSpeed4Science initiative, which aims to build unique capabilities through AI system technology innovations to help domain experts unlock today’s biggest science mysteries. By leveraging DeepSpeed’s current technology pillars (training, inference and compression) as base technology enablers, DeepSpeed4Science will create a new set of AI system technologies tailored for accelerating scientific discoveries by addressing their unique complexity beyond the common technical approaches used for accelerating generic large language models (LLMs). In this paper, we showcase the early progress we made with DeepSpeed4Science in addressing two of the critical system challenges in structural biology research.
@misc{SoKrZh23,title={DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies},author={Song, Shuaiwen Leon and Kruft, Bonnie and Zhang, Minjia and Li, Conglong and Chen, Shiyang and Zhang, Chengming and Tanaka, Masahiro and Wu, Xiaoxia and Rasley, Jeff and Awan, Ammar Ahmad and Holmes, Connor and Cai, Martin and Ghanem, Adam and Zhou, Zhongzhu and He, Yuxiong and Luferenko, Pete and Kumar, Divya and Weyn, Jonathan and Zhang, Ruixiong and Klocek, Sylwester and Vragov, Volodymyr and AlQuraishi, Mohammed and Ahdritz, Gustaf and Floristean, Christina and Negri, Cristina and Kotamarthi, Rao and Vishwanath, Venkatram and Ramanathan, Arvind and Foreman, Sam and Hippe, Kyle and Arcomano, Troy and Maulik, Romit and Zvyagin, Maxim and Brace, Alexander and Zhang, Bin and Bohorquez, Cindy Orozco and Clyde, Austin and Kale, Bharat and Perez-Rivera, Danilo and Ma, Heng and Mann, Carla M. and Irvin, Michael and Pauloski, J. Gregory and Ward, Logan and Hayot, Valerie and Emani, Murali and Xie, Zhen and Lin, Diangen and Shukla, Maulik and Foster, Ian and Davis, James J. and Papka, Michael E. and Brettin, Thomas and Balaprakash, Prasanna and Tourassi, Gina and Gounley, John and Hanson, Heidi and Potok, Thomas E and Pasini, Massimiliano Lupo and Evans, Kate and Lu, Dan and Lunga, Dalton and Yin, Junqi and Dash, Sajal and Wang, Feiyi and Shankar, Mallikarjun and Lyngaas, Isaac and Wang, Xiao and Cong, Guojing and Zhang, Pei and Fan, Ming and Liu, Siyan and Hoisie, Adolfy and Yoo, Shinjae and Ren, Yihui and Tang, William and Felker, Kyle and Svyatkovskiy, Alexey and Liu, Hang and Aji, Ashwin and Dalton, Angela and Schulte, Michael and Schulz, Karl and Deng, Yuntian and Nie, Weili and Romero, Josh and Dallago, Christian and Vahdat, Arash and Xiao, Chaowei and Gibbs, Thomas and Anandkumar, Anima and Stevens, Rick},year={2023},eprint={2310.04610},archiveprefix={arXiv},primaryclass={cs.AI},}
Maxim Zvyagin, Alexander Brace, Kyle Hippe, Yuntian Deng, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael Irvin, Defne G. Ozgulbas, Natalia Vassilieva, James Gregory Pauloski, Logan Ward, Valerie Hayot-Sasson, Murali Emani, Sam Foreman, Zhen Xie, Diangen Lin, Maulik Shukla, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Ian Foster, James J. Davis, Michael E. Papka, Thomas Brettin, Rick Stevens, Anima Anandkumar, Venkatram Vishwanath, and Arvind Ramanathan,
GenSLMs: Genome-scale language models reveal SARS-CoV-2 evolutionary dynamics,
The International Journal of High Performance Computing Applications,
2023
We seek to transform how new and emergent variants of pandemic-causing viruses, specifically SARS-CoV-2, are identified and classified. By adapting large language models (LLMs) for genomic data, we build genome-scale language models (GenSLMs) which can learn the evolutionary landscape of SARS-CoV-2 genomes. By pre-training on over 110 million prokaryotic gene sequences and fine-tuning a SARS-CoV-2-specific model on 1.5 million genomes, we show that GenSLMs can accurately and rapidly identify variants of concern. Thus, to our knowledge, GenSLMs represents one of the first whole-genome scale foundation models which can generalize to other prediction tasks. We demonstrate scaling of GenSLMs on GPU-based supercomputers and AI-hardware accelerators utilizing 1.63 Zettaflops in training runs with a sustained performance of 121 PFLOPS in mixed precision and peak of 850 PFLOPS. We present initial scientific insights from examining GenSLMs in tracking evolutionary dynamics of SARS-CoV-2, paving the path to realizing this on large biological data.
@article{ZvBrHi23,title={GenSLMs: Genome-scale language models reveal SARS-CoV-2 evolutionary dynamics},author={Zvyagin, Maxim and Brace, Alexander and Hippe, Kyle and Deng, Yuntian and Zhang, Bin and Bohorquez, Cindy Orozco and Clyde, Austin and Kale, Bharat and Perez-Rivera, Danilo and Ma, Heng and Mann, Carla M. and Irvin, Michael and Ozgulbas, Defne G. and Vassilieva, Natalia and Pauloski, James Gregory and Ward, Logan and Hayot-Sasson, Valerie and Emani, Murali and Foreman, Sam and Xie, Zhen and Lin, Diangen and Shukla, Maulik and Nie, Weili and Romero, Josh and Dallago, Christian and Vahdat, Arash and Xiao, Chaowei and Gibbs, Thomas and Foster, Ian and Davis, James J. and Papka, Michael E. and Brettin, Thomas and Stevens, Rick and Anandkumar, Anima and Vishwanath, Venkatram and Ramanathan, Arvind},year={2023},journal={The International Journal of High Performance Computing Applications},volume={37},number={6},pages={683--705},doi={10.1177/10943420231201154},url={https://doi.org/10.1177/10943420231201154},eprint={https://doi.org/10.1177/10943420231201154},}
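A hedged sketch of the genome-to-token step implied by adapting large language models for genomic data: the nucleotide string is split into codon-like 3-mers that play the role of words. The vocabulary scheme is an assumption for illustration, not necessarily GenSLMs' exact tokenizer.

from itertools import product

VOCAB = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=3))}

def tokenize(genome):
    codons = [genome[i:i + 3] for i in range(0, len(genome) - 2, 3)]
    return [VOCAB[c] for c in codons if c in VOCAB]   # 64-entry codon vocab

print(tokenize("ATGGCGTTTAAACCC"))   # token ids for ATG GCG TTT AAA CCC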
2022
Maryam Hosseini, Fabio Miranda, Jianzhe Lin, and Claudio T. Silva,
CitySurfaces: City-scale semantic segmentation of sidewalk materials,
Sustainable Cities and Society,
2022
While designing a sustainable and resilient urban built environment is increasingly promoted around the world, significant data gaps have made research on pressing sustainability issues challenging to carry out. Pavements are known to have strong economic and environmental impacts; however, most cities lack a spatial catalog of their surfaces due to the cost-prohibitive and time-consuming nature of data collection. Recent advancements in computer vision, together with the availability of street-level images, provide new opportunities for cities to extract large-scale built environment data with lower implementation costs and higher accuracy. In this paper, we propose CitySurfaces, an active learning-based framework that leverages computer vision techniques for classifying sidewalk materials using widely available street-level images. We trained the framework on images from New York City and Boston and the evaluation results show a 90.5% mIoU score. Furthermore, we evaluated the framework using images from six different cities, demonstrating that it can be applied to regions with distinct urban fabrics, even outside the domain of the training data. CitySurfaces can provide researchers and city agencies with a low-cost, accurate, and extensible method to collect sidewalk material data which plays a critical role in addressing major sustainability issues, including climate change and surface water management.
@article{Hosseini2022,title={CitySurfaces: City-scale semantic segmentation of sidewalk materials},author={Hosseini, Maryam and Miranda, Fabio and Lin, Jianzhe and Silva, Claudio T.},year={2022},journal={Sustainable Cities and Society},volume={79},pages={103630},doi={https://doi.org/10.1016/j.scs.2021.103630},issn={2210-6707},url={https://www.sciencedirect.com/science/article/pii/S2210670721008933},keywords={Sustainable built environment, Surface materials, Urban heat island, Semantic segmentation, Sidewalk assessment, Urban analytics, Computer vision},}
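The 90.5% mIoU figure quoted above is the standard mean intersection-over-union score; a minimal sketch of its computation follows, with the class count and arrays invented for illustration.

import numpy as np

def mean_iou(pred, truth, n_classes):
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:                        # skip classes absent from both
            ious.append(inter / union)
    return np.mean(ious)

pred = np.random.randint(0, 8, (512, 512))    # 8 hypothetical material classes
truth = np.random.randint(0, 8, (512, 512))
print(f"mIoU = {mean_iou(pred, truth, 8):.3f}")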
Carolina Veiga Ferreira de Souza, Priscila Cunha Luz Barcellos, Lhaylla Crissaff, Marcio Cataldi, Fabio Miranda, and Marcos Lage,
Visualizing simulation ensembles of extreme weather events,
Computers & Graphics,
2022
In the last 20 years, extreme weather-related events like floods, landslides, droughts, and wildfires have caused the death of 1.23 million people and a loss of 2.97 trillion dollars. Studies show that low and lower-middle income countries are the most impacted ones given the lack of investment in disaster risk management. To reduce the impact of these events, weather researchers have been developing numerical weather models that inform public agencies about impending extreme events in advance. Despite being powerful tools, these models can suffer from several sources of uncertainty, ranging from the approximation of micro-scale physical processes to the location-dependent calibration of parameters, which is especially critical in developing countries. To minimize uncertainty effects, researchers generate several different weather scenarios to compose an ensemble of simulations that typically are inspected using manual, laborious, and error-prone approaches. In this paper, we propose an interactive visual analytics system, called X-Weather, developed in close collaboration with weather researchers from Brazil. Our system contributes a set of statistics and probability-based visualizations that allows the assessment of extreme weather events by effortlessly navigating through and comparing ensemble members. We demonstrate the effectiveness of the system through two case studies analyzing tragic events that happened in the mountain region of Rio de Janeiro in Brazil.
@article{Veiga2022,title={Visualizing simulation ensembles of extreme weather events},author={{de Souza}, Carolina Veiga Ferreira and da Cunha {Luz Barcellos}, Priscila and Crissaff, Lhaylla and Cataldi, Marcio and Miranda, Fabio and Lage, Marcos},year={2022},journal={Computers & Graphics},volume={104},pages={162--172},doi={https://doi.org/10.1016/j.cag.2022.01.007},issn={0097-8493},url={https://www.sciencedirect.com/science/article/pii/S0097849322000073},keywords={Visual analytics, Weather visualization, Ensemble visualization},}
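One of the probability-based ensemble statistics such a system can expose is easy to sketch: the fraction of ensemble members predicting rainfall above a threshold at each grid cell. Member count, grid shape, and threshold are illustrative assumptions.

import numpy as np

ensemble = np.random.gamma(2.0, 10.0, size=(20, 100, 100))  # members x grid
threshold_mm = 50.0
p_extreme = (ensemble > threshold_mm).mean(axis=0)   # per-cell probability
print(p_extreme.shape)                               # (100, 100) map to plot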
Chengkang Shen, Peiyan Wang, and Wei Tang,
Exploiting appearance transfer and multi-scale context for efficient person image generation,
Pattern Recognition,
2022
Pose guided person image generation means to generate a photo-realistic person image conditioned on an input person image and a desired pose. This task requires spatial manipulation of the source image according to the target pose. However, convolutional neural networks (CNNs) are inherently limited in modeling geometric transformations due to the fixed geometric structures in their building modules, i.e., convolution, pooling and unpooling, which cannot handle large motion and occlusions caused by large pose transform. This paper introduces a novel two-stream context-aware appearance transfer network to address these challenges. It is a three-stage architecture consisting of a source stream and a target stream. Each stage features an appearance transfer module, a multi-scale context module and two-stream feature fusion modules. The appearance transfer module handles large motion by finding the dense correspondence between the two-stream feature maps and then transferring the appearance information from the source stream to the target stream. The multi-scale context module handles occlusion via contextual modeling, which is achieved by atrous convolutions of different sampling rates. Both quantitative and qualitative results indicate the proposed network can effectively handle challenging cases of large pose transform while retaining the appearance details. Compared with state-of-the-art approaches, it achieves comparable or superior performance using much fewer parameters while being significantly faster.
@article{Shen2022,title={Exploiting appearance transfer and multi-scale context for efficient person image generation},author={Shen, Chengkang and Wang, Peiyan and Tang, Wei},year={2022},journal={Pattern Recognition},volume={124},pages={108451},doi={https://doi.org/10.1016/j.patcog.2021.108451},issn={0031-3203},url={https://www.sciencedirect.com/science/article/pii/S0031320321006270},keywords={Person image generation, Appearance transfer, Multi-scale context, Efficient image generation},}
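The multi-scale context module described above (atrous convolutions at different sampling rates) can be sketched in PyTorch as follows; channel counts and dilation rates are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class MultiScaleContext(nn.Module):
    def __init__(self, channels=256, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=r, dilation=r)    # padding=r keeps spatial size
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 32, 32)
print(MultiScaleContext()(x).shape)   # torch.Size([1, 256, 32, 32])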
Daria Tsoupikova, Jo Cattell, Andrew Johnson, Lance Long, Arthur Nishimoto, and Sai Priya Jyothula,
Hummingbird: A Collaborative Live Theater and Virtual Reality Adventure,
In ACM SIGGRAPH 2022 Immersive Pavilion,
2022
Hummingbird is a modern, innovative performance merging live theater and interactive virtual reality by bringing a group of active participants into a shared space for a live performance. The performance premiered as part of Chicago’s Tony Award-winning Goodman Theatre New Stages Festival showcasing innovative and ground-breaking theater works in December 2021. This project bridges art, science and live theater through a collaborative research effort between computer science and design faculty and students at the University of Illinois Chicago (UIC) Electronic Visualization Laboratory (EVL) and Chicago theater directors, actors, videographers and producers. Hummingbird’s story celebrates courage and coming of age through the eyes of a gutsy teen who must outsmart her mother’s narcissistic boss and survive dangerous new technology in a live, immersive adventure. Hummingbird extends traditional live theater and makes virtual reality art accessible to a broader audience, demonstrating how virtual reality can transform theatrical storytelling.
@inproceedings{Tsoupikova2022,title={Hummingbird: A Collaborative Live Theater and Virtual Reality Adventure},author={Tsoupikova, Daria and Cattell, Jo and Johnson, Andrew and Long, Lance and Nishimoto, Arthur and Jyothula, Sai Priya},year={2022},booktitle={ACM SIGGRAPH 2022 Immersive Pavilion},location={Vancouver, BC, Canada},publisher={Association for Computing Machinery},address={New York, NY, USA},series={Siggraph '22},doi={10.1145/3532834.3536213},isbn={9781450393690},url={https://doi.org/10.1145/3532834.3536213},articleno={5},numpages={2},keywords={Multi-user, Performance, Storytelling, Theater, Virtual reality},}
Sanjana Srabanti, Carolina Veiga, Edcley Silva, Marcos Lage, Nivan Ferreira, and Fabio Miranda,
A Comparative Study of Methods for the Visualization of Probability Distributions of Geographical Data,
Multimodal Technologies and Interaction,
2022
Probability distributions are omnipresent in data analysis. They are often used to model the natural uncertainty present in real phenomena or to describe the properties of a data set. Designing efficient visual metaphors to convey probability distributions is, however, a difficult problem. This fact is especially true for geographical data, where conveying the spatial context constrains the design space. While many different alternatives have been proposed to solve this problem, they focus on representing data variability. However, they are not designed to support spatial analytical tasks involving probability quantification. The present work aims to adapt recent non-spatial approaches to the geographical context, in order to support probability quantification tasks. We also present a user study that compares the efficiency of these approaches in terms of both accuracy and usability.
@article{Srabanti2022,title={A Comparative Study of Methods for the Visualization of Probability Distributions of Geographical Data},author={Srabanti, Sanjana and Veiga, Carolina and Silva, Edcley and Lage, Marcos and Ferreira, Nivan and Miranda, Fabio},year={2022},journal={Multimodal Technologies and Interaction},volume={6},number={7},doi={10.3390/mti6070053},issn={2414-4088},url={https://www.mdpi.com/2414-4088/6/7/53},article-number={53},}
Roderick Tabalba, Nurit Kirshenbaum, Jason Leigh, Abari Bhatacharya, Andrew Johnson, Veronica Grosso, Barbara Di Eugenio, and Moira Zellner,
Articulate+ : An Always-Listening Natural Language Interface for Creating Data Visualizations,
In Proceedings of the 4th Conference on Conversational User Interfaces,
2022
Natural Language Interfaces and Voice User Interfaces for expressing data visualizations face ambiguities such as speech disfluency, under-specification, and abbreviations. In this paper, we describe Articulate+, an Artificial Intelligence Agent that is always listening, built to disambiguate requests while also spontaneously presenting informative visualizations. We conducted a preliminary user study to gain insight into the issues involved in providing an always-listening interface for data visualization. Our early results suggest that by leveraging Articulate+’s always-listening interface, users are able to obtain their desired visualizations with fewer queries while also being able to benefit from spontaneous visualizations generated by the system.
@inproceedings{Tabalba22,title={Articulate+ : An Always-Listening Natural Language Interface for Creating Data Visualizations},author={Tabalba, Roderick and Kirshenbaum, Nurit and Leigh, Jason and Bhatacharya, Abari and Johnson, Andrew and Grosso, Veronica and Di Eugenio, Barbara and Zellner, Moira},year={2022},booktitle={Proceedings of the 4th Conference on Conversational User Interfaces},location={Glasgow, United Kingdom},publisher={Association for Computing Machinery},address={New York, NY, USA},series={Cui '22},doi={10.1145/3543829.3544534},isbn={9781450397391},url={https://doi.org/10.1145/3543829.3544534},articleno={38},numpages={6},keywords={AI, AIA, Articulate, Articulate+, NLP, always listening, always-listening, artificial intelligent agent, charts, conversational, data, natural language processing, overhearing, visualization},}
Rafael Vescovi, Ryan Chard, Nickolaus D. Saint, Ben Blaiszik, Jim Pruyne, Tekin Bicer, Alex Lavens, Zhengchun Liu, Michael E. Papka, Suresh Narayanan, Nicholas Schwarz, Kyle Chard, and Ian T. Foster,
Linking scientific instruments and computation: Patterns, technologies, and experiences,
Patterns,
2022
Powerful detectors at modern experimental facilities routinely collect data at multiple GB/s. Online analysis methods are needed to enable the collection of only interesting subsets of such massive data streams, such as by explicitly discarding some data elements or by directing instruments to relevant areas of experimental space. Thus, methods are required for configuring and running distributed computing pipelines—what we call flows—that link instruments, computers (e.g., for analysis, simulation, artificial intelligence [AI] model training), edge computing (e.g., for analysis), data stores, metadata catalogs, and high-speed networks. We review common patterns associated with such flows and describe methods for instantiating these patterns. We present experiences with the application of these methods to the processing of data from five different scientific instruments, each of which engages powerful computers for data inversion, model training, or other purposes. We also discuss implications of such methods for operators and users of scientific facilities.
@article{Vescovi2022,title={Linking scientific instruments and computation: Patterns, technologies, and experiences},author={Vescovi, Rafael and Chard, Ryan and Saint, Nickolaus D. and Blaiszik, Ben and Pruyne, Jim and Bicer, Tekin and Lavens, Alex and Liu, Zhengchun and Papka, Michael E. and Narayanan, Suresh and Schwarz, Nicholas and Chard, Kyle and Foster, Ian T.},year={2022},journal={Patterns},volume={3},number={10},pages={100606},doi={https://doi.org/10.1016/j.patter.2022.100606},issn={2666-3899},url={https://www.sciencedirect.com/science/article/pii/S2666389922002318},keywords={Experiment automation, workflow, Globus, synchrotron light source, big data, machine learning, data fabric, computing fabric, trust fabric, scientific facility},bdsk-url-1={https://www.sciencedirect.com/science/article/pii/S2666389922002318},bdsk-url-2={https://doi.org/10.1016/j.patter.2022.100606},}
Yuping Fan, Boyang Li, Dustin Favorite, Naunidh Singh, Taylor Childers, Paul Rich, William Allcock, Michael E. Papka, and Zhiling Lan,
DRAS: Deep Reinforcement Learning for Cluster Scheduling in High Performance Computing,
IEEE Transactions on Parallel and Distributed Systems,
2022
@article{Fan2022,title={DRAS: Deep Reinforcement Learning for Cluster Scheduling in High Performance Computing},author={Fan, Yuping and Li, Boyang and Favorite, Dustin and Singh, Naunidh and Childers, Taylor and Rich, Paul and Allcock, William and Papka, Michael E. and Lan, Zhiling},year={2022},journal={IEEE Transactions on Parallel and Distributed Systems},volume={33},number={12},pages={4903--4917},doi={10.1109/tpds.2022.3205325},keywords={Processor scheduling;Dynamic scheduling;Runtime;Neural networks;Training;Q-learning;Production;High-performance computing;cluster scheduling;deep reinforcement learning;job starvation;backfilling;resource reservation;OpenAI Gym},}
Zhongyi Chen, Luc Renambot, Lance Long, Maxine Brown, and Andrew E. Johnson,
Moving from Composable to Programmable,
In 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW),
2022
@inproceedings{Chen2022,title={Moving from Composable to Programmable},author={Chen, Zhongyi and Renambot, Luc and Long, Lance and Brown, Maxine and Johnson, Andrew E.},year={2022},booktitle={2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},pages={1215--1220},doi={10.1109/ipdpsw55747.2022.00209},keywords={Symbiosis;Codes;Shape;Graphics processing units;Ethernet;User interfaces;Software;distributed systems;testbed implementation and deployment;composable infrastructure;deep learning;visualization;infrastructure as code},}
Lance Long, Timothy Bargo, Luc Renambot, Maxine Brown, and Andrew E. Johnson,
Composable Infrastructures for an Academic Research Environment: Lessons Learned,
In 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW),
2022
@inproceedings{Long2022,title={Composable Infrastructures for an Academic Research Environment: Lessons Learned},author={Long, Lance and Bargo, Timothy and Renambot, Luc and Brown, Maxine and Johnson, Andrew E.},year={2022},booktitle={2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},pages={1209--1214},doi={10.1109/ipdpsw55747.2022.00208},keywords={Training;Industries;Codes;Graphics processing units;Metals;Containers;Writing;composable infrastructure;deep learning;visualization;resource management;workload management;user workflow;composable co-location;infrastructure as code},}
2021
Krishna Bharadwaj, Andrew Burks, Andrew Johnson, Lance Long, Luc Renambot, Maxine Brown, Dylan Kobayashi, Mahdi Belcaid, Nurit Kirshenbaum, Roderick Tabalba, Ryan Theriot, and Jason Leigh,
Securing Collaborative Work in Wide-band Display Environments,
In 2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC),
2021
@inproceedings{Bharadwaj2021,title={Securing Collaborative Work in Wide-band Display Environments},author={Bharadwaj, Krishna and Burks, Andrew and Johnson, Andrew and Long, Lance and Renambot, Luc and Brown, Maxine and Kobayashi, Dylan and Belcaid, Mahdi and Kirshenbaum, Nurit and Tabalba, Roderick and Theriot, Ryan and Leigh, Jason},year={2021},booktitle={2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC)},pages={26--34},doi={10.1109/cic52973.2021.00014},keywords={Access control;Conferences;Computational modeling;Collaboration;Collaborative work;Internet;security;access control;collaborative environment;wide-band displays},}
2020
Andrew Burks, Luc Renambot, and Andrew Johnson,
VisSnippets: A Web-Based System for Impromptu Collaborative Data Exploration on Large Displays,
In Practice and Experience in Advanced Research Computing 2020: Catch the Wave,
2020
The VisSnippets system is designed to facilitate effective collaborative data exploration. VisSnippets leverages SAGE2 middleware that enables users to manage the display of digital media content on large displays, thereby providing collaborators with a high-resolution common workspace. Based in JavaScript, VisSnippets provides users with the flexibility to implement and/or select visualization packages and to quickly access data in the cloud. By simplifying the development process, VisSnippets removes the need to scaffold and integrate interactive visualization applications by hand. Users write reusable blocks of code called “snippets” for data retrieval, transformation, and visualization. By composing dataflows from the group’s collective snippet pool, users can quickly execute and explore complementary or contrasting analyses. By giving users the ability to explore alternative scenarios, VisSnippets facilitates parallel work for collaborative data exploration leveraging large-scale displays. We describe the system, its design and implementation, and showcase its flexibility through two example applications.
@inproceedings{Burks2020,title={VisSnippets: A Web-Based System for Impromptu Collaborative Data Exploration on Large Displays},author={Burks, Andrew and Renambot, Luc and Johnson, Andrew},year={2020},booktitle={Practice and Experience in Advanced Research Computing 2020: Catch the Wave},location={Portland, OR, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={Pearc '20},pages={144–151},doi={10.1145/3311790.3396666},isbn={9781450366892},url={https://doi.org/10.1145/3311790.3396666},numpages={8},keywords={collaborative visual analytics, information visualization, visual data science},}
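The snippet/dataflow idea above is easy to picture: small reusable blocks for retrieval, transformation, and visualization, composed into a pipeline. VisSnippets itself is JavaScript running on SAGE2; this Python sketch of the composition pattern is an illustrative analogy only.

def fetch(_):
    return [3, 1, 4, 1, 5, 9]            # data-retrieval snippet

def normalize(xs):
    hi = max(xs)
    return [x / hi for x in xs]          # transformation snippet

def ascii_bar(xs):
    for x in xs:                         # visualization snippet
        print("#" * int(x * 20))

def run(*snippets, data=None):
    for s in snippets:                   # compose a dataflow from the pool
        data = s(data)

run(fetch, normalize, ascii_bar)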
2019
Jason Leigh, Dylan Kobayashi, Nurit Kirshenbaum, Troy Wooton, Alberto Gonzalez, Luc Renambot, Andrew Johnson, Maxine Brown, Andrew Burks, Krishna Bharadwaj, Arthur Nishimoto, Lance Long, Jason Haga, John Burns, Francis Cristobal, Jared McLean, Roberto Pelayo, and Mahdi Belcaid,
Usage Patterns of Wideband Display Environments In e-Science Research, Development and Training,
In 2019 15th International Conference on eScience (eScience),
2019
@inproceedings{LeKoKi19,title={Usage Patterns of Wideband Display Environments In e-Science Research, Development and Training},author={Leigh, Jason and Kobayashi, Dylan and Kirshenbaum, Nurit and Wooton, Troy and Gonzalez, Alberto and Renambot, Luc and Johnson, Andrew and Brown, Maxine and Burks, Andrew and Bharadwaj, Krishna and Nishimoto, Arthur and Long, Lance and Haga, Jason and Burns, John and Cristobal, Francis and McLean, Jared and Pelayo, Roberto and Belcaid, Mahdi},year={2019},booktitle={2019 15th International Conference on eScience (eScience)},pages={301--310},doi={10.1109/eScience.2019.00041},keywords={tiled display wall, immersive analytics, visualization, human centered computing, computer supported cooperative work},}
Maxine Brown, Luc Renambot, Lance Long, Timothy Bargo, and Andrew E. Johnson,
COMPaaS DLV: Composable Infrastructure for Deep Learning in an Academic Research Environment,
In 2019 IEEE 27th International Conference on Network Protocols (ICNP),
2019
@inproceedings{BrReLo19,title={COMPaaS DLV: Composable Infrastructure for Deep Learning in an Academic Research Environment},author={Brown, Maxine and Renambot, Luc and Long, Lance and Bargo, Timothy and Johnson, Andrew E.},year={2019},booktitle={2019 IEEE 27th International Conference on Network Protocols (ICNP)},pages={1--2},doi={10.1109/icnp.2019.8888070},keywords={Deep learning;Data visualization;Graphics processing units;Switches;Hardware;Computer architecture;Computer science;distributed systems;testbed implementation & deployment;composable infrastructure;deep learning;visualization},}
2017
George Legrady, and Angus Graeme Forbes,
Data in Context: Conceptualizing Site-Specific Visualization Projects,
Leonardo,
Apr,
2017
Site-specific data visualization installations have distinct conditions of data collection, data analysis, audience interaction and data archiving. This article describes features of five data visualization projects related to their successful staging within different contexts.
@article{Legrady2017,title={{Data in Context: Conceptualizing Site-Specific Visualization Projects}},author={Legrady, George and Forbes, Angus Graeme},year={2017},month=apr,journal={Leonardo},volume={50},number={2},pages={200--204},doi={10.1162/LEON_a_01228},issn={0024-094x},url={https://doi.org/10.1162/LEON\_a\_01228},eprint={https://direct.mit.edu/leon/article-pdf/50/2/200/1577979/leon\_a\_01228.pdf},}
Michael J. Lewis, George K. Thiruvathukal, Venkatram Vishwanath, Michael E. Papka, and Andrew Johnson,
A distributed graph approach for pre-processing linked RDF data using supercomputers,
In Proceedings of The International Workshop on Semantic Big Data,
2017
Efficient RDF graph-based queries are becoming more pertinent given the increased interest in data analytics and its intersection with large, unstructured but connected data. Many commercial systems have adopted distributed RDF graph systems in order to handle increasing dataset sizes and complex queries. This paper introduces a distributed graph approach to pre-processing linked data. Instead of traversing the memory graph, our system indexes pre-processed join elements that are organized in a graph structure. We analyze the DBpedia dataset (derived from the Wikipedia corpus) and compare our access method to the graph traversal access approach, which we also devise. Results from our experiments show that the distributed, pre-processed graph approach to accessing linked data is faster than the traversal approach over a specific range of linked queries.
@inproceedings{Lewis2017,title={A distributed graph approach for pre-processing linked RDF data using supercomputers},author={Lewis, Michael J. and Thiruvathukal, George K. and Vishwanath, Venkatram and Papka, Michael E. and Johnson, Andrew},year={2017},booktitle={Proceedings of The International Workshop on Semantic Big Data},location={Chicago, Illinois},publisher={Association for Computing Machinery},address={New York, NY, USA},series={Sbd '17},doi={10.1145/3066911.3066913},isbn={9781450349871},url={https://doi.org/10.1145/3066911.3066913},articleno={6},numpages={6},keywords={RDF, distributed algorithms, high performance computing},}
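The pre-processed join-index idea can be shown with a toy example: subject-predicate pairs are indexed ahead of time so that linked queries become lookups rather than graph traversals. The triples and predicate names are invented for illustration.

from collections import defaultdict

triples = [
    ("EVL", "basedIn", "Chicago"),
    ("Chicago", "locatedIn", "Illinois"),
    ("Illinois", "locatedIn", "USA"),
]

index = defaultdict(list)                # (subject, predicate) -> objects
for s, p, o in triples:
    index[s, p].append(o)

# Two-hop linked query answered by lookups, not traversal:
for city in index["EVL", "basedIn"]:
    print(index[city, "locatedIn"])      # ['Illinois']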
Thomas Marrinan, Jason Leigh, Luc Renambot, Angus Forbes, Steve Jones, and Andrew E. Johnson,
Mixed Presence Collaboration using Scalable Visualizations in Heterogeneous Display Spaces,
In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing,
2017
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
@inproceedings{Marrinan2017,title={Mixed Presence Collaboration using Scalable Visualizations in Heterogeneous Display Spaces},author={Marrinan, Thomas and Leigh, Jason and Renambot, Luc and Forbes, Angus and Jones, Steve and Johnson, Andrew E.},year={2017},booktitle={Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing},location={Portland, Oregon, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={Cscw '17},pages={2236–2245},doi={10.1145/2998181.2998346},isbn={9781450343350},url={https://doi.org/10.1145/2998181.2998346},numpages={10},keywords={multi- user interaction, mixed presence collaboration, large-scale displays., data-conferencing},}
2016
Daria Tsoupikova, Scott Rettberg, Roderick Coover, and Arthur Nishimoto,
The battle for hearts and minds: interrogation and torture in the age of war: an adaptation for oculus rift,
In SIGGRAPH ASIA 2016 VR Showcase,
2016
Hearts and Minds: The Interrogations Project is a virtual reality art installation developed using a novel method for direct output of Unity-based virtual reality projects into the CAVE2™ [Febretti et al. 2013] environment. This artwork incorporates original research and technological innovation in an adaptation of veterans’ testimonies detailing US military interrogations in Iraq during the American counter-insurgency campaign in the early 2000s. It uses VR technology to immerse participants in the minds of people who experienced torture and interrogation during the war, in order to understand its ongoing social and psychological consequences. The powerful content of this artwork focuses on the impact of war and trauma on veterans, and utilizes the power of VR as a medium to evoke empathy, understanding, and awareness. This work was developed at the Electronic Visualization Laboratory in Chicago through a unique cross-disciplinary international collaboration between artists, scientists, and researchers from five different universities. The methods developed for this project allow hands-on virtual reality education by letting students quickly create their own virtual environments and exhibit them in CAVE2. These methods have recently been adopted in Design and Computer Science courses at the University of Illinois at Chicago.
@inproceedings{Tsoupikova2016,title={The battle for hearts and minds: interrogation and torture in the age of war: an adaptation for oculus rift},author={Tsoupikova, Daria and Rettberg, Scott and Coover, Roderick and Nishimoto, Arthur},year={2016},booktitle={SIGGRAPH ASIA 2016 VR Showcase},location={Macau},publisher={Association for Computing Machinery},address={New York, NY, USA},series={SA '16},doi={10.1145/2996376.2996383},isbn={9781450345422},url={https://doi.org/10.1145/2996376.2996383},articleno={5},numpages={2},keywords={virtual reality, storytelling, interaction, art, CAVE2™},}
Marco Cavallo, Geoffrey Alan Rhodes, and Angus Graeme Forbes,
Riverwalk: Incorporating Historical Photographs in Public Outdoor Augmented Reality Experiences,
In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct),
2016
@inproceedings{Cavallo2016,title={Riverwalk: Incorporating Historical Photographs in Public Outdoor Augmented Reality Experiences},author={Cavallo, Marco and Rhodes, Geoffrey Alan and Forbes, Angus Graeme},year={2016},booktitle={2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},pages={160--165},doi={10.1109/ISMAR-Adjunct.2016.0068},keywords={Cameras;Sensors;Two dimensional displays;Rivers;Three-dimensional displays;Augmented reality;Mobile communication;H.5.1 [Information interfaces and presentation (e.g. HCI)]: Multimedia Information Systems—Artificial;augmented and virtual realities},}
Abhinav Kumar, Jillian Aurisano, Barbara Di Eugenio, Andrew Johnson, Alberto Gonzalez, and Jason Leigh,
Towards a dialogue system that supports rich visualizations of data,
In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue,
Sep,
2016
@inproceedings{Kumar2016,title={Towards a dialogue system that supports rich visualizations of data},author={Kumar, Abhinav and Aurisano, Jillian and Di Eugenio, Barbara and Johnson, Andrew and Gonzalez, Alberto and Leigh, Jason},year={2016},month=sep,booktitle={Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue},publisher={Association for Computational Linguistics},address={Los Angeles},pages={304--309},doi={10.18653/v1/W16-3639},url={https://aclanthology.org/W16-3639},editor={Fernandez, Raquel and Minker, Wolfgang and Carenini, Giuseppe and Higashinaka, Ryuichiro and Artstein, Ron and Gainer, Alesia},}
Francesco Paduano, Ronak Etemadpour, and Angus G. Forbes,
BranchingSets: Interactively Visualizing Categories on Node-Link Diagrams,
In Proceedings of the 9th International Symposium on Visual Information Communication and Interaction,
2016
Node-link diagrams are widely used for visualizing relational data across many fields. However, in many situations it is useful to provide set membership information for elements in networks. We present BranchingSets, an interactive visualization technique that uses visual encodings similar to Kelp Diagrams in order to augment traditional node-link diagrams with information about the categories that both nodes and links belong to. BranchingSets introduces novel user-driven methods to procedurally navigate the graph topology and to interactively inspect complex, hierarchical data associated with individual nodes. Results indicate that users find the technique engaging and easy to use. This is further confirmed by a quantitative study that compares the effectiveness of the visual encodings used in BranchingSets to other techniques for displaying set membership within node-link diagrams, finding our technique more accurate and more efficient for facilitating interactive queries on networks containing nodes that belong to multiple sets.
@inproceedings{Paduano2016,title={BranchingSets: Interactively Visualizing Categories on Node-Link Diagrams},author={Paduano, Francesco and Etemadpour, Ronak and Forbes, Angus G.},year={2016},booktitle={Proceedings of the 9th International Symposium on Visual Information Communication and Interaction},location={Dallas, TX, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={VINCI '16},pages={9–16},doi={10.1145/2968220.2968229},isbn={9781450341493},url={https://doi.org/10.1145/2968220.2968229},numpages={8},}
Anil Çamcı,
Imagining through Sound: An experimental analysis of narrativity in electronic music,
Organised Sound,
2016
@article{Camci2016,title={Imagining through Sound: An experimental analysis of narrativity in electronic music},author={Çamcı, Anil},year={2016},journal={Organised Sound},volume={21},number={3},pages={179–191},doi={10.1017/s1355771816000169},}
G. Elisabeta Marai, Angus G. Forbes, and Andrew Johnson,
Interdisciplinary immersive analytics at the electronic visualization laboratory: Lessons learned and upcoming challenges,
In 2016 Workshop on Immersive Analytics (IA),
2016
@inproceedings{MaFoJo16,title={Interdisciplinary immersive analytics at the electronic visualization laboratory: Lessons learned and upcoming challenges},author={Marai, G. Elisabeta and Forbes, Angus G. and Johnson, Andrew},year={2016},booktitle={2016 Workshop on Immersive Analytics (IA)},pages={54--59},doi={10.1109/immersive.2016.7932384},keywords={Three-dimensional displays;Vegetation;Lakes;Corporate acquisitions;Data visualization;Virtual reality;Two dimensional displays;K.6.1 [Immersive Analytics]: Virtual Reality-Interdisciplinary collaborations;K.7.m [Technology]: Displays-CAVE2},}
Mengqi Xing, Olusola Ajilore, Ouri E Wolfson, Christopher Abbott, Annmarie MacNamara, Reza Tadayonnejad, Angus Forbes, K Luan Phan, Heide Klumpp, and Alex Leow,
Thought chart: Tracking dynamic EEG brain connectivity with unsupervised manifold learning,
In Brain Informatics and Health: International Conference, BIH 2016, Omaha, NE, USA, October 13-16, 2016, Proceedings,
Oct,
2016
@inproceedings{Xing2016,title={Thought chart: Tracking dynamic EEG brain connectivity with unsupervised manifold learning},author={Xing, Mengqi and Ajilore, Olusola and Wolfson, Ouri E and Abbott, Christopher and MacNamara, Annmarie and Tadayonnejad, Reza and Forbes, Angus and Phan, K Luan and Klumpp, Heide and Leow, Alex},year={2016},month=oct,day={13},booktitle={Brain Informatics and Health: International Conference, BIH 2016, Omaha, NE, USA, October 13-16, 2016, Proceedings},location={Omaha, NE},publisher={Springer International Publishing},pages={149--157},organization={Springer},}
Luc Renambot, Thomas Marrinan, Jillian Aurisano, Arthur Nishimoto, Victor Mateevitsi, Krishna Bharadwaj, Lance Long, Andy Johnson, Maxine Brown, and Jason Leigh,
SAGE2: A collaboration portal for scalable resolution displays,
Future Generation Computer Systems,
2016
In this paper, we present SAGE2, a software framework that enables local and remote collaboration on Scalable Resolution Display Environments (SRDE). An SRDE can be any configuration of displays, ranging from a single monitor to a wall of tiled flat-panel displays. SAGE2 creates a seamless ultra-high resolution desktop across the SRDE. Users can wirelessly connect to the SRDE with their own devices in order to interact with the system. Many users can simultaneously utilize a drag-and-drop interface to transfer local documents and show them on the SRDE, use a mouse pointer and keyboard to interact with existing content that is on the SRDE, and share their screen so that it is viewable to all. SAGE2 can be used in many configurations and is able to support many communities working with various types of media and high-resolution content, from research meetings to creative sessions to education. SAGE2 is browser-based, utilizing a web server to host content, WebSockets for message passing, and HTML with JavaScript for rendering and interaction. Recent web developments, with the emergence of HTML5, have allowed browsers to use advanced rendering techniques without requiring plug-ins (canvas drawing, WebGL 3D rendering, native video player, etc.). One major benefit of browser-based software is that there are no installation requirements for users and it is inherently cross-platform. A user simply needs a web browser on the device he/she wishes to use as an interaction tool for the SRDE. This considerably lowers the barrier to entry for engaging in meaningful collaboration sessions.
@article{Renambot2016,title={SAGE2: A collaboration portal for scalable resolution displays},author={Renambot, Luc and Marrinan, Thomas and Aurisano, Jillian and Nishimoto, Arthur and Mateevitsi, Victor and Bharadwaj, Krishna and Long, Lance and Johnson, Andy and Brown, Maxine and Leigh, Jason},year={2016},journal={Future Generation Computer Systems},volume={54},pages={296--305},doi={https://doi.org/10.1016/j.future.2015.05.014},issn={0167-739x},url={https://www.sciencedirect.com/science/article/pii/S0167739X15001892},keywords={Scalable Resolution Display Environments, Window manager, Web-based, Collaboration, Multi-user interaction, Large-scale displays, Application development},}
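SAGE2 itself is browser-based (a web server hosts content, WebSockets carry messages, and HTML/JavaScript render it); the standard-library Python sketch below is not SAGE2's protocol, only the general shape of the broadcast pattern the abstract describes, relaying JSON state updates so every connected display shows the same content.

```python
import asyncio, json

clients = set()  # writers for all currently connected display clients

async def handle(reader, writer):
    clients.add(writer)
    try:
        # Newline-framed JSON messages, e.g. {"app": "map", "x": 10}
        while line := await reader.readline():
            json.loads(line)            # parse/validate; a real system would
            for w in clients:           # route on app ids, window geometry, etc.
                if w is not writer:
                    w.write(line)       # broadcast so all tiles stay in sync
                    await w.drain()
    finally:
        clients.discard(writer)

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 9090)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```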
Thomas Marrinan, Luc Renambot, Jason Leigh, Angus Forbes, Steve Jones, and Andrew E. Johnson,
Synchronized Mixed Presence Data-Conferencing Using Large-Scale Shared Displays,
In Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces,
2016
Real world group-to-group collaboration often occurs between partially distributed interdisciplinary teams, with each discipline working in a unique environment suited for its needs. Groupware must be flexible so that it can be incorporated into a variety of workspaces in order to successfully facilitate this type of mixed presence collaboration. We have developed two new techniques for sharing and synchronizing multi-user applications between heterogeneous large-scale shared displays. The first new technique partitions displays into a perfectly mirrored public space and a local private space. The second new technique enables user-controlled partial synchronization, where different attributes of an application can be synchronized or controlled independently. This paper presents two main contributions of our work: 1) identifying deficiencies in current groupware for interacting with data during mixed presence collaboration, and 2) developing two multi-user data synchronization techniques to address these deficiencies and extend current collaborative infrastructure for large-scale shared displays.
@inproceedings{Marrinan2016,title={Synchronized Mixed Presence Data-Conferencing Using Large-Scale Shared Displays},author={Marrinan, Thomas and Renambot, Luc and Leigh, Jason and Forbes, Angus and Jones, Steve and Johnson, Andrew E.},year={2016},booktitle={Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces},location={Niagara Falls, Ontario, Canada},publisher={Association for Computing Machinery},address={New York, NY, USA},series={ISS '16},pages={355–360},doi={10.1145/2992154.2996780},isbn={9781450342483},url={https://doi.org/10.1145/2992154.2996780},numpages={6},keywords={multi-user interaction, mixed presence collaboration, large-scale displays, data synchronization, computer-supported cooperative work},}
Huy Bui, Eun-Sung Jung, Venkatram Vishwanath, Andrew Johnson, Jason Leigh, and Michael E. Papka,
Improving sparse data movement performance using multiple paths on the Blue Gene/Q supercomputer,
Parallel Computing,
2016
Special Issue on Parallel Programming Models and Systems Software for High-End Computing
In situ analysis has been proposed as a promising solution to glean faster insights and reduce the amount of data written to storage. A critical challenge here is that the reduced dataset is typically located on a subset of the nodes and needs to be written out to storage. Data coupling in multiphysics codes also exhibits a sparse data movement pattern wherein data movement occurs among a subset of nodes. We evaluate the performance of data movement for sparse data patterns on the IBM Blue Gene/Q supercomputing system “Mira” and identify performance bottlenecks. We propose a multipath data movement algorithm for sparse data patterns based on an adaptation of a maximum flow algorithm together with breadth-first search that fully exploits all the underlying data paths and I/O nodes to improve data movement. We demonstrate the efficacy of our solutions through a set of microbenchmarks and application benchmarks on Mira scaling up to 131,072 compute cores. The results show that our approach achieves up to 5× improvement in achievable throughput compared with the default mechanisms.
@article{Bui2016,title={Improving sparse data movement performance using multiple paths on the Blue Gene/Q supercomputer},author={Bui, Huy and Jung, Eun-Sung and Vishwanath, Venkatram and Johnson, Andrew and Leigh, Jason and Papka, Michael E.},year={2016},journal={Parallel Computing},volume={51},pages={3--16},doi={https://doi.org/10.1016/j.parco.2015.09.002},issn={0167-8191},url={https://www.sciencedirect.com/science/article/pii/S0167819115001167},note={Special Issue on Parallel Programming Models and Systems Software for High-End Computing},keywords={Multiple paths, Sparse data movement, Topology-aware aggregation, Data-intensive, Blue Gene/Q},bdsk-url-1={https://www.sciencedirect.com/science/article/pii/S0167819115001167},bdsk-url-2={https://doi.org/10.1016/j.parco.2015.09.002},}
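The primitive at the heart of the abstract, maximum flow computed with breadth-first search, is sketched below in its textbook form (Edmonds-Karp). The three-node topology is hypothetical, and nothing here models the paper's topology-aware aggregation or I/O-node scheduling.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest paths found by BFS."""
    total = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v, c in capacity.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total                         # no path left: flow is maximal
        path, v = [], sink                       # recover the path's edges
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:                        # push flow, add residual edges
            capacity[u][v] -= bottleneck
            rev = capacity.setdefault(v, {})
            rev[u] = rev.get(u, 0) + bottleneck
        total += bottleneck

# Hypothetical fragment with two disjoint routes between a sender S and sink T.
caps = {"S": {"A": 4, "B": 3}, "A": {"T": 4}, "B": {"T": 3}}
print(max_flow(caps, "S", "T"))  # 7: both paths are used in parallel
```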
Khairi Reda, Andrew E. Johnson, Michael E. Papka, and Jason Leigh,
Modeling and evaluating user behavior in exploratory visual analysis,
Information Visualization,
2016
Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This article presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
@article{Reda2016,title={Modeling and evaluating user behavior in exploratory visual analysis},author={Reda, Khairi and Johnson, Andrew E. and Papka, Michael E. and Leigh, Jason},year={2016},journal={Information Visualization},volume={15},number={4},pages={325--339},doi={10.1177/1473871616638546},url={https://doi.org/10.1177/1473871616638546},eprint={https://doi.org/10.1177/1473871616638546},bdsk-url-1={https://doi.org/10.1177/1473871616638546},}
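A minimal version of the modeling step described above: given a session coded as a sequence of states (the state names here are hypothetical stand-ins for the mental, interaction, and computational states deduced from transcripts, recordings, and log files), count consecutive transitions and normalize each row into probabilities.

```python
from collections import Counter, defaultdict

# Hypothetical coded session; in practice states come from verbal transcripts,
# video/audio recordings, and log files, as the paper describes.
session = ["observe", "hypothesize", "interact", "observe", "insight",
           "observe", "interact", "observe", "hypothesize", "insight"]

counts = defaultdict(Counter)
for a, b in zip(session, session[1:]):   # consecutive pairs are transitions
    counts[a][b] += 1

# Normalize each row so outgoing probabilities from a state sum to 1.
chain = {a: {b: n / sum(row.values()) for b, n in row.items()}
         for a, row in counts.items()}

for state, row in chain.items():
    print(state, "->", row)
```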
Thomas Marrinan, Arthur Nishimoto, Joseph A. Insley, Silvio Rizzi, Andrew Johnson, and Michael E. Papka,
Interactive Multi-Modal Display Spaces for Visual Analysis,
In Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces,
2016
Classic visual analysis relies on a single medium for displaying and interacting with data. Large-scale tiled display walls, virtual reality using head-mounted displays or CAVE systems, and collaborative touch screens have all been utilized for data exploration and analysis. We present our initial findings of combining numerous display environments and input modalities to create an interactive multi-modal display space that enables researchers to leverage various pieces of technology that will best suit specific sub-tasks. Our main contributions are 1) the deployment of an input server that interfaces with a wide array of interaction devices to create a single uniform stream of data usable by custom visual applications, and 2) three real-world use cases of leveraging multiple display environments in conjunction with one another to enhance scientific discovery and data dissemination.
@inproceedings{Marrinan2018,title={Interactive Multi-Modal Display Spaces for Visual Analysis},author={Marrinan, Thomas and Nishimoto, Arthur and Insley, Joseph A. and Rizzi, Silvio and Johnson, Andrew and Papka, Michael E.},year={2016},booktitle={Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces},location={Niagara Falls, Ontario, Canada},publisher={Association for Computing Machinery},address={New York, NY, USA},series={ISS '16},pages={421–426},doi={10.1145/2992154.2996792},isbn={9781450342483},url={https://doi.org/10.1145/2992154.2996792},numpages={6},keywords={collaboration, input devices, large-scale displays, motion capture, multi-touch screens, multi-user interaction, multiple display environments, virtual reality},}
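As a rough sketch of the first contribution, the snippet below normalizes events from heterogeneous devices into one uniform stream; the device names and field schemas are invented for illustration and are not the authors' actual event format.

```python
import time

# Hypothetical raw events from heterogeneous devices (touch overlay, motion
# capture, game controller); all field names are made up for the example.
raw_events = [
    {"dev": "touch", "px": 512, "py": 300, "state": "down"},
    {"dev": "mocap", "pos": (1.2, 0.4, 2.0), "joint": "hand_r"},
    {"dev": "gamepad", "axis": (0.1, -0.7), "button": 2},
]

def normalize(event):
    """Map each device-specific event onto one uniform schema that a
    visualization application can consume without knowing the device."""
    uniform = {"time": time.time(), "source": event["dev"]}
    if event["dev"] == "touch":
        uniform.update(kind="pointer", x=event["px"], y=event["py"],
                       action=event["state"])
    elif event["dev"] == "mocap":
        uniform.update(kind="pose", position=event["pos"], part=event["joint"])
    elif event["dev"] == "gamepad":
        uniform.update(kind="axis", value=event["axis"], button=event["button"])
    return uniform

for e in raw_events:
    print(normalize(e))
```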
2015
Khairi Reda, Andrew E. Johnson, Michael E. Papka, and Jason Leigh,
Effects of Display Size and Resolution on User Behavior and Insight Acquisition in Visual Exploration,
In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems,
2015
Large high-resolution displays are becoming increasingly common in research settings, providing data scientists with visual interfaces for the analysis of large datasets. Numerous studies have demonstrated unique perceptual and cognitive benefits afforded by these displays in visual analytics and information visualization tasks. However, the effects of these displays on knowledge discovery in exploratory visual analysis are still poorly understood. We present the results of a small-scale study to better understand how display size and resolution affect insight. Analyzing participants’ verbal statements, we find preliminary evidence that larger displays with more pixels can significantly increase the number of discoveries reported during visual exploration, while yielding broader, more integrative insights. Furthermore, we find important differences in how participants performed the same visual exploration task using displays of varying sizes. We tie these results to extant work and propose explanations by considering the cognitive and interaction costs associated with visual exploration.
@inproceedings{Reda2015,title={Effects of Display Size and Resolution on User Behavior and Insight Acquisition in Visual Exploration},author={Reda, Khairi and Johnson, Andrew E. and Papka, Michael E. and Leigh, Jason},year={2015},booktitle={Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems},location={Seoul, Republic of Korea},publisher={Association for Computing Machinery},address={New York, NY, USA},series={CHI '15},pages={2759–2768},doi={10.1145/2702123.2702406},isbn={9781450331456},url={https://doi.org/10.1145/2702123.2702406},numpages={10},keywords={visualization, large high-resolution displays, exploratory visual analysis, cognitive biases},}
Huy Bui, Preeti Malakar, Venkatram Vishwanath, Todd S. Munson, Eun-Sung Jung, Andrew Johnson, Michael E. Papka, and Jason Leigh,
Improving Communication Throughput by Multipath Load Balancing on Blue Gene/Q,
In 2015 IEEE 22nd International Conference on High Performance Computing (HiPC),
2015
@inproceedings{Bui2015,title={Improving Communication Throughput by Multipath Load Balancing on Blue Gene/Q},author={Bui, Huy and Malakar, Preeti and Vishwanath, Venkatram and Munson, Todd S. and Jung, Eun-Sung and Johnson, Andrew and Papka, Michael E. and Leigh, Jason},year={2015},booktitle={2015 IEEE 22nd International Conference on High Performance Computing (HiPC)},pages={115--124},doi={10.1109/HiPC.2015.44},keywords={Routing;Supercomputers;Throughput;Load modeling;Multiprocessor interconnection;Optimization;Data transfer;load balancing;optimization;heuristic;multipath},}
Sungwon Nam, Khairi Reda, Luc Renambot, Andrew Johnson, and Jason Leigh,
Multiuser-centered resource scheduling for collaborative display wall environments,
Future Generation Computer Systems,
2015
The popularity of large-scale, high-resolution display walls, as visualization endpoints in eScience infrastructure, is rapidly growing. These displays can be connected to distributed computing resources over high-speed networks, providing effective means for researchers to visualize, interact with, and understand large volumes of data. Typically, large display walls are built by tiling multiple physical displays together, and running a tiled display wall has required a cluster of computers. With the advent of advanced graphics hardware, a single computer can now drive over a dozen displays, thereby greatly reducing the cost of ownership and maintenance of a tiled display wall system. This in turn enables a broader user base to take advantage of such technologies. Since tiled display walls are also well suited to collaborative work, users tend to launch and operate multiple applications simultaneously. To ensure that applications remain highly responsive even under heavy load, the display wall must prioritize the limited system resources to maximize interactivity rather than thread-level fair sharing or overall job-completion throughput. In this paper, we present a new resource scheduling scheme that is specifically designed to prioritize responsiveness in collaborative large display wall environments where multiple users can interact with multiple applications simultaneously. We evaluate our scheduling scheme with a user study involving groups of users interacting simultaneously on a tiled display wall with multiple applications. Results show that our scheduling framework provided a higher frame rate for applications, which led to significantly higher user performance (approx. 25%) in a target acquisition test when compared against a traditional operating system scheduling scheme.
@article{Nam2015,title={Multiuser-centered resource scheduling for collaborative display wall environments},author={Nam, Sungwon and Reda, Khairi and Renambot, Luc and Johnson, Andrew and Leigh, Jason},year={2015},journal={Future Generation Computer Systems},volume={45},pages={162--175},doi={https://doi.org/10.1016/j.future.2014.08.012},issn={0167-739x},url={https://www.sciencedirect.com/science/article/pii/S0167739X14001605},keywords={Scheduling, Human factors, Algorithms, Interactive systems, Distributed graphics},}
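The scheduling idea can be caricatured in a few lines: instead of fair sharing, dispatch priority goes to applications a user has recently touched. The class below is a hypothetical single-machine sketch, not the authors' display-wall scheduler.

```python
import time

class InteractivityScheduler:
    """Pick the next application to service, preferring recent interaction."""

    def __init__(self, boost_window=2.0):
        self.boost_window = boost_window   # seconds of boosted priority
        self.last_input = {}               # app name -> time of last user input

    def note_input(self, app):
        self.last_input[app] = time.time()

    def pick_next(self, apps):
        now = time.time()
        def rank(app):
            recent = now - self.last_input.get(app, float("-inf")) < self.boost_window
            return (0 if recent else 1, app)   # interactive apps come first
        return min(apps, key=rank)

sched = InteractivityScheduler()
sched.note_input("map-viewer")                         # a user just touched it
print(sched.pick_next(["simulation", "map-viewer"]))   # -> map-viewer
```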
Sangyoon Lee, Andrew E. Johnson, Jason Leigh, Luc Renambot, Steve Jones, and Barbara Di Eugenio,
Emotionally Augmented Storytelling Agent,
In Intelligent Virtual Agents,
2015
The study presented in this paper uses a dimensional theory to augment agent nonverbal behavior, including emotional facial expressions and head gestures, in order to evaluate subtle differences between fine-grained conditions in the context of emotional storytelling. In a user study, participants rated perceived naturalness for seven different conditions; the results showed a significantly higher preference for the augmented facial expressions, whereas the head gesture model received mixed ratings: a significant preference in high-arousal cases (happy) but no significant preference in low-arousal cases (sad).
@inproceedings{Lee2015,title={Emotionally Augmented Storytelling Agent},author={Lee, Sangyoon and Johnson, Andrew E. and Leigh, Jason and Renambot, Luc and Jones, Steve and Di Eugenio, Barbara},year={2015},booktitle={Intelligent Virtual Agents},publisher={Springer International Publishing},address={Cham},pages={483--487},isbn={978-3-319-21996-7},editor={Brinkman, Willem-Paul and Broekens, Joost and Heylen, Dirk},}
Huy Bui, Robert Jacob, Preeti Malakar, Venkatram Viswanath, Andrew Johnson, Micheal E. Papka, and Jason Leigh,
Multipath Load Balancing for M × N Communication Patterns on the Blue Gene/Q Supercomputer Interconnection Network,
In 2015 IEEE International Conference on Cluster Computing,
2015
@inproceedings{Bui2017,title={Multipath Load Balancing for M × N Communication Patterns on the Blue Gene/Q Supercomputer Interconnection Network},author={Bui, Huy and Jacob, Robert and Malakar, Preeti and Viswanath, Venkatram and Johnson, Andrew and Papka, Micheal E. and Leigh, Jason},year={2015},booktitle={2015 IEEE International Conference on Cluster Computing},pages={833--840},doi={10.1109/cluster.2015.140},keywords={Routing;Throughput;Supercomputers;Optimization;Data transfer;Load modeling;Mathematical model;multi-path data movement;BG/Q;optimization;heuristic;interconnection network;communication patterns;network load balancing},}
2014
Huy Bui, Jason Leigh, Eun-Sung Jung, Venkatram Vishwanath, and Michael E. Papka,
Improving Data Movement Performance for Sparse Data Patterns on the Blue Gene/Q Supercomputer,
In 2014 43rd International Conference on Parallel Processing Workshops,
2014
@inproceedings{Bui2014,title={Improving Data Movement Performance for Sparse Data Patterns on the Blue Gene/Q Supercomputer},author={Bui, Huy and Leigh, Jason and Jung, Eun-Sung and Vishwanath, Venkatram and Papka, Michael E.},year={2014},booktitle={2014 43rd International Conference on Parallel Processing Workshops},pages={302--311},doi={10.1109/icppw.2014.47},keywords={Routing;Receivers;Data transfer;Throughput;Bandwidth;Heuristic algorithms;Supercomputers;multiple paths;sparse data movement;topology-aware},}
Khairi Reda, Andrew E. Johnson, Jason Leigh, and Michael E. Papka,
Evaluating user behavior and strategy during visual exploration,
In Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization,
2014
Visualization practitioners have traditionally focused on evaluating the outcome of the visual analytic process, as opposed to studying how that process unfolds. Since user strategy would likely influence the outcome of visual analysis and the nature of insights acquired, it is important to understand how the analytic behavior of users is shaped by variations in the design of the visualization interface. This paper presents a technique for evaluating user behavior in exploratory visual analysis scenarios. We characterize visual exploration as a fluid activity involving transitions between mental and interaction states. We show how micro-patterns in these transitions can be captured and analyzed quantitatively to reveal differences in the exploratory behavior of users, given variations in the visualization interface.
@inproceedings{Reda2014,title={Evaluating user behavior and strategy during visual exploration},author={Reda, Khairi and Johnson, Andrew E. and Leigh, Jason and Papka, Michael E.},year={2014},booktitle={Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization},location={Paris, France},publisher={Association for Computing Machinery},address={New York, NY, USA},series={BELIV '14},pages={41–45},doi={10.1145/2669557.2669575},isbn={9781450332095},url={https://doi.org/10.1145/2669557.2669575},numpages={5},keywords={exploratory visual analysis, insight-based evaluation},}
Huy Bui, Hal Finkel, Venkatram Vishwanath, Salma Habib, Katrin Heitmann, Jason Leigh, Michael Papka, and Kevin Harms,
Scalable Parallel I/O on a Blue Gene/Q Supercomputer Using Compression, Topology-Aware Data Aggregation, and Subfiling,
In 2014 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing,
2014
@inproceedings{Bui2018,title={Scalable Parallel I/O on a Blue Gene/Q Supercomputer Using Compression, Topology-Aware Data Aggregation, and Subfiling},author={Bui, Huy and Finkel, Hal and Vishwanath, Venkatram and Habib, Salma and Heitmann, Katrin and Leigh, Jason and Papka, Michael and Harms, Kevin},year={2014},booktitle={2014 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing},pages={107--111},doi={10.1109/pdp.2014.60},keywords={Bandwidth;Data compression;Libraries;Benchmark testing;Network topology;Supercomputers;Writing;topology-aware data movement;subfiling;compression},}
Alessandro Febretti, Arthur Nishimoto, Victor Mateevitsi, Luc Renambot, Andrew Johnson, and Jason Leigh,
Omegalib: A multi-view application framework for hybrid reality display environments,
In 2014 IEEE Virtual Reality (VR),
Mar,
2014
In the domain of large-scale visualization instruments, hybrid reality environments (HREs) are a recent innovation that combines the best-in-class capabilities of immersive environments, with the best-in-class capabilities of ultra-high-resolution display walls. HREs create a seamless 2D/3D environment that supports both information-rich analysis as well as virtual reality simulation exploration at a resolution matching human visual acuity. Co-located research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, linking between them and the ability to bring in (or hide) data quickly as needed. In this paper we present Omegalib, a software framework that facilitates application development on HREs. Omegalib is designed to support dynamic reconfigurability of the display environment, so that areas of the display can be interactively allocated to 2D or 3D workspaces as needed. Compared to existing frameworks and toolkits, Omegalib makes it possible to have multiple immersive applications running on a cluster-controlled display system, have different input sources dynamically routed to applications, and have rendering results optionally redirected to a distributed compositing manager. Omegalib supports pluggable front-ends, to simplify the integration of third-party libraries like OpenGL, OpenSceneGraph, and the Visualization Toolkit (VTK). We present examples of applications developed with Omegalib for the 74-megapixel, 72-tile CAVE2™ system, and show how a Hybrid Reality Environment proved effective in supporting work for a co-located research group in the environmental sciences.
@inproceedings{FeNiMa14,title={Omegalib: A multi-view application framework for hybrid reality display environments},author={Febretti, Alessandro and Nishimoto, Arthur and Mateevitsi, Victor and Renambot, Luc and Johnson, Andrew and Leigh, Jason},year={2014},month=mar,booktitle={2014 IEEE Virtual Reality (VR)},pages={9--14},doi={10.1109/vr.2014.6802043},issn={2375-5334},keywords={Three-dimensional displays;Runtime;Rendering (computer graphics);Collaboration;Visualization;Operating systems;Multi-view;Tiled Displays;Cluster;Immersive Environments;Middleware},}
Thomas Marrinan, Jillian Aurisano, Arthur Nishimoto, Krishna Bharadwaj, Victor Mateevitsi, Luc Renambot, Lance Long, Andrew Johnson, and Jason Leigh,
SAGE2: A New Approach for Data Intensive Collaboration Using Scalable Resolution Shared Displays,
In Proceedings of the 10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom),
Nov,
2014
@inproceedings{MaAuNi14,title={SAGE2: A New Approach for Data Intensive Collaboration Using Scalable Resolution Shared Displays},author={Marrinan, Thomas and Aurisano, Jillian and Nishimoto, Arthur and Bharadwaj, Krishna and Mateevitsi, Victor and Renambot, Luc and Long, Lance and Johnson, Andrew and Leigh, Jason},year={2014},month=nov,publisher={IEEE},doi={10.4108/icst.collaboratecom.2014.257337},proceedings={10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing},proceedings_a={Collaboratecom},keywords={large displays co-located collaboration remote collaboration window manager cloud technologies multi-user interaction computer supported cooperative work},}
2013
Khairi Reda, Alessandro Febretti, Aaron Knoll, Jillian Aurisano, Jason Leigh, Andrew Johnson, Michael E. Papka, and Mark Hereld,
Visualizing Large, Heterogeneous Data in Hybrid-Reality Environments,
IEEE Computer Graphics and Applications,
2013
@article{Reda2013,title={Visualizing Large, Heterogeneous Data in Hybrid-Reality Environments},author={Reda, Khairi and Febretti, Alessandro and Knoll, Aaron and Aurisano, Jillian and Leigh, Jason and Johnson, Andrew and Papka, Michael E. and Hereld, Mark},year={2013},journal={IEEE Computer Graphics and Applications},volume={33},number={4},pages={38--48},doi={10.1109/mcg.2013.37},keywords={Data visualization;Visualization;Stereo image processing;Monitoring;Visual analytics;Three-dimensional displays;Data visualization;Visualization;Stereo image processing;Monitoring;Navigation;Three-dimensional displays;Educational institutions;3D visualization;large high-resolution displays;integrative visualization;immersive visualization;hybrid-reality environments;computer graphics},}
Khairi Reda, Aaron Knoll, Ken-ichi Nomura, Michael E. Papka, Andrew E. Johnson, and Jason Leigh,
Visualizing large-scale atomistic simulations in ultra-resolution immersive environments,
In 2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV),
2013
@inproceedings{Reda2017,title={Visualizing large-scale atomistic simulations in ultra-resolution immersive environments},author={Reda, Khairi and Knoll, Aaron and Nomura, Ken-ichi and Papka, Michael E. and Johnson, Andrew E. and Leigh, Jason},year={2013},booktitle={2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)},pages={59--65},doi={10.1109/ldav.2013.6675159},keywords={Rendering (computer graphics);Visualization;Solid modeling;Macrocell networks;Computational modeling;Graphics processing units;Transfer functions},}
Jason Leigh, Andrew Johnson, Luc Renambot, Tom Peterka, Byungil Jeong, Daniel Sandin, Jonas Talandis, Ratko Jagodic, Sungwon Nam, Hyejung Hur, and Yiwen Sun,
Scalable Resolution Display Walls,
Proceedings of the IEEE,
Jan,
2013
@article{LeJoRe13,title={Scalable Resolution Display Walls},author={Leigh, Jason and Johnson, Andrew and Renambot, Luc and Peterka, Tom and Jeong, Byungil and Sandin, Daniel and Talandis, Jonas and Jagodic, Ratko and Nam, Sungwon and Hur, Hyejung and Sun, Yiwen},year={2013},month=jan,journal={Proceedings of the IEEE},volume={101},pages={115--129},doi={10.1109/jproc.2012.2191609},}
Alessandro Febretti, Arthur Nishimoto, Terrance Thigpen, Jonas Talandis, Lance Long, J. D. Pirtle, Tom Peterka, Alan Verlo, Maxine Brown, Dana Plepys, Dan Sandin, Luc Renambot, Andrew Johnson, and Jason Leigh,
CAVE2: a hybrid reality environment for immersive simulation and information analysis,
In The Engineering Reality of Virtual Reality 2013,
Mar,
2013
@inproceedings{Febretti2013,title={{CAVE2: a hybrid reality environment for immersive simulation and information analysis}},author={{Febretti}, Alessandro and {Nishimoto}, Arthur and {Thigpen}, Terrance and {Talandis}, Jonas and {Long}, Lance and {Pirtle}, J.~D. and {Peterka}, Tom and {Verlo}, Alan and {Brown}, Maxine and {Plepys}, Dana and {Sandin}, Dan and {Renambot}, Luc and {Johnson}, Andrew and {Leigh}, Jason},year={2013},month=mar,booktitle={The Engineering Reality of Virtual Reality 2013},series={Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series},volume={8649},pages={864903},doi={10.1117/12.2005484},editor={{Dolinsky}, Margaret and {McDowall}, Ian E.},eid={864903},adsurl={https://ui.adsabs.harvard.edu/abs/2013SPIE.8649E..03F},adsnote={Provided by the SAO/NASA Astrophysics Data System},}
2012
Tevfik Kosar, Jason Leigh, Andrew Johnson, Luc Renambot, Venkatram Vishwanath, Tom Peterka, and Nicholas Schwarz,
Visualization of Large-Scale Distributed Data,
Jan,
2012
@inbook{Kosar2012,title={Visualization of Large-Scale Distributed Data},author={Kosar, Tevfik and Leigh, Jason and Johnson, Andrew and Renambot, Luc and Vishwanath, Venkatram and Peterka, Tom and Schwarz, Nicholas},year={2012},month=jan,pages={242--274},doi={10.4018/978-1-61520-971-2.ch011},isbn={9781615209729},}
2011
Thomas A. Defanti, Daniel Acevedo, Richard A. Ainsworth, Maxine D. Brown, Steven Cutchin, Gregory Dawe, Kai-Uwe Doerr, Andrew Johnson, Chris Knox, Robert Kooima, Falko Kuester, Jason Leigh, Lance Long, Peter Otto, Vid Petrovic, Kevin Ponto, Andrew Prudhomme, Ramesh Rao, Luc Renambot, Daniel J. Sandin, Jurgen P. Schulze, Larry Smarr, Madhu Srinivasan, Philip Weber, and Gregory Wickham,
The future of the CAVE,
Central European Journal of Engineering,
Mar,
2011
@article{Defanti2011,title={{The future of the CAVE}},author={{Defanti}, Thomas A. and {Acevedo}, Daniel and {Ainsworth}, Richard A. and {Brown}, Maxine D. and {Cutchin}, Steven and {Dawe}, Gregory and {Doerr}, Kai-Uwe and {Johnson}, Andrew and {Knox}, Chris and {Kooima}, Robert and {Kuester}, Falko and {Leigh}, Jason and {Long}, Lance and {Otto}, Peter and {Petrovic}, Vid and {Ponto}, Kevin and {Prudhomme}, Andrew and {Rao}, Ramesh and {Renambot}, Luc and {Sandin}, Daniel J. and {Schulze}, Jurgen P. and {Smarr}, Larry and {Srinivasan}, Madhu and {Weber}, Philip and {Wickham}, Gregory},year={2011},month=mar,journal={Central European Journal of Engineering},volume={1},number={1},pages={16--37},doi={10.2478/s13531-010-0002-5},keywords={CAVE, Computer-supported collaborative work (CSCW), Graphics packages, Image displays, Immersive environments, Interactive environments, Sonification, Tele-immersion, Virtual reality, Scalable multi-tile displays},adsurl={https://ui.adsabs.harvard.edu/abs/2011CEJE....1...16D},adsnote={Provided by the SAO/NASA Astrophysics Data System},}
Ratko Jagodic, Luc Renambot, Andrew Johnson, Jason Leigh, and Sachin Deshpande,
Enabling multi-user interaction in large high-resolution distributed environments,
Future Generation Computer Systems,
2011
CineGrid: Super high definition media over optical networks
As the amount and the resolution of collected scientific data increase, scientists are realizing the potential benefits that large high-resolution displays can have in assimilating this incoming data. Often this data has to be processed on powerful remote computing and storage resources, converted to high-resolution digital media and yet visualized on a local tiled-display. This is the basic premise behind the OptIPuter model. While the streaming middleware to enable this kind of work exists and the optical networking infrastructure is becoming more widely available, enabling multi-user interaction in such environments is still a challenge. In this paper, we present an interaction system we developed to support collaborative work on large high-resolution displays using multiple interaction devices and scalable, distributed user interface widgets. This system allows multiple users to simultaneously interact with local or remote data, media and applications, through a variety of physical interaction devices on large high-resolution displays. Finally, we present our experiences with using the system over the past two years. Most importantly, having an actual working system based on the OptIPuter model allows us to focus our research efforts to better understand how to make such high-resolution environments more user-friendly and usable in true real-world collaborative scenarios as opposed to constrained laboratory settings.
@article{Jagodic2010,title={Enabling multi-user interaction in large high-resolution distributed environments},author={Jagodic, Ratko and Renambot, Luc and Johnson, Andrew and Leigh, Jason and Deshpande, Sachin},year={2011},journal={Future Generation Computer Systems},volume={27},number={7},pages={914--923},doi={https://doi.org/10.1016/j.future.2010.11.018},issn={0167-739x},url={https://www.sciencedirect.com/science/article/pii/S0167739X10002384},note={CineGrid: Super high definition media over optical networks},keywords={Human factors, Input devices and strategies, Interactive systems, Distributed systems, Distributed graphics, Collaborative computing},}
2010
Sungwon Nam, Sachin Deshpande, Venkatram Vishwanath, Byungil Jeong, Luc Renambot, and Jason Leigh,
Multi-application inter-tile synchronization on ultra-high-resolution display walls,
In Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems,
2010
Ultra-high-resolution tiled-display walls are typically driven by a cluster of computers. Each computer may drive one or more displays. Synchronization between the computers is necessary to ensure that animated imagery displayed on the wall appears seamless. Most tiled-display middleware systems are designed around the assumption that only a single application instance is running in the tiled display at a time. Therefore synchronization can be achieved with a simple solution such as a networked barrier. When a tiled display has to support multiple applications at the same time, however, the simple networked barrier approach does not scale. In this paper we propose and experimentally validate two synchronization algorithms to achieve low-latency, inter-tile synchronization for multiple applications with independently varying frame rates. The two-phase algorithm is more generally applicable to various high-resolution tiled display systems. The one-phase algorithm provides superior results but requires support for the Network Time Protocol and is more CPU-intensive.
@inproceedings{Nam2010,title={Multi-application inter-tile synchronization on ultra-high-resolution display walls},author={Nam, Sungwon and Deshpande, Sachin and Vishwanath, Venkatram and Jeong, Byungil and Renambot, Luc and Leigh, Jason},year={2010},booktitle={Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems},location={Phoenix, Arizona, USA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={MMSys '10},pages={145–156},doi={10.1145/1730836.1730854},isbn={9781605589145},url={https://doi.org/10.1145/1730836.1730854},numpages={12},keywords={tiled display, frame synchronization, cluster computing},}
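The shape of the two-phase scheme can be sketched with thread barriers on one machine. The paper's algorithms are networked and per-application, and the one-phase variant additionally relies on NTP-synchronized clocks; none of that is modeled in this illustrative snippet.

```python
import threading, time, random

NUM_TILES = 4
rendered = threading.Barrier(NUM_TILES)  # phase 1: every tile finished rendering
swapped = threading.Barrier(NUM_TILES)   # phase 2: every tile finished swapping

def tile(tile_id):
    for frame in range(3):
        time.sleep(random.uniform(0.01, 0.05))        # uneven per-tile render times
        rendered.wait()                               # wait until all tiles are ready
        print(f"tile {tile_id} swaps frame {frame}")  # near-simultaneous swap
        swapped.wait()                                # no tile starts the next frame
                                                      # before all swapped this one

threads = [threading.Thread(target=tile, args=(i,)) for i in range(NUM_TILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```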
Yu-Chung Chen, Sangyoon Lee, Hyejung Hur, Jason Leigh, Andrew Johnson, and Luc Renambot,
Case study: Designing an advanced visualization system for geological core drilling expeditions,
In Proceedings of the Conference on Human Factors in Computing Systems,
Apr,
2010
@inproceedings{Chen2010,title={Case study: Designing an advanced visualization system for geological core drilling expeditions},author={Chen, Yu-Chung and Lee, Sangyoon and Hur, Hyejung and Leigh, Jason and Johnson, Andrew and Renambot, Luc},year={2010},month=apr,journal={Conference on Human Factors in Computing Systems - Proceedings},pages={4645--4660},doi={10.1145/1753846.1754206},}
Byungil Jeong, Jason Leigh, Andrew Johnson, Luc Renambot, Maxine Brown, Ratko Jagodic, Sungwon Nam, and Hyejung Hur,
Ultrascale Collaborative Visualization Using a Display-Rich Global Cyberinfrastructure,
IEEE Computer Graphics and Applications,
May,
2010
The scalable adaptive graphics environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.
@article{Jeong2010,title={Ultrascale Collaborative Visualization Using a Display-Rich Global Cyberinfrastructure},author={Jeong, Byungil and Leigh, Jason and Johnson, Andrew and Renambot, Luc and Brown, Maxine and Jagodic, Ratko and Nam, Sungwon and Hur, Hyejung},year={2010},month=may,journal={IEEE Computer Graphics and Applications},volume={30},number={3},pages={71--83},doi={10.1109/mcg.2010.45},issn={1558-1756},keywords={International collaboration;Visualization;Graphics;Middleware;High-speed networks;Displays;visualization systems and software;distributed and networked graphics;remote systems;computer graphics;graphics and multimedia},}
Sangyoon Lee, Gordon Carlson, Steve Jones, Andrew Johnson, Jason Leigh, and Luc Renambot,
Designing an Expressive Avatar of a Real Person,
In Intelligent Virtual Agents,
2010
The human ability to express and recognize emotions plays an important role in face-to-face communication, and as technology advances it will be increasingly important for computer-generated avatars to be similarly expressive. In this paper, we present the detailed development process for the Lifelike Responsive Avatar Framework (LRAF) and a prototype application for modeling a specific individual to analyze the effectiveness of expressive avatars. In particular, the goals of our pilot study (n = 1,744) are to determine whether the specific avatar being developed is capable of conveying emotional states (Ekman's six classic emotions) via facial features and whether a realistic avatar is an appropriate vehicle for conveying the emotional states accompanying spoken information. The results of this study show that happiness and sadness are correctly identified with a high degree of accuracy while the other four emotional states show mixed results.
@inproceedings{Lee2010,title={Designing an Expressive Avatar of a Real Person},author={Lee, Sangyoon and Carlson, Gordon and Jones, Steve and Johnson, Andrew and Leigh, Jason and Renambot, Luc},year={2010},booktitle={Intelligent Virtual Agents},publisher={Springer Berlin Heidelberg},address={Berlin, Heidelberg},pages={64--76},isbn={978-3-642-15892-6},editor={Allbeck, Jan and Badler, Norman and Bickmore, Timothy and Pelachaud, Catherine and Safonova, Alla},}
2005
Xun Luo, T. Kline, H.C. Fischer, K.A. Stubblefield, R.V. Kenyon, and D.G. Kamper,
Integration of Augmented Reality and Assistive Devices for Post-Stroke Hand Opening Rehabilitation,
In 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference,
2005
@inproceedings{LuKlFi05,title={Integration of Augmented Reality and Assistive Devices for Post-Stroke Hand Opening Rehabilitation},author={Luo, Xun and Kline, T. and Fischer, H.C. and Stubblefield, K.A. and Kenyon, R.V. and Kamper, D.G.},year={2005},booktitle={2005 IEEE Engineering in Medicine and Biology 27th Annual Conference},pages={6855--6858},doi={10.1109/iembs.2005.1616080},keywords={Augmented reality;Fingers;Virtual reality;Layout;Neodymium;USA Councils;Haptic interfaces;Displays;Biomedical engineering;Electromyography;Stroke;Hand Rehabilitation;Augmented Reality;Assistive Device;Feedback Control},}
2003
Pat Banerjee, and Debraj Basu-Mallick,
Measuring the Effectiveness of Presence and Immersive Tendencies on the Conceptual Design Review Process,
Journal of Computing and Information Science in Engineering,
Jun,
2003
This is the second and final publication analyzing presence, immersive tendencies of individuals, and design comprehension. The effectiveness measurement problem is described in this article by means of a methodology to find the best design review process subject to certain criteria. The effectiveness measurement problem originates from our inability to logically understand the impact of a process or media differences on design comprehension; and it reconciles such differences in strengths of media that cannot be effectively covered by the relationship problem described in the first technical note. We illustrate the methodology with design review experiments where the Computer Aided Virtual Environment (CAVE™) is investigated as a possible virtual environment for product design reviews. Based on the measurable usability attributes of the design review process, the CAVE was considered to be a good solution achieving maximum utility. However, costs may hinder the ability to use the CAVE as a tool.
@article{Banerjee2003,title={{Measuring the Effectiveness of Presence and Immersive Tendencies on the Conceptual Design Review Process}},author={Banerjee, Pat and Basu-Mallick, Debraj},year={2003},month=jun,journal={Journal of Computing and Information Science in Engineering},volume={3},number={2},pages={166--169},doi={10.1115/1.1578500},issn={1530-9827},url={https://doi.org/10.1115/1.1578500},eprint={https://asmedigitalcollection.asme.org/computingengineering/article-pdf/3/2/166/5773187/166\_1.pdf},}
Eric He, Javid Alimohideen, Josh Eliason, Naveen K. Krishnaprasad, Jason Leigh, Oliver Yu, and Thomas A. DeFanti,
Quanta: a toolkit for high performance data delivery over photonic networks,
Future Generation Computer Systems,
2003
3rd biennial International Grid applications-driven testbed event, Amsterdam, The Netherlands, 23-26 September 2002
Quanta is a cross-platform adaptive networking toolkit for supporting the data delivery requirements of interactive and bandwidth intensive applications, such as Amplified Collaboration Environments. One of the unique goals of Quanta is to provide applications with the ability to provision optical pathways (commonly referred to as Lambdas) in dedicated photonic networks. This paper will introduce Quanta’s architecture and capabilities, with particular attention given to its aggressive and predictable high performance data transport scheme called Reliable Blast UDP (RBUDP). We provide an analytical model to predict RBUDP’s performance and compare the results of our model against experimental results performed over a high speed wide-area network.
@article{He2003,title={Quanta: a toolkit for high performance data delivery over photonic networks},author={He, Eric and Alimohideen, Javid and Eliason, Josh and Krishnaprasad, Naveen K. and Leigh, Jason and Yu, Oliver and DeFanti, Thomas A.},year={2003},journal={Future Generation Computer Systems},volume={19},number={6},pages={919--933},doi={https://doi.org/10.1016/S0167-739X(03)00071-2},issn={0167-739x},url={https://www.sciencedirect.com/science/article/pii/S0167739X03000712},note={3rd biennial International Grid applications-driven testbed event, Amsterdam, The Netherlands, 23-26 September 2002},keywords={Quanta, Photonic network, Reliable Blast UDP, High performance data transfer, Light path provision},}
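RBUDP's documented loop (blast all datagrams over UDP without per-packet acknowledgments, have the receiver report missing sequence numbers over a reliable TCP channel, resend only those) lends itself to a loopback simulation. The sketch below models that loop with simulated loss; there are no real sockets, and the packet contents are hypothetical.

```python
import random

def blast(packets, loss_rate=0.2):
    """Unreliable bulk send: each datagram independently arrives or is lost."""
    return {seq: data for seq, data in packets.items()
            if random.random() > loss_rate}

packets = {seq: f"chunk-{seq}" for seq in range(20)}
received = blast(packets)                        # initial UDP blast

while missing := sorted(set(packets) - set(received)):
    print("receiver reports missing:", missing)  # sent as a bitmap over TCP
    received.update(blast({s: packets[s] for s in missing}))

print("transfer complete:", len(received), "datagrams")
```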
2001
Tom DeFanti, Dan Sandin, Maxine Brown, Dave Pape, Josephine Anstey, Mike Bogucki, Greg Dawe, Andy Johnson, and Thomas S. Huang,
Technologies for Virtual Reality/Tele-Immersion Applications: Issues of Research in Image Display and Global Networking,
2001
The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) has developed an aggressive program over the past decade to partner with scores of computational scientists and engineers all over the world. The focus of this effort has been to create visualization and virtual reality (VR) devices and applications for collaborative exploration of scientific and engineering data. Since 1995, our research and development activities have incorporated emerging high-bandwidth networks like the vBNS and its international connection point STAR TAP, in an effort now called tele-immersion.
@inbook{DeFanti2001,title={Technologies for Virtual Reality/Tele-Immersion Applications: Issues of Research in Image Display and Global Networking},author={DeFanti, Tom and Sandin, Dan and Brown, Maxine and Pape, Dave and Anstey, Josephine and Bogucki, Mike and Dawe, Greg and Johnson, Andy and Huang, Thomas S.},year={2001},booktitle={Frontiers of Human-Centered Computing, Online Communities and Virtual Environments},publisher={Springer London},address={London},pages={137--159},doi={10.1007/978-1-4471-0259-5_10},isbn={978-1-4471-0259-5},url={https://doi.org/10.1007/978-1-4471-0259-5_10},editor={Earnshaw, Rae A. and Guedj, Richard A. and Dam, Andries van and Vince, John A.},}
1999
Maria Roussos, Andrew Johnson, Thomas Moher, Jason Leigh, Christina Vasilakis, and Craig Barnes,
Learning and Building Together in an Immersive Virtual World,
Presence,
1999
@article{Roussou1999,title={Learning and Building Together in an Immersive Virtual World},author={Roussos, Maria and Johnson, Andrew and Moher, Thomas and Leigh, Jason and Vasilakis, Christina and Barnes, Craig},year={1999},journal={Presence},publisher={MIT Press},volume={8},number={3},pages={247--263},doi={10.1162/105474699566215},}
Andrew Johnson, Thomas Moher, Stellan Ohlsson, and Mark Gillingham,
The Round Earth Project: deep learning in a collaborative virtual world,
In Proceedings IEEE Virtual Reality (Cat. No. 99CB36316),
1999
@inproceedings{Johnson1999,title={The Round Earth Project: deep learning in a collaborative virtual world},author={Johnson, Andrew and Moher, Thomas and Ohlsson, Stellan and Gillingham, Mark},year={1999},booktitle={Proceedings IEEE Virtual Reality (Cat. No. 99CB36316)},location={Houston, TX},pages={164--171},doi={10.1109/vr.1999.756947},keywords={Earth;Collaboration;Virtual reality;Cognitive science;Geoscience;Computer science;Computer science education;Psychology;Electric breakdown;Space technology},}
Kyoung Shin Park, and Robert V. Kenyon,
Effects of network characteristics on human performance in a collaborative virtual environment,
In Proceedings IEEE Virtual Reality (Cat. No. 99CB36316),
1999
@inproceedings{Kyoung1999,title={Effects of network characteristics on human performance in a collaborative virtual environment},author={Park, Kyoung Shin and Kenyon, Robert V.},year={1999},booktitle={Proceedings IEEE Virtual Reality (Cat. No. 99CB36316)},location={Houston, TX},pages={104--111},doi={10.1109/vr.1999.756940},keywords={Humans;Intelligent networks;Collaboration;Virtual environment;Delay;Jitter;Quality of service;Collaborative work;Optical design;Wide area networks},}
Amarnath Banerjee, Prashant Banerjee, Nong Ye, and Fred Dech,
Assembly planning effectiveness using virtual reality,
Presence: Teleoperators and Virtual Environments,
1999
@article{Banerjee1999,title={Assembly planning effectiveness using virtual reality},author={Banerjee, Amarnath and Banerjee, Prashant and Ye, Nong and Dech, Fred},year={1999},journal={Presence: Teleoperators and Virtual Environments},publisher={MIT Press},volume={8},number={2},pages={204--217},}
Marek Czernuszenko, Daniel Sandin, Andrew Johnson, and Thomas DeFanti,
Modeling 3D scenes from video,
The Visual Computer,
1999
@article{Czernuszenko1999,title={Modeling 3D scenes from video},author={Czernuszenko, Marek and Sandin, Daniel and Johnson, Andrew and DeFanti, Thomas},year={1999},journal={The Visual Computer},publisher={Springer},volume={15},number={7},pages={341--348},}
Andrew Johnson, Jason Leigh, Thomas A DeFanti, Daniel J Sandin, Maxine D Brown, and Greg Dawe,
Next-generation tele-immersive devices for desktop transoceanic collaboration,
In Visual Communications and Image Processing’99,
Jan,
1999
@inproceedings{Johnson2000,title={Next-generation tele-immersive devices for desktop transoceanic collaboration},author={Johnson, Andrew and Leigh, Jason and DeFanti, Thomas A and Sandin, Daniel J and Brown, Maxine D and Dawe, Greg},year={1999},month=jan,day={25},booktitle={Visual Communications and Image Processing'99},location={San Jose, CA},volume={3653},pages={1420--1429},organization={SPIE},}
Jason Leigh, Andrew E. Johnson, Thomas A. DeFanti, Maxine Brown, Mohammed Dastagir Ali, Stuart Bailey, Amarnath Banerjee, Prashant Benerjee, Jim Chen, Kevin Curry, Jim Curtis, Fred Dech, Brian Dodds, Ian Foster, Sarah Fraser, Karik Ganeshan, Dennis Glen, Robert Grossman, Randy Heiland, John Hicks, Alan D. Hudson, Tomoko Imai, Mohammed Ali Khan, Abhinav Kapoor, Robert V. Kenyon, John Kelso, Ron Kriz, Cathy Lascara, Xiaoyan Liu, Yalu Lin, Theodore Mason, Alan Millman, Kukimoto Nobuyuki, Kyoung Park, Bill Parod, Paul J. Rajlich, Mary Rasmussen, Maggie Rawlings, Daniel H. Robertson, Samroeng Thongrong, Robert J. Stein, Kent Swartz, Steve Tuecke, Harlan Wallach, Hong Yee Wong, and Glen H. Wheless,
A review of tele-immersive applications in the CAVE research network,
In Proceedings IEEE Virtual Reality (Cat. No. 99CB36316),
1999
@inproceedings{Leigh1999,title={A review of tele-immersive applications in the CAVE research network},author={Leigh, Jason and Johnson, Andrew E. and DeFanti, Thomas A. and Brown, Maxine and Ali, Mohammed Dastagir and Bailey, Stuart and Banerjee, Amarnath and Banerjee, Prashant and Chen, Jim and Curry, Kevin and Curtis, Jim and Dech, Fred and Dodds, Brian and Foster, Ian and Fraser, Sarah and Ganeshan, Karik and Glen, Dennis and Grossman, Robert and Heiland, Randy and Hicks, John and Hudson, Alan D. and Imai, Tomoko and Khan, Mohammed Ali and Kapoor, Abhinav and Kenyon, Robert V. and Kelso, John and Kriz, Ron and Lascara, Cathy and Liu, Xiaoyan and Lin, Yalu and Mason, Theodore and Millman, Alan and Nobuyuki, Kukimoto and Park, Kyoung and Parod, Bill and Rajlich, Paul J. and Rasmussen, Mary and Rawlings, Maggie and Robertson, Daniel H. and Thongrong, Samroeng and Stein, Robert J. and Swartz, Kent and Tuecke, Steve and Wallach, Harlan and Wong, Hong Yee and Wheless, Glen H.},year={1999},booktitle={Proceedings IEEE Virtual Reality (Cat. No. 99CB36316)},location={Houston, TX},pages={180--187},doi={10.1109/vr.1999.756949},keywords={Intelligent networks;Laboratories;Visualization;Virtual reality;Collaborative work;Virtual environment;Image databases;Industrial training;Computer networks;Sea measurements},}
C. Lascara, G. Wheless, D. Cox, R. Patterson, S. Levy, A. Johnson, and J. Leigh,
TeleImmersive Virtual Environments for Collaborative Knowledge Discovery,
Proceedings of the Advanced Simulation Technologies Conference ’99,
Apr,
1999
@article{Lascara1999,title={TeleImmersive Virtual Environments for Collaborative Knowledge Discovery},author={Lascara, C. and Wheless, G. and Cox, D. and Patterson, R. and Levy, S. and Johnson, A. and Leigh, J.},year={1999},month=apr,day={11},journal={Proceedings of the Advanced Simulation Technologies Conference ’99},location={San Diego, CA},date={April 11th, 1999},}
Glen H. Wheless, Cathy M. Lascara, Donna Cox, Robert Patterson, Stuart Levy, Andrew Johnson, Jason Leigh, and Abhinav Kapoor,
Use of collaborative virtual environments in the mine countermeasures mission,
In Information Systems for Navy Divers and Autonomous Underwater Vehicles Operating in Very Shallow Water and Surf Zone Regions,
1999
We describe our work on the development and use of collaborative virtual environments to support planning, rehearsal, and execution of tactical operations conducted as part of mine countermeasures missions (MCM). Utilizing our VR-based visual analysis tool, Cave5D, we construct interactive virtual environments based on graphical representations of bathymetry/topography, above-surface images, in-water objects, and environmental conditions. The data sources may include archived data stores and real-time inputs from model simulations or advanced observational platforms. The Cave5D application allows users to view, navigate, and interact with time-varying data in a fully 3D context, thus preserving necessary geospatial relationships crucial for intuitive analysis. Collaborative capabilities have been integrated into Cave5D to enable users at many distributed sites to interact in near real-time with each other and with the data in a many-to-many session. The ability to rapidly configure scenario-based missions in a shared virtual environment has the potential to change the way mission critical information is used by the MCM community.
@inproceedings{Wheless1999,title={{Use of collaborative virtual environments in the mine countermeasures mission}},author={Wheless, Glen H. and Lascara, Cathy M. and Cox, Donna and Patterson, Robert and Levy, Stuart and Johnson, Andrew and Leigh, Jason and Kapoor, Abhinav},year={1999},booktitle={Information Systems for Navy Divers and Autonomous Underwater Vehicles Operating in Very Shallow Water and Surf Zone Regions},publisher={SPIE},volume={3711},pages={203--209},doi={10.1117/12.354656},url={https://doi.org/10.1117/12.354656},editor={Wood-Putnam, Jody L.},organization={International Society for Optics and Photonics},}
N. Ye, P. Banerjee, A. Banerjee, and F. Dech,
A comparative study of assembly planning in traditional and virtual environments,
IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews),
1999
@article{Ye1999,title={A comparative study of assembly planning in traditional and virtual environments},author={Ye, N. and Banerjee, P. and Banerjee, A. and Dech, F.},year={1999},journal={IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)},volume={29},number={4},pages={546--555},doi={10.1109/5326.798768},keywords={Assembly;Virtual reality;Tellurium;Computer aided manufacturing;Production planning;Design engineering;Virtual environment;Costs;Product safety;Design automation},}
V. Giallorenzo, P. Banerjee, L. Conroy, and J. Franke,
Application of virtual reality in hospital facilities design,
Virtual Reality,
1999
The airborne particles present in certain hospital environments, such as the tuberculosis isolation or operating rooms, can be extremely harmful for patients and/or hospital personnel. An important issue during the design of hospital facilities is an efficient airborne particle removal system. A near-optimal setup of the parameters that affect the airflow, and consequently the airborne particle trajectories within the room is desirable. Computational Fluid Dynamics (CFD) is an alternative to tedious and time-consuming experimental investigations during the design phase, when a large number of alternatives need to be evaluated. The main limitations of CFD application in building design are the high level of skill required, the complexity of the setup phase, and the difficulty of output data interpretation using common 2D (two-dimensional) display devices. A virtual reality (VR) environment can help in overcoming some of these limitations. A CFD/VR procedure for design of contaminant-free hospital facilities is presented in this paper. By means of a VR preprocessing step, inferior solutions can be discarded to drastically reduce the number of configurations to investigate. Then, a CFD/VR tool is used to explore the restricted set of room layouts. The 3D (three-dimensional), immersive visualisation of an indoor space and of the particle motion inside it allows the user to really see the particle flows and consequently understand the effects of room parameters on particle motion throughout the room. In this way a close-to-optimal configuration of the room layout and of the ventilation system can be achieved more speedily and more conveniently compared to traditional CFD investigations.
@article{Giallorenzo1999,title={Application of virtual reality in hospital facilities design},author={Giallorenzo, V. and Banerjee, P. and Conroy, L. and Franke, J.},year={1999},journal={Virtual Reality},volume={4},number={3},pages={223--234},doi={10.1007/bf01418158},issn={1434-9957},url={https://doi.org/10.1007/BF01418158},date={1999/09/01},}
Jason Leigh, Andrew E. Johnson, Thomas A. DeFanti, Stuart Bailey, and Robert Grossman,
A Methodology for Supporting Collaborative Exploratory Analysis of Massive Data Sets in Tele-Immersive Environments,
In Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computing,
1999
This paper proposes a methodology for employing collaborative, immersive virtual environments as a high-end visualization interface for massive data-sets. The methodology employs feature detection, partitioning, summarization and decimation to significantly cull massive data-sets. These reduced data-sets are then distributed to the remote CAVEs, ImmersaDesks and desktop workstations for viewing. The paper also discusses novel techniques for collaborative visualization and meta-data creation.
@inproceedings{Leigh2000,title={A Methodology for Supporting Collaborative Exploratory Analysis of Massive Data Sets in Tele-Immersive Environments},author={Leigh, Jason and Johnson, Andrew E. and DeFanti, Thomas A. and Bailey, Stuart and Grossman, Robert},year={1999},booktitle={Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computing},publisher={IEEE Computer Society},address={USA},series={HPDC '99},pages={8},isbn={0769502873},keywords={collaborative virtual reality, data mining, tele-immersion},}
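The reduction pipeline the abstract names (feature detection, partitioning, summarization, decimation) can be illustrated with a minimal Python sketch; the deviation-from-mean detector, grid size, and stride below are assumptions for illustration, not the operators used in the paper.

import numpy as np

def cull_point_set(points, values, feature_sigma=1.0, cell=1.0, stride=4):
    """Cull a massive sampled data set before shipping it to remote
    displays: keep feature-rich samples, summarize the rest.
    All thresholds here are illustrative assumptions."""
    # Feature detection (stand-in): flag samples whose scalar value
    # deviates strongly from the global mean.
    interesting = np.abs(values - values.mean()) > feature_sigma * values.std()

    # Decimation: thin even the interesting samples by a fixed stride.
    kept = points[interesting][::stride]

    # Partitioning + summarization: bucket the remaining samples into a
    # coarse grid and replace each bucket by its centroid.
    rest = points[~interesting]
    if len(rest) == 0:
        return kept
    keys = np.floor(rest / cell).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((inv.max() + 1, points.shape[1]))
    np.add.at(sums, inv, rest)
    centroids = sums / np.bincount(inv)[:, None]
    return np.vstack([kept, centroids])

# 100,000 samples reduced to a far smaller set before distribution:
pts = np.random.rand(100_000, 3) * 10.0
vals = np.sin(pts[:, 0]) + 0.1 * np.random.rand(100_000)
reduced = cull_point_set(pts, vals)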
Ellen Sandor, Janine Fron, Kristine Greiber, Fernando Orellana, Stephan Meyers, Dana Plepys, Margaret Dolinsky, and Mohammed Dastagir Ali,
Collaborative Visualization: New Advances in Documenting Virtual Reality with IGrams,
In 1999 IEEE International Conference on Information Visualization,
Jul,
1999
(art)n Laboratory and the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago have collaborated on the development of the first real-time, stereoscopic hardcopy output of virtual reality applications - the ImmersaGram (IGram). The results of this new technology directly address a broad range of information visualization issues along a wide spectrum of disciplines from art, architecture, and science, to medicine, engineering and education.
@inproceedings{Sandor1999,title={{Collaborative Visualization: New Advances in Documenting Virtual Reality with IGrams}},author={Sandor, Ellen and Fron, Janine and Greiber, Kristine and Orellana, Fernando and Meyers, Stephan and Plepys, Dana and Dolinsky, Margaret and Ali, Mohammed Dastagir},year={1999},month=jul,booktitle={1999 IEEE International Conference on Information Visualization},publisher={IEEE Computer Society},address={Los Alamitos, CA, USA},pages={523},doi={10.1109/iv.1999.781607},issn={1093-9547},url={https://doi.ieeecomputersociety.org/10.1109/IV.1999.781607},keywords={Virtual Reality;Art;Science;lenticular;PHSCologram;Autostereography},}
1998
Rade Tesic, and Pat Banerjee,
Design of Virtual Objects for Exact Collision Detection in Virtual Reality Modeling of Manufacturing Processes,
In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference,
Sep,
1998
Collision detection becomes a key issue when we want to model interactions between general, nonconvex objects in virtual reality applications which arise in the manufacturing process domain. Despite significant progress which has been made in developing efficient, exact collision detection algorithms for convex objects, limited and slow progress has been reported in developing collision detection algorithms for general, nonconvex objects. To narrow this gap we introduce a concept of virtual objects which extends applicability of exact collision detection algorithms to nonconvex objects. This paper presents a methodology to encapsulate into virtual objects the surface patches of interest for collision detection as well as the automatic procedures for creation of virtual objects and for partitioning them into convex pieces. The collision detection technique described in this paper is best suited for interactive simulation and animation applications where high accuracy of object contact modeling is required. Examples include virtual assembly; mobile robot simulation; and simulation of manufacturing processes where accurate modeling of near-miss detection is essential, e.g. robotic painting, robotic welding, and NC machining operations.
@inproceedings{TeBa98,title={Design of Virtual Objects for Exact Collision Detection in Virtual Reality Modeling of Manufacturing Processes},author={Tesic, Rade and Banerjee, Pat},year={1998},month=sep,series={International Design Engineering Technical Conferences and Computers and Information in Engineering Conference},volume={4},pages={V004T04A019},doi={10.1115/detc98/dfm-5733},url={https://doi.org/10.1115/DETC98/DFM-5733},eprint={https://asmedigitalcollection.asme.org/IDETC-CIE/proceedings-pdf/DETC98/80340/V004T04A019/6635402/v004t04a019-detc98-dfm-5733.pdf},}
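A toy illustration of the decomposition idea (a simplified 2D stand-in, not the paper's algorithm): represent a nonconvex object as convex pieces once, then reduce nonconvex-versus-nonconvex queries to exact pairwise convex tests, here a standard separating-axis check. The L-shape and box below are assumed example geometry.

import numpy as np

def convex_overlap_2d(a, b):
    """Separating-axis test for two convex polygons (vertex arrays)."""
    for poly in (a, b):
        for i in range(len(poly)):
            edge = poly[(i + 1) % len(poly)] - poly[i]
            axis = np.array([-edge[1], edge[0]])   # edge normal
            pa, pb = a @ axis, b @ axis            # project both polygons
            if pa.max() < pb.min() or pb.max() < pa.min():
                return False                        # separating axis found
    return True

def virtual_objects_collide(pieces_a, pieces_b):
    """Nonconvex vs. nonconvex, reduced to convex pairwise tests."""
    return any(convex_overlap_2d(p, q) for p in pieces_a for q in pieces_b)

# An L-shaped (nonconvex) object split into two convex rectangles:
l_shape = [np.array([[0, 0], [2, 0], [2, 1], [0, 1]], float),
           np.array([[0, 1], [1, 1], [1, 3], [0, 3]], float)]
box = [np.array([[1.5, 0.5], [3, 0.5], [3, 2], [1.5, 2]], float)]
print(virtual_objects_collide(l_shape, box))       # True: they overlap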
1996
Thomas A. DeFanti, Ian Foster, Michael E. Papka, Rick Stevens, and Tim Kuhfuss,
Overview of the I-Way: Wide-Area Visual Supercomputing,
The International Journal of Supercomputer Applications and High Performance Computing,
1996
This paper discusses the I-WAY project and provides an overview of the papers in this issue of IJSA. The I-WAY is an experimental environment for building distributed virtual reality applications and for exploring issues of distributed wide-area resource management and scheduling. The goal of the I-WAY project is to enable researchers to use multiple internetworked supercomputers and advanced visualization systems to conduct very large scale computations. By connecting 12 ATM testbeds, 17 supercomputer centers, 5 virtual reality research sites, and over 60 applications groups, the I-WAY project has created an extremely diverse wide-area environment for exploring advanced applications. This environment has provided a glimpse of the future for advanced scientific and engineering computing.
@article{DeFoPa96,title={Overview of the I-Way: Wide-Area Visual Supercomputing},author={DeFanti, Thomas A. and Foster, Ian and Papka, Michael E. and Stevens, Rick and Kuhfuss, Tim},year={1996},journal={The International Journal of Supercomputer Applications and High Performance Computing},volume={10},number={2},pages={123--131},doi={10.1177/109434209601000201},url={https://doi.org/10.1177/109434209601000201},eprint={https://doi.org/10.1177/109434209601000201},}
1994
Andrew E. Johnson, and Farshad Fotouhi,
The SANDBOX: a virtual reality interface to scientific databases,
In Seventh International Working Conference on Scientific and Statistical Database Management,
1994
@inproceedings{Johnson1994,title={The SANDBOX: a virtual reality interface to scientific databases},author={Johnson, Andrew E. and Fotouhi, Farshad},year={1994},booktitle={Seventh International Working Conference on Scientific and Statistical Database Management},pages={12--21},doi={10.1109/ssdm.1994.336966},keywords={Virtual reality;Instruments;Visual databases;Feedback;Database languages;Computer science;Two dimensional displays;Information retrieval},}
Sumit Das, Terry Franguiadakis, Michael E. Papka, Thomas A. DeFanti, and Daniel J. Sandin,
A genetic programming application in virtual reality,
In Proceedings of 1994 IEEE 3rd International Fuzzy Systems Conference,
1994
@inproceedings{Das1994,title={A genetic programming application in virtual reality},author={Das, Sumit and Franguiadakis, Terry and Papka, Michael E. and DeFanti, Thomas A. and Sandin, Daniel J.},year={1994},booktitle={Proceedings of 1994 IEEE 3rd International Fuzzy Systems Conference},pages={1985--1989 vol.3},doi={10.1109/fuzzy.1994.343536},keywords={Genetic programming;Virtual reality;Shape;Genetic algorithms;Timbre;Image generation;Instruments;Visualization;Virtual environment;Algorithm design and analysis},}
1993
Carolina Cruz-Neira, Daniel J. Sandin, and Thomas A. DeFanti,
Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE,
In Seminal Graphics Papers: Pushing the Boundaries, Volume 2,
1993
This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH ’92 and Supercomputing ’92 for computational science data is also mentioned.
@inbook{Cruz-Neira93,title={Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE},author={Cruz-Neira, Carolina and Sandin, Daniel J. and DeFanti, Thomas A.},year={1993},booktitle={Seminal Graphics Papers: Pushing the Boundaries, Volume 2},publisher={Association for Computing Machinery},address={New York, NY, USA},isbn={9798400708978},url={https://doi.org/10.1145/3596711.3596718},edition={1},articleno={6},numpages={8},}
Thomas A. DeFanti, Daniel J. Sandin, and Carolina Cruz-Neira,
A “room” with a “view”,
IEEE Spectr.,
Oct,
1993
An immersive virtual reality system called the CAVE is described. To match virtual reality to real tasks, researchers built this smoothly functioning walk-in system mostly from off-the-shelf components. The CAVE represents a new model for the design of virtual reality systems, one that offers several advantages over existing models. CAVE users do not need to wear helmets, don bulky gloves and heavy electronics packs, or be pushed about by movement-restricting platforms. Instead, they put on a pair of lightweight glasses and walk into the Cave, a 27 m³ room with an open side and no ceiling. The Cave is in fact a partial cube, with the top and one vertical side missing. The three vertical sides are 3 m by 3 m rear-projection screens facing the viewer, and the floor is a front-projection screen. The glasses trick a user’s mind into seeing the screen images as three-dimensional objects.
@article{DeFant1993,title={A “room” with a “view”},author={DeFanti, Thomas A. and Sandin, Daniel J. and Cruz-Neira, Carolina},year={1993},month=oct,journal={IEEE Spectr.},publisher={IEEE Press},volume={30},number={10},pages={30–33},doi={10.1109/6.237582},issn={0018-9235},url={https://doi.org/10.1109/6.237582},numpages={4},}
Carolina Cruz-Neira, Daniel J. Sandin, and Thomas A. DeFanti,
Surround-screen projection-based virtual reality: the design and implementation of the CAVE,
In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques,
1993
This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH ’92 and Supercomputing ’92 for computational science data is also mentioned.
@inproceedings{Cruz-Neira1993,title={Surround-screen projection-based virtual reality: the design and implementation of the CAVE},author={Cruz-Neira, Carolina and Sandin, Daniel J. and DeFanti, Thomas A.},year={1993},booktitle={Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques},location={Anaheim, CA},publisher={Association for Computing Machinery},address={New York, NY, USA},series={SIGGRAPH '93},pages={135–142},doi={10.1145/166117.166134},isbn={0897916018},url={https://doi.org/10.1145/166117.166134},numpages={8},keywords={virtual reality, stereoscopic display, real-time manipulation, projection paradigms, head-tracking},}
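The off-axis perspective projection the abstract calls simple and straightforward can be sketched in a few lines, in the style later written up by Kooima as the "generalized perspective projection"; the wall dimensions and head position below are illustrative assumptions, not CAVE calibration data.

import numpy as np

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near):
    """Asymmetric (off-axis) frustum for a tracked eye and a rectangular
    screen. Returns glFrustum-style (left, right, bottom, top) extents."""
    vr = lower_right - lower_left; vr /= np.linalg.norm(vr)   # screen right
    vu = upper_left - lower_left;  vu /= np.linalg.norm(vu)   # screen up
    vn = np.cross(vr, vu);         vn /= np.linalg.norm(vn)   # screen normal

    va = lower_left - eye          # eye -> lower-left corner
    vb = lower_right - eye         # eye -> lower-right corner
    vc = upper_left - eye          # eye -> upper-left corner

    d = -(va @ vn)                 # perpendicular eye-to-screen distance
    return ((vr @ va) * near / d,  # left
            (vr @ vb) * near / d,  # right
            (vu @ va) * near / d,  # bottom
            (vu @ vc) * near / d)  # top

# Illustrative numbers: a 3 m x 3 m wall in the z = 0 plane, viewer
# 1.5 m in front of it, 0.5 m left of center, eyes 1.7 m above the floor.
eye = np.array([-0.5, 1.7, 1.5])
l, r, b, t = off_axis_frustum(eye,
                              np.array([-1.5, 0.0, 0.0]),
                              np.array([ 1.5, 0.0, 0.0]),
                              np.array([-1.5, 3.0, 0.0]),
                              near=0.1)
# l, r, b, t feed glFrustum(l, r, b, t, near, far) for that wall; each
# screen of the room gets its own frustum from the same tracked eye.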
Carolina Cruz-Neira, Jason Leigh, Michael E. Papka, Craig Barnes, Steven M. Cohen, Sumit Das, Roger Engelmann, Randy Hudson, Trina Roy, Lewis Siegel, Christina Vasilakis, Thomas A. DeFanti, and Daniel J. Sandin,
Scientists in wonderland: A report on visualization applications in the CAVE virtual reality environment,
In Proceedings of 1993 IEEE Research Properties in Virtual Reality Symposium,
1993
@inproceedings{Cruz-Neira1994,title={Scientists in wonderland: A report on visualization applications in the CAVE virtual reality environment},author={Cruz-Neira, Carolina and Leigh, Jason and Papka, Michael E. and Barnes, Craig and Cohen, Steven M. and Das, Sumit and Engelmann, Roger and Hudson, Randy and Roy, Trina and Siegel, Lewis and Vasilakis, Christina and DeFanti, Thomas A. and Sandin, Daniel J.},year={1993},booktitle={Proceedings of 1993 IEEE Research Properties in Virtual Reality Symposium},pages={59--66},doi={10.1109/vrais.1993.378262},keywords={Virtual reality;Data visualization;Application software;Virtual environment;Computer graphics;Laboratories;Image resolution;Head;Audio systems;Feedback},}
1992
Carolina Cruz-Neira, Daniel J. Sandin, Thomas A. DeFanti, Robert V. Kenyon, and John C. Hart,
The CAVE: audio visual experience automatic virtual environment,
Commun. ACM,
Jun,
1992
@article{Cruz-Neira1992,title={The CAVE: audio visual experience automatic virtual environment},author={Cruz-Neira, Carolina and Sandin, Daniel J. and DeFanti, Thomas A. and Kenyon, Robert V. and Hart, John C.},year={1992},month=jun,journal={Commun. ACM},publisher={Association for Computing Machinery},address={New York, NY, USA},volume={35},number={6},pages={64–72},doi={10.1145/129888.129892},issn={0001-0782},url={https://doi.org/10.1145/129888.129892},numpages={9},}
John C. Hart,
The object instancing paradigm for linear fractal modeling,
In Proceedings of the Conference on Graphics Interface ’92,
1992
@inproceedings{Hart1992,title={The object instancing paradigm for linear fractal modeling},author={Hart, John C.},year={1992},booktitle={Proceedings of the Conference on Graphics Interface '92},location={Vancouver, British Columbia, Canada},publisher={Morgan Kaufmann Publishers Inc.},address={San Francisco, CA, USA},pages={224–231},isbn={0969533810},numpages={8},keywords={recurrent iterated function system, object instancing, linear fractal, constructive solid geometry, L-system},}
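A cycle in the instancing graph behaves like an iterated function system, so a minimal way to see such a linear fractal is the standard random-iteration ("chaos game") algorithm; the three affine maps below (a Sierpinski-style gasket) are an assumed example, not taken from the paper.

import random

# Three affine contractions; reapplying them corresponds to following
# the cycle in the instancing graph.
MAPS = [lambda p: (0.5 * p[0],        0.5 * p[1]),
        lambda p: (0.5 * p[0] + 0.5,  0.5 * p[1]),
        lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + 0.5)]

def chaos_game(n=100_000, burn_in=20):
    """Random-iteration algorithm: iterates converge to the attractor."""
    p, pts = (0.0, 0.0), []
    for i in range(n + burn_in):
        p = random.choice(MAPS)(p)    # follow one randomly chosen map
        if i >= burn_in:              # discard transient start-up points
            pts.append(p)
    return pts

points = chaos_game()                 # scatter-plot these to see the gasket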
1991
John C. Hart, and Thomas A. DeFanti,
Efficient antialiased rendering of 3-D linear fractals,
In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques,
1991
Object instancing is the efficient method of representing an hierarchical object with a directed graph instead of a tree. If this graph contains a cycle then the object it represents is a linear fractal. Linear fractals are difficult to render for three specific reasons: (1) ray-fractal intersection is not trivial, (2) surface normals are undefined and (3) the object aliases at all sampling resolutions. Ray-fractal intersections are efficiently approximated to sub-pixel accuracy using procedural bounding volumes and a careful determination of the size of a pixel, giving the perception that the surface is infinitely detailed. Furthermore, a surface normal for these non-differentiable surfaces is defined and analyzed. Finally, the concept of antialiasing "covers" is adapted and used to solve the problem of sampling fractal surfaces. An initial bounding volume estimation method is also described, allowing a linear fractal to be rendered given only its iterated function system. A parallel implementation of these methods is described and applications of these results to the rendering of other fractal models are given.
@inproceedings{Hart1991,title={Efficient antialiased rendering of 3-D linear fractals},author={Hart, John C. and DeFanti, Thomas A.},year={1991},booktitle={Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques},publisher={Association for Computing Machinery},address={New York, NY, USA},series={SIGGRAPH '91},pages={91–100},doi={10.1145/122718.122728},isbn={0897914368},url={https://doi.org/10.1145/122718.122728},numpages={10},keywords={covers, fractal, object instancing, procedural modeling, ray tracing},}
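A rough sketch of the central idea, ray tracing a linear fractal through recursively instanced bounding volumes with a sub-pixel termination test; the similarity transforms, the root sphere, and the angular pixel-size test below are assumptions for illustration, not the published algorithm.

import numpy as np

# Assumed similarity transforms (scale, offset) and a sphere covering
# the attractor; mapping a sphere through a transform scales its radius.
XFORMS = [(0.5, np.array([0.0,  0.0, 0.0])),
          (0.5, np.array([0.5,  0.0, 0.0])),
          (0.5, np.array([0.25, 0.5, 0.0]))]
ROOT_CENTER, ROOT_RADIUS = np.array([0.5, 0.35, 0.0]), 1.0

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray-sphere hit distance (unit direction), or None."""
    oc = origin - center
    b = oc @ direction
    disc = b * b - (oc @ oc - radius * radius)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0.0 else None

def trace(origin, direction, center, radius, pixel_angle, depth=16):
    """Descend the instancing hierarchy until a bounding sphere subtends
    less than a pixel -- then treat it as a surface hit."""
    t = hit_sphere(origin, direction, center, radius)
    if t is None or depth == 0:
        return None
    if radius / t < pixel_angle:               # sub-pixel: stop refining
        return t
    hits = (trace(origin, direction, s * center + off, s * radius,
                  pixel_angle, depth - 1) for s, off in XFORMS)
    return min((h for h in hits if h is not None), default=None)

t = trace(np.array([0.5, 0.35, -3.0]), np.array([0.0, 0.0, 1.0]),
          ROOT_CENTER, ROOT_RADIUS, pixel_angle=0.002)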
1990
John C. Hart, Louis H. Kauffman, and Daniel J. Sandin,
Interactive visualization of quaternion Julia sets,
In Proceedings of the First IEEE Conference on Visualization: Visualization ’90,
Oct,
1990
@inproceedings{Hart1990,title={Interactive visualization of quaternion Julia sets},author={Hart, John C. and Kauffman, Louis H. and Sandim, Daniel J.},year={1990},month=oct,day={1},booktitle={Proceedings of the First IEEE Conference on Visualization: Visualization `90},pages={209--218},doi={10.1109/visual.1990.146384},keywords={Quaternions;Fractals;Shape;Clouds;Rendering (computer graphics);Workstations;Ray tracing;Computer graphics;Data visualization;Laboratories},}
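The underlying iteration is the quaternion analogue of the complex map z <- z^2 + c; a minimal membership test (not the paper's ray-traced renderer, which also needs normals and distance estimates) looks like this, with the constant c chosen purely for illustration.

import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def in_julia_set(q, c, max_iter=64, escape=4.0):
    """Iterate q <- q^2 + c; points that stay bounded approximate the set."""
    for _ in range(max_iter):
        q = quat_mul(q, q) + c
        if q @ q > escape * escape:    # squared norm past escape radius
            return False
    return True

c = np.array([-0.2, 0.6, 0.2, 0.2])    # illustrative constant, not the paper's
print(in_julia_set(np.array([0.0, 0.0, 0.0, 0.0]), c))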
1989
Daniel J. Sandin, Ellen Sandor, William T. Cunnally, Mark Resch, Thomas A. DeFanti, and Maxine D. Brown,
Computer-Generated Barrier-Strip Autostereography,
Proceedings of SPIE, Three-Dimensional Visualization and Display Technologies,
Sep,
1989
@article{Sandin1989,title={Computer-Generated Barrier-Strip Autostereography},author={Sandin, Daniel J. and Sandor, Ellen and Cunnally, William T. and Resch, Mark and DeFanti, Thomas A. and Brown, Maxine D.},year={1989},month=sep,day={1},journal={Proceedings of SPIE, Three-Dimensional Visualization and Display Technologies},volume={1083},number={0},pages={65--75},url={www.spie.org},editor={Fisher, Scott S. and Robbins, William E.},}
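The core of barrier-strip autostereography is interleaving several rendered views into vertical strips matched to the barrier pitch, so each eye sees its own view through the slits; a minimal sketch, with the view count and one-column pitch as assumptions, ignoring the physical calibration (slit width, viewing distance) the paper addresses.

import numpy as np

def interleave_views(views):
    """Interleave N rendered views column-by-column into one image to be
    printed behind a barrier strip whose pitch spans N columns.
    views: array of shape (N, height, width, channels)."""
    n, h, w, c = views.shape
    out = np.empty((h, w * n, c), dtype=views.dtype)
    for i in range(n):
        # Every n-th output column, starting at i, comes from view i.
        out[:, i::n, :] = views[i]
    return out

# Four illustrative 'views' (solid gray levels stand in for renders):
views = np.stack([np.full((4, 6, 3), v, np.uint8) for v in (0, 85, 170, 255)])
print(interleave_views(views).shape)    # (4, 24, 3)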
1984
Thomas A. DeFanti,
The Mass Impact of Videogame Technology,
In Advances in Computers,
1984
This chapter focuses on the hardware, software, marketing, legal aspects, and future of coin-operated arcade videogames (“coin-op” or “arcade” games). The chapter presents the history and recent development of videogames. There are many ways to differentiate one videogame from another. The discussion in this chapter attempts to set forth the currently important distinctions among videogames. One way to categorize coin-op games is by the number of players. One and two-player games exist, and although all games allow players to take turns, two-player games allow both to play at the same time: against the machine, against each other, or both. Videogame hardware is examined, starting with the cabinet and proceeding inward to the circuits and logic. The coin-operated videogame is currently available in four physical configurations: full-size upright, mini upright, sit-in, and cocktail table. Animation techniques are rather dependent on the hardware. They can be divided into two types: vector and raster. Vector graphics are drawn random scan and are easily geometrically transformed as wire-frame objects in real time. Raster images can be moved only in x and y directions and possibly rotated by 90°; animation is done as in cartoons by flipping different images at least 12 times a second.
@incollection{DeFanti1984,title={The Mass Impact of Videogame Technology},author={DeFanti, Thomas A.},year={1984},publisher={Elsevier},series={Advances in Computers},volume={23},pages={93--140},doi={10.1016/S0065-2458(08)60463-5},issn={0065-2458},url={https://www.sciencedirect.com/science/article/pii/S0065245808604635},editor={Yovits, Marshall C.},}
1976
Thomas A. DeFanti,
The digital component of the circle graphics habitat,
In Proceedings of the June 7-10, 1976, National Computer Conference and Exposition,
1976
This real-time interactive computer graphics system derives from the author’s dissertation at the Ohio State University (National Science Foundation Grant GJ-204, Charles A. Csuri, project director). The system, called "The Graphics Symbiosis System" or "Grass" was first designed to help artists interactively explore computer art without the constant companionship of a programmer. Over the past three years, it has been expanded at the University of Illinois at Chicago Circle (Figure 1) and is now the image generation portion of a short-order full-color animated videotape production facility called "The Circle Graphics Habitat." Combined with Dan Sandin’s Image Processor, the system is sufficiently powerful and flexible to be used in real-time performance context here at UICC.
@inproceedings{DeFanti1976,title={The digital component of the circle graphics habitat},author={DeFanti, Thomas A.},year={1976},booktitle={Proceedings of the June 7-10, 1976, National Computer Conference and Exposition},location={New York, New York},publisher={Association for Computing Machinery},address={New York, NY, USA},series={AFIPS '76},pages={195–203},doi={10.1145/1499799.1499829},isbn={9781450379175},url={https://doi.org/10.1145/1499799.1499829},numpages={9},}