Jul 25, 2023 - PN 6-4
Schäfer, Noel; Tilli, Pascal; Munz-Körner, Tanja; Künzel, Sebastian; Vidyapu, Sandeep; Vu, Ngoc Thang; Weiskopf, Daniel, 2023, "Model Parameters and Evaluation Data for our Visual Analysis System for Scene-Graph-Based Visual Question Answering", https://doi.org/10.18419/darus-3597, DaRUS, V1
Pretrained model parameters and pregenerated evaluation data for our visual analysis system for scene-graph-based visual question answering (https://doi.org/10.18419/darus-3589).
Jul 25, 2023 - PN 6-4
Munz-Körner, Tanja; Künzel, Sebastian; Weiskopf, Daniel, 2023, "Supplemental Material for "Visual-Explainable AI: The Use Case of Language Models"", https://doi.org/10.18419/darus-3456, DaRUS, V1
Supplemental material for the poster "Visual-Explainable AI: The Use Case of Language Models" published at the International Conference on Data-Integrated Simulation Science 2023. Collection of videos and images showing our interactive visualization systems for exploring language...
Jul 25, 2023 - PN 6-4
Schäfer, Noel; Tilli, Pascal; Munz-Körner, Tanja; Künzel, Sebastian; Vidyapu, Sandeep; Vu, Ngoc Thang; Weiskopf, Daniel, 2023, "Visual Analysis System for Scene-Graph-Based Visual Question Answering", https://doi.org/10.18419/darus-3589, DaRUS, V1
Source code of our visual analysis system to explore scene-graph-based visual question answering. This approach is built on top of the state-of-the-art GraphVQA framework, which was trained on the GQA dataset. Instructions on how to use our system can be found in the README.
Jun 12, 2023 - EXC IntCDC Research Project 28 'Visual Support for Architectural Design and Construction'
Pathmanathan, Nelusa; Öney, Seyda; Becher, Michael; Sedlmair, Michael; Weiskopf, Daniel; Kurzhals, Kuno, 2023, "Replication Data for: Been There, Seen That: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments", https://doi.org/10.18419/darus-3383, DaRUS, V1
The file contains a Unity project that allows testing of the desktop-based visualization techniques introduced in the paper "Been There, Seen That: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments". It allows you to analyze 3D gaze and movement data...
Jun 12, 2023 - EXC IntCDC Research Project 10 'Co-Design from Architectural, Social and Computational Perspectives'
Öney, Seyda; Pathmanathan, Nelusa; Becher, Michael; Sedlmair, Michael; Weiskopf, Daniel; Kurzhals, Kuno, 2023, "Replication Data for: Visual Gaze Labeling for Augmented Reality Studies", https://doi.org/10.18419/darus-3384, DaRUS, V1
This .zip file contains the Unity project of the visualization tool described in the paper "Visual Gaze Labeling for Augmented Reality Studies", which was accepted at the EuroVis 2023 conference. This tool can be used to annotate gaze data from Augmented Reality (AR) scenarios to perf...
Apr 3, 2023 - Visualisierungsinstitut der Universität Stuttgart
Vriend, Sita; Vidyapu, Sandeep; Rama, Amer; Chen, Kun-Ting; Weiskopf, Daniel, 2023, "Supplemental Material for "Which Experimental Design is Better Suited for VQA Tasks? - Eye Tracking Study on Cognitive Load, Performance, and Gaze Allocations"", https://doi.org/10.18419/darus-3380, DaRUS, V1, UNF:6:mnjCJZLU4zgl+uXCgQV5fA== [fileUNF]
We investigated the effect of stimulus-question ordering and of the modality in which the question is presented in a user study with visual question answering (VQA) tasks. In an eye-tracking study (N=13), we tested five conditions within subjects. The conditions were counter-balanced...