May 29, 2024
Koch, Maurice; Kurzhals, Kuno; Ngo, Quynh Quang; Weiskopf, Daniel, 2024, "Dataset for NMF-based Analysis of Mobile Eye-Tracking Data", https://doi.org/10.18419/darus-4023, DaRUS, V1, UNF:6:Ejbvxe7W2i/4uT/GMXGVbQ== [fileUNF]
This mobile eye-tracking dataset consists of 27 recordings of three participants (all authors) walking through a small art gallery. Participants were instructed to attend individual paintings in specific orders, resulting in five distinct scanpath patterns. The recordings' durati...
May 16, 2024
Koch, Maurice; Pathmanathan, Nelusa; Weiskopf, Daniel; Kurzhals, Kuno, 2024, "Dataset for "How Deep Is Your Gaze? Leveraging Distance in Image-Based Gaze Analysis"", https://doi.org/10.18419/darus-4141, DaRUS, V1, UNF:6:sGarZdQENrgDcSTnOSbSeA== [fileUNF]
This dataset was recorded in an AR environment comprising three physical and three virtual scene objects. Four participants were instructed to gaze at the six objects from different depth levels (50 cm, 150 cm, 300 cm) in two orders (left-to-right, right-to-left). There are seven...
Apr 3, 2023
Vriend, Sita; Vidyapu, Sandeep; Rama, Amer; Chen, Kun-Ting; Weiskopf, Daniel, 2023, "Supplemental Material for "Which Experimental Design is Better Suited for VQA Tasks? - Eye Tracking Study on Cognitive Load, Performance, and Gaze Allocations"", https://doi.org/10.18419/darus-3380, DaRUS, V1, UNF:6:mnjCJZLU4zgl+uXCgQV5fA== [fileUNF]
We investigated the effect of stimulus-question ordering and the modality in which the question is presented in a user study with visual question answering (VQA) tasks. In an eye-tracking user study (N=13), we tested 5 conditions within-subjects. The conditions were counter-balanced...