Mar 3, 2025
Sood, Ekta; Kögel, Fabian; Bulling, Andreas, 2024, "VQA-MHUG", https://doi.org/10.18419/DARUS-4428, DaRUS, V2
We present VQA-MHUG - a novel 49-participant dataset of multimodal human gaze on both images and questions during visual question answering (VQA), collected using a high-speed eye tracker. To the best of our knowledge, this is the first resource containing multimodal human gaze data over a textual question and the corresponding image. Our corpus en...
Mar 3, 2025
VQA-MHUG
Markdown Text - 4.2 KB - MD5: b8367a799da2dbfbb26a6ec4ebc67a5b
Dataset information and usage description
Nov 22, 2024 - SFB-TRR 161 A07 "Visual Attention Modeling for Optimization of Information Visualizations"
Wang, Yao, 2024, "SalChartQA: Question-driven Saliency on Information Visualisations (Dataset and Reproduction Data)", https://doi.org/10.18419/DARUS-3884, DaRUS, V2
Understanding the link between visual attention and user’s needs when visually exploring information visualisations is under-explored due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap, we introduce SalChartQA - a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a q...
Nov 22, 2024
SalChartQA: Question-driven Saliency on Information Visualisations (Dataset and Reproduction Data)
ZIP Archive - 730.9 MB - MD5: aecb5a6a4029bcadb4fe1b0ff78a6345
npy files for dataloader
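
The saliency maps in this deposit are shipped as NumPy .npy files intended to be consumed by a dataloader. A minimal loading sketch, assuming the ZIP archive unpacks to per-sample .npy arrays (the path below is hypothetical, not the archive's documented layout):

    import numpy as np

    # Load one question-driven saliency map; the file path is an
    # assumption -- adjust it to the extracted archive's actual layout.
    sal = np.load("SalChartQA/saliency/example.npy")
    print(sal.shape, sal.dtype)  # expected: a 2-D float array over the visualisation
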
Oct 28, 2024
VQA-MHUG
Unknown - 60.8 KB - MD5: 1cc24a7c924c3a46b44546f983ac7c79
[pickled pandas dataframe] [AiR stimuli] [sequential presentation] participant answers to VQA question after viewing both stimuli
Oct 28, 2024
VQA-MHUG
Unknown - 128.5 KB - MD5: 3a5462b2f96d6b0a4b0ad9ac0c595d5f
[pickled pandas dataframe] [AiR stimuli] [sequential presentation] bounding box coordinates of text (words) and image
Oct 28, 2024
VQA-MHUG
Unknown - 4.7 MB - MD5: b72e6f16af7c4be11138db20927f5d64
[pickled pandas dataframe] [AiR stimuli] [sequential presentation] fixation data of both eyes on both stimuli
Oct 28, 2024
VQA-MHUG
Unknown - 58.6 KB - MD5: c0d9f958466c90843b9115fff873aef6
[pickled pandas dataframe] [AiR stimuli] [joint presentation] participant answers to VQA question after viewing joint stimulus
Oct 28, 2024
VQA-MHUG
Unknown - 88.8 KB - MD5: fede2879e467f90ae38975dc1dd0ea63
[pickled pandas dataframe] [AiR stimuli] [joint presentation] bounding box coordinates of text (words) and image
Oct 28, 2024
VQA-MHUG
Unknown - 3.1 MB - MD5: 05d93d0152c9f418e7cd8c36a868712b
[pickled pandas dataframe] [AiR stimuli] [joint presentation] fixation data of both eyes on joint stimulus
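
The six VQA-MHUG files above are pickled pandas dataframes. A minimal reading sketch with pandas (the file names below are hypothetical placeholders; substitute the names of the files as downloaded):

    import pandas as pd

    # Fixation data of both eyes on the joint stimulus (hypothetical name).
    fixations = pd.read_pickle("mhug_joint_fixations.pkl")
    # Bounding boxes of question words and the image (hypothetical name).
    boxes = pd.read_pickle("mhug_joint_bboxes.pkl")
    # Participant answers given after viewing the joint stimulus (hypothetical name).
    answers = pd.read_pickle("mhug_joint_answers.pkl")

    # Inspect the columns before mapping fixations onto word/image boxes.
    print(fixations.head())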