1 to 10 of 1,097 Results
Mar 3, 2025 - VQA-MHUG
Markdown Text - 4.2 KB - MD5: b8367a799da2dbfbb26a6ec4ebc67a5b
Dataset information and usage description
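
Every file in this listing comes with an MD5 checksum. A minimal sketch for verifying a download against the listed sum, using only the standard library; the local filename is a placeholder, since the listing does not show actual filenames:

import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Read the file in 1 MiB chunks so large archives fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "b8367a799da2dbfbb26a6ec4ebc67a5b"  # MD5 listed above for the description file
# "vqa_mhug_readme.md" is a hypothetical local filename.
print(md5sum("vqa_mhug_readme.md") == expected)
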
Nov 22, 2024 - SalChartQA: Question-driven Saliency on Information Visualisations (Dataset and Reproduction Data)
ZIP Archive - 730.9 MB - MD5: aecb5a6a4029bcadb4fe1b0ff78a6345
npy files for dataloader
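
The listing does not itemize the ZIP's contents; once extracted, the .npy arrays can be loaded directly with NumPy. A minimal sketch with a placeholder path:

import numpy as np

# Hypothetical path inside the extracted archive; actual filenames are not shown here.
arr = np.load("salchartqa/saliency_map.npy")
print(arr.shape, arr.dtype)  # inspect the array before wiring it into a dataloader
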
Oct 28, 2024 - VQA-MHUG
Unknown - 60.8 KB - MD5: 1cc24a7c924c3a46b44546f983ac7c79
[pickled pandas dataframe] [AiR stimuli] [sequential presentation] participant answers to VQA question after viewing both stimuli
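
This file, like the bounding-box, fixation-data, and difficulty-score files below, is a pickled pandas dataframe. A minimal loading sketch; the filename is a placeholder, and note that pickles are tied to the pandas/Python versions they were written with:

import pandas as pd

# Hypothetical filename; the listing shows types and sizes but not filenames.
answers = pd.read_pickle("answers_sequential.pkl")
print(answers.shape)
print(answers.columns.tolist())  # column names depend on the file
print(answers.head())
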
Oct 28, 2024 - VQA-MHUG
Unknown - 128.5 KB - MD5: 3a5462b2f96d6b0a4b0ad9ac0c595d5f
[pickled pandas dataframe] [AiR stimuli] [sequential presentation] bounding box coordinates of text (words) and image

Oct 28, 2024 - VQA-MHUG
Unknown - 4.7 MB - MD5: b72e6f16af7c4be11138db20927f5d64
[pickled pandas dataframe] [AiR stimuli] [sequential presentation] fixation data of both eyes on both stimuli

Oct 28, 2024 - VQA-MHUG
Unknown - 58.6 KB - MD5: c0d9f958466c90843b9115fff873aef6
[pickled pandas dataframe] [AiR stimuli] [joint presentation] participant answers to VQA question after viewing joint stimulus

Oct 28, 2024 - VQA-MHUG
Unknown - 88.8 KB - MD5: fede2879e467f90ae38975dc1dd0ea63
[pickled pandas dataframe] [AiR stimuli] [joint presentation] bounding box coordinates of text (words) and image

Oct 28, 2024 - VQA-MHUG
Unknown - 3.1 MB - MD5: 05d93d0152c9f418e7cd8c36a868712b
[pickled pandas dataframe] [AiR stimuli] [joint presentation] fixation data of both eyes on joint stimulus

Oct 28, 2024 - VQA-MHUG
Unknown - 78.0 KB - MD5: d24af997f7d7583dfe36b5ecc3ba5d96
[pickled pandas dataframe] difficulty scores calculated for all question-image pairs
Oct 28, 2024 - VQA-MHUG
Python Source Code - 7.8 KB - MD5: 3b05d7b4d9d4baa17b84ad27ccef146c
script to generate fixation maps and scanpaths from fixation data
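
The dataset's own script defines the actual procedure. For orientation only: fixation maps are commonly built by binning fixation coordinates onto the stimulus grid and smoothing with a Gaussian kernel. A minimal sketch under that assumption; the function name, signature, and sigma value are hypothetical, not taken from the script:

import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(xs, ys, width, height, sigma=25.0):
    # Accumulate fixation counts on a pixel grid matching the stimulus size.
    grid = np.zeros((height, width), dtype=float)
    for x, y in zip(xs, ys):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            grid[yi, xi] += 1.0
    # Gaussian blur turns discrete fixations into a smooth, saliency-style map.
    blurred = gaussian_filter(grid, sigma=sigma)
    return blurred / blurred.max() if blurred.max() > 0 else blurred

# Toy usage with made-up coordinates on a 640x480 stimulus:
m = fixation_map([100.0, 102.5, 300.0], [80.0, 82.0, 150.0], width=640, height=480)
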