Feb 24, 2026 - VQA-MHUG
Unknown - 49.0 MB - MD5: adb4cc0da1318911775480564b0dd7bd
[pickled pandas dataframe] [VQA stimuli] [sequential presentation] fixation data of both eyes on both stimuli

Feb 24, 2026 - VQA-MHUG
Unknown - 10.2 KB - MD5: 0a999724fa8e88670afd12b357c9cf3e
[pickled pandas dataframe] [AiR stimuli] [joint presentation] participant answers to VQA question after viewing joint stimulus

Feb 24, 2026 - VQA-MHUG
Unknown - 93.2 KB - MD5: 6e8318518930db4784dcd1564838a227
[pickled pandas dataframe] [AiR stimuli] [joint presentation] bounding box coordinates of text (words) and image

Feb 24, 2026 - VQA-MHUG
Unknown - 2.8 MB - MD5: 6cd4347565f02a9651729c592c2836b3
[pickled pandas dataframe] [AiR stimuli] [joint presentation] fixation data of both eyes on joint stimulus

Feb 24, 2026 - VQA-MHUG
Unknown - 51.8 KB - MD5: cc75e69cc01d76df475e9920e4288d8e
[pickled pandas dataframe] [VQA stimuli] [joint presentation] participant answers to VQA question after viewing joint stimulus

Feb 24, 2026 - VQA-MHUG
Unknown - 425.6 KB - MD5: a4f459a8685ad9c7888f10ece0671f7d
[pickled pandas dataframe] [VQA stimuli] [joint presentation] bounding box coordinates of text (words) and image

Feb 24, 2026 - VQA-MHUG
Unknown - 11.2 MB - MD5: 3f864520c25e07135b571255b10cf6a6
[pickled pandas dataframe] [VQA stimuli] [sequential presentation] fixation data of both eyes on joint stimulus

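Each VQA-MHUG file above is a pickled pandas DataFrame published with its MD5 checksum. As a minimal sketch (the local file name is a placeholder; the checksum in the comment is the one listed for the first file), a download can be verified before unpickling:

```python
import hashlib

import pandas as pd


def md5sum(path, chunk_size=1 << 20):
    """Compute a file's MD5 hex digest, reading in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def load_checked(path, expected_md5):
    """Unpickle a DataFrame only if the file's checksum matches."""
    actual = md5sum(path)
    if actual != expected_md5:
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
    return pd.read_pickle(path)


# Placeholder file name; MD5 taken from the listing above.
# df = load_checked("vqa_sequential_fixations.pkl",
#                   "adb4cc0da1318911775480564b0dd7bd")
```

Checking the digest before `pd.read_pickle` is deliberate: unpickling can execute code embedded in the file, so a corrupted or tampered download should be rejected before it is ever deserialised.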
Nov 17, 2025 - SalChartQA: Question-driven Saliency on Information Visualisations (Dataset and Reproduction Data)
ZIP Archive - 627.9 MB - MD5: 452ca568bce7b8e9f341e6e31a8c2acf
The SalChartQA dataset, containing fixationByVis, image_questions.json, raw_img, saliency_all, saliency_ans and unified_approved.csv

Nov 22, 2024 - SalChartQA: Question-driven Saliency on Information Visualisations (Dataset and Reproduction Data)
ZIP Archive - 730.9 MB - MD5: aecb5a6a4029bcadb4fe1b0ff78a6345
npy files for dataloader

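The second SalChartQA archive ships NumPy .npy files intended for a dataloader. A minimal sketch of reading one back (the file name in the comment is hypothetical, not from the archive):

```python
import numpy as np


def load_npy(path):
    """Read an array written with np.save. The default
    allow_pickle=False refuses object arrays, which is the safe
    setting for data downloaded from a repository."""
    return np.load(path, allow_pickle=False)


# Hypothetical name of one file from the unpacked archive:
# saliency = load_npy("saliency_0001.npy")
```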
May 22, 2024 - InteRead
Jupyter Notebook - 798.4 KB - MD5: 37c5c07b6073e1e336477ee98e1786b0
