Feb 24, 2026 - VQA-MHUG
Pickled pandas DataFrame - 77.9 KB - MD5: ca16888f3058978738fc5b9cc5f7e80f
Difficulty scores calculated for all question-image pairs
Feb 24, 2026 - VQA-MHUG
Python Source Code - 10.5 KB - MD5: d40a07143b7e3c28adb58e5617cac3de
Script to generate fixation maps and scanpaths from fixation data
Feb 24, 2026 - VQA-MHUG
Markdown Text - 6.3 KB - MD5: f7b5548fdcf70fed8894e40b5f72db38
Dataset information and usage description
Feb 24, 2026 - VQA-MHUG
Pickled pandas DataFrame - 85.9 KB - MD5: 6f754082c2efb34ea8c14b382b279a9a
Reasoning type for all question-image pairs
Feb 24, 2026 - VQA-MHUG
Python Source Code - 3.0 KB - MD5: e8da34592737b60024a227ca9745f4be
Additional tools to process the dataset
Feb 24, 2026 - VQA-MHUG
Pickled pandas DataFrame - 12.5 KB - MD5: d92f3db0685bd7aa14cb1fb1bb0830d7
[AiR stimuli] [sequential presentation] Participant answers to the VQA question after viewing both stimuli
Feb 24, 2026 - VQA-MHUG
Pickled pandas DataFrame - 135.6 KB - MD5: 3f5e1f8129381d3f79833321da75b248
[AiR stimuli] [sequential presentation] Bounding-box coordinates of text (words) and image
Feb 24, 2026 - VQA-MHUG
Pickled pandas DataFrame - 4.2 MB - MD5: 2e2a319cd19731b2d11e3a0908619123
[AiR stimuli] [sequential presentation] Fixation data of both eyes on both stimuli
Feb 24, 2026 - VQA-MHUG
Pickled pandas DataFrame - 247.8 KB - MD5: 6619f923c4a30772e6dc7e89e5185ee4
[VQA stimuli] [sequential presentation] Participant answers to the VQA question after viewing both stimuli
Feb 24, 2026 - VQA-MHUG
Pickled pandas DataFrame - 1.5 MB - MD5: babf3d958ab5b6560c14f1009bac63a8
[VQA stimuli] [sequential presentation] Bounding-box coordinates of text (words) and image
