The Institute for Visualization and Interactive Systems (VIS) at the University of Stuttgart is part of the Department of Computer Science in the Faculty of Computer Science, Electrical Engineering and Information Technology. Around 70 people conduct research and teaching in visualisation and computer graphics, human-computer interaction and cognitive systems, computer vision and pattern recognition, and augmented and virtual reality. This DataVerse contains the research data produced by the institute in these fields. Large-scale projects in which the institute participates, such as collaborative research centres, may have additional DataVerses containing data produced at VIS. We also recommend visiting the DataVerse of VISUS, our closely related central research institute for visualisation.
VQA-MHUG (Oct 28, 2024)
- Unknown, 78.0 KB (MD5: d24af997f7d7583dfe36b5ecc3ba5d96): [pickled pandas dataframe] difficulty scores calculated for all question-image pairs
- Python Source Code, 7.8 KB (MD5: 3b05d7b4d9d4baa17b84ad27ccef146c): script to generate fixation maps and scanpaths from fixation data
- Unknown, 97.8 KB (MD5: 3f139187c1914e9385c784b3cab890e9): [pickled pandas dataframe] reasoning type for all question-image pairs
- Unknown, 261.8 KB (MD5: fb4e3673787c5e2d4de73ee03b0171ef): [pickled pandas dataframe] [VQA stimuli] [sequential presentation] participant answers to VQA question after viewing both stimuli
- Unknown, 1.5 MB (MD5: 02724ac0685c3d0edb905eeafe03b94e): [pickled pandas dataframe] [VQA stimuli] [sequential presentation] bounding box coordinates of text (words) and image
- Unknown, 54.0 MB (MD5: 211f7a3d94500d907cec79069d05000a): [pickled pandas dataframe] [VQA stimuli] [sequential presentation] fixation data of both eyes on both stimuli
- Unknown, 94.2 KB (MD5: e91c530e707cc7bd93d97ad03c2b046a): [pickled pandas dataframe] [VQA stimuli] [joint presentation] participant answers to VQA question after viewing joint stimulus
- Unknown, 426.8 KB (MD5: 817a7f9dfea962c8534d5c292cdd917f): [pickled pandas dataframe] [VQA stimuli] [joint presentation] bounding box coordinates of text (words) and image
- Unknown, 12.2 MB (MD5: 81216e1ca95c0617367fb891e419feca): [pickled pandas dataframe] [VQA stimuli] [joint presentation] fixation data of both eyes on joint stimulus
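Most of the files listed above are pickled pandas DataFrames published with their MD5 checksums. A minimal sketch of how one might verify a downloaded file against its listed checksum and then load it; the local file name is hypothetical, and the pandas version used for reading should be compatible with the one used to write the pickle:

```python
import hashlib
import pandas as pd

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical local file name; the expected value is the checksum listed
# above for the difficulty-scores file.
path = "difficulty_scores.pkl"
expected = "d24af997f7d7583dfe36b5ecc3ba5d96"
assert md5_of(path) == expected, "checksum mismatch: file may be corrupted"

# Pickled DataFrames are loaded with pandas.read_pickle.
df = pd.read_pickle(path)
print(df.head())
```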
Collaborative Artificial Intelligence (May 22, 2024)
Zermiani, Francesca, 2024, "InteRead", https://doi.org/10.18419/DARUS-4091, DaRUS, V1, UNF:6:peWc+ExRsnPhsVEeOyMu0w== [fileUNF]
The InteRead dataset is designed to explore the impact of interruptions on reading behavior. It includes eye-tracking data from 50 adults with normal or corrected-to-normal eyesight and proficiency in English (native or C1 level). The dataset encompasses a self-paced reading task of an English fictional text, with participants encountering interruptions...
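Datasets cited with a DOI, such as InteRead, can also be retrieved programmatically through the standard Dataverse native API. A minimal sketch, assuming the DaRUS installation exposes that API at darus.uni-stuttgart.de (base URL and printed fields are assumptions, not taken from this page):

```python
import requests

BASE = "https://darus.uni-stuttgart.de"          # assumed DaRUS base URL
DOI = "doi:10.18419/DARUS-4091"                  # DOI cited for InteRead above

# Standard Dataverse native API: fetch dataset metadata by persistent identifier.
resp = requests.get(
    f"{BASE}/api/datasets/:persistentId/",
    params={"persistentId": DOI},
    timeout=30,
)
resp.raise_for_status()
dataset = resp.json()["data"]

# List the files of the latest published version with size and MD5 checksum.
for entry in dataset["latestVersion"]["files"]:
    meta = entry["dataFile"]
    print(meta["filename"], meta.get("filesize"), meta.get("md5"))
```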