The Institute for Visualization and Interactive Systems (VIS) at the University of Stuttgart is part of the Department of Computer Science in the Faculty of Computer Science, Electrical Engineering and Information Technology. Around 70 people conduct research and teaching in the areas of visualisation and computer graphics, human-computer interaction and cognitive systems, computer vision and pattern recognition, as well as augmented and virtual reality. This Dataverse contains the research data produced by the institute in these fields. Large-scale projects in which the institute participates, such as collaborative research centres, may have additional Dataverses containing data produced at VIS. We also recommend visiting the Dataverse of VISUS, our closely related central research institute for visualisation.
1 to 10 of 30 Results
Mar 3, 2025 - Collaborative Artificial Intelligence
Sood, Ekta; Kögel, Fabian; Bulling, Andreas, 2024, "VQA-MHUG", https://doi.org/10.18419/DARUS-4428, DaRUS, V2
We present VQA-MHUG - a novel 49-participant dataset of multimodal human gaze on both images and questions during visual question answering (VQA), collected using a high-speed eye tracker. To the best of our knowledge, this is the first resource containing multimodal human gaze data over a textual question and the corresponding image. Our corpus en...
Nov 22, 2024 - SFB-TRR 161 A07 "Visual Attention Modeling for Optimization of Information Visualizations"
Wang, Yao, 2024, "SalChartQA: Question-driven Saliency on Information Visualisations (Dataset and Reproduction Data)", https://doi.org/10.18419/DARUS-3884, DaRUS, V2
Understanding the link between visual attention and user’s needs when visually exploring information visualisations is under-explored due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap, we introduce SalChartQA - a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a q...
May 22, 2024 - Collaborative Artificial Intelligence
Zermiani, Francesca, 2024, "InteRead", https://doi.org/10.18419/DARUS-4091, DaRUS, V1, UNF:6:peWc+ExRsnPhsVEeOyMu0w== [fileUNF]
The InteRead dataset is designed to explore the impact of interruptions on reading behavior. It includes eye-tracking data from 50 adults with normal or corrected-to-normal eyesight and proficiency in English (native or C1 level). The dataset encompasses a self-paced reading task of an English fictional text, with participants encountering interrup...
May 16, 2024 - Collaborative Artificial Intelligence
Bulling, Andreas, 2024, "InvisibleEye", https://doi.org/10.18419/DARUS-3288, DaRUS, V1
We recorded a dataset of more than 280,000 close-up eye images with ground-truth annotation of the gaze location. A total of 17 participants were recorded, covering a wide range of appearances. Gender: five (29%) female and 12 (71%) male. Nationality: seven (41%) German, seven (41%) Indian, one (6%) Bangladeshi, one (6%) Iranian, and one (6%) Greek...
Jan 29, 2024
Franke, Max, 2024, "Shaded relief WebMercator 'slippy map' tiles based on NASA Shuttle Radar Topography Mission Global 1 arc second V003 topographic height data", https://doi.org/10.18419/DARUS-3837, DaRUS, V1
This dataset contains WebMercator tiles which contain gray-scale shaded relief (hill shades), and nothing else. The tiles have a resolution of 256×256px, suitable for web mapping libraries such as Leaflet. The hill shades are generated from SRTM altitude data, which cover the land area between 60° northern and 58° southern latitude, and which lies...
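Individual tiles in such a dataset are usually addressed by zoom level and tile indices. A minimal sketch of the standard WebMercator slippy-map index computation, assuming these tiles follow the common z/x/y convention (the abstract does not state the exact directory layout):

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 coordinates to WebMercator slippy-map tile indices (x, y)."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom  # number of tiles per axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

For example, `latlon_to_tile(0.0, 0.0, 0)` yields `(0, 0)`, the single world tile at zoom 0. Web mapping libraries such as Leaflet perform this computation internally when given a `{z}/{x}/{y}` tile URL template.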
Jun 16, 2023
Keller, Christine, 2023, "Ontologies and Rules for Context-Aware Assessment of the Usability of Output Devices and Modalities in Ubiquitous Mobility Systems", https://doi.org/10.18419/DARUS-3385, DaRUS, V1
This dataset contains OWL ontologies and SWRL rules that model the usage context and usability of output devices and modalities for ubiquitous mobility systems. It also contains a usability ontology modeling usability attributes relevant in ubiquitous mobility systems. The SWRL rule set contains assessment rules supporting a usability assessment of outpu...
May 26, 2023
Hirsch, Alexandra; Franke, Max; Koch, Steffen, 2023, "Source code for "Comparative Study on the Perception of Direction in Animated Map Transitions Using Different Map Projections"", https://doi.org/10.18419/DARUS-3540, DaRUS, V1
This repository contains the source code related to an OSF pre-registration for an online study. The goal of the study was to evaluate how well participants can determine the geographical direction of an animated map transition. In our between-subject online study, each of three groups is shown map transitions in one map projection: Mercator, azimu...
May 26, 2023
Hirsch, Alexandra; Franke, Max; Koch, Steffen, 2023, "Stimulus Data for "Comparative Study on the Perception of Direction in Animated Map Transitions Using Different Map Projections"", https://doi.org/10.18419/DARUS-3463, DaRUS, V1
We compare how well participants can determine the geographical direction of an animated map transition. In our between-subject online study, each of three groups is shown map transitions in one map projection: Mercator, azimuthal equidistant projection, or two-point equidistant projection. The distances of the start and end point are varied. Map t...
Mar 14, 2023 - Collaborative Artificial Intelligence
Bulling, Andreas, 2023, "MPIIFaceGaze", https://doi.org/10.18419/DARUS-3240, DaRUS, V1
We present the MPIIFaceGaze dataset, which is based on the MPIIGaze dataset, with additional human facial landmark annotations and the face regions available. We added additional facial landmark and pupil center annotations for 37,667 face images. Facial landmark annotations were conducted in a semi-automatic manner as running facial landmark de...
Mar 8, 2023 - Collaborative Artificial Intelligence
Bulling, Andreas, 2023, "Labeled pupils in the wild (LPW)", https://doi.org/10.18419/DARUS-3237, DaRUS, V1
We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover p...