Talk BA-Talk: Automatic patch sampling for GANs through image retrieval
Speaker(s): Reiko Lettmoden
Talk Invited Talks: Layered Weighted Blended Order-Independent Transparency and AR Carcassonne
Talk Disputation: Computer Graphics from a Bio-Signal Perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery
Speaker(s): Jan-Philipp Tauscher
Talk BA-Talk: Performance Analysis and Comparison of Differentiable Rendering Systems
- 28.03.2022 14:00
Speaker(s): Domenik Jaspers
Talk BA-Talk: Detection and Classification of Emotions in Semantic Space Using Deep Learning
- 28.03.2022 13:30
Speaker(s): Bill Matthias Thang
Talk BA-Talk: Neural Radiance Fields: A Systematic Survey and Outlook on Further Developments
Speaker(s): Lars Christian Lund
Talk MA-Talk: Visualization of Scientific Data in Multi-User Augmented Reality
Speaker(s): Jan Wulkop
Talk BA-Talk: Evaluation of Open-Source Experiment Management Systems to Support University Research
Talk Dissertation Pre-Talk: Computer Graphics from a Bio-Signal Perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery
JP Tauscher is presenting his dissertation pre-talk Computer Graphics from a Bio-Signal Perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery on Friday, July 30, at 1 pm.
The impact of graphics on our perception is usually measured by asking users to complete self-assessment questionnaires. These psycho-physical rating scales and questionnaires reflect subjective opinions through conscious responses, but they may be (in)voluntarily biased and usually do not provide real-time feedback. Subjects may also have difficulties communicating their opinion because a rating scale may not reflect their intrinsic perception, or may be biased by external factors such as mood, expectation, past experience, or even problems of task definition and understanding.
In this thesis, we investigate how the human body reacts involuntarily to computer-generated as well as real-world image content. Here, we add a whole new range of modalities to our perception quantification apparatus to abstract from subjective ratings towards objective bodily measures. These include electroencephalography (EEG), eye tracking, galvanic skin response (GSR), and cardiac and respiratory data. We seek to explore the gap between what humans consciously see and what they implicitly perceive when consuming generated and natural content. We include different display technologies ranging from traditional monitors to virtual reality (VR) devices commonly used to present computer graphical content.
This thesis shows how the human brain and the autonomic nervous system react to visual stimuli and how these bio-signals can be reliably measured to analyse and quantify the immediate physiological reactions towards certain aspects of generated and natural graphical content. We advance the current frontiers in the context of perceptual graphics towards novel measurement and analysis methods for immediate and involuntary physiological reactions.
Talk MA-Talk: Video Object Segmentation for Omnidirectional Stereo Panoramas
Speaker(s): Fan Song
Talk MA-Talk: Functional Volumetric Rendering for Industrial Applications
Speaker(s): Jan-Christopher Schmidt
Talk BA-Talk: Combating Motion Sickness in VR through Dynamic Bipolar Galvanic Vestibular Stimulation
Speaker(s): Max Hattenbach
Talk BA-Talk: Neural Rendering - Perception-Based Evaluation of Depth Impression in VR
Speaker(s): Yannic Rühl
Talk BA-Talk: Design of an Interactive Simulation for Learning Star Constellations in Public Planetariums
Speaker(s): Lars Richard
Talk Dissertation Pre-Talk: Guiding Visual Attention in Immersive Environments
The growing popularity of virtual reality (VR) technology, which presents content virtually all around a user, creates new challenges for digital content creators and presentation systems. In this dissertation we investigate how to help viewers avoid missing important information when exploring unknown virtual environments. We examine different visual stimuli to guide viewers' attention towards predetermined target regions of the surrounding environment. To preserve the original visual appearance of scenes as far as possible, we aim for subtle visual modifications that operate as close as possible to viewers' perception threshold while still providing effective guidance.
In a first approach, we identify issues that prevent existing visual guidance stimuli from being effective in VR environments. For use in large field of view (FOV) head-mounted displays (HMDs), we derive techniques to handle perspective distortions, degradation of visual acuity in the peripheral visual field, and target regions outside the initial FOV. An existing visual stimulus, originally conceived for desktop environments, is adapted accordingly and successfully evaluated in a perceptual study.
Subsequently, the generalizability of these extension techniques is investigated with respect to different guidance methods and VR devices. For this, additional methods from related work are re-implemented and updated accordingly. Two comparable perceptual studies are conducted to evaluate their effectiveness within a consumer-grade HMD and in an immersive dome projection system covering almost the full human visual field. Regardless of the actual success rates, all of the tested methods show a measurable effect on participants' viewing behavior, indicating the general applicability of our modification techniques for various guidance methods and VR systems.
Finally, a novel visual guidance method (SIBM) is created, specifically designed for immersive systems. It builds on opposing manipulations of the two stereoscopic frames in VR rendering systems, turning the inevitable overhead of double (per-eye) rendering into an advantage that is not available in monocular systems. Moreover, by exploiting our visual system's sensitivity to discrepancies in binocular visual input, it allows the required per-image contrast of the actual stimulus to be reduced well below the previous state of the art.
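The core idea of opposing per-eye manipulations can be sketched roughly as follows. This is a minimal illustration, not the thesis' actual SIBM implementation: the Gaussian target mask, the modulation amplitude, and the grayscale-image representation are all assumptions made for the sketch.

```python
import numpy as np

def opposed_eye_modulation(left, right, cx, cy, sigma=20.0, amp=0.05):
    """Apply a low-contrast luminance offset of opposite sign to the left
    and right eye images inside a Gaussian target region.

    left, right: grayscale float images in [0, 1], shape (h, w).
    (cx, cy): center of the target region; sigma: region size in pixels;
    amp: per-eye modulation amplitude (kept small to stay subtle).

    Averaged over both eyes the modulation cancels out, so the overall
    image appearance barely changes, while the binocular discrepancy in
    the target region can still attract the viewer's attention.
    """
    h, w = left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    left_mod = np.clip(left + amp * mask, 0.0, 1.0)
    right_mod = np.clip(right - amp * mask, 0.0, 1.0)
    return left_mod, right_mod
```

The design point this sketch captures is that the stimulus lives in the *difference* between the two eyes' images rather than in either image alone, which is why a per-image contrast far below monocular detection thresholds can still be effective.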
Talk SEP-Abschluss: Massively distributed collaborative crowd input system for dome environments
Dome (Aufnahmestudio & Visualisierungslabor)
Presentation of the results of the student software development project (SEP).
Talk BA-Talk: Implementing Dynamic Stimuli in VR Environments for Visual Perception Research
Dome (Aufnahmestudio & Visualisierungslabor)
Speaker(s): Mai Hellmann