Computer Graphics
TU Braunschweig

Events


Talk PhD Defense: Ego-Motion Aware Immersive Rendering from Real-World Recorded Panorama Videos

31.01.2025 10:00 - 31.01.2025 12:00
TBA

Speaker(s): Moritz Mühlhausen

In this talk, we will explore how we can enhance the immersive experience in virtual reality (VR) by integrating natural motion effects — specifically, ego-motion-aware parallax effects — into real-world panoramic videos.

Traditional panoramic video allows users to view a scene in all directions, but it still limits the sense of presence, especially in VR, where true immersion requires not just looking around but also feeling as though you're moving within the space. This is where parallax comes in: the natural shift in perspective that occurs when we move our heads, which adds depth and realism to our surroundings.

The first part of this talk will focus on how we can use multiple panoramic images to simulate this motion effect. By applying image-warping techniques, we can approximate parallax, making the VR experience feel more dynamic and lifelike. Although this method doesn’t fully replicate real-world motion, it significantly improves immersion.
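As a rough illustration of the underlying operation, the sketch below warps a single equirectangular panorama pixel to a displaced eye position using an assumed per-pixel depth; this is a generic depth-based warp, not the method from the talk. Combining several panoramas, as the talk describes, helps fill the disocclusions such a warp leaves behind.

#include <cmath>

// Minimal sketch of depth-based panorama warping under an assumed known
// per-pixel depth; an illustration of the general technique only.
constexpr float kPi = 3.14159265f;

void warpPixel(int u, int v, int W, int H,   // pixel and panorama size
               float depth,                  // assumed distance along the ray
               float ex, float ey, float ez, // eye offset (head translation)
               float& uOut, float& vOut)     // pixel in the novel panorama
{
    // Pixel -> viewing direction on the unit sphere (equirectangular).
    const float theta = 2.0f * kPi * (u + 0.5f) / W - kPi; // longitude
    const float phi   = kPi * (v + 0.5f) / H - 0.5f * kPi; // latitude
    const float dx = std::cos(phi) * std::sin(theta);
    const float dy = std::sin(phi);
    const float dz = std::cos(phi) * std::cos(theta);

    // Lift to a 3D point, then view it from the displaced eye.
    const float px = dx * depth - ex;
    const float py = dy * depth - ey;
    const float pz = dz * depth - ez;
    const float len = std::sqrt(px * px + py * py + pz * pz);

    // Direction -> pixel in the new panorama.
    const float thetaNew = std::atan2(px, pz);
    const float phiNew   = std::asin(py / len);
    uOut = (thetaNew + kPi) / (2.0f * kPi) * W - 0.5f;
    vOut = (phiNew + 0.5f * kPi) / kPi * H - 0.5f;
}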

In the second part, we will introduce a simpler but powerful approach that uses a single recording from a stationary omnidirectional stereo (ODS) camera. This camera captures images for the left and right eye simultaneously, providing built-in depth perception without the need for multiple cameras. This not only simplifies the capturing process but also allows for more efficient creation of immersive VR content.
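For reference, the standard ODS ray model can be written in a few lines; the sketch below is an illustration of that parameterization, not code from the talk. Each panorama column corresponds to a ray tangent to a viewing circle of radius ipd / 2, with the two eyes on opposite tangent sides.

#include <cmath>

struct Ray { float ox, oy, oz; float dx, dy, dz; };

// Standard omnidirectional stereo (ODS) ray parameterization, written out
// as an assumed illustration: one ray per eye and panorama direction.
Ray odsRay(float theta,  // azimuth of the panorama column (radians)
           float phi,    // elevation of the panorama row (radians)
           float ipd,    // interpupillary distance, e.g. ~0.064 m
           bool leftEye)
{
    const float r = 0.5f * ipd;
    const float side = leftEye ? -1.0f : 1.0f; // eyes use opposite tangents
    Ray ray;
    // Origin on the viewing circle, perpendicular to the ray direction.
    ray.ox =  side * r * std::cos(theta);
    ray.oy =  0.0f;
    ray.oz = -side * r * std::sin(theta);
    // Viewing direction for this panorama pixel.
    ray.dx = std::cos(phi) * std::sin(theta);
    ray.dy = std::sin(phi);
    ray.dz = std::cos(phi) * std::cos(theta);
    return ray;
}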

This talk will demonstrate how these methods, whether using multiple cameras or a single ODS camera, can improve depth perception and realism in VR applications. These innovations can make VR experiences — from gaming to education — more engaging and lifelike by offering an experience that feels more connected to real-world motion.

Talk MA-Talk: 4D Diffusion Priors for Robust Dynamic View Synthesis from Monocular Video

20.12.2024 13:00 - 20.12.2024 14:00
IZ G30

Speaker(s): Timon Scholz

Talk PhD Pre-Defense: Neural Reconstruction and Rendering of Dynamic Real-World Content from Monocular Video

06.12.2024 13:00 - 06.12.2024 14:00
IZ G30

Speaker(s): Moritz Kappel

Dynamic video content is ubiquitous in our daily lives, with countless recordings shared across the globe. But how can we unlock the full potential of these casual captures? The challenge lies in reconstructing immersive representations from monocular videos, a task made difficult by the inherent lack of depth information. This talk explores three machine learning approaches that address this challenge through distinct rendering techniques and strategies to resolve monocular motion and depth ambiguities.

The initial method leverages image translation networks to synthesize human shape, structure, and appearance from pose and motion inputs, enabling temporally consistent human motion transfer with fine clothing dynamics. We will then explore neural radiance fields trained on monocularized multi-view videos for efficient single object reconstruction. Finally, we examine dynamic neural point clouds that incorporate learned priors, such as monocular depth estimation and object segmentation, to resolve ambiguities and enable fast, high-quality scene reconstruction and rendering.

Together, these techniques demonstrate how monocular videos can be transformed into immersive digital experiences, advancing the possibilities of video-based scene reconstruction.

Talk Neural Point-based Radiance Field Rendering for VR

02.12.2024 14:00 - 02.12.2024 15:00
IZ G30

Speaker(s): Linus Franke

Point-based radiance field rendering has demonstrated impressive results for novel view synthesis, offering a compelling blend of rendering quality and computational efficiency, recently showcased with 3D Gaussian Splatting.

This talk will cover similar point-based representations with small neural splats, which allow for reconstruction with very fine detail. This concept is extended for VR rendering, exploiting the human perceptual system for acceleration.

Furthermore, the presentation will explore scene self-refinement capabilities, including techniques for point cloud completion, pose correction and photometric parameter optimization. These techniques address common issues in real-world capturing and significantly enhance rendering quality and temporal stability.
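As one concrete illustration of the photometric part of this self-refinement, the sketch below shows a hypothetical per-image exposure and color model of the kind that can be optimized jointly with the scene; the names and the exact model are assumptions, not the talk's implementation.

#include <cmath>

// Hypothetical per-image photometric parameters, fitted jointly with the
// scene so that exposure and color differences between captured images are
// explained by these parameters instead of being baked into the
// reconstruction.
struct PhotometricParams {
    float logExposure = 0.0f;                // learnable exposure (log scale)
    float colorBias[3] = {0.0f, 0.0f, 0.0f}; // learnable per-channel offset
};

// Map a rendered linear color into the color space of one training image.
inline void applyPhotometric(const PhotometricParams& p, float rgb[3]) {
    const float gain = std::exp(p.logExposure);
    for (int c = 0; c < 3; ++c)
        rgb[c] = gain * rgb[c] + p.colorBias[c];
}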

Talk MA-Talk: Exploration and Analysis of Flow Data in Augmented Reality

08.11.2024 13:00
IZ G30

Speaker(s): Anna-Lena Ehmer

Talk BA-Talk: Accelerated Rendering of Implicit Neural Point Clouds through Hardware Rasterization

28.10.2024 13:00
IZ G30

Speaker(s): Tim Stuppenhagen

Talk BA-Talk: Two-Plane-Parameterized Beam Acceleration Data Structure

24.10.2024 13:00
IZ G30

Speaker(s): Marius Werkmeister

Talk BA-Talk: Extension of the Unreal Engine to Generate Image Datasets with a Physically Plausible Range of Light Intensity Values

02.10.2024 14:00
IZ G30

Speaker(s): Maximilian Giller

Talk MA-Talk: Voice in Focus: Debunking and Identifying Audio Deepfakes in Forensic Scenarios

27.09.2024 13:00
IZ G30

Speaker(s): Maurice Semren

In today's media-dominated world, the use of Voice Conversion systems and manipulated audio samples (deepfakes) is becoming increasingly widespread. These methods are often used to spread misinformation and cause confusion. Although there are systems that can identify these fakes, as of now there is no technology that can accurately identify the source speaker. Developing such systems could greatly assist law enforcement and discourage the misuse of this technology. This work focuses on identifying the original speaker in Voice Conversion deepfakes, given a specific list of potential suspects. We examine various Voice Conversion systems, comparing their overall quality, how closely they resemble the target speaker, and how well they disguise the original speaker. Additionally, we compare results from a human perception experiment with machine-based metrics derived from Speaker Verification tools.
The machine-based metrics appear to yield more accurate identification results on average, even when the human participants are familiar with the speaker.
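For context, the machine-based side of such a comparison is typically a nearest-neighbor search over speaker embeddings. The sketch below assumes fixed-size embeddings have already been extracted by a Speaker Verification model; the specific model and metric used in the thesis are not shown here.

#include <cmath>
#include <cstddef>
#include <vector>

// Cosine similarity between two embeddings of equal length.
float cosineSimilarity(const std::vector<float>& a, const std::vector<float>& b) {
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12f);
}

// Report the suspect whose enrollment embedding best matches the deepfake.
std::size_t identifySource(const std::vector<float>& deepfakeEmbedding,
                           const std::vector<std::vector<float>>& suspects) {
    std::size_t best = 0;
    float bestScore = -2.f;
    for (std::size_t i = 0; i < suspects.size(); ++i) {
        const float s = cosineSimilarity(deepfakeEmbedding, suspects[i]);
        if (s > bestScore) { bestScore = s; best = i; }
    }
    return best;
}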

Talk BA-Talk: Learned Initialization of Neural Rendering Networks for Point-Based Novel View Synthesis

16.09.2024 13:00
G30

Speaker(s): Leon Overkämping

Talk MA-Talk: Investigating horizon mapping for real-time soft-shadows on planetary datasets

30.08.2024 13:00
IZ G30

Speaker(s): Jonathan Fritsch

Talk MA-Talk: Research, optimization and evaluation of brightness estimation for panoramic images based on deep learning models

15.08.2024 11:00
IZ G30

Speaker(s): Jiankun Zhou

Talk Breaking the Limits of Display and Fabrication using Perception-aware Optimizations

26.07.2024 14:00 - 26.07.2024 15:00
Room G30

Speaker(s): Piotr Didyk

Novel display devices and fabrication techniques enable highly tangible ways of creating, experiencing, and interacting with digital content. The capabilities offered by these new output devices, such as virtual and augmented reality head-mounted displays and new multi-material 3D printers, make them real game-changers in many fields. At the same time, the new possibilities offered by these devices impose many challenges on content creation techniques regarding quality and computational efficiency. This talk will discuss the concept of perception-aware optimizations, which incorporate insights from human perception into computational methods to optimize content according to the capabilities of different output devices, e.g., displays and 3D printers, and the requirements of the human sensory system. As will be demonstrated, the key advantage of such strategies is that tailoring computation to perceptually relevant aspects of the content often reduces the computational cost of content creation or overcomes certain limitations of output devices. Besides discussing the general concept, the talk will present several specific applications where perception-aware optimization has proven beneficial. The examples include methods for optimizing visual content for novel display devices that focus on perceived quality, and new computational fabrication techniques for manufacturing objects that look and feel like real ones.

Talk PhD Defense: Perception-Based Techniques to Enhance User Experience in Virtual Reality

26.07.2024 10:00 - 26.07.2024 12:00
PK 4.122 (Altgebäude, 1st floor)

Speaker(s): Colin Groth

Virtual Reality (VR) ushered in a new era of immersive content viewing, with vast potential for entertainment, design, medicine, and other fields. However, users' willingness to actually adopt the technology depends on the quality of the virtual experience. In this dissertation, we describe the development and investigation of novel techniques to reduce negative influences on the user experience in VR applications.

Our methods not only include substantial technical improvements but also take important characteristics of human perception into account, exploiting them to make the applications more effective and subtle. We focus mostly on visual perception, since we deal with visual stimuli, but we also consider the vestibular sense, a key component in the occurrence of the negative symptoms in VR referred to as cybersickness. Our techniques are designed for three groups of VR applications, characterized by the degree of freedom they leave for adjustments. The first group addresses the extension of VR systems with stimulation hardware: by adapting established techniques from the medical field, we artificially induce human body signals to create immersive experiences that reduce common mismatches between perceptual cues. The second group focuses on applications that use common hardware and allow adjustments to the full rendering pipeline. Immersive video content is especially notable here, as frame rates and presentation quality often fall short of the high requirements VR systems must meet for a decent user experience. To address these display problems, we present a novel video codec based on wavelet compression and perceptual features of the visual system. Finally, the third group of applications is the most restrictive and allows no modifications of the rendering pipeline. Here, our techniques consist of post-processing manipulations in screen space after the image has been rendered, without knowledge of the 3D scene. To keep these techniques subtle, we exploit fundamental properties of human peripheral vision and apply spatial masking as well as gaze-contingent motion scaling.
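As a hypothetical illustration of the third group, the sketch below shows one way screen-space motion could be scaled with gaze eccentricity; the smoothstep falloff and parameters are illustrative assumptions, not the dissertation's actual values.

#include <cmath>

struct Vec2 { float x, y; };

// Gaze-contingent scaling of a screen-space offset: motion near the gaze
// point is suppressed, motion in the periphery is applied in full, since
// peripheral vision is less sensitive to small shifts.
Vec2 scaleMotion(Vec2 pixel,         // pixel being shifted
                 Vec2 gaze,          // current gaze point on screen
                 Vec2 motion,        // desired screen-space offset
                 float fovealRadius, // region kept (almost) static, in pixels
                 float falloff)      // transition width, in pixels
{
    const float dx = pixel.x - gaze.x;
    const float dy = pixel.y - gaze.y;
    const float ecc = std::sqrt(dx * dx + dy * dy); // gaze eccentricity

    // 0 near the fovea, smoothly rising to 1 in the periphery.
    float t = (ecc - fovealRadius) / falloff;
    t = t < 0.f ? 0.f : (t > 1.f ? 1.f : t);
    const float w = t * t * (3.f - 2.f * t); // smoothstep

    return Vec2{motion.x * w, motion.y * w};
}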

Talk MA-Talk: Synthetic Data Set Generation for Autonomous Driving using Neural Rendering and Machine Learning

28.06.2024 13:00
IZ G30

Speaker(s): Jonas Penshorn

Talk Functional Programming in C++

18.06.2024 08:00
SN 19.1

Speaker(s): Jonathan Müller

On 18.06.2024 at 08:00 in SN 19.1,

we welcome Jonathan Müller for a guest lecture on "Functional Programming in C++". Functional programming has proven to be a safe and advantageous style of programming in more and more areas, for example in parallel programming. John Carmack, creator of the first-person shooter and of games such as Doom, Quake, and Wolfenstein 3D, once said about functional programming: "No matter what language you work in, programming in a functional style provides benefits. You should do it whenever it is convenient, and you should think hard about the decision when it isn’t convenient."
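As a small taste of the topic, here is a minimal sketch, not taken from the talk, of the functional style in C++20: pure functions and declarative range pipelines in place of loops with mutable state.

#include <iostream>
#include <numeric>
#include <ranges>
#include <vector>

// A pure function: no side effects, same output for the same input.
constexpr int square(int x) { return x * x; }

int main() {
    const std::vector<int> xs{1, 2, 3, 4, 5, 6};

    // Declarative pipeline instead of an explicit loop with mutation:
    // keep the even numbers and square them (evaluated lazily).
    auto evensSquared = xs
        | std::views::filter([](int x) { return x % 2 == 0; })
        | std::views::transform(square);

    // Fold the lazy view into a single value.
    auto common = evensSquared | std::views::common;
    const int sum = std::accumulate(common.begin(), common.end(), 0);

    std::cout << sum << '\n'; // 4 + 16 + 36 = 56
}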

Jonathan is a C++ library developer at think-cell, speaks at conferences, and is a member of the C++ standardization committee.

He is the author of open-source projects such as type_safe, a library of safety utilities, foonathan/memory, a memory allocation library, and cppast, a C++ reflection tool. More recently, he has taken an interest in programming languages and compilers and has released lexy, a C++ parser library, and lauf, a bytecode interpreter.
He also blogs at foonathan.net.

Despite the early hour, we look forward to seeing interested attendees.

Talk Global Visual Localization by Matching Point and Line Features in Images against Known, Highly Accurate Geodata

17.05.2024 13:00
IZ G30

Speaker(s): Junbo Li

Today, localization plays an important role in many fields, such as autonomous flying and autonomous driving. The most common approach is satellite-based localization, which in cities, however, suffers a substantial drop in accuracy because buildings obstruct the signals. Developing localization based on other information, as a complement to or replacement for satellite-based localization in urban scenarios, has therefore become a research focus. This work develops an end-to-end global visual localization pipeline based on matching point and line features in query images against a preprocessed database, which is built once in advance for the localization area from known, highly accurate geodata. In tests on data from a highly complex real urban environment, the pipeline achieves a median accuracy of about 1 meter.
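For the point-feature part of such a pipeline, a standard building block is robust perspective-n-point (PnP) pose estimation, shown below as an assumed sketch using OpenCV; this is not the thesis implementation, and matching line features requires a dedicated solver that is omitted here.

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Given 2D-3D correspondences between image features and georeferenced 3D
// points from the database, recover the camera pose with robust PnP.
bool estimatePose(const std::vector<cv::Point3f>& geoPoints,   // 3D, from geodata
                  const std::vector<cv::Point2f>& imagePoints, // matched 2D features
                  const cv::Mat& K,                            // camera intrinsics
                  cv::Mat& rvec, cv::Mat& tvec)                // output pose
{
    std::vector<int> inliers;
    // RANSAC rejects wrong matches, which are common in urban scenes.
    return cv::solvePnPRansac(geoPoints, imagePoints, K, cv::noArray(),
                              rvec, tvec, /*useExtrinsicGuess=*/false,
                              /*iterationsCount=*/200,
                              /*reprojectionError=*/4.0f,
                              /*confidence=*/0.99, inliers);
}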

Talk BA-Talk: Evaluation of Methods for Learned Point Spread Functions through Camera-In-The-Loop Optimization

19.04.2024 11:00
IZ G30

Speaker(s): Karl Ritter

Talk PhD Pre-Defense: Perception-Based Techniques to Enhance User Experience in Virtual Reality

15.03.2024 13:00
IZ G30

Speaker(s): Colin Groth

Talk MA-Talk: An Investigation on the Practicality of Neural Radiance Field Reconstruction from in-the-wild Multi-View Panorama Recordings

22.12.2023 13:00
IZ G30

Speaker(s): Yannic Rühl

Talk Colloquium on AI in Interactive Systems

07.12.2023 10:00 - 08.12.2023 22:00
IZ161

Talk BA-Talk: Partial Face Swaps

09.10.2023 13:00
G30

Speaker(s): Carlotta Harms

Conference Vision, Modeling, and Visualization

27.09.2023 13:00 - 29.09.2023 12:00
Braunschweig, Germany

Chair(s): Marcus Magnor, Martin Eisemann, Susana Castillo


Talk BA-Talk: Low-Cost Integrated Control and Monitoring of FDM Printers Using Digital Twins

26.09.2023 13:00
G30

Speaker(s): Marc Majohr

In this work, an integrated control and monitoring system for consumer FDM printers was designed and developed.
One focus is universal applicability across different FDM printers (L1), as well as minimal interference with the printing process (L5).

Talk Computer Vision from the Perspective of Surveying

28.08.2023 13:00
IZ G30

Speaker(s): Anita Sellent