Point-Based Neural Rendering
Abstract
For decades, computer graphics researchers have worked to reconstruct the real world in order to create novel virtual environments. Current research in this field develops methods that use neural networks to produce realistic three-dimensional reconstructions, generated through a computationally intensive optimization process.
While other methods employ, for instance, triangles, voxels, or implicit representations to model geometry, this project centers on point clouds. Due to their flexibility, point clouds are well suited to representing even complex scenes in high detail, and neural networks can further enhance rendering quality. Consequently, this research focuses on two areas: first, improving the efficiency and quality of image generation from point clouds; second, improving current network architectures. The long-term goal is to use the reconstructed models in interactive real-time applications. For instance, smartphones could one day be used not only to capture and view simple images but also detailed 3D reconstructions.
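To make the core idea concrete, the sketch below shows a minimal z-buffered point rasterizer in Python/NumPy: 3D points carrying feature vectors are projected through a pinhole camera and splatted into a feature image, which a neural network could then translate into the final photorealistic output. This is an illustrative sketch under assumed conventions (function names, world-to-camera pose R/t, intrinsics K), not the renderer used in the publications listed below.

```python
import numpy as np

def rasterize_points(points, features, K, R, t, height, width):
    """Project 3D points into an image and splat their features,
    keeping only the nearest point per pixel (z-buffering).

    points:   (N, 3) world-space point positions
    features: (N, C) per-point feature vectors (e.g. learned colors)
    K:        (3, 3) pinhole intrinsics; R, t: world-to-camera pose
    """
    cam = points @ R.T + t                    # world -> camera space
    z = cam[:, 2]
    front = z > 1e-6                          # keep points in front of the camera
    cam, z, feats = cam[front], z[front], features[front]

    pix = cam @ K.T                           # pinhole projection
    u = np.round(pix[:, 0] / z).astype(int)
    v = np.round(pix[:, 1] / z).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, feats = u[inside], v[inside], z[inside], feats[inside]

    image = np.zeros((height, width, feats.shape[1]), dtype=feats.dtype)
    depth = np.full((height, width), np.inf)
    for i in np.argsort(z):                   # near-to-far: first write per pixel wins
        if z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]
            image[v[i], u[i]] = feats[i]
    return image, depth
```

In practice, point renderers of this kind splat each point over several pixels, blend multiple points per pixel, and run on the GPU; the per-point loop here only illustrates the depth test.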
Publications
INPC: Implicit Neural Point Clouds for Radiance Field Rendering
in International Conference on 3D Vision, IEEE, to appear.
PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis
in Proc. Vision, Modeling and Visualization (VMV), The Eurographics Association, pp. 53-61, September 2023.
Related Projects
Neural Reconstruction and Rendering of Dynamic Real-World Scenes
The photorealistic reconstruction and representation of real-world scenes has long been an integral field of research in computer graphics, encompassing traditional rendering as well as interdisciplinary techniques from computer vision and machine learning. In addition to conventional applications in photogrammetry, detailed reconstructions from camera or smartphone images have recently also enabled the automated integration of real, photorealistic content into multimedia applications such as virtual reality.
A large number of current methods focus on the 3D representation of static content. In practice, however, many scenes are subject to temporal deformation and therefore require an additional reconstruction of the temporal dimension. At the ICG, we develop technologies for the reconstruction and visualization of dynamic scenes from monocular video recordings. The methods we have developed allow not only the real-time display of new, high-resolution camera views but also the manipulation of temporal sequences, such as the "bullet time" effect known from the movie "The Matrix". In the future, the resulting models will enable exciting new applications, such as the immersive reproduction of experiences in virtual reality.
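One common way to add the temporal dimension, shown in the hedged sketch below, is a time-conditioned deformation field that warps points observed at time t into a shared canonical space, where a static reconstruction (e.g. a radiance field) is queried. This is a generic illustration of the technique; the class name, layer sizes, and input conventions are assumptions, not the specific architecture developed at the ICG.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Illustrative time-conditioned deformation network: maps a 3D
    point and a timestamp to its position in a canonical space.
    A generic sketch of dynamic-scene modeling, not the ICG method."""

    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),   # input: (x, y, z, t)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # output: displacement (dx, dy, dz)
        )

    def forward(self, points, t):
        # points: (N, 3) observed positions, t: (N, 1) normalized timestamps
        return points + self.mlp(torch.cat([points, t], dim=-1))

# Usage: warp points seen at t = 0.25 into the canonical frame.
field = DeformationField()
pts = torch.rand(1024, 3)
t = torch.full((1024, 1), 0.25)
canonical_pts = field(pts, t)
```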