Virtual Video Camera
Image-Based Free Viewpoint Video
"Who Cares", a stereoscopic free-viewpoint music video by Symbiz Sound.
Project Summary

The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronous camcorder captures. We want to record our multi-video data without the need for specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).
While controlling the location and time of the viewpoint, the user should not be able to distinguish synthetically rendered images from the originally recorded ones. Our key idea is to employ an image interpolation scheme based on dense pixel correspondences. This distinguishes our work from approaches that rely on depth or geometry reconstruction. We are convinced that strictly enforcing any geometric model will ultimately fail, since failure cases can easily be constructed for such approaches.
Our goal is to provide and continually improve a complete end-to-end system comprising algorithms for image correspondence estimation, (real-time) rendering, special-effects creation, camera calibration, and quality assessment.
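As a toy illustration of viewpoint navigation through space and time: once each recorded image is embedded as a point in a joint (viewpoint, time) domain, the blending weights for a novel view inside a triangle of recorded samples can be obtained barycentrically (the actual system, described in "Spacetime Tetrahedra" below, navigates a tetrahedralization of a higher-dimensional embedding; the function name and 2-D setup here are purely illustrative):

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Blending weights w so that p = w[0]*a + w[1]*b + w[2]*c
    and w sums to 1, for 2-D embedding points a, b, c."""
    T = np.column_stack((a - c, b - c))      # 2x2 system matrix
    w01 = np.linalg.solve(T, p - c)          # first two weights
    return np.array([w01[0], w01[1], 1.0 - w01.sum()])

# A virtual view at p is then rendered by interpolating between the
# three recorded images with these weights (via dense correspondences,
# not a plain cross-dissolve).
```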
Correspondence and Depth-Image Based Rendering: a Hybrid Approach for Free-Viewpoint Video
in IEEE Trans. Circuits and Systems for Video Technology (T-CSVT), vol. 24, no. 6, pp. 942-951, June 2014.
Virtual Video Camera: a System for Free Viewpoint Video of Arbitrary Dynamic Scenes
PhD thesis, TU Braunschweig, June 2013.
Detail Hallucinated Image Interpolation
Master's thesis, TU Braunschweig, May 2013.
A Framework for Image-Based Stereoscopic View Synthesis from Asynchronous Multi-View Data
in Emerging Technologies for 3D Video: Creation, Coding, Transmission and Rendering, Wiley, ISBN 978-1-118-35511-4, pp. 249-270, May 2013.
High Resolution Image Correspondences for Video Post-Production
in Journal of Virtual Reality and Broadcasting (JVRB), vol. 9.2012, no. 8, pp. 1-12, December 2012.
Making of "Who Cares?" HD Stereoscopic Free Viewpoint Video
in Proc. European Conference on Visual Media Production (CVMP), vol. 8, pp. 1-10, November 2011.
The virtual video camera: Simplified 3DTV acquisition and processing
in Proc. 3DTV-CON, IEEE Computer Society, pp. 1-4, May 2011.
Multi-Image Interpolation based on Graph-Cuts and Symmetric Optical Flow
in Proc. Vision, Modeling and Visualization (VMV), Eurographics Association, pp. 115-122, November 2010.
Flexible Stereoscopic 3D Content Creation of Real World Scenes
Technical Report, Computer Graphics Lab, TU Braunschweig, November 2010.
High Resolution Image Correspondences for Video Post-Production
in Proc. European Conference on Visual Media Production (CVMP), vol. 7, IEEE Computer Society, pp. 33-39, November 2010.
Reconstructing Shape and Motion from Asynchronous Cameras
in Proc. Vision, Modeling and Visualization (VMV), pp. 171-177, November 2010.
Belief propagation optical flow for high-resolution image morphing
in ACM SIGGRAPH 2010 Posters, p. 1, August 2010.
3-D Cinematography with approximate and no geometry
in Rémi Ronfard and Gabriel Taubin (Eds.): Image and Geometry Processing for 3-D Cinematography, Springer, ISBN 978-3-642-12391-7, pp. 259-284, July 2010.
Multi-view Coding with Dense Correspondence Fields
in Proc. IEEE International Symposium on Consumer Electronics (ISCE), pp. 117-120, June 2010.
Real-time Free-Viewpoint Navigation from Compressed Multi-Video Recordings
in Proc. 3D Data Processing, Visualization and Transmission (3DPVT), pp. 1-6, May 2010.
Integration of visual effects into the Virtual Video Camera system
Master's thesis, Institut für Computergraphik, TU Braunschweig, December 2009.
Spacetime Tetrahedra: Image-Based Viewpoint Navigation through Space and Time
Technical Report no. 12-9, Institut für Computergraphik, TU Braunschweig, December 2008.
Subframe Temporal Alignment of Non-Stationary Cameras
in Proc. British Machine Vision Conference (BMVC), September 2008.
We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps (``floats'') projected textures during run-time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies. In a nutshell, the notion of Floating Textures is to correct for local texture misalignments by determining the optical flow between projected textures and warping the textures accordingly in the rendered image domain. Both steps, optical flow estimation and multi-texture warping, can be efficiently implemented on graphics hardware to achieve interactive to real-time performance.
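The warp-and-blend step at the heart of Floating Textures can be sketched in a few lines. This is a minimal NumPy sketch, not the project's GPU implementation: it assumes a precomputed flow field between two single-channel projected textures and uses backward warping, which is only an approximation valid where the flow varies slowly:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly sample a single-channel image at float coordinates."""
    h, w = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, w - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    fx = np.clip(x - x0, 0.0, 1.0)
    fy = np.clip(y - y0, 0.0, 1.0)
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def floating_blend(tex_a, tex_b, flow_ab, alpha):
    """Instead of blending the projected textures directly (which
    ghosts where they are misaligned), warp both toward a compromise
    position along the flow field flow_ab (H, W, 2), then blend."""
    h, w = tex_a.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    wa = bilinear(tex_a, xs - alpha * flow_ab[..., 0],
                  ys - alpha * flow_ab[..., 1])
    wb = bilinear(tex_b, xs + (1 - alpha) * flow_ab[..., 0],
                  ys + (1 - alpha) * flow_ab[..., 1])
    return (1 - alpha) * wa + alpha * wb
```

With a zero flow field this degenerates to an ordinary weighted blend; the benefit appears exactly where the projected textures disagree.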
The goal of this project is to develop image-space algorithms that allow photo-realistic editing of dynamic 3D scenes. Traditional 2D editing tools cannot be applied to 3D video because, in addition to temporal correspondences, spatial correspondences are needed for consistent editing. In this project we analyze how to exploit the redundancy in multi-view stereoscopic videos to compute robust and dense correspondence fields. These space-time correspondences can then be used to propagate changes applied to one frame consistently to all other frames in the video. Besides porting classical video editing tools, we want to develop new tools specifically for 3D video content.
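The propagation idea can be sketched as follows, assuming a precomputed dense correspondence field from an edited reference frame to a target frame. This is a hypothetical minimal version using forward splatting with nearest-neighbour rounding; a real editing tool additionally has to handle occlusions, holes, and blending at the edit boundary:

```python
import numpy as np

def propagate_edit(edit_rgb, edit_mask, flow_0_to_t, frame_t):
    """Copy the edited pixels of a reference frame into frame t,
    following the dense correspondence field flow_0_to_t (H, W, 2)
    that maps reference coordinates to frame-t coordinates."""
    out = frame_t.copy()
    h, w = edit_mask.shape
    ys, xs = np.nonzero(edit_mask)                  # edited pixels
    xt = np.clip(np.rint(xs + flow_0_to_t[ys, xs, 0]).astype(int), 0, w - 1)
    yt = np.clip(np.rint(ys + flow_0_to_t[ys, xs, 1]).astype(int), 0, h - 1)
    out[yt, xt] = edit_rgb[ys, xs]                  # splat the edit
    return out
```

Applied with per-frame correspondence fields, a single edit on one frame carries through the whole sequence.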
This project has been funded by ERC Grant #256941 `Reality CG` and the German Science Foundation, DFG MA2555/4-2.
Motivated by the advent of mass-market head-mounted immersive displays, we set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion as provided by VR goggles.
Multi-view video camera setups record many images that capture nearly the same scene at nearly the same instant in time. Neighboring images in such a setup constrain the solution space: correspondences between one pair of images must be consistent with the correspondences to the neighboring images.
This notion of consistency for correspondences between three neighboring images can be employed both in the estimation of dense optical flow and in the matching of sparse features across three images.
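The three-view constraint amounts to a cycle condition: the flow from image 1 to image 3 should agree with the flow from 1 to 2 composed with the flow from 2 to 3. A minimal sketch of measuring the deviation, with nearest-neighbour lookup for the composition (function name and array conventions are illustrative):

```python
import numpy as np

def cyclic_consistency_error(flow_12, flow_23, flow_13):
    """Per-pixel deviation from the three-view cycle condition
    flow_13(x) = flow_12(x) + flow_23(x + flow_12(x)).
    Flows are (H, W, 2) arrays of (dx, dy) displacements."""
    h, w, _ = flow_12.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # where each pixel of image 1 lands in image 2
    x2 = np.clip(np.rint(xs + flow_12[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + flow_12[..., 1]).astype(int), 0, h - 1)
    composed = flow_12 + flow_23[y2, x2]        # 1 -> 2 -> 3
    return np.linalg.norm(composed - flow_13, axis=-1)
```

Pixels with a large error are candidates for occlusion or mismatches; a consistency term like this can be folded into the flow estimation itself.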
This work has been funded in parts by the ERC Grant #256941 `Reality CG` and the German Science Foundation, DFG MA2555/4-2.
We present a method for image interpolation that is able to create high-quality, perceptually convincing transitions between recorded images. By implementing concepts derived from human vision, the problem of a physically correct image interpolation is relaxed to an image interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In our user study we confirm the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the interpolation results, compare them with results obtained by other methods, and investigate the achieved quality for different types of scenes.
The scope of "Reality CG" is to pioneer a novel approach to modelling, editing, and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional real-world imagery as input.
Official music video "Who Cares" by Symbiz Sound; the first major production using our Virtual Video Camera.
Dubstep, spray cans, brush, and paint join forces with the latest digital production techniques. All imagery depicts live-action graffiti and performance; camera motion was added in post-production using the Virtual Video Camera.