Image-space Editing of 3D Content
Abstract
The goal of this project is to develop image-space algorithms for photo-realistic editing of dynamic 3D scenes. Traditional 2D editing tools cannot be applied directly to 3D video because, in addition to correspondences in time, spatial correspondences between views are needed for consistent editing. In this project we analyze how to exploit the redundancy in stereoscopic multi-view video to compute robust and dense correspondence fields. These space-time correspondences can then be used to propagate changes applied to one frame consistently to all other frames in the video. Besides transferring classical video editing tools, we want to develop new tools specifically for 3D video content.
This project has been funded by ERC Grant #256941 `Reality CG` and the German Science Foundation, DFG MA2555/4-2.
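To make the propagation idea concrete, here is a minimal Python/NumPy sketch (not the project's actual implementation) that warps an edit layer from a reference frame to another frame along a precomputed dense correspondence field and composites it over that frame; all function and array names are hypothetical.

import numpy as np

def propagate_edit(edit_rgba, flow_ref_to_tgt):
    # edit_rgba:       (H, W, 4) edit layer painted on the reference frame
    # flow_ref_to_tgt: (H, W, 2) dense correspondences (dx, dy) from reference to target
    H, W = edit_rgba.shape[:2]
    warped = np.zeros_like(edit_rgba)
    ys, xs = np.mgrid[0:H, 0:W]
    # Forward splat: move every edited reference pixel along its correspondence vector.
    tx = np.round(xs + flow_ref_to_tgt[..., 0]).astype(int)
    ty = np.round(ys + flow_ref_to_tgt[..., 1]).astype(int)
    valid = (edit_rgba[..., 3] > 0) & (tx >= 0) & (tx < W) & (ty >= 0) & (ty < H)
    warped[ty[valid], tx[valid]] = edit_rgba[valid]
    return warped

def composite(frame_rgb, warped_edit):
    # Alpha-blend the propagated edit over the target frame.
    alpha = warped_edit[..., 3:4]
    return (1.0 - alpha) * frame_rgb + alpha * warped_edit[..., :3]

Applying propagate_edit with the correspondence field from the reference frame to every other frame (and camera view) gives a first, purely image-space approximation of consistent edit propagation; hole filling and occlusion handling are deliberately omitted here.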
Publications
Interactive Scene Flow Editing for Improved Image-based Rendering and Virtual Spacetime Navigation
in Proc. ACM Multimedia, ACM, pp. 631-640, October 2015.
Interactive Spacetime Reconstruction in Computer Graphics
PhD thesis, TU Braunschweig, July 2015.
Cost Volume-based Interactive Depth Editing in Stereo Post-processing
in Proc. European Conference on Visual Media Production (CVMP), vol. 10, pp. 1-6, November 2013.
A Framework for Image-Based Stereoscopic View Synthesis from Asynchronous Multi-View Data
in Emerging Technologies for 3D Video: Creation, Coding, Transmission and Rendering, Wiley, ISBN 978-1-118-35511-4, pp. 249-270, May 2013.
Integrating Approximate Depth Data into Dense Image Correspondence Estimation
in Proc. European Conference on Visual Media Production (CVMP), vol. 9, pp. 1-6, December 2012.
Improving Dense Image Correspondence Estimation with Interactive User Guidance
in Proc. ACM Multimedia, ACM, pp. 1129-1132, October 2012.
A Loop-Consistency Measure for Dense Correspondences in Multi-View Video
in Journal of Image and Vision Computing, vol. 30, no. 9, pp. 641-654, June 2012.
A Toolchain for Capturing and Rendering Stereo and Multi-View Datasets
in Proc. The International Conference on 3D Imaging (IC3D), pp. 1-7, December 2011.
Making of "Who Cares?" HD Stereoscopic Free Viewpoint Video
in Proc. European Conference on Visual Media Production (CVMP), vol. 8, pp. 1-10, November 2011.
Two Algorithms for Motion Estimation from Alternate Exposure Images
in Cremers, D. and Magnor, M. and Oswald, M.R. and Zelnik-Manor, L. (Eds.): Video Processing and Computational Video, Springer, ISBN 978-3-642-24869-6, pp. 25-51, October 2011.
Object-aware Gradient-Domain Image Compositing
in Proc. Vision, Modeling and Visualization (VMV), pp. 65-71, October 2011.
Flowlab - an interactive tool for editing dense image correspondences
in Proc. European Conference on Visual Media Production (CVMP), August 2011.
Motion Field Estimation from Alternate Exposure Images
in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 33, no. 8, pp. 1577-1589, August 2011.
Stereoscopic 3D view synthesis from unsynchronized multi-view video
in Proc. European Signal Processing Conference (EUSIPCO), pp. 1904-1909, May 2011.
Edge-Constrained Image Compositing
in Proc. Graphics Interface (GI), pp. 191-198, May 2011.
The virtual video camera: Simplified 3DTV acquisition and processing
in Proc. 3DTV-CON, IEEE Computer Society, pp. 1-4, May 2011.
Perception-motivated interpolation of image sequences
in ACM Transactions on Applied Perception, vol. 8, no. 2, pp. 1-25, February 2011.
Robust Feature Matching in General Multi-Image Setups
in Journal of WSCG, vol. 19, pp. 1-8, February 2011.
Virtual Video Camera: Image-Based Viewpoint Navigation Through Space and Time
in Computer Graphics Forum, vol. 29, no. 8, pp. 2555-2568, December 2010.
Multi-Image Interpolation based on Graph-Cuts and Symmetric Optical Flow
in Proc. Vision, Modeling and Visualization (VMV), Eurographics Association, pp. 115-122, November 2010.
High Resolution Image Correspondences for Video Post-Production
in Proc. European Conference on Visual Media Production (CVMP), vol. 7, IEEE Computer Society, pp. 33-39, November 2010.
doi: 10.1109/CVMP.2010.12
Space-Time Visual Effects as a Post-Production Process
in ACM Multimedia 2010 Workshop - 1st International Workshop on 3D Video Processing (3DVP), vol. 1, pp. 1-6, October 2010.
Consistent Optical Flow for Stereo Video
in Proc. IEEE International Conference on Image Processing (ICIP), pp. 1-4, September 2010.
Multi-image interpolation based on graph-cuts and symmetric optical flow
in Proc. ACM SIGGRAPH 2010 Posters, ACM, p. 1, August 2010.
Multi-view Coding with Dense Correspondence Fields
in Proc. IEEE International Symposium on Consumer Electronics (ISCE), pp. 117-120, June 2010.
Real-time Free-Viewpoint Navigation from Compressed Multi-Video Recordings
in Proc. 3D Data Processing, Visualization and Transmission (3DPVT), pp. 1-6, May 2010.
Related Projects
Traditional optical flow algorithms rely on consecutive short-exposure images. In contrast, long-exposure images contain integrated motion information directly in the form of motion blur. In this project, we use the additional information provided by a long-exposure image to improve the robustness and accuracy of motion field estimation. Furthermore, the long-exposure image can be used to determine the moment of occlusion for pixels in any of the short-exposure images that become occluded or disoccluded.
This work has been funded by the German Science Foundation, DFG MA2555/4-1.
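The central constraint can be illustrated with a small Python/NumPy sketch (a simplification assuming linear per-pixel motion during the exposure, not the published algorithm): the long exposure is approximated by averaging the short exposure warped along a hypothesized motion field, and the residual against the recorded long exposure scores that hypothesis. All names are hypothetical.

import numpy as np

def synthesize_blur(short_exposure, motion_field, n_samples=16):
    # short_exposure: (H, W) or (H, W, 3) image taken at the start of the exposure
    # motion_field:   (H, W, 2) per-pixel displacement (dx, dy) accumulated over the exposure
    H, W = short_exposure.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    accum = np.zeros_like(short_exposure, dtype=np.float64)
    for t in np.linspace(0.0, 1.0, n_samples):
        # Backward warp: sample the short exposure at the position each pixel
        # occupied at fraction t of the exposure time (nearest neighbour for brevity).
        sx = np.clip(np.round(xs - t * motion_field[..., 0]).astype(int), 0, W - 1)
        sy = np.clip(np.round(ys - t * motion_field[..., 1]).astype(int), 0, H - 1)
        accum += short_exposure[sy, sx]
    return accum / n_samples

def blur_residual(long_exposure, short_exposure, motion_field):
    # Data term: how well a motion-field hypothesis explains the recorded blur.
    return np.abs(long_exposure - synthesize_blur(short_exposure, motion_field)).mean()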
Multi-view video camera setups record many images that capture nearly the same scene at nearly the same instant in time. Neighboring images in such a setup constrain the solution space: correspondences between one pair of images must be in accordance with the correspondences to the neighboring images.
This notion of accordance, or consistency, of correspondences between three neighboring images can be employed both in the estimation of dense optical flow and in the matching of sparse features across image triples.
This work has been funded in part by the ERC Grant #256941 `Reality CG` and the German Science Foundation, DFG MA2555/4-2.
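The loop-consistency idea can be written down directly: concatenating the correspondence fields A->B, B->C, and C->A should return every pixel to its starting position, and the length of the residual vector serves as a per-pixel consistency measure. The following Python/NumPy sketch is an illustrative simplification (nearest-neighbour lookups, hypothetical names), not the measure as published.

import numpy as np

def warp_flow(flow_src, flow_to_sample):
    # Look up flow_to_sample at the positions reached by flow_src (nearest neighbour).
    H, W = flow_src.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    tx = np.clip(np.round(xs + flow_src[..., 0]).astype(int), 0, W - 1)
    ty = np.clip(np.round(ys + flow_src[..., 1]).astype(int), 0, H - 1)
    return flow_to_sample[ty, tx]

def loop_consistency_error(flow_ab, flow_bc, flow_ca):
    # Per-pixel length of the residual vector after chaining A->B->C->A.
    # Values near zero indicate correspondences that agree across the image triple;
    # large values flag occlusions or mismatches.
    flow_bc_at_a = warp_flow(flow_ab, flow_bc)                 # B->C looked up where A->B lands
    flow_ca_at_a = warp_flow(flow_ab + flow_bc_at_a, flow_ca)  # C->A looked up at the chained position
    residual = flow_ab + flow_bc_at_a + flow_ca_at_a
    return np.linalg.norm(residual, axis=-1)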
Perception-motivated Interpolation of Image Sequences
We present a method for image interpolation that creates high-quality, perceptually convincing transitions between recorded images. By incorporating concepts derived from human vision, the problem of physically correct image interpolation is relaxed to an interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In a user study we confirm the visual quality of the proposed interpolation approach: we show how each aspect of our approach increases the perceived quality of the results, compare against results obtained with other methods, and investigate the achieved quality for different types of scenes.
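As a rough illustration of the underlying mechanics (not the published method, which additionally enforces exact edge correspondences and coherent motion), a basic flow-based in-between frame can be sketched in Python/NumPy as follows; the names and the linear-motion assumption are ours.

import numpy as np

def interpolate_frame(img_a, img_b, flow_ab, t=0.5):
    # Naive in-between frame at time t: warp both input images toward t along the
    # A->B flow and cross-fade them. This is only the core idea, without the
    # perceptual edge and motion constraints of the actual method.
    H, W = img_a.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]

    def backward_warp(img, dx, dy):
        sx = np.clip(np.round(xs + dx).astype(int), 0, W - 1)
        sy = np.clip(np.round(ys + dy).astype(int), 0, H - 1)
        return img[sy, sx]

    # Approximate: sample A slightly back along the flow and B slightly forward.
    warped_a = backward_warp(img_a, -t * flow_ab[..., 0], -t * flow_ab[..., 1])
    warped_b = backward_warp(img_b, (1 - t) * flow_ab[..., 0], (1 - t) * flow_ab[..., 1])
    return (1 - t) * warped_a + t * warped_b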
The scope of "Reality CG" is to pioneer a novel approach to modelling, editing and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional, real-world imagery as input.
The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronous camcorder captures. We want to record our multi-video data without the need for specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).
Official music video "Who Cares" by Symbiz Sound; the first major production using our Virtual Video Camera.
Dubstep, spray cans, brush and paint join forces with the latest digital production techniques. All imagery depicts live-action graffiti and performance; the camera motion was added in post-production using the Virtual Video Camera.