Computer Graphics
TU Braunschweig

Events


Talk Turning Reality into Virtual Reality

30.11.2018 11:00
FhG Heinrich Hertz Institut Berlin

Speaker(s): Marcus Magnor

Current endeavors towards immersive visual entertainment are still almost entirely based on 3D graphics-generated content, limiting application scenarios to virtual worlds only. The reason is that in order to provide for stereo vision and ego-motion parallax, two essential ingredients for the genuine perception of visual immersion, the scene must be rendered in real time from arbitrary vantage points. While this is easily accomplished with 3D graphics via standard GPU rendering, it is not at all straightforward to do the same from conventional video footage acquired of real-world events. In my talk, I will outline avenues of research toward enabling the immersive experience of real-world recordings and toward enhancing the immersive viewing experience by taking perceptual issues into account.

Talk MA-Talk: Learning Optical Flow from Long-Exposure Images

23.11.2018 11:00
Seminarraum G30

Speaker(s): Moritz Kappel

Recently, Convolutional Neural Networks (CNNs) have been applied to a wide range of tasks including classification, detection, segmentation, and optical flow. Despite convincing results, the overall capability of CNNs depends on the training data used for supervised learning. This is especially restrictive for optical flow estimation, as obtaining ground-truth optical flow is infeasible for the large number of real-world scenes required for training. As a consequence, approaches for unsupervised learning have gained interest.


In this thesis, a CNN was designed to learn the optical flow between two images, consisting of the backward and forward motion for every pixel. Furthermore, an occlusion moment was learned: the point in time at which a pixel becomes visible in the target image. For simplicity, only linear motion was considered, as this is sufficient to approximate more complex motion with small time steps. Unlike traditional supervised optical-flow networks, the CNN in this thesis was not trained with ground-truth optical flow but with long-exposure images captured between the two input images.
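
This abstract does not spell out the training objective. The PyTorch sketch below shows one plausible form of such a long-exposure supervision signal: the predicted flow is scaled linearly over time, the first frame is warped to each intermediate instant, and the average is compared to the recorded long exposure. All names are hypothetical, and the learned occlusion moment is ignored for brevity.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img (B, C, H, W) by a per-pixel flow field (B, 2, H, W), in pixels."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0)             # (2, H, W) pixel coordinates
    coords = base.unsqueeze(0) + flow               # per-pixel sampling positions
    # normalize to [-1, 1] as required by grid_sample
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)            # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def long_exposure_loss(frame0, flow, long_exposure, steps=8):
    """Average warped copies of frame0 at evenly spaced fractions of the
    predicted flow (linear-motion assumption) and compare the synthetic
    long exposure to the recorded one."""
    acc = torch.zeros_like(frame0)
    for i in range(steps):
        t = i / (steps - 1)
        acc = acc + warp(frame0, t * flow)
    return F.l1_loss(acc / steps, long_exposure)
```

A symmetric term built from the second frame and the backward flow could be added in the same way.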

Talk BA-Talk: GPU-basierte Bildverarbeitung zur Analyse neuronaler Aktivität in Zebrafischlarven

05.11.2018 13:00
Seminarraum G30

Speaker(s): Immanuel Becker

In the analysis of neural activity in living zebrafish larvae using light-sheet microscopy, resolutions are achieved at which individual neurons become visible. For each recording, the microscope illuminates the fish slice by slice, producing a 3D image of its head.
Many such 3D recordings over time amount to a large volume of data whose processing takes considerable time. The goal of this thesis is to present GPU-based accelerations that reduce the execution time of critical parts of the processing routine.
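
The abstract does not say which steps of the pipeline were ported to the GPU. As a minimal sketch of the general upload-process-download pattern such accelerations follow, here is a hypothetical example using CuPy; the Gaussian filter, array shapes, and function name are assumptions, not the thesis's actual routine.

```python
import numpy as np
import cupy as cp
from cupyx.scipy import ndimage as gpu_ndimage

def smooth_volume_gpu(volume_cpu, sigma=1.5):
    """Denoise one light-sheet volume (z, y, x) on the GPU.

    The Gaussian filter merely stands in for whatever per-volume
    step dominates the CPU pipeline."""
    volume_gpu = cp.asarray(volume_cpu)               # host -> device copy
    smoothed = gpu_ndimage.gaussian_filter(volume_gpu, sigma=sigma)
    return cp.asnumpy(smoothed)                       # device -> host copy

# A time series of volumes, processed one by one to bound GPU memory use.
stack = np.random.rand(10, 64, 256, 256).astype(np.float32)   # (t, z, y, x)
results = [smooth_volume_gpu(vol) for vol in stack]
```

Keeping whole volumes on the device between consecutive steps avoids repeated transfers, which is usually where the first speedups come from.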

Talk Digital Personality and The Emotional Onion

19.10.2018 13:00
IZ G30

Speaker(s): Susana Castillo

BTU Cottbus

Communication is inherent to life. During our daily life we convey information verbally and nonverbally. We can learn a lot about someone by watching their facial expressions and body language. Used to this kind of interaction with their interlocutors, people tend to personify machines. Harnessing these aspects of non-verbal communication can therefore lend virtual agents greater depth and realism by giving them the ability to actually produce social information. But in order to achieve this, we require a sound understanding of the relationship between cognition and expressive behavior.

This talk explores the multi-layered nature of communication and introduces an extension of the traditional word-based methodology that uses both actual videos and motion-capture data to extract the semantic/cognitive space of facial expressions. The recovered spaces can capture the full range of facial communication and are well suited for semantic-driven facial animation.

Nevertheless, to provide virtual agents with full human-like communicative capabilities, natural facial expressions are not enough. We would like to imbue all emotions with coherent deviations from the 'neutral' emotions, deviations that can be perceived as a set of personality traits and lead to the classification of the agent as a particular kind of individual (e.g. an aggressive character or a cheerful one). Thus, this talk will also explore how to endow the agent with a personality that can be appreciated by the interlocutor.

In summary, this talk introduces how to characterize the dynamic structure of facial expressions on two different levels. The high level focuses on obtaining the semantic structure underlying the facial expression space. The low level contains the cues that allow us to replicate any expression by defining the particular movements that compose it, that is, it allows us to map between the semantic space and the facial expressions. Last but not least, we will comment on the implicit mapping between this semantic space for facial expressions and the personality space.
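
The abstract does not name the embedding technique behind these semantic spaces. In word-based methodologies of this kind, such spaces are often recovered by multidimensional scaling (MDS) of pairwise dissimilarity judgments; the sketch below is only a generic illustration of that idea, and the ratings and names in it are hypothetical.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical input: averaged pairwise dissimilarity ratings between
# four recorded facial expressions (symmetric, zero diagonal).
ratings = np.array([
    [0.0, 0.8, 0.6, 0.9],
    [0.8, 0.0, 0.5, 0.7],
    [0.6, 0.5, 0.0, 0.4],
    [0.9, 0.7, 0.4, 0.0],
])

# Embed the expressions into a 2D "semantic space" whose axes can then
# be interpreted post hoc (e.g. valence and arousal).
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
semantic_space = mds.fit_transform(ratings)
print(semantic_space)   # one 2D coordinate per expression
```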

Talk MA-Talk: Material Parameter Extraction From Photos

12.10.2018 11:00
Seminarraum G30

Speaker(s): Sascha Fricke

The extraction of material parameters from photographs involves a process generally known as inverse rendering. This process is highly ambiguous and thus difficult to solve. Physically based shading and rendering systems allow the synthesis of believable images from such material parameters, but inverting this process directly is not possible. Reformulating it as an optimization problem would allow the estimation of these properties by gradient descent. Derivatives of the rendering process with respect to the material parameters are, however, difficult to obtain due to discontinuities in the rendering integrals, and even if approximate derivatives are found, the ambiguity of the problem still leads to local minima that do not correspond to the desired material properties.


In recent years, systems based on convolutional neural networks have been shown to be capable of extracting these material parameters under certain restrictions. Unfortunately, the extracted parameters are either given in a parametrization that is not useful for game or film production, or they are not spatially varying and hence do not contain enough information. Systems that extract spatially varying parameters in the desired parametrization do exist, but they are mostly limited to repeating, regular texture patterns on flat surfaces under simple lighting conditions.

In this thesis, a physically based differentiable rendering approach is presented and evaluated, coupled with an initial coarse estimate from a convolutional neural network. This combined system improves on the quality of the estimates of both individual approaches and can be used in general, messy real-world settings to extract reflectance properties from photos that can then be used as textures on triangle-based geometry in real-time or offline rendering systems.
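
The thesis's actual renderer and parametrization are not given in this abstract. The toy sketch below only illustrates the coupling idea under strong simplifying assumptions: a coarse CNN albedo estimate is refined by gradient descent through a differentiable (here merely Lambertian, so free of the discontinuity problem mentioned above) shading model. All names are hypothetical.

```python
import torch

def render_lambertian(albedo, normals, light_dir):
    """Toy differentiable renderer: per-pixel Lambertian shading.
    albedo, normals: (3, H, W); light_dir: (3,) unit vector."""
    n_dot_l = (normals * light_dir.view(3, 1, 1)).sum(dim=0).clamp(min=0.0)
    return albedo * n_dot_l

def refine(photo, normals, light_dir, albedo_cnn, steps=200, lr=1e-2):
    """Refine a coarse CNN albedo estimate by gradient descent against
    the photograph, using autodiff through the renderer."""
    albedo = albedo_cnn.clone().requires_grad_(True)
    opt = torch.optim.Adam([albedo], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = render_lambertian(albedo.clamp(0.0, 1.0), normals, light_dir)
        loss = torch.nn.functional.l1_loss(rendered, photo)
        loss.backward()          # derivatives w.r.t. the material parameters
        opt.step()
    return albedo.detach().clamp(0.0, 1.0)
```

Starting the optimization from the CNN estimate rather than from a constant keeps it near a plausible basin, which is the point of combining the two approaches.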

Talk BA-Talk: Improving Training of Convolutional Neural Networks using Visualization Techniques

17.08.2018 13:30
Seminarraum G30

Speaker(s): Jann-Ole Henningson

Talk MA-Talk: Analyzing the Performance of Deep Neural Networks on Synthetic Training Data

17.08.2018 13:00
Seminarraum G30

Speaker(s): Meng Wang

Talk MA-Talk: An Augmented Reality Framework to Support the Implementation of Educational Applications

22.06.2018 13:00 - 22.06.2018 13:30
G30

Speaker(s): Manuel Behlen

Talk BA-Talk: Guiding the Eyes: Development and Implementation of Gaze Guidance Methods for wide-field-of-view Immersive Environments

15.06.2018 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Oliver Urbaniak

Talk BA-Vortrag: Implementierung und Analyse von Fluid-Kontrolle mittels automatischer Differentiation

27.04.2018 10:00
Informatikzentrum, Seminarraum G30

Speaker(s): Martin Busch

Talk Humanness of virtual agents

16.04.2018 13:00
Seminarraum ICG (IZ G30)

Speaker(s): Katharina Legde

BTU Cottbus

Since computers are increasingly being used in all aspects of daily life, it would be of great advantage if we could communicate with them the same way we interact with other people. The field of affective interfaces works towards making computers more accessible by giving them natural-language abilities, including synthesized speech, facial expressions, and virtual body motions. Such virtual characters often have adult bodies. An adult interface agent usually leads users to expect advanced communicative and social skills. Since the social skills of computers are still under-developed, users tend to be rather intolerant towards such agents and find them less likable and appealing. This talk will discuss the challenges of creating an agent with human-like communicative and social skills, and thereby of raising acceptance of virtual avatars.

Talk Teamprojekt-Abschluss: World Builder VR Toolkit Continued

26.03.2018 13:00
ICG Lab, Campus Nord

Presentation of the results of the student team project.

Talk BA-Talk: From Chairs to Humans: Specializing FlowNet 2 to non-rigid Human Motion

27.02.2018 14:00
Informatikzentrum, Seminarraum G30

Speaker(s): Maximilian Homann

Talk Das Weltall in Farbe und 3D

21.02.2018 19:00
Planetarium Wolfsburg

Speaker(s): Marcus Magnor

Public lecture at the planetarium Wolfsburg (website)

Whether in science-fiction films, computer games, or the planetarium: outer space is rich in shapes and colors. Yet probably no one has ever seen a real astronomical nebula in color and 3D with their own eyes. What would it actually look like up close if we could fly to the Ring, Orion, or Horsehead Nebula?

Talk Immersive Digital Reality

12.01.2018 10:15
Tampere University of Technology, Finland

Speaker(s): Marcus Magnor

Keynote presentation at VR Walkthrough Technology Day, TU Tampere, Finland (presentation video)

Since the times of the Lumière brothers, the way we watch movies hasn’t fundamentally changed: whether in movie theaters, on mobile devices, or on TV at home, we still experience movies as outside observers, watching the action through a “peephole” whose size is defined by the angular extent of the screen. As soon as we look away from the screen or turn around, we are immediately reminded that we are only “voyeurs”. With full field-of-view, head-mounted and tracked displays available now on the consumer market, this outside-observer paradigm of visual entertainment is giving way to a fully immersive experience that encompasses the viewer and is able to draw us in much more than was possible before.

Current endeavors towards immersive visual entertainment, however, are still almost entirely based on 3D graphics-generated content, limiting application scenarios to virtual worlds only. The reason is that in order to provide for stereo vision and ego-motion parallax, which are essential for the genuine perception of visual immersion, the scene must be rendered in real time from arbitrary vantage points. While this is easily accomplished for 3D graphics via standard GPU rendering, it is not at all straightforward to do the same from conventional video footage acquired of real-world events.

In my talk I will outline avenues of research toward enabling the immersive experience of real-world recordings, enhancing the immersive viewing experience by taking perceptual issues into account, and extending visual immersion beyond a single viewer to create a collectively experienceable immersive real-world environment.

Talk MA-Talk: Fast high-resolution GPU-based Computed Tomography

22.12.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Markus Wedekind

Talk MA-Talk: Automatic Infant Face Verification

08.12.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Hangjian Zhang

Talk Solution methods for vector field tomography in different geometries

15.11.2017 14:30
Informatikzentrum, Seminarraum G30

Speaker(s): Thomas Schuster

Vector field tomography has a broad range of applications such as medical diagnosis, oceanography, plasma physics or electron tomography. In the talk, we present an overview of vector field tomography in several different settings with adapted numerical solvers and inversion schemes.
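
As standard background (not drawn from the talk itself), the measurement model shared by many of these applications is the longitudinal, or Doppler, ray transform, which integrates the tangential component of the field along each ray:

```latex
% Longitudinal (Doppler) ray transform of a vector field f : R^n -> R^n:
% the component of f along the unit direction \theta is integrated over
% the line through x with direction \theta.
\[
  \mathcal{D}f(x,\theta) \,=\, \int_{\mathbb{R}} \theta \cdot f(x + t\,\theta)\,\mathrm{d}t
\]
% Potential fields f = \nabla p (with p vanishing at infinity) lie in the
% null space of \mathcal{D}, so only the solenoidal part of f can be
% recovered; this is why inversion schemes must be adapted to the
% geometry and available boundary data.
```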

Talk Bsc Talk: A framework for psychophysical experiment design in immersive full-dome environments

30.10.2017 13:00
ICG Dome, Northern Campus

Speaker(s): Jan-Frederick Musiol

This thesis documents the development of a software framework for conducting psychophysical experiments using the dome projection system of the Computer Graphics Lab at the TU Braunschweig (ICG Dome).
The framework adapts the approaches of existing software designed for conventional flat displays to this new immersive environment.

The thesis describes the functionality of the framework and explains the process of building an experiment using an example.
Finally, it discusses the suitability of the ICG Dome for psychophysical experimentation.


Talk Data-driven Compressed Sensing Tomography

09.10.2017 09:00
Sandia National Labs, USA

Speaker(s): Marcus Magnor

Talk BA-Talk: Evaluation of Skinning Techniques for Skeletal Animation in MonSteR

04.09.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Paul Maximilian Bittner

Talk MA Talk: Facial Texture Generation from Uncontrolled Monocular Video

01.09.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Rudolf Martin

Talk Real Virtual Humans

10.07.2017 13:30
IZ G30

Speaker(s): Gerard Pons-Moll

For man-machine interaction it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine and psychology, virtual and augmented reality and special effects in movies. 

Currently, digital models typically lack realistic soft tissue and clothing dynamics or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images and depth and inertial sensors. We combine statistical machine learning techniques and physics based simulation to create realistic models from data.

I will give an overview of several of our projects in which we build realistic models of human pose and shape, soft-tissue dynamics, and clothing. I will also present a recent technique we have developed to capture human movement from only six inertial sensors attached to the body limbs. This will enable capturing human motion during everyday activities, for example while we are interacting with other people, riding a bike, or driving a car. Such recorded motions will be key to learning models that replicate human behaviour. I will conclude the talk by outlining the next challenges in building virtual humans that are indistinguishable from real people.

Bio: 

Gerard Pons-Moll obtained his degree in superior Telecommunications Engineering from the Technical University of Catalonia (UPC) in 2008. From 2007 to 2008 he was at Northeastern University in Boston, USA, with a fellowship from the Vodafone foundation, conducting research on medical image analysis. He received his Ph.D. degree (with distinction) from the Leibniz University of Hannover in 2014. In 2012 he was a visiting researcher in the vision group at the University of Toronto, and he also worked as an intern in the computer vision group at Microsoft Research Cambridge. From 11/2013 until 11/2015 he was a postdoc at the Max Planck Institute (MPI) for Intelligent Systems in Tuebingen, Germany. Since 11/2015 he has been a research scientist at the MPI.

His work has been published in the major computer vision and computer graphics conferences and journals, including Siggraph, Siggraph Asia, CVPR, ICCV, BMVC (Best Paper), Eurographics (Best Paper), IJCV, and TPAMI. He serves regularly as a reviewer for TPAMI, IJCV, Siggraph, Siggraph Asia, CVPR, ICCV, ECCV, ACCV, and others. He has co-organized three tutorials at major conferences: one at ICCV 2011 on Looking at People: Model Based Pose Estimation, and two at ICCV 2015 and Siggraph 2016 on Modeling Human Bodies in Motion.

His research interests are 3D modeling of humans and clothing in motion and using machine learning and graphics models to solve vision problems.

Talk MA-Talk: Virtually Increasing the Walkable Area in Room-Scale Immersive Environments

23.06.2017 13:00
Informatikzentrum, Seminarraum G30

Speaker(s): Adrian Wierzbowski

Talk Bildbasiertes Messen und Modellieren der realen Welt

20.06.2017 14:00
PTB-Braunschweig, Seminarzentrum A, Kohlrausch-Bau

Speaker(s): Marcus Magnor

Images are projections of physical reality: all luminous and illuminated things continuously emit images of themselves, in all directions and over great distances. At the speed of light, images carry rich information about where and how they originated. Every digital image thus constitutes a measurement, and every digital camera is a measuring instrument that simultaneously captures millions of measurement values of our natural environment, from a distance and without influencing the measured system.

In my talk, I will use examples from radio astronomy, fluid mechanics, atmospheric optics, and perceptual psychology to show how modern image processing and computer simulation can extract quantitative information from images of complex natural processes.

[More information]