Computer Graphics TU Braunschweig
In this last part of the post I want to show you some applications for our HMD.

Foveated Rendering

This is an example approach for foveated rendering to save render time. We render the left and right views at different resolutions; only the foveal region gets the highest acuity. We then blend between the regions to achieve a smooth image.
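The blending step between the foveal and peripheral regions can be sketched as follows. This is a minimal illustration, not our renderer's actual code: the inner/outer radii and the smoothstep falloff are assumed values, and coordinates are normalized to [0, 1] around the tracked gaze point.

```python
import math

def smoothstep(edge0, edge1, x):
    """Hermite interpolation clamped to [0, 1], as in GLSL."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def foveal_weight(px, py, gaze_x, gaze_y, inner=0.15, outer=0.35):
    """1.0 inside the foveal region, 0.0 in the periphery,
    with a smooth transition in between (illustrative radii)."""
    d = math.hypot(px - gaze_x, py - gaze_y)
    return 1.0 - smoothstep(inner, outer, d)

def blend(high_res_sample, low_res_sample, w):
    """Linear blend between the high- and low-resolution renders."""
    return tuple(w * h + (1.0 - w) * l
                 for h, l in zip(high_res_sample, low_res_sample))
```

In a real implementation this weight would be evaluated per fragment in a shader; the Python version only shows the math.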
Here we also lower the saturation in the non-foveal region. This doesn't save render time, but it shows that you can apply any perceptual effect you need, for example to simulate functional defects of the eye.

Accommodation Simulation
The next application is accommodation simulation. We achieve this by adding depth of field focused at the distance computed from the gaze vector. To make the effect visible here, we deliberately exaggerated the depth of field; for realistic experiences a lower parameter would be used.
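One common way to drive such a depth-of-field effect is the thin-lens circle-of-confusion model, with the focus distance taken from the gaze ray. The sketch below is illustrative only; the focal length, aperture, and the exaggeration factor are assumed parameters, not the values we used.

```python
def circle_of_confusion(depth, focus_depth,
                        focal_len=0.05, aperture=0.02, exaggerate=1.0):
    """Diameter (meters) of the blur circle for a point at `depth`
    when the lens is focused at `focus_depth` (thin-lens model).
    `exaggerate` > 1 overstates the effect for demonstration."""
    if depth <= focal_len or focus_depth <= focal_len:
        return 0.0
    coc = abs(aperture * focal_len * (depth - focus_depth)
              / (depth * (focus_depth - focal_len)))
    return exaggerate * coc
```

Points at the gaze distance stay sharp (zero blur circle), while nearer and farther points get progressively larger blur kernels.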
This is another accommodation example, in which we visualized the gaze vector to show where the user was looking. As you can see, the user is able to adjust focus to arbitrary distances. It creates an interesting effect and would not be possible without gaze tracking.

Avatar Animation and Telepresence

Here you can see real-time avatar animation with gaze control. This enables deeper immersion and a higher degree of self-expression, and could be used for realistic or social games, for training applications, and for telepresence.
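The core of gaze-controlled avatar animation is mapping the tracked gaze direction onto the avatar's eye bones. A minimal sketch, assuming a normalized gaze vector in a right-handed camera frame (x right, y up, z forward) and eye bones driven by yaw/pitch angles; the coordinate convention is an assumption, not our rig's actual setup:

```python
import math

def gaze_to_eye_angles(gaze_dir):
    """Convert a normalized gaze direction (x right, y up, z forward)
    into yaw/pitch angles (degrees) for an avatar's eye bones."""
    x, y, z = gaze_dir
    yaw = math.degrees(math.atan2(x, z))
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, y))))
    return yaw, pitch
```

In practice the angles would also be clamped to the eyes' anatomical range and smoothed over a few frames to hide tracker noise.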
The guy looks a bit sleepy, but that is probably because I'm just not a good artist when it comes to facial animation :-)

User Studies and Gaze Analysis in Panoramic Videos

Gaze tracking of course also allows more traditional tasks like user studies, but now in an immersive context. In this example we recorded gaze data while the user watched a video in the HMD. In real time we can visualize the foveal region using a color-coded heat map.
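A gaze heat map of this kind is typically built by splatting gaze samples into a 2D grid and mapping the accumulated values through a color ramp. The sketch below is an illustrative reduction, not our implementation; the grid size, Gaussian splat radius, and the simple blue-to-red ramp are all assumed choices.

```python
import math

def accumulate_gaze(samples, width=32, height=16, sigma=1.5):
    """Splat normalized gaze samples (u, v in [0, 1]) into a grid
    with a Gaussian footprint per sample."""
    grid = [[0.0] * width for _ in range(height)]
    r = int(3 * sigma)
    for u, v in samples:
        cx, cy = u * (width - 1), v * (height - 1)
        for y in range(max(0, int(cy) - r), min(height, int(cy) + r + 1)):
            for x in range(max(0, int(cx) - r), min(width, int(cx) + r + 1)):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma * sigma))
    return grid

def heat_color(value, vmax):
    """Map a cell value to an RGB triple, blue (cold) to red (hot)."""
    t = 0.0 if vmax == 0 else min(1.0, value / vmax)
    return (t, 0.0, 1.0 - t)
```

For real-time use the accumulation would run on the GPU, but the structure is the same.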
Based on the previous idea we implemented a new framework for gaze analysis of immersive video, which helps to gain insight into attention in VR videos. This is work from our group, currently being presented by my colleague Thomas Löwe at the eye-tracking and visualization workshop in Chicago. For the 360° video shown, we recorded head-tracking and gaze data, and from that we derive a multidimensional scaling (MDS) view, shown as the curves in the lower video. If the curves are close together, the users were looking at the same things; if they diverge, they were looking at different things. This tool can help analyze and improve storytelling in VR video.
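An MDS view of this kind needs a dissimilarity measure between users as input. One simple option is the per-frame angular distance between two users' gaze directions on the viewing sphere, averaged over the video; this is an illustrative reduction, not the exact measure from the paper.

```python
import math

def angular_distance(a, b):
    """Angle (radians) between two unit gaze vectors."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.acos(dot)

def trajectory_dissimilarity(traj_a, traj_b):
    """Mean angular distance between two gaze trajectories
    (lists of unit vectors sampled at the same frames)."""
    return sum(angular_distance(a, b)
               for a, b in zip(traj_a, traj_b)) / len(traj_a)
```

The resulting pairwise dissimilarity matrix is what a multidimensional scaling step would then embed, so that users who attended to the same content end up with nearby curves.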
You have now seen some applications, but I think we have barely scratched the surface.