Abstract
Virtual Reality is no longer a distant dream, but the technology has a marketing problem: a view through the eyes of a head-mounted display wearer looks bland and loses the usage context. The first-person view is hard to follow and shows twitchy, unnatural motion.
This thesis discusses a general rendering pipeline for runtime 3D engines for
Mixed Reality Media, a form of video composition that places a real person
inside a virtual reality scene. The real-world video is augmented with
multiple video techniques and parameters taken from the virtual environment.
This allows, for example, the light conditions of the virtual scenery to be
recreated on the actor, producing an immersive and inviting view into the
virtual scene.
- PDF Version: MixedRealityMedia-Thesis.pdf (75.4 MB)
4.2 Camera Input Lag
[Figure: Before video and engine frames are aligned, there is a noticeable difference in motion. After aligning the frames, the motion in engine and video capture is in sync.]
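As an illustration of the alignment step, the following minimal sketch delays engine frames in a ring buffer until they line up with the latent camera feed. The `EngineFrameDelay` class and the three-frame lag are hypothetical; the actual delay would have to be measured for the concrete camera setup.

```python
from collections import deque

class EngineFrameDelay:
    """Delays engine render frames by a fixed number of frames so they
    line up with the (latent) camera feed before compositing.

    The delay is assumed to have been measured beforehand, e.g. by
    filming a synchronized motion and counting the frame offset."""

    def __init__(self, delay_frames):
        self.buffer = deque(maxlen=delay_frames + 1)

    def push(self, engine_frame):
        """Store the newest engine frame; return the one that is
        delay_frames old, or None while the buffer is still filling."""
        self.buffer.append(engine_frame)
        if len(self.buffer) == self.buffer.maxlen:
            return self.buffer[0]
        return None

# Usage: with a measured camera lag of 3 engine frames, composite each
# camera frame with the engine frame from 3 ticks ago.
delay = EngineFrameDelay(delay_frames=3)
for tick in range(10):
    aligned = delay.push(f"engine frame {tick}")
    if aligned is not None:
        print(f"camera frame {tick} composited with {aligned}")
```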
4.7 Light Environment Reproduction
A minor last step is light reproduction, in which an approximate
lighting setting is transferred from the 3D environment to the video feed
of a VR actor. Assuming that the video footage contains a naturally lit,
tint-free, and calibrated video signal, it is possible to approximate how a VR
actor would be lit if they were truly inside the virtual environment.
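A minimal sketch of such an approximation, assuming the relighting is reduced to a single ambient term: the calibrated actor pixels are modulated per channel by an ambient color and intensity sampled from the engine. The function name and parameters are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def apply_scene_light(actor_rgb, ambient_color, intensity=1.0):
    """Approximate relighting: modulate a calibrated, tint-free actor
    image by the virtual scene's ambient light.

    actor_rgb     -- HxWx3 float array in [0, 1], the keyed actor pixels
    ambient_color -- (r, g, b) ambient color sampled from the engine,
                     each channel in [0, 1]
    intensity     -- overall light intensity of the virtual scene
    """
    light = np.asarray(ambient_color, dtype=np.float32) * intensity
    # Per-channel multiply, as with a simple Lambertian ambient term.
    return np.clip(actor_rgb * light, 0.0, 1.0)

# Example: a warm, dim virtual scene tints the actor accordingly.
actor = np.full((4, 4, 3), 0.8, dtype=np.float32)   # stand-in frame
relit = apply_scene_light(actor, ambient_color=(1.0, 0.7, 0.4), intensity=0.9)
```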
5.4 Edge Cases
Due to the planar projection of the real-world feed inside the
engine, any Z-information of the actor is squashed to a fixed depth.
This means the hands lie on the same plane as the rest of the body. When there
is a large Z-difference between the actor and the actor's hands, hand motion
can look unnatural and behave contrary to expectation: although the actor's
hands are clearly in front of a virtual cube, the produced mixed reality image
shows the arms behind the cube.
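This failure mode can be summarized as a per-pixel depth test against a single plane. The following sketch is a simplified model of that decision; the names and distances are hypothetical.

```python
def composite_pixel(engine_color, engine_depth, actor_color, actor_plane_depth):
    """Per-pixel depth test against the actor's single projection plane.

    Because the whole actor is squashed to actor_plane_depth, a hand
    reaching toward the camera still tests at that plane: any virtual
    object between the plane and the camera wins, even if the real hand
    is physically in front of it."""
    if engine_depth < actor_plane_depth:   # virtual object is closer
        return engine_color
    return actor_color

# The actor stands at 2.0 m; their hand reaches out to 1.2 m, but the
# projection plane stays at 2.0 m, so a cube at 1.5 m occludes the hand.
print(composite_pixel("cube", 1.5, "hand", actor_plane_depth=2.0))  # -> "cube"
```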
Real-time chroma keying also has problems with motion blur in the
source video material, causing background mixing and incorrect matting.
This is a complex problem that is far beyond the scope of this thesis.
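To see where the background mixing comes from, consider a simple distance-to-key matte (a common chroma keying approach, not necessarily the one used here): motion blur averages foreground and key color in edge pixels, which pushes them into the soft ramp of the matte, so the background bleeds through.

```python
import numpy as np

def chroma_key_alpha(frame_rgb, key_color=(0.0, 1.0, 0.0),
                     threshold=0.4, softness=0.4):
    """Distance-to-key matte: pixels near the key color become
    transparent, with a soft ramp for edge pixels."""
    key = np.asarray(key_color, dtype=np.float32)
    dist = np.linalg.norm(frame_rgb - key, axis=-1)
    return np.clip((dist - threshold) / softness, 0.0, 1.0)

skin = np.array([0.9, 0.3, 0.2], dtype=np.float32)
green = np.array([0.0, 1.0, 0.0], dtype=np.float32)
blurred = 0.5 * skin + 0.5 * green           # motion-blurred edge pixel

print(chroma_key_alpha(skin[None, None]))     # ~1.0: fully opaque
print(chroma_key_alpha(blurred[None, None]))  # ~0.45: background mixes in
```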