We are looking into techniques for improving the quality and controllability of volumetric video. Last year we published some of our progress at two conferences: SIGGRAPH and CVMP.
The following advances have enabled us to use volumetric assets closer to the camera and to make them easier to integrate into VFX pipelines.
Paper; see the supplemental material for the video.
Many systems¹ output a mesh with a “combined” texture, which is an average of projections from multiple viewpoints. This isn't ideal: the texture contains baked-in shadows, and view-dependent effects tend to be averaged out.
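The averaging described above can be sketched as a per-texel weighted blend of the per-view projections. This is a minimal illustration assuming NumPy; the function name `blend_views` and the visibility-weighting scheme are our own illustrative assumptions, not the actual implementation of any cited system.

```python
import numpy as np

def blend_views(textures, weights):
    """Blend per-view projected textures into one "combined" texture.

    textures: (V, H, W, 3) array of per-view projected colors.
    weights:  (V, H, W) visibility/confidence weights (e.g. cosine of the
              angle between the view ray and the surface normal), zero
              where a texel is not visible from that view.
    """
    w = weights[..., None]                 # (V, H, W, 1), broadcast over RGB
    total = w.sum(axis=0)                  # (H, W, 1) accumulated weight
    blended = (w * textures).sum(axis=0) / np.maximum(total, 1e-8)
    # Texels seen by no view get zero instead of a division artifact.
    return np.where(total > 0, blended, 0.0)

# Two toy 1x1 "views": one red, one blue, equally weighted.
tex = np.array([[[[1.0, 0.0, 0.0]]],
                [[[0.0, 0.0, 1.0]]]])
wts = np.ones((2, 1, 1))
print(blend_views(tex, wts))  # averages to [0.5, 0.0, 0.5]
```

It is exactly this averaging that bakes shadows into the texture and washes out view-dependent effects, which motivates separating the components back out.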
With this work we try to separate out the individual texture components as a post-process, ideally for use with physically based shading.
We compare an avatar-based approach with volumetric video and present a couple of techniques for improving volumetric-video workflows.