Abstract

Light fields are becoming an increasingly popular method of digital content production for visual effects and virtual/augmented reality, as they capture a view-dependent representation that enables photo-realistic rendering over a range of viewpoints. Light field video is generally captured using arrays of cameras, resulting in tens to hundreds of images of a scene at each time instant. An open problem is how to represent the data efficiently, preserving the view-dependent detail of the surface in a form that is compact to store and efficient to render. In this paper we show that constructing an eigen texture basis representation from the light field, using an approximate 3D surface reconstruction as a geometric proxy, provides a compact representation that maintains view-dependent realism. We demonstrate that the proposed method reduces storage requirements by >95% while maintaining the visual quality of the captured data. An efficient view-dependent rendering technique is also proposed, performed in eigen space, which allows smooth continuous viewpoint interpolation through the light field.
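
To make the eigen texture idea concrete, below is a minimal NumPy sketch (not the paper's implementation) of the underlying principle: per-camera texture maps are resampled into a common UV space via the geometric proxy, a PCA/SVD basis is computed over them, and each view is then stored as a small coefficient vector plus the shared basis and mean. The array shapes, function names and the choice of k components are illustrative assumptions.

import numpy as np

def build_eigen_texture_basis(textures, k):
    """textures: (num_views, H, W, C) texture maps resampled into a common UV space."""
    n, h, w, c = textures.shape
    X = textures.reshape(n, -1).astype(np.float32)    # one flattened texture per row
    mean = X.mean(axis=0)                             # mean texture
    A = X - mean
    # Thin SVD; rows of Vt are the eigen textures (principal components)
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    basis = Vt[:k]                                     # (k, H*W*C) eigen textures
    coeffs = A @ basis.T                               # (num_views, k) per-view coefficients
    return mean, basis, coeffs, (h, w, c)

def reconstruct_texture(mean, basis, coeff, shape):
    """Reconstruct a single texture from its k coefficients."""
    return (mean + coeff @ basis).reshape(shape)

# Illustrative usage: 50 views of a 256x256 RGB texture reduced to k=8 components.
views = np.random.rand(50, 256, 256, 3).astype(np.float32)
mean, basis, coeffs, shape = build_eigen_texture_basis(views, k=8)
approx = reconstruct_texture(mean, basis, coeffs[0], shape)

# Stored data: the mean, k eigen textures and a (num_views x k) coefficient matrix,
# instead of num_views full-resolution textures.

Under this kind of scheme, view-dependent rendering amounts to interpolating the per-view coefficient vectors for the desired viewpoint and reconstructing from the shared basis, which is considerably cheaper than blending full-resolution textures.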

Paper

Light Field Compression using Eigen Textures
Marco Volino, Armin Mustafa, Jean-Yves Guillemaut and Adrian Hilton

International Conference on 3D Vision (3DV) 2019



Citation

    @inproceedings{Volino:3DV:2019,
        AUTHOR = "Volino, Marco and Mustafa, Armin and Guillemaut, Jean-Yves and Hilton, Adrian",
        TITLE = "Light Field Compression using Eigen Textures",
        BOOKTITLE = "International Conference on 3D Vision (3DV)",
        YEAR = "2019",
    }

Acknowledgments

This work was supported by the following sources: 'ALIVE: Live action light fields for immersive virtual reality experiences' (InnovateUK 102686), 'Polymersive: Immersive Video Production Tools for Studio and Live Events' (InnovateUK 105168), EPSRC Audio-Visual Media Research Platform Grant (EP/P022529/1) and Royal Academy of Engineering Research Fellowship (RF-201718-17177).